CN107092962A - Distributed machine learning method and platform - Google Patents

Distributed machine learning method and platform

Info

Publication number
CN107092962A
CN107092962A (application number CN201610090044.7A)
Authority
CN
China
Prior art keywords
module
input part
algorithm module
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610090044.7A
Other languages
Chinese (zh)
Other versions
CN107092962B (en)
Inventor
毛仁歆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201610090044.7A priority Critical patent/CN107092962B/en
Publication of CN107092962A publication Critical patent/CN107092962A/en
Application granted granted Critical
Publication of CN107092962B publication Critical patent/CN107092962B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data

Abstract

The present application provides a distributed machine learning method and platform. The platform includes: a logic architecture module for building the execution logic of a data processing task, where the data processing task comprises multiple algorithm modules and each algorithm module includes an input part, an algorithm part, and an output part; the input part and the output part have an identical interface format, so that the multiple algorithm modules can be concatenated according to that interface format; the input part includes the dependency information between this algorithm module and other algorithm modules. The platform further includes an algorithm execution module that, according to the execution logic built by the logic architecture module, executes each algorithm module respectively and, according to the algorithm part in the algorithm module, calls an algorithm library in the resource layer to perform the computation. The application improves the efficiency of data processing.

Description

Distributed machine learning method and platform
Technical field
The present application relates to computer technology, and in particular to a distributed machine learning method and platform.
Background art
Big data processing technology has been developing steadily, making it possible to use big data to build data models for business applications and to apply those models to predict business outcomes. When the data volume is small, the computing power of a single machine is sufficient; but when the data volume is large, a distributed computing platform is needed to carry out the whole modeling process. In the related art, when modeling on a distributed computing platform, the multiple functional modules of the modeling process can be deployed on different devices for computation. However, when the functional modules are concatenated, the complex dependencies between modules make the concatenation cumbersome; for example, each module in the chain may have to be parsed manually, so that data processing is inefficient.
Summary of the invention
In view of this, the present application provides a distributed machine learning method and platform to improve the efficiency of data processing.
Specifically, the present application is implemented through the following technical solutions:
In a first aspect, a distributed machine learning platform is provided, the platform including:
a logic architecture module, configured to build the execution logic of a data processing task, the data processing task including multiple algorithm modules, each algorithm module including an input part, an algorithm part, and an output part, the input part and the output part having an identical interface format so that the multiple algorithm modules are concatenated according to the interface format; the input part including the dependency information between this algorithm module and other algorithm modules;
an algorithm execution module, configured to execute each of the algorithm modules respectively according to the execution logic built by the logic architecture module and, according to the algorithm part in the algorithm module, call an algorithm library in the resource layer to perform the computation.
In a second aspect, a distributed machine learning method is provided, including:
according to the execution logic of a built data processing task, respectively executing the multiple algorithm modules included in the data processing task, each algorithm module including an input part, an algorithm part, and an output part, the input part and the output part having an identical interface format; and, according to the algorithm part in the algorithm module, calling an algorithm library in the resource layer to perform the computation;
according to the dependency information, included in the input part of the algorithm module, between this algorithm module and other algorithm modules, and according to the interface format, concatenating the multiple algorithm modules.
With the distributed machine learning method and platform provided by the present application, the multiple algorithm modules of the modeling process can be deployed on different devices for computation, and each algorithm module can be concatenated smoothly through the identical interface format, thereby improving the efficiency of data processing; in the example of modeling with the distributed machine learning platform, this means improved modeling efficiency.
Brief description of the drawings
Fig. 1 is the architecture of a distributed machine learning platform according to an exemplary embodiment of the present application;
Fig. 2 is the structure design of an algorithm module according to an exemplary embodiment of the present application;
Fig. 3 is a schematic diagram of the concatenation of algorithm modules according to an exemplary embodiment of the present application;
Fig. 4 is a flowchart of a distributed machine learning method according to an exemplary embodiment of the present application.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The embodiments of the present application provide a distributed machine learning platform that a data mining engineer can use to perform data processing tasks, for example, building a forecast model from acquired data and evaluating the accuracy of that model.
Fig. 1 shows the architecture of the distributed machine learning platform. As shown in Fig. 1, the platform includes a logic architecture module 11, an algorithm execution module 12, and a resource layer 13. When a data processing task is executed, for example during model building, various algorithms will be used; the resource layer 13, as the underlying support, can integrate multiple algorithm libraries. In the example of Fig. 1, these include standalone libraries such as R and Python, distributed libraries such as Hadoop, ODPS, and Spark, and further libraries such as MLlib, Mahout, and Xlib, which are not all enumerated in Fig. 1.
The above-mentioned resource layer 13 serves as the underlying support for executing the data processing task; for example, the data processing, feature selection, and model training in a modeling process all use various algorithms and call the algorithm libraries in the resource layer 13 for the specific processing. The logic architecture module 11 is used to build the execution logic of the data processing task, which may include multiple algorithm modules. Referring to the example of Fig. 1, the distributed machine learning platform can build a DAG (Directed Acyclic Graph) execution logic in the logic architecture module 11; the DAG execution logic can represent the call relationships between the algorithm modules of the data processing task.
Fig. 1 shows the DAG logic between algorithm modules. For example, algorithm module 1 can be a module that processes raw collected data; algorithm module 2 can be a module that performs feature analysis after the raw data processing, such as feature selection or feature dimensionality reduction; algorithm module 3 can be a module that trains a model from the features obtained by algorithm module 2; and algorithm module n can be a module that performs effect prediction for the trained model. The above is only an illustration: in practice, multiple algorithm modules can be divided according to the characteristics of the data processing task, and the execution process of the task can be represented by building a DAG.
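The dependency structure just described can be sketched in a few lines of code. The following is a minimal illustration, not part of the patent: the module names are invented, and Python's standard `graphlib.TopologicalSorter` stands in for whatever scheduler the platform actually uses to order module execution.

```python
from graphlib import TopologicalSorter

# Each module maps to the set of modules it depends on (names are invented).
dag = {
    "module1_preprocess": set(),                 # processes raw collected data
    "module2_features": {"module1_preprocess"},  # feature selection / dimensionality reduction
    "module3_train": {"module2_features"},       # trains a model from the features
    "moduleN_evaluate": {"module3_train"},       # effect prediction for the trained model
}

# static_order() yields the modules in a dependency-respecting order.
execution_order = list(TopologicalSorter(dag).static_order())
print(execution_order)
```

Any acyclic module graph, not just a linear chain, can be ordered this way, which is exactly the property the DAG execution logic relies on.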
The algorithm execution module 12 can execute each algorithm module respectively according to the execution logic built by the logic architecture module 11, and can call the algorithm libraries in the resource layer 13 to perform the computation when executing an algorithm module. In this example, the resource layer 13 includes standalone algorithm libraries and distributed algorithm libraries, aiming to cover as comprehensive a range of library types as possible; when executing an algorithm module, the algorithm execution module 12 can select and call a suitable library in the resource layer 13 according to factors such as the data volume of the current processing and the accuracy requirements of the algorithm. For example, for one of the algorithm modules built in the logic architecture module 11 in Fig. 1, the algorithm execution module 12 can select one of Hadoop, ODPS, and Spark in the resource layer 13 to perform the computation.
From the above description, the general architecture of the distributed machine learning platform of this example can be understood: the logic architecture module 11 can build the DAG execution logic of a data processing task, showing the algorithm modules included in the task and their associations, and the algorithm execution module 12 can call the algorithm libraries in the resource layer 13 to execute each algorithm module according to the execution logic built by the logic architecture module 11. In this embodiment, each algorithm module is designed with a unified structure, to facilitate both the concatenation between modules and distributed deployment.
Fig. 2 shows the structure design of an algorithm module. As shown in Fig. 2, each algorithm module can include an input part 21, an algorithm part 22, and an output part 23. The input part 21 (input) serves as the input of the algorithm part (algorithm), and the output part 23 (output) serves as the output of the algorithm part. The input and output have an identical interface format, and their information type can be at least one of the following three: data (data), model (model), or result (evaluation). For example, data can be sampled data or split data, a model can be a model trained from data, and a result can be a result obtained from model prediction.
The input part 21 can also include the dependency information between this algorithm module and other algorithm modules; for example, the module identifier of a depended-on algorithm module can indicate what is depended on, such as this module depending on the data, model, or result of a previous module. The number of other algorithm modules that the input part 21 depends on can be at least one. The algorithm part 22 indicates which algorithm is used to process the information fed in at the input part 21, and the output part 23 can indicate whether the algorithm module outputs data, a model, or a result.
The structure design of an algorithm module is illustrated by the following example:
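The original listing is not reproduced here, so the following is a hedged reconstruction based solely on the surrounding description: the module depends on the module whose taskId is '10002', uses logisticRegression, and outputs data and a result but no model. The key names (`inputs`, `algorithm`, `outputs`, `taskId`) follow the text; the exact syntax is an assumption.

```python
# Hedged reconstruction of an algorithm-module definition; field names follow
# the description in the text, the concrete syntax is an assumption.
module = {
    "inputs": [
        # Depends on the module with taskId '10002'; its data, model,
        # and result all serve as inputs of this module.
        {"taskId": "10002", "data": True, "model": True, "result": True},
    ],
    "algorithm": {"name": "logisticRegression"},
    # Outputs data and a result (True), but no model (False).
    "outputs": {"data": True, "model": False, "result": True},
}
```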
In the above example, the input part (inputs), the algorithm part (algorithm), and the output part (outputs) of the algorithm module are each given a standard definition, and every algorithm module is designed according to this configuration. Referring to the example: in the inputs part, the taskId of the depended-on algorithm module is '10002', and the data, model, and result of module '10002' all serve as inputs of this algorithm module. Referring to the outputs part of the example, this algorithm module outputs data and a result (true) but no model (false). In the algorithm part of this module, the algorithm used is the logistic regression algorithm logisticRegression.
In addition, to keep the DAG logic clear, it can be stipulated that each algorithm module outputs only a single data set, model, or result, while it may take in multiple data sets, models, or results. For example, in the example above, the only module that the inputs part depends on is the algorithm module whose taskId is '10002', whose data, model, and result serve as the inputs of this algorithm module. In other application scenarios, the inputs part may depend on more algorithm modules.
As an example of multiple inputs: the inputs part of this algorithm module depends on three modules, with taskIds '10002', '10003', and '10004' respectively; the data output by '10002', the model output by '10003', and the evaluation result output by '10004' together serve as the inputs of this algorithm module. Of course, other scenarios are possible in practice; for example, the input part may carry only data and models, without evaluation results. These are not enumerated in detail.
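The three-dependency input part just described might be written as follows. This is a sketch mirroring the single-input example above; the field names follow the text and the exact syntax is an assumption.

```python
# Hedged sketch of an inputs part with three dependencies, as described:
# data from '10002', a model from '10003', and a result from '10004'.
multi_inputs = [
    {"taskId": "10002", "data": True,  "model": False, "result": False},
    {"taskId": "10003", "data": False, "model": True,  "result": False},
    {"taskId": "10004", "data": False, "model": False, "result": True},
]

depended_ids = [dep["taskId"] for dep in multi_inputs]
print(depended_ids)
```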
In this embodiment, the information types output by each algorithm module are also defined uniformly. For data, intermediate results can be staged locally or in a distributed system, and a Schema file can be used to transfer data between different algorithm modules. For models, the model parameters can be expressed in PMML (Predictive Model Markup Language), a de facto standard language for presenting data mining models, which can be used to share predictive models between different algorithm modules. For results, the model-evaluation result data can be stored in JSON form, and the result data can also be presented visually.
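As a small illustration of the last point, an evaluation result can round-trip through JSON with the standard library alone. The metric names and values below are invented for the example; only the "results are stored as JSON" part comes from the text.

```python
import json

# Hedged sketch: persist a model-evaluation result in JSON form.
# The metric names and values are invented for illustration.
evaluation = {"auc": 0.87, "accuracy": 0.91}
payload = json.dumps(evaluation, sort_keys=True)

# The stored form can later be parsed back, e.g. for visual presentation.
restored = json.loads(payload)
print(payload)
```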
It can be seen that the present application divides the data processing task into multiple independent algorithm modules with a unified interface format. This allows the algorithm modules to be deployed in a distributed manner, while the unified interface format makes the concatenation between the modules smooth. For example, referring to Fig. 3, which shows three algorithm modules G1, G2, and G3: the data and model output by G1 and the data and result output by G2 can all serve as inputs of G3, and the model output by G3 can in turn serve as an input of other modules. In this process, because the outputs of G1 (or G2) and the input of G3 follow the same format definition, all being data, model, or result, the modules are easy to concatenate and no interface-level conflicts arise. Therefore, through the identical interface format, the modules can be spliced and assembled into a complete DAG logic for execution.
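The concatenation rule for G1, G2, and G3 can be expressed as a small compatibility check. The helper below is a hypothetical illustration, not the patent's implementation: an edge is valid only if the upstream module actually produces the artifact type (data, model, or result) that the downstream module declares as an input.

```python
# Hypothetical concatenation check over the unified interface format.
VALID_TYPES = {"data", "model", "result"}

def can_connect(upstream_outputs: dict, required_type: str) -> bool:
    """True if the upstream module emits the artifact type the edge needs."""
    assert required_type in VALID_TYPES
    return bool(upstream_outputs.get(required_type, False))

# Output declarations mirroring the Fig. 3 description (G1 emits data and
# a model; G2 emits data and a result).
g1_outputs = {"data": True, "model": True, "result": False}
g2_outputs = {"data": True, "model": False, "result": True}

# G3 can take G1's model and G2's result, but not a result from G1.
print(can_connect(g1_outputs, "model"), can_connect(g2_outputs, "result"))
```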
When modeling with the distributed machine learning platform of this example, the multiple algorithm modules of the modeling process can be deployed on different devices for computation, and each algorithm module can be concatenated smoothly through the identical interface format, thereby improving the efficiency of data processing; in the example of modeling with the distributed machine learning platform, this means improved modeling efficiency.
Fig. 4 shows a distributed machine learning method performed with the distributed machine learning platform of the present application. As shown in Fig. 4, the method can include:
In step 401, according to the execution logic of the built data processing task, the multiple algorithm modules included in the data processing task are executed respectively. Each algorithm module includes an input part, an algorithm part, and an output part, and the input part and the output part have an identical interface format. According to the algorithm part in the algorithm module, an algorithm library in the resource layer is called to perform the computation.
In step 402, according to the dependency information, included in the input part of the algorithm module, between this algorithm module and other algorithm modules, and according to the interface format, the multiple algorithm modules are concatenated.
The execution order of steps 401 and 402 above is not limited; for example, the distributed machine learning platform may execute the algorithm modules in the DAG logic while concatenating the algorithm modules. In addition, when calling an algorithm library according to an algorithm module, the algorithm library in the resource layer can be called to perform the computation according to the algorithm part in the algorithm module. Moreover, the machine learning platform of this embodiment can encapsulate the same algorithm distributed across different locations, and select a suitable algorithm library according to factors such as data volume and the computational requirements of the algorithm.
For example, the training module of the logistic regression algorithm logisticRegression is provided as a standalone library in R and Python, and distributed libraries also exist on Mahout and MLlib; but whether standalone or distributed, the parameters of the algorithm itself hardly differ, so the machine learning platform of this example can wrap these algorithm libraries in a unified encapsulation. The algorithm execution module of the platform can then evaluate and select a suitable algorithm library according to factors such as data volume, algorithm stability, and accuracy requirements. For example, when the data volume is small, a standalone library can be selected; when the data volume is large, a distributed library can be selected to improve processing speed.
In addition, many types of algorithms may be used in a data processing task: for example, algorithms for data processing, for feature engineering, and for model training and evaluation. For data processing, data sampling, data splitting, missing-value handling, and the like can be performed; for feature engineering, feature-importance calculation, feature crossing, feature discretization, and feature selection; for model training and evaluation, model training, intelligent assembly of PMML model-parameter expressions, model prediction and evaluation, intelligent tuning of model parameters, and so on.
The distributed machine learning platform of the embodiments of the present application can realize the sharing of multiple algorithm libraries, covering as comprehensive a set of libraries as possible; it can build a DAG that clearly expresses the modeling process and the associations between the algorithm modules; and, through the unified algorithm-module interface format designed here, each algorithm module can be deployed in a relatively independent, distributed manner while smooth concatenation between modules is still guaranteed, thereby improving data processing efficiency.
The above are only preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (10)

1. A distributed machine learning platform, characterized in that the platform comprises:
a logic architecture module, configured to build the execution logic of a data processing task, the data processing task comprising multiple algorithm modules, each algorithm module comprising an input part, an algorithm part, and an output part, the input part and the output part having an identical interface format, so that the multiple algorithm modules are concatenated according to the interface format; the input part comprising the dependency information between this algorithm module and other algorithm modules;
an algorithm execution module, configured to execute each of the algorithm modules respectively according to the execution logic built by the logic architecture module and, according to the algorithm part in the algorithm module, call an algorithm library in the resource layer to perform computation.
2. The platform according to claim 1, characterized in that the interface format of the input part and the output part comprises:
the input part, serving as the input of the algorithm part, and the output part, serving as the output of the algorithm part, comprising at least one of the following information types: data, model, or result.
3. The platform according to claim 1, characterized in that the number of other algorithm modules on which the input part depends is at least one.
4. The platform according to claim 1, characterized in that the resource layer comprises: standalone algorithm libraries and distributed algorithm libraries.
5. The platform according to claim 1, characterized in that the dependency information comprises: the module identifier of the depended-on algorithm module.
6. A distributed machine learning method, characterized by comprising:
according to the execution logic of a built data processing task, respectively executing the multiple algorithm modules included in the data processing task, each algorithm module comprising an input part, an algorithm part, and an output part, the input part and the output part having an identical interface format; and, according to the algorithm part in the algorithm module, calling an algorithm library in the resource layer to perform computation;
according to the dependency information, included in the input part of the algorithm module, between this algorithm module and other algorithm modules, and according to the interface format, concatenating the multiple algorithm modules.
7. The method according to claim 6, characterized in that at least one of the following information types serves as the input part or output part of the algorithm module: data, model, or result.
8. The method according to claim 6, characterized in that the number of other algorithm modules on which the input part depends is at least one.
9. The method according to claim 6, characterized in that the resource layer comprises: standalone algorithm libraries and distributed algorithm libraries.
10. The method according to claim 6, characterized in that the dependency information comprises: the module identifier of the depended-on algorithm module.
CN201610090044.7A 2016-02-17 2016-02-17 Distributed machine learning method and platform Active CN107092962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610090044.7A CN107092962B (en) 2016-02-17 2016-02-17 Distributed machine learning method and platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610090044.7A CN107092962B (en) 2016-02-17 2016-02-17 Distributed machine learning method and platform

Publications (2)

Publication Number Publication Date
CN107092962A true CN107092962A (en) 2017-08-25
CN107092962B CN107092962B (en) 2021-01-26

Family

ID=59649265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610090044.7A Active CN107092962B (en) 2016-02-17 2016-02-17 Distributed machine learning method and platform

Country Status (1)

Country Link
CN (1) CN107092962B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108153815A (en) * 2017-11-29 2018-06-12 北京京航计算通讯研究所 Big-data-oriented index classification method
CN108897587A (en) * 2018-06-22 2018-11-27 北京优特捷信息技术有限公司 Pluggable machine learning algorithm operation method and device, and readable storage medium
CN108960433A (en) * 2018-06-26 2018-12-07 第四范式(北京)技术有限公司 Method and system for running a machine learning modeling process
CN109325756A (en) * 2018-08-03 2019-02-12 上海小渔数据科技有限公司 Data processing method, device and server for data algorithm transactions
CN109343833A (en) * 2018-09-20 2019-02-15 北京神州泰岳软件股份有限公司 Data processing platform and data processing method
CN110120251A (en) * 2018-02-07 2019-08-13 北京第一视角科技有限公司 Spark-based statistical analysis method and system for multidimensional health data
CN110598868A (en) * 2018-05-25 2019-12-20 腾讯科技(深圳)有限公司 Machine learning model building method and device and related equipment
CN110825511A (en) * 2019-11-07 2020-02-21 北京集奥聚合科技有限公司 Operation flow scheduling method based on modeling platform model
CN110880036A (en) * 2019-11-20 2020-03-13 腾讯科技(深圳)有限公司 Neural network compression method and device, computer equipment and storage medium
CN110909761A (en) * 2019-10-12 2020-03-24 平安科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
TWI706378B (en) * 2018-12-29 2020-10-01 鴻海精密工業股份有限公司 Cloud device, terminal device, and image classification method
CN112488365A (en) * 2020-11-17 2021-03-12 深圳供电局有限公司 Load prediction system and method based on load prediction pipeline framework language
CN114489867A (en) * 2022-04-19 2022-05-13 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
CN114997414A (en) * 2022-05-25 2022-09-02 北京百度网讯科技有限公司 Data processing method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030033263A1 (en) * 2001-07-31 2003-02-13 Reel Two Limited Automated learning system
CN101782976A (en) * 2010-01-15 2010-07-21 南京邮电大学 Automatic selection method for machine learning in cloud computing environment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030033263A1 (en) * 2001-07-31 2003-02-13 Reel Two Limited Automated learning system
CN101782976A (en) * 2010-01-15 2010-07-21 南京邮电大学 Automatic selection method for machine learning in cloud computing environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xie Tong, "Research on distributed one-class support vector machine clustering algorithms", Wanfang dissertation collection *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108153815A (en) * 2017-11-29 2018-06-12 北京京航计算通讯研究所 Big-data-oriented index classification method
CN110120251A (en) * 2018-02-07 2019-08-13 北京第一视角科技有限公司 Spark-based statistical analysis method and system for multidimensional health data
CN110598868A (en) * 2018-05-25 2019-12-20 腾讯科技(深圳)有限公司 Machine learning model building method and device and related equipment
CN110598868B (en) * 2018-05-25 2023-04-18 腾讯科技(深圳)有限公司 Machine learning model building method and device and related equipment
CN108897587A (en) * 2018-06-22 2018-11-27 北京优特捷信息技术有限公司 Pluggable machine learning algorithm operation method and device, and readable storage medium
CN108897587B (en) * 2018-06-22 2021-11-12 北京优特捷信息技术有限公司 Pluggable machine learning algorithm operation method and device and readable storage medium
CN108960433A (en) * 2018-06-26 2018-12-07 第四范式(北京)技术有限公司 Method and system for running a machine learning modeling process
CN108960433B (en) * 2018-06-26 2022-04-05 第四范式(北京)技术有限公司 Method and system for running machine learning modeling process
CN109325756A (en) * 2018-08-03 2019-02-12 上海小渔数据科技有限公司 Data processing method, device and server for data algorithm transactions
CN109343833A (en) * 2018-09-20 2019-02-15 北京神州泰岳软件股份有限公司 Data processing platform and data processing method
TWI706378B (en) * 2018-12-29 2020-10-01 鴻海精密工業股份有限公司 Cloud device, terminal device, and image classification method
WO2021068529A1 (en) * 2019-10-12 2021-04-15 平安科技(深圳)有限公司 Image recognition method and apparatus, computer device and storage medium
CN110909761A (en) * 2019-10-12 2020-03-24 平安科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN110825511A (en) * 2019-11-07 2020-02-21 北京集奥聚合科技有限公司 Operation flow scheduling method based on modeling platform model
CN110880036A (en) * 2019-11-20 2020-03-13 腾讯科技(深圳)有限公司 Neural network compression method and device, computer equipment and storage medium
CN110880036B (en) * 2019-11-20 2023-10-13 腾讯科技(深圳)有限公司 Neural network compression method, device, computer equipment and storage medium
CN112488365A (en) * 2020-11-17 2021-03-12 深圳供电局有限公司 Load prediction system and method based on load prediction pipeline framework language
CN114489867A (en) * 2022-04-19 2022-05-13 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
CN114997414A (en) * 2022-05-25 2022-09-02 北京百度网讯科技有限公司 Data processing method and device, electronic equipment and storage medium
CN114997414B (en) * 2022-05-25 2024-03-08 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107092962B (en) 2021-01-26

Similar Documents

Publication Publication Date Title
CN107092962A Distributed machine learning method and platform
KR102342604B1 (en) Method and apparatus for generating neural network
US8810595B2 (en) Declarative approach for visualization
CN108090516A Method and system for automatically generating features of machine learning samples
CN108563548A (en) Method for detecting abnormality and device
CN107168952A (en) Information generating method and device based on artificial intelligence
CN107450902A (en) System architecture with visual modeling tool
CN106095942B (en) Strong variable extracting method and device
CN104267938B Method and device for rapid development and deployment of streaming computing applications
CN107220217A Logistic-regression-based feature coefficient training method and device
CN110163723A (en) Recommended method, device, computer equipment and storage medium based on product feature
CN110443222A Method and device for training a face keypoint detection model
CN109410253B (en) For generating method, apparatus, electronic equipment and the computer-readable medium of information
CN108182063A Implementation method for visual configuration of big data analysis
US9304746B2 (en) Creating a user model using component based approach
US20170345029A1 (en) User action data processing method and device
CN108255706A (en) Edit methods, device, terminal device and the storage medium of automatic test script
CN106657192A Method and device for presenting service call information
CN107169574A Method and system for performing prediction using nested machine learning models
CN107679141A (en) Data storage method, device, equipment and computer-readable recording medium
US11610030B2 (en) Method and system for optimizing shipping methodology for 1-directional wall panels
CN110110001A (en) Service performance data processing method, device, storage medium and system
CN107273979A Method and system for performing machine learning prediction based on service class
JP2018010435A (en) Sales prediction device, sales prediction method and program
CN108898604A Method and device for processing images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth floor, P.O. Box 847, Capital Building, Grand Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant
GR01 Patent grant