CN110688205B - Execution device, related method and related device for machine learning task - Google Patents


Info

Publication number
CN110688205B
Authority
CN
China
Prior art keywords
machine learning
execution
learning task
task
training data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910816452.XA
Other languages
Chinese (zh)
Other versions
CN110688205A (en)
Inventor
金昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Inspur Data Technology Co Ltd
Original Assignee
Beijing Inspur Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Inspur Data Technology Co Ltd filed Critical Beijing Inspur Data Technology Co Ltd
Priority to CN201910816452.XA
Publication of CN110688205A
Application granted
Publication of CN110688205B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition

Abstract

The application discloses an execution device for a machine learning task, including: a machine learning task acquisition module, used for generating executable code according to the type of training data and an expected output result and composing the training data and the executable code into a machine learning task; a machine learning task cutting module, used for cutting the machine learning task into a plurality of machine learning subtasks; a machine learning task distribution module, used for distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result; and a processing result summarizing module, used for collating all the received execution sub-results to obtain an execution result. The execution efficiency of the machine learning task is improved, and its execution cost is reduced. The application also discloses another execution device, two execution methods, a server and a computer-readable storage medium, which have the same beneficial effects.

Description

Execution device, related method and related device for machine learning task
Technical Field
The present application relates to the field of computer technologies, and in particular, to an execution device, two execution methods, a server, and a computer-readable storage medium for machine learning tasks.
Background
With the development of information technology, machine learning has emerged as a way to let machines identify and process data more effectively. It is a multidisciplinary field that draws on probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other subjects, and it studies how a computer can simulate or realize human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. At the application level, machine learning can be understood as an algorithm for producing algorithms: features are extracted, vectorized, and then fed to a machine for training. In general, machine learning training requires a large amount of data to be processed, which means that strong computational performance is needed to support the training process.
At present, however, a single computer of ordinary performance cannot complete machine learning training quickly, while dedicated machine learning equipment is expensive, so the effective completion of machine learning training tasks cannot be guaranteed.
Therefore, how to reduce the cost of executing machine learning tasks is a key issue for those skilled in the art.
Disclosure of Invention
The application aims to provide an execution device, two execution methods, a server and a computer-readable storage medium for machine learning tasks. By dividing the whole machine learning task among a plurality of nodes, the execution efficiency of the machine learning task is improved and its execution cost is reduced.
In order to solve the above technical problem, the present application provides a method for executing a machine learning task, including:
generating executable code according to the type of training data and an expected output result, and composing the training data and the executable code into a machine learning task;
cutting the machine learning task into a plurality of machine learning subtasks;
distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result;
and collating all the received execution sub-results to obtain an execution result.
Optionally, generating executable code according to the type of the training data and the expected output result, and composing the training data and the executable code into a machine learning task, includes:
determining a machine learning model according to the type of the training data and the expected output result, and generating a code template according to the machine learning model;
adding the received parameters into the code template to obtain the executable code;
and composing the training data and the executable code into a machine learning task.
Optionally, cutting the machine learning task into a plurality of machine learning subtasks includes:
searching task cutting historical data of the same type according to the type of the machine learning task;
and cutting the machine learning task into a plurality of machine learning subtasks according to the task cutting historical data.
Optionally, distributing the plurality of machine learning subtasks to a plurality of nodes includes:
and distributing the plurality of machine learning subtasks to corresponding nodes according to the performance resources of the plurality of nodes.
The application also provides an execution method of the machine learning task, which comprises the following steps:
the browser plug-in executes the received machine learning subtask by using idle resources to obtain an execution sub-result; the machine learning subtask is a task obtained by the target node cutting and distributing the generated machine learning task;
and sending the execution sub-result to the target node so that the target node collates the received execution sub-results to obtain an execution result.
The present application further provides an execution device of a machine learning task, including:
the machine learning task acquisition module is used for generating executable code according to the type of training data and an expected output result, and composing the training data and the executable code into a machine learning task;
the machine learning task cutting module is used for cutting the machine learning task into a plurality of machine learning subtasks;
the machine learning task distribution module is used for distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result;
and the processing result summarizing module is used for collating all the received execution sub-results to obtain an execution result.
Optionally, the machine learning task obtaining module includes:
the code template generating unit is used for determining a machine learning model according to the type of the training data and the expected output result and generating a code template according to the machine learning model;
the executable code acquisition unit is used for adding the received parameters into the code template to obtain the executable code;
and the learning task acquisition unit is used for forming a machine learning task by the training data and the executable code.
The present application further provides an execution device of a machine learning task, including:
the subtask execution module is used for running the browser plug-in to execute the received machine learning subtask by using idle resources to obtain an execution sub-result; the machine learning subtask is a task obtained by the target node cutting and distributing the generated machine learning task;
and the execution result sending module is used for sending the execution sub-result to the target node so that the target node collates the received execution sub-results to obtain an execution result.
The present application further provides a server, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the execution method as described above when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of execution as described above.
The application provides a method for executing a machine learning task, which includes: generating executable code according to the type of training data and an expected output result, and composing the training data and the executable code into a machine learning task; cutting the machine learning task into a plurality of machine learning subtasks; distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result; and collating all the received execution sub-results to obtain an execution result.
Obtaining the machine learning task from the training data and the expected output result reduces the difficulty of obtaining the task and improves the usability of the execution method. Cutting and distributing the machine learning task further lets a plurality of nodes share the work, so the task is not executed on a single machine: the execution efficiency of the machine learning task is improved, the performance resources of other nodes are used reasonably, and performance utilization is kept at its maximum.
The present application further provides another execution device, two execution methods, a server, and a computer-readable storage medium, which have the above beneficial effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for executing a machine learning task according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of another method for performing a machine learning task according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an apparatus for performing a machine learning task according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of another device for executing a machine learning task according to an embodiment of the present disclosure.
Detailed Description
The core of the application is to provide an execution device, two execution methods, a server and a computer-readable storage medium for machine learning tasks. By dividing the whole machine learning task among a plurality of nodes, the execution efficiency of the machine learning task is improved and its execution cost is reduced.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, machine learning training requires a large amount of data to be processed, which means that powerful computational performance is needed to support the training process. However, a single computer of ordinary performance cannot complete machine learning training quickly, while dedicated machine learning equipment is expensive, so the effective completion of machine learning training tasks cannot be guaranteed.
The application provides a method for executing a machine learning task. The machine learning task is obtained from training data and an expected output result, which reduces the difficulty of obtaining the task and improves the usability of the execution method. The machine learning task is then cut and distributed so that a plurality of nodes share the work; the task is not executed on a single machine, its execution efficiency is improved, the performance resources of other nodes are used reasonably, and performance utilization is kept at its maximum.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for executing a machine learning task according to an embodiment of the present disclosure.
In this embodiment, the method may include:
s101, generating executable codes according to the type of training data and an expected output result, and forming a machine learning task by the training data and the executable codes;
this step is intended to acquire the machine learning task. The machine learning task mainly consists of training data and executable code. The training data is the data set used to train the model, and the executable code is the code generated from the model. The executable code is generated according to the type of the training data and the expected output result: the type of the obtained training data and the expected output result determine the model to be used, the model is then turned into code, and finally parameters are added to that code, yielding the executable code.
Therefore, optionally, the process of acquiring the machine learning task in this step may include: determining a machine learning model according to the type of the training data and the expected output result, and generating a code template according to the machine learning model; adding the received parameters into the code template to obtain executable code; and composing the training data and the executable code into a machine learning task.
The machine learning model may be model data stored in a database in advance, model data acquired from other data sources, or model data input by a user. As can be seen, the manner of acquiring the machine learning model in this embodiment is not unique and is not specifically limited herein.
Then, the machine learning model is turned into code to generate a corresponding code template. The coding method may be any coding method provided in the prior art and is not limited herein.
Next, the received parameters are added to the code template. The received parameters may be parameters input by the user or similar parameters obtained from historical data. The manner of obtaining the parameters in this step is not particularly limited.
Finally, once the executable code is obtained, the training data and the executable code may be combined into the machine learning task.
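As an illustration only, the flow of S101 might be sketched in JavaScript as follows. The model table, the fillTemplate helper and the parameter names are assumptions made for this example; they are not taken from the patent.

// Minimal sketch of S101 (hypothetical helper and field names).
// Map of "data type / expected output" -> model name and code template.
const MODEL_TABLE = {
  'labeled-text/classification': {
    model: 'naive-bayes',
    template: 'trainNaiveBayes(data, { alpha: {{alpha}} });'
  }
};

// Fill the {{placeholder}} slots of the code template with the received parameters.
function fillTemplate(template, params) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => String(params[name]));
}

// Compose the machine learning task from the training data and the executable code.
function buildMachineLearningTask(trainingData, dataType, expectedOutput, params) {
  const entry = MODEL_TABLE[`${dataType}/${expectedOutput}`];
  const executableCode = fillTemplate(entry.template, params);
  return { data: trainingData, code: executableCode };
}

For instance, buildMachineLearningTask(data, 'labeled-text', 'classification', { alpha: 1 }) would yield a task whose code field is 'trainNaiveBayes(data, { alpha: 1 });'.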
In addition, software on the computer may execute the steps in this embodiment to implement the technical solution of the present application. In order to reduce the implementation difficulty of this embodiment and improve development efficiency, the embodiment is implemented as a browser plug-in; that is, all the steps of this embodiment are realized by the processor running the browser.
S102, cutting the machine learning task into a plurality of machine learning subtasks;
on the basis of S101, this step is intended to cut the acquired machine learning task into a plurality of machine learning subtasks. Specifically, cutting the machine learning task means cutting the training data in the machine learning task into training data subsets of different sizes.
It is conceivable that the cutting in this step may be performed in different ways depending on the requirements and on the machine learning task itself. The machine learning task may be cut evenly, or it may be cut according to the type of the training data, so that each training data subset contains a different type of data.
Therefore, in this step, the machine learning task may be cut into a plurality of machine learning subtasks with different data amounts in order to take into account the differences in processing performance and idle capacity among the nodes. Specifically, the total computing power of all the nodes may be counted first, the task share of each node may then be obtained from the total computing power and that node's local computing power, and the machine learning task may be cut according to all the task shares.
Alternatively, to avoid the complexity of that calculation, the machine learning task may be cut into a plurality of machine learning subtasks of equal size. The number of machine learning subtasks is larger than the number of nodes, so each node is first allocated one machine learning subtask, and whenever a node finishes processing its subtask it is allocated a new one, until all the machine learning subtasks are completed.
Optionally, the method for cutting the machine learning task in this step may include: searching task cutting historical data of the same type according to the type of the machine learning task; and cutting the machine learning task into a plurality of machine learning subtasks according to the task cutting historical data.
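A minimal sketch of the compute-power-proportional cutting described above is given below; the node descriptor fields (id, computePower) are assumptions for illustration, not names used in the patent.

// Sketch of S102: cut the training data into subsets whose sizes are
// proportional to each node's share of the total computing power.
function cutTask(task, nodes) {
  const totalPower = nodes.reduce((sum, node) => sum + node.computePower, 0);
  const subtasks = [];
  let offset = 0;
  nodes.forEach((node, index) => {
    // The last node takes the remainder so no records are lost to rounding.
    const count = index === nodes.length - 1
      ? task.data.length - offset
      : Math.round(task.data.length * node.computePower / totalPower);
    subtasks.push({
      nodeId: node.id,
      code: task.code,
      data: task.data.slice(offset, offset + count)
    });
    offset += count;
  });
  return subtasks;
}

The even-size strategy mentioned earlier simply replaces the proportional count with task.data.length divided by the chosen number of subtasks.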
S103, distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result;
on the basis of S102, this step is intended to distribute the cut multiple machine learning subtasks to multiple nodes, so that each node executes one machine learning subtask respectively.
It is conceivable that two or more machine learning subtasks may be distributed to the same node according to its processing performance, that multiple machine learning subtasks may be sent to the same node according to its idle resources, or that exactly one machine learning subtask may be sent to each node; this is not particularly limited herein. When the distribution is executed, a suitable distribution mode is selected so that each node can execute its corresponding machine learning subtask.
Optionally, the method for distributing the machine learning subtask in this step may include: and distributing the plurality of machine learning subtasks to the corresponding nodes according to the performance resources of the plurality of nodes.
S104, collating all the received execution sub-results to obtain an execution result.
On the basis of S103, this step aims to summarize and collate all the received execution sub-results to obtain the final execution result.
In this embodiment, any data summarizing and sorting method provided in the prior art may be adopted, and is not specifically limited herein.
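For illustration, the distribution of S103 and the collation of S104 could be sketched as follows; the HTTP endpoint, the payload shape and the plain concatenation used as the collation step are assumptions, not the patent's implementation.

// Sketch of S103/S104: send each subtask to its node's browser plug-in and
// merge the returned execution sub-results into one execution result.
async function distributeAndCollect(subtasks) {
  const subResults = await Promise.all(subtasks.map(subtask =>
    fetch(`http://${subtask.nodeId}/subtask`, {        // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(subtask)
    }).then(response => response.json())
  ));
  // Collation step: a plain concatenation stands in for the summarizing logic.
  return subResults.flat();
}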
In summary, in this embodiment the machine learning task is obtained from the training data and the expected output result, which reduces the difficulty of obtaining the task and improves the usability of the execution method. The machine learning task is then cut and distributed so that a plurality of nodes share the work; the task is not executed on a single machine, its execution efficiency is improved, the performance resources of other nodes are used reasonably, and performance utilization is kept at its maximum.
In the following, the method for executing a machine learning task provided by the present application is described in another embodiment, from the perspective of a node.
Referring to fig. 2, fig. 2 is a flowchart illustrating another method for executing a machine learning task according to an embodiment of the present disclosure.
In this embodiment, the method may include:
s201, the browser plug-in executes the received machine learning subtask by using idle resources to obtain an execution subtask; the machine learning subtask is a task obtained by cutting and distributing the generated machine learning task by the target node;
s202, sending the execution sub-result to the target node so that the target node can sort the received execution sub-result to obtain the execution result.
The machine learning task is formed by the target node generating executable code according to the type of training data and an expected output result and combining the training data with the executable code. The machine learning task is then cut into a plurality of machine learning subtasks, and the subtasks are distributed so that each node can execute its corresponding task.
It can be seen that this embodiment mainly explains, from the perspective of a node, what processing is performed after a machine learning subtask is received. In order not to affect the tasks normally executed by the node, the browser plug-in executes the received machine learning subtask using idle resources to obtain an execution sub-result. Finally, each node sends its execution sub-result to the target node, so that the target node can obtain the execution result after collation.
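One possible way for a browser plug-in to restrict itself to idle resources is the standard requestIdleCallback API, sketched below. The processRecord stub stands in for whatever training step the subtask's executable code defines, and the message fields are assumptions.

// Node-side sketch of S201/S202: process the subtask in idle time slices so
// normal browsing is not affected, then return the sub-result to the target node.
const processRecord = record => record;   // stand-in for one training step

function runSubtaskWhenIdle(subtask, sendToTargetNode) {
  const pending = subtask.data.slice();   // records still to be processed
  const partial = [];                     // accumulated execution sub-result

  function work(deadline) {
    while (deadline.timeRemaining() > 0 && pending.length > 0) {
      partial.push(processRecord(pending.shift()));
    }
    if (pending.length > 0) {
      requestIdleCallback(work);          // resume in the next idle period
    } else {
      sendToTargetNode({ nodeId: subtask.nodeId, subResult: partial });
    }
  }

  requestIdleCallback(work);
}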
Therefore, in this embodiment the target node obtains the machine learning task from the training data and the expected output result, which reduces the difficulty of obtaining the task and improves the usability of the execution method. The machine learning task is then cut and distributed so that a plurality of nodes share the work; the task is not executed on a single machine, its execution efficiency is improved, the performance resources of other nodes are used reasonably, and performance utilization is kept at its maximum.
The method for performing a machine learning task provided by the present application is further described below by way of another specific embodiment.
In this embodiment, the method may include:
step 1, a machine learning code generator generates an input code according to user requirements and transmits the input code to a front-end interface interaction device;
specifically, based on the user data type, and the desired output, the appropriate machine learning model is determined and the template JS machine learning code is generated. For example: the original data are classified clearly, and the final user needs an automatic classification model, the generator recommends a similar naive Bayes algorithm model to participate in the calculation and training of the data, and a training template is generated. And finally, inputting the generated input data into the front-end interface interactor in a json format. For example: { model: <? And 6, code: <? >.
Step 2, the front-end interface interaction device acquires the code template transmitted from the previous step and generates a parameter tuning interface according to the machine learning model. For example, it acquires the code { model: <?>, code: <?> } and adds the parameters from the parameter adjustment page into the code template accordingly, for example: { a:, b: } -> { a:5, b:6 }.
Step 3, the back-end distributed task distributing and collecting device receives and parses the code sent by the front end, generates distributed tasks, distributes the tasks in key:value form, summarizes the final output, and returns the summarized result to the user's browser.
For example, the back-end distributed task distributing and collecting device cuts the task { K1: V1, K2: V2, K3: V3, ... } into different browser plug-ins.
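Read literally, the json exchanges of steps 1 to 3 could be sketched as follows. The concrete model, code and parameter values are placeholders (the patent elides them as "<?>"), and dispatchToPlugin is a hypothetical stand-in for handing one entry to a browser plug-in.

// Step 1: the machine learning code generator outputs the model and a code template.
const generatorOutput = { model: 'naive-bayes', code: 'train(data, { a: {{a}}, b: {{b}} })' };

// Step 2: the front-end interface interaction device fills the tuning parameters
// from the parameter adjustment page into the template.
const params = { a: 5, b: 6 };
const filledCode = generatorOutput.code
  .replace('{{a}}', params.a)
  .replace('{{b}}', params.b);            // -> 'train(data, { a: 5, b: 6 })'

// Stand-in for sending one task entry to the browser plug-in registered under `key`.
function dispatchToPlugin(key, payload) {
  console.log('dispatch', key, payload);
}

// Step 3: the back-end cuts the task into a key:value map, distributes each entry
// to a different browser plug-in, and later summarizes the returned outputs.
const distributedTask = { K1: 'V1', K2: 'V2', K3: 'V3' };
Object.entries(distributedTask).forEach(([key, value]) => {
  dispatchToPlugin(key, { value, code: filledCode });
});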
Therefore, the machine learning task is obtained from the training data and the expected output result, which reduces the difficulty of obtaining the task and improves the usability of the execution method. The machine learning task is then cut and distributed so that a plurality of nodes share the work; the task is not executed on a single machine, its execution efficiency is improved, the performance resources of other nodes are used reasonably, and performance utilization is kept at its maximum.
In the following, a device for executing a machine learning task according to an embodiment of the present application is introduced, and the device for executing a machine learning task described below and the method for executing a machine learning task described above may be referred to in correspondence.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an apparatus for executing a machine learning task according to an embodiment of the present disclosure.
In this embodiment, the apparatus may include:
a machine learning task obtaining module 110, configured to generate executable code according to the type of the training data and an expected output result, and compose the training data and the executable code into a machine learning task;
a machine learning task cutting module 120, configured to cut the machine learning task into a plurality of machine learning subtasks;
a machine learning task distribution module 130, configured to distribute the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result;
and a processing result summarizing module 140, configured to collate all the received execution sub-results to obtain an execution result.
Optionally, the machine learning task obtaining module 110 may include:
the code template generating unit is used for determining a machine learning model according to the type of the training data and an expected output result and generating a code template according to the machine learning model;
the executable code acquisition unit is used for adding the received parameters into the code template to obtain executable code;
and the learning task acquisition unit is used for composing the training data and the executable code into a machine learning task.
Another embodiment of an apparatus for performing a machine learning task provided in the present application is further described below.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another device for executing a machine learning task according to an embodiment of the present disclosure.
In this embodiment, the apparatus may include:
the subtask execution module 210 is configured to run the browser plug-in to execute the received machine learning subtask by using idle resources to obtain an execution sub-result; the machine learning subtask is a task obtained by the target node cutting and distributing the generated machine learning task;
the execution result sending module 220 is configured to send the execution sub-result to the target node, so that the target node collates the received execution sub-results to obtain an execution result.
An embodiment of the present application further provides a server, including:
a memory for storing a computer program;
a processor for implementing the steps of the execution method according to the above embodiment and/or the steps of another execution method according to the above embodiment when executing the computer program.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the execution method according to the above embodiment and/or the steps of another execution method according to the above embodiment.
The computer-readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above detailed descriptions are provided for an execution device, two execution methods, a server, and a computer-readable storage medium for machine learning tasks. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.

Claims (6)

1. An apparatus for performing a machine learning task, comprising:
the machine learning task acquisition module is used for generating executable code according to the type of training data and an expected output result, and composing the training data and the executable code into a machine learning task;
the machine learning task cutting module is used for cutting the machine learning task into a plurality of machine learning subtasks;
the machine learning task distribution module is used for distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result;
the processing result summarizing module is used for collating all the received execution sub-results to obtain an execution result;
wherein, the machine learning task acquisition module includes:
the code template generating unit is used for determining a machine learning model according to the type of the training data and the expected output result and generating a code template according to the machine learning model;
the executable code acquisition unit is used for adding the received parameters into the code template to obtain the executable code;
and the learning task acquisition unit is used for composing the training data and the executable code into a machine learning task.
2. A method of performing a machine learning task, comprising:
generating executable code according to the type of training data and an expected output result, and composing the training data and the executable code into a machine learning task;
cutting the machine learning task into a plurality of machine learning subtasks;
distributing the plurality of machine learning subtasks to a plurality of nodes so that the browser plug-in of each node executes its corresponding machine learning subtask to obtain an execution sub-result and returns the execution sub-result;
collating all the received execution sub-results to obtain an execution result;
wherein generating executable code according to the type of training data and an expected output result, and composing the training data and the executable code into a machine learning task, includes:
determining a machine learning model according to the type of the training data and the expected output result, and generating a code template according to the machine learning model;
adding the received parameters into the code template to obtain the executable code;
and composing the training data and the executable code into a machine learning task.
3. The method of claim 2, wherein cutting the machine learning task into a plurality of machine learning subtasks comprises:
searching task cutting historical data of the same type according to the type of the machine learning task;
and cutting the machine learning task into a plurality of machine learning subtasks according to the task cutting historical data.
4. The method of claim 2, wherein distributing the plurality of machine learning subtasks to a plurality of nodes comprises:
and distributing the plurality of machine learning subtasks to corresponding nodes according to the performance resources of the plurality of nodes.
5. A server, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the execution method of any one of claims 2 to 4 when executing the computer program.
6. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method of execution according to any one of claims 2 to 4.
CN201910816452.XA 2019-08-30 2019-08-30 Execution device, related method and related device for machine learning task Active CN110688205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910816452.XA CN110688205B (en) 2019-08-30 2019-08-30 Execution device, related method and related device for machine learning task

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910816452.XA CN110688205B (en) 2019-08-30 2019-08-30 Execution device, related method and related device for machine learning task

Publications (2)

Publication Number Publication Date
CN110688205A CN110688205A (en) 2020-01-14
CN110688205B true CN110688205B (en) 2022-06-10

Family

ID=69107616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910816452.XA Active CN110688205B (en) 2019-08-30 2019-08-30 Execution device, related method and related device for machine learning task

Country Status (1)

Country Link
CN (1) CN110688205B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460832B (en) * 2020-03-27 2023-11-24 北京百度网讯科技有限公司 Method, device, system, equipment and computer storage medium for object coding
CN111310922A (en) * 2020-03-27 2020-06-19 北京奇艺世纪科技有限公司 Method, device, equipment and storage medium for processing deep learning calculation task
CN111782592A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Method, device and system for dividing data
CN113127446B (en) * 2021-04-01 2023-04-07 山东英信计算机技术有限公司 Cluster tuning method and device based on Ottertune service

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529682A (en) * 2016-10-28 2017-03-22 北京奇虎科技有限公司 Method and apparatus for processing deep learning task in big-data cluster
CN109299785A (en) * 2018-09-17 2019-02-01 浪潮软件集团有限公司 Method and device for realizing machine learning model
CN109324901A (en) * 2018-09-20 2019-02-12 北京京东尚科信息技术有限公司 Deep learning distributed computing method, system and node based on block chain
CN109816114A (en) * 2018-12-29 2019-05-28 大唐软件技术股份有限公司 A kind of generation method of machine learning model, device
CN109977988A (en) * 2018-12-29 2019-07-05 天津南大通用数据技术股份有限公司 The machine learning method and system classified in batches for magnanimity categorical data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10332001B2 (en) * 2016-12-15 2019-06-25 WaveOne Inc. Enhanced coding efficiency with progressive representation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529682A (en) * 2016-10-28 2017-03-22 北京奇虎科技有限公司 Method and apparatus for processing deep learning task in big-data cluster
CN109299785A (en) * 2018-09-17 2019-02-01 浪潮软件集团有限公司 Method and device for realizing machine learning model
CN109324901A (en) * 2018-09-20 2019-02-12 北京京东尚科信息技术有限公司 Deep learning distributed computing method, system and node based on block chain
CN109816114A (en) * 2018-12-29 2019-05-28 大唐软件技术股份有限公司 A kind of generation method of machine learning model, device
CN109977988A (en) * 2018-12-29 2019-07-05 天津南大通用数据技术股份有限公司 The machine learning method and system classified in batches for magnanimity categorical data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive tuning of compiler optimization based on machine learning; 李峰乾; Wanfang (《万方》); 2017-01-04; full text *

Also Published As

Publication number Publication date
CN110688205A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110688205B (en) Execution device, related method and related device for machine learning task
CN110413396B (en) Resource scheduling method, device and equipment and readable storage medium
CN114298322A (en) Federal learning method, device, system, electronic equipment and computer readable medium
CN111652468A (en) Business process generation method and device, storage medium and computer equipment
Ramaswamy et al. Turbocharging treewidth-bounded Bayesian network structure learning
CN106708875B (en) Feature screening method and system
CN113362118B (en) User electricity consumption behavior analysis method and system based on random forest
CN110909888A (en) Method, device and equipment for constructing generic decision tree and readable storage medium
Czarnul et al. Simulation of parallel similarity measure computations for large data sets
CN106992901B (en) Method and apparatus for resource scheduling analog pressure
CN110782340B (en) Interactive modeling method, device and equipment of decision tree model and storage medium
CN111967521A (en) Cross-border active user identification method and device
CN107544248B (en) Task optimization method and device in mobile robot
CN108830302B (en) Image classification method, training method, classification prediction method and related device
CN110851173A (en) Report generation method and device
CN115794358A (en) Cloud workflow task scheduling method and device, electronic equipment and storage medium
CN109901931B (en) Reduction function quantity determination method, device and system
CN110851647B (en) Intelligent distribution method, device and equipment for audio content flow and readable storage medium
CN112417304A (en) Data analysis service recommendation method and system for constructing data analysis process
CN110727442B (en) Data storage optimization method and system for embedded platform
González et al. A parameterized scheme of metaheuristics with exact methods for determining the principle of least action in data envelopment analysis
CN111179048B (en) SPARK-based user information personalized analysis method, device and system
CN113095645B (en) Heterogeneous unmanned aerial vehicle task allocation method aiming at emergency scene with uneven task distribution
CN110427356B (en) Parameter configuration method and equipment
CN111988389B (en) Request scheduling mechanism of server based on HTTP/3 protocol

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant