CN108924187A - Task processing method, device and terminal device based on machine learning - Google Patents


Info

Publication number
CN108924187A
CN108924187A (application CN201810578498.8A)
Authority
CN
China
Prior art keywords
request
parameter
merging
queue
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810578498.8A
Other languages
Chinese (zh)
Other versions
CN108924187B (en)
Inventor
秦铎浩
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810578498.8A
Publication of CN108924187A
Application granted
Publication of CN108924187B
Current legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention provides a machine-learning-based task processing method, device, and terminal device. The method includes: adding a task processing request to the corresponding task processing queue according to the type of task processing request of the machine learning model, where the task processing queues include a merge request queue and an ordered request queue, the merge request queue holding parameter update requests and the ordered request queue holding parameter acquisition requests; when a merging condition is met, merging the parameter update requests in the merge request queue and updating the model parameters of the machine learning model according to the merged parameter update request; and obtaining the model parameters of the machine learning model in the order in which the parameter acquisition requests are arranged in the ordered request queue and returning them to the corresponding clients. With the present invention, machine learning task processing requests can be processed by category, interdependence between different types of requests is avoided, and the task processing efficiency of the machine learning process is improved.

Description

Task processing method, device and terminal device based on machine learning
Technical field
The present invention relates to the field of computer technology, and in particular to a machine-learning-based task processing method, device, and terminal device.
Background technique
Machine learning (Machine Learning, ML) is essentially the process of adjusting a model's structure or parameters by learning the features in training data. During training, a machine learning model may update a very large number of model parameters. The training data may be in text format, where each line of text is one training sample; the total number of training samples (i.e., lines) can reach the trillion scale, and the number of features in a sample can reach the hundred-billion scale. As the number of features rises, the training process of machine learning slows down accordingly.
As shown in Figure 1, the training task of traditional machine learning is processed by multiple machines, divided into two basic roles: client and server. The client processes the training data, computing gradient values (deltas) from a single training sample or a small batch of training samples; each gradient value has the format of a kv pair (a parameter key and its value). The server continuously receives the kv gradient data sent by the clients (workers), as well as the clients' parameter acquisition requests. The server updates the values of the model parameters according to the gradient values provided by the clients, and extracts the parameter values requested by a client and returns them. The model parameters of the machine learning model are stored jointly by multiple machines on the server side. After receiving parameter update requests, the server applies them to its machines one by one in request order; for any single request, the response packet is sent back to the client only after the gradient values have been applied on all server machines.
Under this traditional processing framework, the server's computing capability is limited; when too many task processing requests are received, the tasks in the server easily accumulate or become congested, and processing efficiency is low.
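The order-dependent behavior described in the background can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class name, learning rate, and update rule are assumptions chosen only to show that each kv gradient request is applied in full before the next request proceeds.

```python
class TraditionalServer:
    """Illustrative sketch of the traditional parameter server flow."""

    def __init__(self, params):
        self.params = dict(params)  # model parameters, {key: value}

    def apply_gradient(self, kv_pairs, lr=0.1):
        # one client's kv gradient pairs are applied in full before the
        # next request is taken, matching the order-dependent flow above
        for key, grad in kv_pairs.items():
            self.params[key] = self.params.get(key, 0.0) - lr * grad


server = TraditionalServer({"w1": 1.0, "w2": 2.0})
server.apply_gradient({"w1": 0.5})              # request 1
server.apply_gradient({"w1": 0.5, "w2": 1.0})   # request 2 waits for request 1
assert abs(server.params["w1"] - 0.9) < 1e-9
assert abs(server.params["w2"] - 1.9) < 1e-9
```

Because every request blocks the next one, a burst of requests accumulates at the server, which is the congestion problem the invention addresses.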
Summary of the invention
Embodiments of the present invention provide a machine-learning-based task processing method, device, storage medium, and terminal device, to solve or alleviate one or more of the above technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a machine-learning-based task processing method, including:
adding a task processing request to the corresponding task processing queue according to the type of task processing request of the machine learning model, where the task processing queues include a merge request queue and an ordered request queue: if the type of the task processing request is a parameter update request, it is added to the merge request queue, and if the type is a parameter acquisition request, it is added to the ordered request queue; when a merging condition is met, merging the parameter update requests in the merge request queue and updating the model parameters of the machine learning model according to the merged parameter update request; and obtaining the model parameters of the machine learning model in the order in which the parameter acquisition requests are arranged in the ordered request queue and returning them to the corresponding clients.
With reference to the first aspect, in a first implementation of the first aspect, adding the task processing request to the corresponding task processing queue according to its type includes: determining the type of the task processing request; if the type is a parameter acquisition request, arranging the parameter acquisition request in the ordered request queue in order of request time; and if the type is a parameter update request, adding the parameter update request, according to the range to which the model parameters it updates belong, to the merge request queue that matches that parameter range.
With reference to the first aspect, in a second implementation of the first aspect, merging the parameter update requests in the merge request queue when a merging condition is met includes any of the following modes: determining whether the number of parameter update requests contained in the merge request queue reaches a count threshold, and merging the parameter update requests in the queue when it does; determining whether the time elapsed since the merge request queue last performed a merge operation reaches a duration threshold, and merging the parameter update requests in the queue when it does; or determining both whether the number of parameter update requests contained in the merge request queue reaches the count threshold and whether the time elapsed since the last merge operation reaches the duration threshold, and merging the parameter update requests in the queue when both conditions hold.
With reference to the first aspect and any of its implementations, in a third implementation of the first aspect, merging the parameter update requests in the merge request queue includes: combining, for each model parameter, the values contained for it in the parameter update requests of the merge request queue to obtain a unique value per model parameter, forming a parameter update group, and adding the parameter update group to the merged parameter update request.
With reference to the third implementation of the first aspect, in a fourth implementation of the first aspect, updating the model parameters of the machine learning model according to the merged parameter update request includes: computing a hash value of the merged parameter update request from its generation time or the current time; determining, from that hash value, the starting position for updating the parameters of the machine learning model; and, according to the merged parameter update request, updating one by one, beginning at the starting position, the values of the model parameters at the parameter positions corresponding to each parameter in the parameter update group.
With reference to the first aspect and any of its implementations, in a fifth implementation of the first aspect, the parameter acquisition request includes multiple requested model parameters, and obtaining the model parameters of the machine learning model in the order in which the parameter acquisition requests are arranged in the ordered request queue includes: for each parameter acquisition request in the ordered request queue, computing a hash value of the parameter acquisition request from the current time; determining, from that hash value, the starting position for obtaining the parameters of the machine learning model; and, according to the parameter acquisition request, obtaining one by one, beginning at the starting position, the values of the model parameters from the parameter positions corresponding to the requested model parameters.
With reference to the first aspect and any of its implementations, in a seventh implementation of the first aspect, the task processing method further includes: determining whether a write parameter position and a read parameter position satisfy a synchronization condition, where the write parameter position is the position used for updating parameters and the read parameter position is the position used for obtaining parameters; and, when the write parameter position and the read parameter position satisfy the synchronization condition, synchronizing the model parameters at the write parameter position to the read parameter position.
With reference to the first aspect and any of its implementations, in an eighth implementation of the first aspect, after updating the model parameters of the machine learning model according to the merged parameter update request, the method includes: sending reply packet information to the client corresponding to each parameter update request in the merge request queue, the reply packet information being used to prompt the receiving client that the parameter update has been completed.
In a second aspect, an embodiment of the present invention provides a machine-learning-based task processing device, including:
a task classification module, configured to add a task processing request to the corresponding task processing queue according to the type of task processing request of the machine learning model, where the task processing queues include a merge request queue and an ordered request queue, a parameter update request being added to the merge request queue and a parameter acquisition request being added to the ordered request queue; a merge update module, configured to merge the parameter update requests in the merge request queue when a merging condition is met, and to update the model parameters of the machine learning model according to the merged parameter update request; and a parameter acquisition module, configured to obtain the model parameters of the machine learning model in the order in which the parameter acquisition requests are arranged in the ordered request queue, and to return them to the corresponding clients.
The functions of the device may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the machine-learning-based task processing device includes a processor and a memory; the memory stores the program that enables the device to execute the machine-learning-based task processing method of the first aspect, and the processor is configured to execute the program stored in the memory. The machine-learning-based task processing device may further include a communication interface for communication between the device and other equipment or a communication network.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium for storing the computer software instructions used by the machine-learning-based task processing device, including the program involved in executing the machine-learning-based task processing method of the first aspect.
Any one of the above technical solutions has the following advantages or beneficial effects:
In the embodiments of the present invention, a task processing request is added to the corresponding task processing queue according to the type of task processing request of the machine learning model: a parameter update request is added to the merge request queue, and a parameter acquisition request is added to the ordered request queue. For the merge request queue, when a merging condition is met, the parameter update requests in the queue are merged and the model parameters of the machine learning model are updated according to the merged parameter update request, which improves the efficiency of updating parameters. For the ordered request queue, the model parameters of the machine learning model are obtained in the order in which the parameter acquisition requests are arranged and returned to the corresponding clients. The embodiments of the present invention thus process different types of task processing requests by category, avoid interdependence between different types of requests, and improve the task processing efficiency of the machine learning process.
The above summary is provided for the purposes of the specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and the following detailed description.
Detailed description of the invention
In the drawings, unless otherwise specified, identical reference numerals denote the same or similar components or elements throughout the several figures. The drawings are not necessarily drawn to scale. It should be understood that they depict only some embodiments disclosed in accordance with the present invention and should not be taken as limiting the scope of the invention.
Fig. 1 is a schematic framework diagram of a prior-art machine-learning-based task processing method;
Fig. 2 is a flow diagram of an embodiment of the machine-learning-based task processing method provided by the present invention;
Fig. 3 is a flow diagram of an embodiment of task processing request classification provided by the present invention;
Fig. 4 is a flow diagram of an embodiment of model parameter updating provided by the present invention;
Fig. 5 is a flow diagram of another embodiment of the machine-learning-based task processing method provided by the present invention;
Fig. 6 is a schematic diagram of an embodiment of the machine-learning-based task processing device provided by the present invention;
Fig. 7 is a schematic diagram of another embodiment of the machine-learning-based task processing device provided by the present invention;
Fig. 8 is a schematic framework diagram of an application example of the parameter update process provided by the present invention;
Fig. 9 is a schematic framework diagram of an application example of the parameter acquisition process provided by the present invention;
Fig. 10 is a structural schematic diagram of an embodiment of the terminal device provided by the present invention.
Specific embodiment
Hereinafter, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature rather than restrictive.
The types of task processing requests received by the server include requests to update parameter values (hereinafter, parameter update requests) and requests to obtain parameter values (hereinafter, parameter acquisition requests). If the server does not handle the two kinds of requests separately, but places them in the same task processing queue and processes them in queue order, dependencies arise between the requests during processing and the overall time is prolonged. In addition, when updating parameters, the server may need to update parameters in multiple parameter blocks, and each parameter update request can start only after all the parameters of the previous update request have been updated, further extending the processing time.
Referring to Fig. 2, an embodiment of the present invention provides a machine-learning-based task processing method, which can be applied to a server. The present embodiment includes steps S100 to S300, as follows:
S100: according to the type of task processing request of the machine learning model, add the task processing request to the corresponding task processing queue, where the task processing queues include a merge request queue and an ordered request queue; if the type of the task processing request is a parameter update request, it is added to the merge request queue, and if the type is a parameter acquisition request, it is added to the ordered request queue.
In the present embodiment, the server may provide a request-receiving interface for receiving the task processing requests sent by clients. A task processing request may be a parameter update request, which updates the model parameters of the machine learning model, or a parameter acquisition request, which obtains the model parameters of the machine learning model.
In the present embodiment, the task processing queues are divided into two classes, and the two classes of queues are processed separately, i.e., steps S200 and S300 are executed respectively.
S200: when a merging condition is met, merge the parameter update requests in the merge request queue, and update the model parameters of the machine learning model according to the merged parameter update request. The merging condition may be configured according to factors such as time and the queue's buffer space.
S300: obtain the model parameters of the machine learning model in the order in which the parameter acquisition requests are arranged in the ordered request queue, and return them to the corresponding clients.
In this embodiment of the present invention, a task processing request is added to the corresponding task processing queue according to the type of task processing request of the machine learning model: a parameter update request is added to the merge request queue, and a parameter acquisition request is added to the ordered request queue. For the merge request queue, when a merging condition is met, the parameter update requests in the queue are merged and the model parameters of the machine learning model are updated according to the merged parameter update request, which improves the efficiency of updating parameters. For the ordered request queue, the model parameters of the machine learning model are obtained in the order in which the parameter acquisition requests are arranged and returned to the corresponding clients. By processing different types of task processing requests by category, interdependence between different types of requests is avoided and the task processing efficiency of the machine learning process is improved.
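Steps S100 to S300 can be sketched minimally as a dispatch routine; the dictionary-based request format and the queue names are assumptions made here for illustration only.

```python
from collections import deque

merge_queue = deque()    # holds parameter update requests (step S200 consumes it)
ordered_queue = deque()  # holds parameter acquisition requests, FIFO (step S300)

def dispatch(request):
    """Step S100: classify one task processing request by its type."""
    # request: {"type": "update"|"get", ...} -- an assumed wire format
    if request["type"] == "update":
        merge_queue.append(request)
    elif request["type"] == "get":
        ordered_queue.append(request)  # arrival order == service order
    else:
        raise ValueError("unknown task processing request type")

dispatch({"type": "update", "params": {"key1": 0.2}})
dispatch({"type": "get", "keys": ["key1"]})
dispatch({"type": "update", "params": {"key2": -0.1}})
assert len(merge_queue) == 2 and len(ordered_queue) == 1
```

Updates and gets never wait on one another, which is the interdependence the embodiment avoids.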
In one possible implementation, as shown in Fig. 3, step S100 may include:
Step S101: the server determines the type of the task processing request;
Step S102: if the type of the task processing request is a parameter acquisition request, the server arranges the parameter acquisition request in the ordered request queue in order of request time;
Step S103: if the type of the task processing request is a parameter update request, the server adds the parameter update request, according to the range to which the model parameters it updates belong, to the merge request queue that matches that parameter range.
In the present embodiment, the server may include one ordered request queue and multiple merge request queues, each merge request queue being responsible for a portion of the task processing requests. For example, merge request queue A is responsible for requests whose model parameters fall in the range 1 to 10, merge request queue B for requests whose model parameters fall in the range 11 to 20, and so on. Each task processing request may involve multiple model parameters, for example: task processing request a includes model parameters 1 to 3, task processing request b includes model parameters 5 to 8, task processing request c includes model parameters 1, 3, and 4, and task processing request d includes model parameters 11 to 15. In this concrete example, task processing requests a, b, and c can be added to merge request queue A, and task processing request d can be added to merge request queue B.
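The range-based routing in this example can be sketched as follows. The patent does not fix how a request would be keyed if its parameters spanned two ranges, so routing by the smallest parameter id is an assumption made here for illustration, as are the names.

```python
from collections import defaultdict, deque

RANGE_SIZE = 10                     # queue A: params 1-10, queue B: 11-20, ...
merge_queues = defaultdict(deque)   # range index -> merge request queue

def route_update(request):
    """Step S103 sketch: pick a merge queue from the parameter range."""
    # assumption: route by the smallest parameter id in the request
    first_param = min(request["param_ids"])
    range_idx = (first_param - 1) // RANGE_SIZE
    merge_queues[range_idx].append(request)
    return range_idx

assert route_update({"param_ids": [1, 2, 3]}) == 0             # request a -> queue A
assert route_update({"param_ids": [5, 6, 7, 8]}) == 0          # request b -> queue A
assert route_update({"param_ids": [11, 12, 13, 14, 15]}) == 1  # request d -> queue B
```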
In one possible implementation, in step S200, for any merge request queue, the server can determine whether that queue meets the merging condition and, when it does, merge the parameter update requests in the queue. Any one of the following modes may be used to determine whether the merging condition is met:
First, determine whether the number of parameter update requests contained in the merge request queue reaches a count threshold; when it does, the merging condition is met and the parameter update requests in the merge request queue are merged.
Second, determine whether the time elapsed since the merge request queue last performed a merge operation reaches a duration threshold; when it does, the merging condition is met and the parameter update requests in the merge request queue are merged.
Third, determine both whether the number of parameter update requests contained in the merge request queue reaches the count threshold and whether the time elapsed since the last merge operation reaches the duration threshold; when both conditions hold, the merging condition is met and the parameter update requests in the merge request queue are merged.
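The three modes can be illustrated with a small predicate. The thresholds and function names below are assumptions for illustration; the third (combined) mode is shown, and dropping either clause yields the first or second mode.

```python
import time

COUNT_THRESHOLD = 3       # assumed count threshold
DURATION_THRESHOLD = 0.5  # assumed duration threshold, seconds

def should_merge(queue_len, last_merge_ts, now=None):
    """Third merging mode: count threshold AND duration threshold both met."""
    now = now if now is not None else time.monotonic()
    return queue_len >= COUNT_THRESHOLD and (now - last_merge_ts) >= DURATION_THRESHOLD

assert should_merge(3, last_merge_ts=0.0, now=1.0) is True
assert should_merge(2, last_merge_ts=0.0, now=1.0) is False   # too few requests
assert should_merge(5, last_merge_ts=0.9, now=1.0) is False   # merged too recently
```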
In one possible implementation, merging the parameter update requests in the merge request queue may include: combining, for each model parameter, the values that the parameter update requests in the queue contain for that parameter, obtaining a unique value per model parameter to form a parameter update group, and adding the parameter update group to the merged parameter update request.
In the present embodiment, any one merge request queue contains multiple parameter update requests, and each parameter update request may involve multiple model parameters, so the same model parameter may have multiple values in the queue (for example, the same parameter key corresponds to multiple value assignments). The values of each model parameter in the queue can then be combined into a unique value, for example by taking the mean of the parameter's values, or by computing a weighted sum of them. After merging, each model parameter in the queue corresponds to one value, forming a parameter update group, and the model parameters of the machine learning model are subsequently updated directly from this group. In this way, multiple merge request queues can work concurrently without interfering with one another, effectively solving the problem of task processing congestion.
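The assignment-merging step can be sketched as follows, using the mean, one of the combination modes mentioned above; the helper name and the per-request dictionary format are assumptions for illustration.

```python
from collections import defaultdict

def merge_requests(queue):
    """Combine a queue of parameter update requests into one parameter update group."""
    sums, counts = defaultdict(float), defaultdict(int)
    for request in queue:              # each request: {parameter key: value, ...}
        for key, value in request.items():
            sums[key] += value
            counts[key] += 1
    # one unique value per model parameter -> the parameter update group
    return {key: sums[key] / counts[key] for key in sums}

group = merge_requests([{"key1": 0.25, "key2": 0.5},
                        {"key1": 0.75},
                        {"key2": 0.0, "key3": 1.0}])
assert group == {"key1": 0.5, "key2": 0.25, "key3": 1.0}
```

A weighted sum instead of a plain mean, as the text also allows, would only change the last line of the helper.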
In one possible implementation, as shown in Fig. 4, updating the model parameters of the machine learning model according to the merged parameter update request in step S200 may include:
Step S201: compute the hash value of the merged parameter update request from its generation time or the current time.
Step S202: determine, from the hash value of the merged parameter update request, the starting position for updating the parameters of the machine learning model.
Step S203: according to the merged parameter update request, update one by one, beginning at the starting position, the values of the model parameters at the parameter positions corresponding to each parameter in the parameter update group.
In the present embodiment, the model parameters of the machine learning model may be divided across multiple machines, memories, or memory blocks for storage; for example, each memory block stores a part of the model parameters of the machine learning model. In one example, the model parameters comprise key1 to key20, and these 20 model parameters are stored in 4 memory blocks: memory block 1 handles key1 to key5, memory block 2 handles key6 to key10, memory block 3 handles key11 to key15, and memory block 4 handles key16 to key20. After the parameter update requests in a queue are merged, a hash value is computed from the generation time of the merged parameter update request or from the current time, to determine from which memory block the update starts (i.e., the starting position of the update). The model parameter update is then carried out based on the merged parameter update request.
Illustratively, assume that parameter update group M includes the values of model parameters key3 to key12, and that the memory block determined by the hash value of parameter update group M is memory block 1. Parameter update group M is then submitted to memory block 1, and the values of key3 to key5 in memory block 1 are updated. After memory block 1 finishes, the next memory block, memory block 2, is determined from the ordering of the memory blocks and the model parameters each block handles, and parameter update group M is passed to memory block 2, where key6 to key10 are updated. This continues in the same way until all model parameters in parameter update group M have been updated. In this way, the server can process multiple parameter update groups (multiple merged parameter update requests) simultaneously, realizing asynchronous chained processing and further improving task processing efficiency.
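The hash-determined starting block and the ring-order walk over the memory blocks can be sketched minimally as follows. The block layout mirrors the key1 to key20 example above, while the hash input, the additive update rule, and the function names are assumptions for illustration.

```python
BLOCKS = [  # block index -> the model parameter keys that block owns
    ["key1", "key2", "key3", "key4", "key5"],
    ["key6", "key7", "key8", "key9", "key10"],
    ["key11", "key12", "key13", "key14", "key15"],
    ["key16", "key17", "key18", "key19", "key20"],
]
store = {k: 0.0 for block in BLOCKS for k in block}  # all parameters start at 0

def apply_update_group(group, created_at):
    """Steps S201-S203 sketch: hash picks the start block, then ring traversal."""
    start = hash(created_at) % len(BLOCKS)   # S201/S202: starting position
    for step in range(len(BLOCKS)):          # S203: visit every block once
        block = BLOCKS[(start + step) % len(BLOCKS)]
        for key in block:
            if key in group:
                store[key] += group[key]     # each block updates only its own keys

apply_update_group({"key3": 1.0, "key12": 2.0}, created_at=1718000000)
assert store["key3"] == 1.0 and store["key12"] == 2.0 and store["key1"] == 0.0
```

Because different update groups start at different blocks, several groups can be in flight at once, one per block, which is the asynchronous chained processing described above.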
In one possible implementation, the parameter acquisition request may include one or more requested model parameters, and an implementation of the above step S300 may include:
First, for each parameter acquisition request in the ordered request queue, a hash value of the parameter acquisition request is calculated according to the current time. Then, according to the hash value of the parameter acquisition request, an initial position for acquiring parameters of the machine learning model is determined. Finally, according to the parameter acquisition request, acquisition starts from that initial position, and the assignments of the model parameters are obtained one by one from the parameter positions corresponding to the requested model parameters.
In the present embodiment, similarly to determining the initial position for updating parameters, the initial position for acquiring parameters may be determined according to the hash value of the parameter acquisition request, and the model parameters are then acquired starting from that determined initial position.
Illustratively, suppose parameter acquisition request N requests the assignments of model parameters key3~key12, and suppose the memory block determined by the hash value of parameter acquisition request N is memory block 2. The parameter acquisition request is then submitted to memory block 2, and model parameters key6~key10 are acquired from memory block 2 first. After memory block 2 has finished processing, the next memory block is determined to be memory block 3 according to the ordering of the memory blocks and the model parameters each block is responsible for, and parameter acquisition request N is submitted to memory block 3. Model parameters key11~key12 are then acquired from memory block 3. After memory block 3 has finished processing, the next memory block is determined to be memory block 1 in the same way, and parameter acquisition request N is submitted to memory block 1, from which model parameters key3~key5 are acquired. The same processing is performed for the other parameter acquisition requests. In this way, the server can process multiple parameter acquisition requests simultaneously, realizing asynchronous chained processing.
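The read-side counterpart of the chained traversal can be sketched in the same way. The block layout, stored values, and function names are illustrative assumptions; the point is only that an acquisition request visits the blocks in ring order from its hashed starting block and collects the requested assignments as it goes.

```python
# Illustrative sketch: chained parameter acquisition across memory blocks.
BLOCKS = {  # memory block -> model parameter keys it is responsible for
    1: ["key1", "key2", "key3", "key4", "key5"],
    2: ["key6", "key7", "key8", "key9", "key10"],
    3: ["key11", "key12", "key13", "key14", "key15"],
}
# Assumed stored assignments: keyN -> float(N), purely for the demo.
STORE = {k: float(i) for i, k in enumerate(
    [k for keys in BLOCKS.values() for k in keys], start=1)}

def read_chained(requested, first_block):
    """Visit the blocks in ring order starting at first_block and
    collect the assignments of the requested model parameters."""
    order = sorted(BLOCKS)
    i = order.index(first_block)
    result = {}
    for step in range(len(order)):
        block = order[(i + step) % len(order)]
        for key in BLOCKS[block]:
            if key in requested:
                result[key] = STORE[key]
    return result

# Request N asks for key3~key12 and (per its hash) starts at block 2.
got = read_chained({f"key{n}" for n in range(3, 13)}, first_block=2)
```

As with updates, the starting block only changes the visiting order, never the result, so concurrent acquisition requests can proceed on different blocks at the same time.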
In one possible implementation, a parameter position may include a write parameter position and a read parameter position, where the write parameter position is the position used for updating parameters and the read parameter position is the position used for acquiring parameters. Taking memory block 1 as an example, memory block 1 may include write parameter position 1 and read parameter position 1; write parameter position 1 serves write requests (for example, parameter update requests), read parameter position 1 serves read requests (for example, parameter acquisition requests), and read and write requests therefore do not interfere with each other.
Thus, as shown in Fig. 5, the task processing method provided in the embodiment of the present invention may further include:
Step S501, judging whether the write parameter position and the read parameter position of the parameter position meet a synchronization condition;
Step S502, when the write parameter position and the read parameter position meet the synchronization condition, synchronizing the model parameters in the write parameter position to the read parameter position.
For example, a clock may be set inside the server so that, when the time is up, the content of write parameter position 1 is automatically synchronized to read parameter position 1; the content of all write parameter positions may also be synchronized to the corresponding read parameter positions. The present embodiment can isolate parameter update requests from parameter acquisition requests in memory, avoiding interdependence between reads and writes and improving the task processing speed of the server.
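A minimal sketch of this double-buffering idea, under the assumption that the synchronization condition is simply "a sync is invoked" (e.g. by a timer); the class and method names are illustrative, not from the patent:

```python
import threading

class ParameterBlock:
    """Double-buffered parameter block: updates go to a write parameter
    position, acquisitions are served from a read parameter position,
    and sync() copies write -> read, so reads and writes never touch
    the same buffer in between."""

    def __init__(self, params):
        self._write = dict(params)   # write parameter position
        self._read = dict(params)    # read parameter position
        self._lock = threading.Lock()

    def update(self, key, value):    # serves parameter update requests
        self._write[key] = value

    def get(self, key):              # serves parameter acquisition requests
        return self._read[key]

    def sync(self):                  # the clock-driven synchronization
        with self._lock:
            self._read = dict(self._write)

block = ParameterBlock({"key1": 0.0})
block.update("key1", 3.5)
before = block.get("key1")  # still the old value: buffers are isolated
block.sync()
after = block.get("key1")   # now reflects the update
```

In a server, `sync()` would be driven by the internal clock described above, e.g. via `threading.Timer`, rather than called by hand.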
In one possible implementation, as shown in Fig. 5, after updating the model parameters of the machine learning model according to the merged parameter update request, the method further includes:
Step S503, the server may send return packet information to the client corresponding to each parameter update request in the merge request queue;
Step S504, the return packet information is used to prompt the client receiving it that the parameter update has been completed.
As shown in Fig. 6, the embodiment of the present invention provides a task processing apparatus based on machine learning, including:
a task classification module 100, configured to add a task processing request of the machine learning model to a corresponding task processing queue according to the type of the request, wherein the task processing queues include a merge request queue and an ordered request queue; if the type of the task processing request is a parameter update request, it is added to the merge request queue, and if the type of the task processing request is a parameter acquisition request, it is added to the ordered request queue;
a merge update module 200, configured to merge the parameter update requests in the merge request queue when a merging condition is met, and to update the model parameters of the machine learning model according to the merged parameter update request; and
a parameter acquisition module 300, configured to acquire the model parameters of the machine learning model in sequence according to the order in which the parameter acquisition requests are arranged in the ordered request queue, and to return them to the corresponding clients.
In one possible implementation, the task classification module 100 includes:
a request type judging unit, configured to judge the type of the task processing request;
an ordered queue arranging unit, configured to, if the type of the task processing request is a parameter acquisition request, arrange the parameter acquisition request in the ordered request queue according to the chronological order of its request time;
a merge queue adding unit, configured to, if the type of the task processing request is a parameter update request, add the parameter update request to the merge request queue that matches the range to which the model parameters to be updated in the parameter update request belong.
In one possible implementation, the merge update module includes any of the following units:
a first merge judging unit, configured to judge whether the number of parameter update requests contained in the merge request queue reaches a quantity threshold, and to merge the parameter update requests in the merge request queue when the number of parameter update requests contained in the merge request queue reaches the quantity threshold;
a second merge judging unit, configured to judge whether the time elapsed since the merge request queue last executed a merge operation reaches a duration threshold, and to merge the parameter update requests in the merge request queue when that time reaches the duration threshold;
a third merge judging unit, configured to judge both whether the number of parameter update requests contained in the merge request queue reaches the quantity threshold and whether the time elapsed since the merge request queue last executed a merge operation reaches the duration threshold, and to merge the parameter update requests in the merge request queue when the number of parameter update requests reaches the quantity threshold and the elapsed time reaches the duration threshold.
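The three trigger strategies can be sketched in a few lines; the class name, threshold values, and `mode` flag are illustrative assumptions, standing in for the three alternative judging units above:

```python
import time

class MergeRequestQueue:
    """Sketch of the three merge triggers: count threshold,
    time-since-last-merge threshold, or both at once."""

    def __init__(self, count_threshold=4, duration_threshold=1.0,
                 mode="count"):  # "count" | "duration" | "both"
        self.pending = []                      # queued update requests
        self.count_threshold = count_threshold
        self.duration_threshold = duration_threshold
        self.mode = mode
        self.last_merge = time.monotonic()     # last merge operation

    def should_merge(self):
        by_count = len(self.pending) >= self.count_threshold
        by_time = (time.monotonic() - self.last_merge
                   ) >= self.duration_threshold
        if self.mode == "count":
            return by_count
        if self.mode == "duration":
            return by_time
        return by_count and by_time            # "both"

q = MergeRequestQueue(count_threshold=3, mode="count")
q.pending = [{"key1": 1.0}, {"key1": 2.0}]
not_yet = q.should_merge()       # only 2 of 3 requests queued
q.pending.append({"key2": 5.0})
ready = q.should_merge()         # quantity threshold reached
```

The count-only trigger bounds batch size, the duration-only trigger bounds latency, and the combined trigger requires both, trading latency for larger merges.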
In one possible implementation, the merge update module 200 is specifically configured to: merge the assignments of the model parameters contained in each parameter update request in the merge request queue, obtain a unique assignment for each model parameter to form a parameter update group, and add the parameter update group to the merged parameter update request.
In one possible implementation, the merge update module 200 includes:
a first hash calculation unit, configured to calculate the hash value of the merged parameter update request according to its generation time or the current time;
a first position determination unit, configured to determine, according to the hash value of the merged parameter update request, the initial position for updating the parameters of the machine learning model;
a parameter update unit, configured to, according to the merged parameter update request, update the assignments of the model parameters at the parameter positions corresponding to each parameter of the parameter update group one by one, starting from the initial position for updating parameters.
In one possible implementation, the parameter acquisition request includes multiple requested model parameters, and the parameter acquisition module 300 includes:
a second hash calculation unit, configured to calculate, for each parameter acquisition request in the ordered request queue, the hash value of the parameter acquisition request according to the current time;
a second position determination unit, configured to determine, according to the hash value of the parameter acquisition request, the initial position for acquiring the parameters of the machine learning model;
a parameter acquisition unit, configured to, according to the parameter acquisition request, obtain the assignments of the model parameters one by one from the parameter positions corresponding to the requested model parameters, starting from the initial position for acquiring parameters.
In one possible implementation, as shown in Fig. 7, the task processing apparatus further includes:
a synchronization judging module 400, configured to judge whether the write parameter position and the read parameter position of the parameter position meet a synchronization condition, the write parameter position being the position used for updating parameters and the read parameter position being the position used for acquiring parameters;
a parameter synchronization module 500, configured to synchronize the model parameters in the write parameter position to the read parameter position when the write parameter position and the read parameter position meet the synchronization condition.
In one possible implementation, the apparatus further includes:
a return packet sending module 600, configured to, after the model parameters of the machine learning model are updated according to the merged parameter update request, send return packet information to the client corresponding to each parameter update request in the merge request queue, the return packet information being used to prompt the client receiving it that the parameter update has been completed.
The functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the task processing apparatus based on machine learning includes a processor and a memory, the memory being used to store the program that enables the task processing apparatus based on machine learning in the first aspect above to execute the task processing method based on machine learning, and the processor being configured to execute the program stored in the memory. The task processing apparatus based on machine learning may further include a communication interface for communication between the apparatus and other devices or a communication network.
Fig. 8 is a schematic diagram of an application example of the parameter update process provided by the embodiment of the present invention. The server of this application example may include a request receiving module, multiple thread tables, a merge request module, multiple parameter processing modules, and a return packet sending module.
The request receiving module is configured to receive task processing requests from clients and distribute each request to the corresponding thread table according to the range to which the model parameters to be processed in the request belong. Each thread table can receive requests from the request receiving module out of order. Each thread table corresponds to one or more merge request modules, and each thread table may uniformly deliver the requests it receives to the merge request module corresponding to that thread table.
When the merging condition is met, the merge request module merges the multiple received parameter update requests: for requests sharing the same key value, the multiple value values corresponding to that key value are merged. After merging, each key value corresponds to a single value value, forming a kv parameter group in which no key value is repeated. The merging policy (that is, the judgment of whether the merging condition is met) may take several forms, for example: setting a timeout and merging the requests in the queue once the timeout expires; or triggering the merge once the cache space temporarily storing kv pairs in the merge request module is full. The merging itself may take the mean of the multiple value values corresponding to a key value, or may combine them as a weighted sum according to business requirements. The merge request module then sends the merged kv pairs (sequential flows 1, 2, and 3 between the merge request module and the parameter processing modules in Fig. 8) to the subsequent parameter processing modules by way of hashing. This hash-based distribution is a lock-free means of scaling in the present embodiment.
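The two merging modes named above (mean and weighted combination) can be sketched as follows; the function name and the per-request weights are illustrative assumptions, not part of the patent:

```python
from collections import defaultdict

def merge_updates(requests, weights=None):
    """Merge parameter update requests that share key values.
    With no weights, the value values for a key are averaged; with one
    weight per request (an assumed scheme), a weighted average is
    computed instead."""
    if weights is None:
        weights = [1.0] * len(requests)
    sums = defaultdict(float)
    totals = defaultdict(float)
    for req, w in zip(requests, weights):
        for key, value in req.items():
            sums[key] += w * value
            totals[key] += w
    # One value per key: the kv parameter group with no repeated keys.
    return {key: sums[key] / totals[key] for key in sums}

reqs = [{"key1": 2.0, "key2": 8.0}, {"key1": 4.0}]
merged = merge_updates(reqs)                        # plain mean
weighted = merge_updates(reqs, weights=[1.0, 3.0])  # weighted mean
```

Either way, the output contains each key exactly once, which is what lets the server apply one merged request in place of many individual ones.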
Each parameter processing module may be handled by a single thread, and each parameter processing module handles the parameter values corresponding to a portion of the key values. For example, Fig. 8 includes three parameter processing modules, and the key values of the model parameters range from 1 to 10: parameter processing module 1 is responsible for the parameter values corresponding to key values 1-3, parameter processing module 2 handles those of key values 4-6, and parameter processing module N handles those of key values 7-10.
After the merged kv pairs have finished updating the corresponding model parameters, the parameter processing module sends update-completed information to the return packet sending module, which sends the corresponding return packet information to the corresponding client.
Fig. 9 is a schematic diagram of an application example of the parameter acquisition process provided by the embodiment of the present invention. The server of this application example may include a request receiving module, multiple thread tables, an ordered request module, multiple parameter processing modules, and a return packet sending module.
In the present embodiment, parameter acquisition requests are processed in a manner similar to the parameter update requests described above, the difference being that the merge request module is replaced by an ordered request module. The ordered request module here maintains a queue using a single thread: all received parameter acquisition requests are arranged in the order in which they enter the queue and are then processed. The merge request module may be maintained by a resource thread pool. By sorting the parameter acquisition requests in a single unified queue with the earliest request time first, the ordered request module processes the parameter acquisition requests received first with priority, preventing later-arriving parameter acquisition requests from being processed ahead of earlier ones. After a parameter processing module completes a parameter acquisition request, it returns all the acquired parameters to the return packet sending module, which returns the received parameters to the client that sent the parameter acquisition request.
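A minimal sketch of the single-threaded ordered request module, using a FIFO queue so arrival order is preserved; the worker function and the `handle` callback (standing in for dispatch to the parameter processing modules) are illustrative assumptions:

```python
import queue
import threading

def ordered_request_worker(request_q, handle):
    """Single-threaded ordered request module: drain a FIFO queue so
    earlier parameter acquisition requests are always served first."""
    while True:
        req = request_q.get()
        if req is None:          # sentinel: stop the worker
            break
        handle(req)              # dispatch to parameter processing
        request_q.task_done()

served = []
q = queue.Queue()                # FIFO: preserves arrival order
for name in ["get-A", "get-B", "get-C"]:
    q.put(name)
q.put(None)

worker = threading.Thread(
    target=ordered_request_worker, args=(q, served.append))
worker.start()
worker.join()
```

Because a single thread owns the queue, no lock is needed around the ordering itself; fairness between requests follows directly from the FIFO discipline.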
In this application example, the interior of each parameter processing module may maintain two blocks of memory, used respectively for reading data and writing data. Read requests go to the read buffer and write requests to the write buffer, which avoids mutual interference between read and write requests. A clock may also be set inside the parameter processing module to automatically synchronize the content of the write-request buffer to the read-request buffer.
The embodiment of the present invention realizes distributed processing of tasks in the machine learning process and can reduce congestion to the greatest extent during the processing of machine learning training tasks; read requests can be isolated from write requests in memory, avoiding mutual interference between them. In addition, during their respective operations, parameter update requests and parameter acquisition requests need not depend on the processing progress of the requests preceding them, so that the resource utilization of the server is relatively high, little time is consumed, and processing efficiency is high. Moreover, in updating or acquiring parameters, the processing mode is changed from first-distribute-then-gather to asynchronous chained processing, which helps improve request processing speed.
The embodiment of the present invention also provides a terminal device. As shown in Fig. 10, the terminal device includes a memory 21 and a processor 22, the memory 21 storing a computer program that can run on the processor 22. The processor 22 implements the task processing method based on machine learning of the above embodiments when executing the computer program. There may be one or more memories 21 and processors 22.
The device further includes:
a communication interface 23 for communication between the processor 22 and external devices.
The memory 21 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one magnetic disk memory.
If the memory 21, the processor 22, and the communication interface 23 are implemented independently, they may be connected to each other and complete mutual communication through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in Fig. 10, but this does not mean that there is only one bus or only one type of bus.
Optionally, in a specific implementation, if the memory 21, the processor 22, and the communication interface 23 are integrated on one chip, they may complete mutual communication through an internal interface.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, without mutual contradiction, those skilled in the art may combine the features of the different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means two or more, unless otherwise clearly and specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions that may be considered for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The computer-readable medium of the embodiment of the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection portion (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable storage medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
In the embodiments of the present invention, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, electric wire, optical cable, radio frequency (RF), and the like, or any suitable combination of the above.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following technologies known in the art or a combination thereof: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments may be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiment or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various changes or replacements within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A task processing method based on machine learning, characterized by comprising:
adding a task processing request of a machine learning model to a corresponding task processing queue according to the type of the request, wherein the task processing queues comprise a merge request queue and an ordered request queue; if the type of the task processing request is a parameter update request, adding it to the merge request queue, and if the type of the task processing request is a parameter acquisition request, adding it to the ordered request queue;
when a merging condition is met, merging the parameter update requests in the merge request queue, and updating the model parameters of the machine learning model according to the merged parameter update request;
acquiring the model parameters of the machine learning model in sequence according to the order in which the parameter acquisition requests are arranged in the ordered request queue, and returning them to the corresponding clients.
2. The task processing method based on machine learning according to claim 1, characterized in that adding the task processing request to the corresponding task processing queue according to the type of the task processing request of the machine learning model comprises:
judging the type of the task processing request;
if the type of the task processing request is a parameter acquisition request, arranging the parameter acquisition request in the ordered request queue according to the chronological order of its request time;
if the type of the task processing request is a parameter update request, adding the parameter update request to the merge request queue that matches the range to which the model parameters to be updated in the parameter update request belong.
3. The task processing method based on machine learning according to claim 1, characterized in that merging the parameter update requests in the merge request queue when the merging condition is met comprises any of the following:
judging whether the number of parameter update requests contained in the merge request queue reaches a quantity threshold, and merging the parameter update requests in the merge request queue when the number of parameter update requests contained in the merge request queue reaches the quantity threshold;
judging whether the time elapsed since the merge request queue last executed a merge operation reaches a duration threshold, and merging the parameter update requests in the merge request queue when that time reaches the duration threshold;
judging both whether the number of parameter update requests contained in the merge request queue reaches the quantity threshold and whether the time elapsed since the merge request queue last executed a merge operation reaches the duration threshold, and merging the parameter update requests in the merge request queue when the number of parameter update requests reaches the quantity threshold and the elapsed time reaches the duration threshold.
4. The task processing method based on machine learning according to any one of claims 1 to 3, characterized in that merging the parameter update requests in the merge request queue comprises:
merging the assignments of the model parameters contained in each parameter update request in the merge request queue, obtaining a unique assignment for each model parameter to form a parameter update group, and adding the parameter update group to the merged parameter update request.
5. The task processing method based on machine learning according to claim 4, characterized in that updating the model parameters of the machine learning model according to the merged parameter update request comprises:
calculating the hash value of the merged parameter update request according to its generation time or the current time;
determining the initial position for updating the parameters of the machine learning model according to the hash value of the merged parameter update request;
according to the merged parameter update request, updating the assignments of the model parameters at the parameter positions corresponding to each parameter of the parameter update group one by one, starting from the initial position for updating parameters.
6. The machine-learning-based task processing method according to any one of claims 1 to 3, wherein the parameter acquisition request includes a plurality of requested model parameters, and sequentially acquiring the model parameters of the machine learning model according to the order in which the parameter acquisition requests are arranged in the ordered request queue comprises:
for each parameter acquisition request in the ordered request queue, calculating a hash value of the parameter acquisition request according to the current time;
determining, according to the hash value of the parameter acquisition request, a start position for acquiring the parameters of the machine learning model;
according to the parameter acquisition request, starting from the start position for acquiring the parameters, acquiring one by one the assignments of the model parameters at the parameter positions corresponding to the requested model parameters.
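The acquisition side of claim 6 can be sketched similarly: hash the current time to pick a start position, then fetch the requested parameters one by one from that point. In this hypothetical sketch the start position only determines the traversal order over the requested names, which is one plausible reading; all names are illustrative.

```python
import hashlib
import time

def acquire_parameters(parameter_table, requested_names):
    """Hash the current time to pick a start position, then fetch the
    requested model parameters one by one from that point.
    parameter_table maps parameter names to their current assignments."""
    digest = hashlib.md5(str(time.time()).encode()).hexdigest()
    start = int(digest, 16) % max(len(requested_names), 1)
    ordered = requested_names[start:] + requested_names[:start]
    return {name: parameter_table[name] for name in ordered}
```

Regardless of the traversal order, the caller receives the assignment of every requested parameter.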
7. The machine-learning-based task processing method according to any one of claims 1 to 3, further comprising:
judging whether a write parameter position and a read parameter position among the parameter positions satisfy a synchronization condition, wherein the write parameter position includes a position used for updating parameters, and the read parameter position includes a position used for acquiring parameters;
when the write parameter position and the read parameter position satisfy the synchronization condition, synchronizing the model parameters at the write parameter position to the read parameter position.
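The write/read separation of claim 7 resembles double buffering: updates land in a write buffer while acquisition requests are served from a read buffer, and the write buffer is copied over when a synchronization condition holds. The concrete condition below (a pending-update counter reaching a threshold) is an assumption; the claim leaves it unspecified. All names are hypothetical.

```python
class DoubleBufferedParameters:
    """Sketch of claim 7: parameters live in a write position (receiving
    updates) and a read position (serving acquisition requests); the write
    buffer is synchronized to the read buffer when a condition is met."""

    def __init__(self, sync_threshold=2):
        self.write_params = {}
        self.read_params = {}
        self.pending_updates = 0
        self.sync_threshold = sync_threshold  # illustrative condition

    def update(self, name, value):
        self.write_params[name] = value
        self.pending_updates += 1
        if self.sync_condition_met():
            self.synchronize()

    def sync_condition_met(self):
        return self.pending_updates >= self.sync_threshold

    def synchronize(self):
        """Copy the write position's parameters to the read position."""
        self.read_params = dict(self.write_params)
        self.pending_updates = 0

    def get(self, name):
        return self.read_params.get(name)
```

Separating the two positions lets reads proceed against a stable snapshot while merged updates are applied to the write side.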
8. The machine-learning-based task processing method according to any one of claims 1 to 3, wherein after updating the model parameters of the machine learning model according to the merged parameter update request, the method comprises:
sending response information to the client corresponding to each parameter update request in the merging request queue, wherein the response information is used to prompt the client receiving the response information that the parameter update has been completed.
9. A machine-learning-based task processing apparatus, comprising:
a task classification module, configured to add a task processing request to a corresponding task processing queue according to the type of the task processing request of the machine learning model, wherein the task processing queues include a merging request queue and an ordered request queue; if the type of the task processing request is a parameter update request, the request is added to the merging request queue, and if the type of the task processing request is a parameter acquisition request, the request is added to the ordered request queue;
a merge update module, configured to merge the parameter update requests in the merging request queue when a merging condition is satisfied, and to update the model parameters of the machine learning model according to the merged parameter update request; and
a parameter acquisition module, configured to sequentially acquire the model parameters of the machine learning model according to the order in which the parameter acquisition requests are arranged in the ordered request queue, and to return them to the corresponding clients.
10. A machine-learning-based task processing terminal device, wherein the terminal device comprises:
One or more processors;
Storage device, for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the machine-learning-based task processing method according to any one of claims 1 to 9.
11. A computer-readable storage medium storing a computer program, wherein when the program is executed by a processor, the machine-learning-based task processing method according to any one of claims 1 to 9 is implemented.
CN201810578498.8A 2018-06-07 2018-06-07 Task processing method and device based on machine learning and terminal equipment Active CN108924187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810578498.8A CN108924187B (en) 2018-06-07 2018-06-07 Task processing method and device based on machine learning and terminal equipment


Publications (2)

Publication Number Publication Date
CN108924187A true CN108924187A (en) 2018-11-30
CN108924187B CN108924187B (en) 2020-05-08

Family

ID=64420220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810578498.8A Active CN108924187B (en) 2018-06-07 2018-06-07 Task processing method and device based on machine learning and terminal equipment

Country Status (1)

Country Link
CN (1) CN108924187B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631624A (en) * 2013-11-29 2014-03-12 华为技术有限公司 Method and device for processing read-write request
CN104714852A (en) * 2015-03-17 2015-06-17 华中科技大学 Parameter synchronization optimization method and system suitable for distributed machine learning
US20150379072A1 (en) * 2014-06-30 2015-12-31 Amazon Technologies, Inc. Input processing for machine learning
CN107622310A (en) * 2017-08-30 2018-01-23 第四范式(北京)技术有限公司 For performing the distributed system and its method of machine learning
CN107679563A (en) * 2017-09-15 2018-02-09 广东欧珀移动通信有限公司 Image processing method and device, system, computer equipment
CN107944566A (en) * 2017-11-28 2018-04-20 杭州云脑科技有限公司 A kind of machine learning method, host node, working node and system


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298031A (en) * 2019-05-28 2019-10-01 北京百度网讯科技有限公司 A kind of Directory Service system and model version consistency allocator
CN110321213A (en) * 2019-07-08 2019-10-11 杭州港盛软件科技有限公司 A kind of request processing method, device, equipment and readable storage medium storing program for executing
CN110874643A (en) * 2019-11-08 2020-03-10 中科寒武纪科技股份有限公司 Conversion method and device of machine learning instruction, board card, mainboard and electronic equipment
CN111124708A (en) * 2019-12-10 2020-05-08 广州小鹏汽车科技有限公司 Microservice-oriented batch inference method, server and computer-readable storage medium
CN111124671A (en) * 2019-12-10 2020-05-08 广州小鹏汽车科技有限公司 Batch inference dynamic waiting method, server, and computer-readable storage medium
CN111124671B (en) * 2019-12-10 2023-05-16 广州小鹏汽车科技有限公司 Batch reasoning dynamic waiting method, server and computer readable storage medium
CN111124708B (en) * 2019-12-10 2023-05-16 广州小鹏汽车科技有限公司 Microservice-oriented batch reasoning method, server and computer readable storage medium
WO2021142637A1 (en) * 2020-01-14 2021-07-22 Oppo广东移动通信有限公司 Artificial intelligence operation processing method and apparatus, system, terminal, and network device
CN113885902A (en) * 2021-08-23 2022-01-04 北京房江湖科技有限公司 Application program interface updating method
CN113885902B (en) * 2021-08-23 2022-10-11 贝壳找房(北京)科技有限公司 Application program interface updating method

Also Published As

Publication number Publication date
CN108924187B (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN108924187A (en) Task processing method, device and terminal device based on machine learning
CN106651097A (en) Data collection method, data collection device and data collection server based on crowd sourcing
CN110427256A (en) Job scheduling optimization method, equipment, storage medium and device priority-based
CN108009642A (en) Distributed machines learning method and system
CN105956666B (en) A kind of machine learning method and system
CN105989076A (en) Data statistical method and device
CN101150485A (en) A management method for network data transmission of zero copy buffer queue
CN110428124A (en) Method for allocating tasks, task allocation apparatus, storage medium and computer equipment
CN109960688A (en) A kind of file mergences method and apparatus
CN107077390A (en) A kind of task processing method and network interface card
CN105653652B (en) A kind of method of data synchronization and system
CN109558133A (en) A kind of page processing method, device and storage medium
CN110109868A (en) Method, apparatus and computer program product for index file
CN110134215A (en) Data processing method, device, electronic equipment and readable storage medium storing program for executing
CN107291629A (en) A kind of method and apparatus for accessing internal memory
CN116680060B (en) Task allocation method, device, equipment and medium for heterogeneous computing system
CN108629560A (en) Task distributing method, electronic equipment and storage medium
CN103440182A (en) Adaptive allocation method and device and adaptive replica consistency method
CN107633001A (en) Hash partition optimization method and device
CN104216834B (en) A kind of method of internal storage access, buffer scheduling device and memory modules
US6915248B1 (en) Method and apparatus for transforming test stimulus
CN110515872A (en) Direct memory access method, apparatus, dedicated computing chip and heterogeneous computing system
CN116089477B (en) Distributed training method and system
CN110175073A (en) Dispatching method, sending method, device and the relevant device of data exchange operation
CN109213561A (en) The equipment scheduling method and device of virtual desktop based on container

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant