CN109978149A - Dispatching method and relevant apparatus - Google Patents
Dispatching method and relevant apparatus
- Publication number
- CN109978149A (application number CN201711467783.4A)
- Authority
- CN
- China
- Prior art keywords
- computing device
- circuit
- data
- target
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
Abstract
The embodiments of the present application disclose a scheduling method and related apparatus. The method is applied to a server comprising multiple computing devices and includes: receiving M operation requests; selecting at least one target computing device from the multiple computing devices according to the processing task of each of the M operation requests, and determining the operation instruction corresponding to each of the at least one target computing device; computing the operation data corresponding to the M operation requests according to the operation instruction corresponding to each target computing device, obtaining M final operation results; and sending each of the M final operation results to the corresponding electronic equipment. The embodiments of the present application can select, within the server, computing devices matched to the received operation requests to perform the computation, improving the operating efficiency of the server.
Description
Technical field
This application relates to the field of computer technology, and in particular to a scheduling method and related apparatus.
Background technique
Neural networks underlie many current artificial-intelligence applications. As the range of neural network applications keeps expanding, various neural network models are stored on servers or in cloud computing services, which perform computation for the operation requests submitted by users. Faced with numerous neural network models and large batches of requests, how to improve the operating efficiency of the server is a technical problem to be solved by those skilled in the art.
Summary of the invention
The embodiments of the present application propose a scheduling method and related apparatus, which can select, within a server, computing devices matched to the received operation requests to perform the computation, improving the operating efficiency of the server.
In a first aspect, an embodiment of the present application provides a scheduling method based on a server comprising multiple computing devices. The method includes:

receiving M operation requests, where M is a positive integer;

selecting, from the multiple computing devices, at least one target computing device corresponding to the M operation requests;

performing, by each of the at least one target computing device, the computation of its corresponding operation request, obtaining M final operation results; and

sending each of the M final operation results to the corresponding electronic equipment.
In a second aspect, an embodiment of the present application provides a server comprising multiple computing devices, including:

a receiving unit, configured to receive M operation requests;

a scheduling unit, configured to select, from the multiple computing devices, at least one target computing device corresponding to the M operation requests;

an arithmetic unit, configured to perform, by each of the at least one target computing device, the computation of its corresponding operation request, obtaining M final operation results; and

a transmission unit, configured to send each of the M final operation results to the corresponding electronic equipment.
In a third aspect, an embodiment of the present application provides another server, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for some or all of the steps described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect.
With the above scheduling method and related apparatus, based on the received M operation requests, target computing devices for executing the M operation requests are selected from the multiple computing devices included in the server, each target computing device performs the computation of its corresponding operation request, and the final operation result corresponding to each operation request is sent to the corresponding electronic equipment. That is, computing resources are allocated uniformly according to the operation requests, so that the multiple computing devices in the server cooperate effectively, thereby improving the operating efficiency of the server.
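The four steps summarized above can be sketched end to end as follows. This is a hedged illustration only; the `Device` structure, the `schedule` function and the model names are assumptions made for readability, not the patented implementation.

```python
# End-to-end sketch of the scheduling method: receive requests, choose a
# target device per request, run the computation, return the results.

class Device:
    def __init__(self, name, models):
        self.name, self.models = name, models

    def run(self, request):
        # Stand-in for the real neural network computation.
        return f"{request['model']} result from {self.name}"

def schedule(requests, devices):
    results = {}
    for req in requests:                   # step 1: M operation requests
        # step 2: choose a target device that holds the requested model
        dev = next(d for d in devices if req["model"] in d.models)
        results[req["id"]] = dev.run(req)  # step 3: perform the operation
    return results                         # step 4: results sent back

devices = [Device("dev0", {"LeNet"}), Device("dev1", {"VGG"})]
reqs = [{"id": 1, "model": "VGG"}, {"id": 2, "model": "LeNet"}]
print(schedule(reqs, devices))
# {1: 'VGG result from dev1', 2: 'LeNet result from dev0'}
```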
Detailed description of the invention
In order to illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application; those of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
Wherein:
Fig. 1 is a schematic structural diagram of a server provided by an embodiment of the present application;
Fig. 1a is a schematic structural diagram of a computing unit provided by an embodiment of the present application;
Fig. 1b is a schematic structural diagram of a main processing circuit provided by an embodiment of the present application;
Fig. 1c is a schematic diagram of data distribution in a computing unit provided by an embodiment of the present application;
Fig. 1d is a schematic diagram of data return in a computing unit provided by an embodiment of the present application;
Fig. 1e is a schematic diagram of the operation of a neural network structure provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of a scheduling method provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of another server provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of another server provided by an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative efforts shall fall within the protection scope of the present application.
It should be understood that, when used in this specification and the appended claims, the terms "including" and "comprising" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terms used in this specification are for the purpose of describing specific embodiments only and are not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
The embodiments of the present application propose a scheduling method and related apparatus, which can select, within a server, computing devices matched to the received operation requests to perform the computation, improving the operating efficiency of the server. The application is further described below with reference to specific embodiments and the accompanying drawings.
Please refer to Fig. 1, which is a schematic structural diagram of a server provided by an embodiment of the present application. As shown in Fig. 1, the server includes multiple computing devices. A computing device includes, but is not limited to, a server computer, and may also be a personal computer (PC), a network PC, a minicomputer, a mainframe computer, and the like.
In the present application, the computing devices included in the server establish connections and transmit data with one another by wire or wirelessly, and each computing device includes at least one computing carrier, such as a central processing unit (CPU), a graphics processing unit (GPU), or a processor board. The server involved in the present application may also be a cloud server that provides cloud computing services for electronic equipment.
Each computing carrier may include at least one computing unit for neural network computation, such as a processing chip. The specific structure of the computing unit is not limited. Please refer to Fig. 1a, which is a schematic structural diagram of a computing unit. As shown in Fig. 1a, the computing unit includes a main processing circuit, basic processing circuits and branch processing circuits. Specifically, the main processing circuit is connected to the branch processing circuits, and each branch processing circuit is connected to at least one basic processing circuit.
The branch processing circuit is configured to transmit and receive data of the main processing circuit or the basic processing circuits.
Referring to Fig. 1b, a schematic structural diagram of the main processing circuit: as shown in Fig. 1b, the main processing circuit may include a register and/or an on-chip cache circuit, and may further include a control circuit, a vector operator circuit, an ALU (arithmetic and logic unit) circuit, an accumulator circuit, a DMA (direct memory access) circuit, and the like. In practical applications, the main processing circuit may additionally include other circuits such as a conversion circuit (for example, a matrix transposition circuit), a data rearrangement circuit, or an activation circuit.

The main processing circuit further includes a data transmitting circuit and a data receiving circuit or interface. A data distribution circuit and a data broadcasting circuit may be integrated in the data transmitting circuit; in practical applications, they may also be arranged separately. The data transmitting circuit and the data receiving circuit may likewise be integrated to form a data transceiving circuit. Broadcast data is data that needs to be sent to every basic processing circuit. Distribution data is data that needs to be selectively sent to some of the basic processing circuits; the specific selection may be determined by the main processing circuit according to its load and the calculation method. In the broadcast sending mode, the broadcast data is sent to each basic processing circuit in broadcast form. (In practical applications, the broadcast data may be sent to each basic processing circuit by one broadcast or by multiple broadcasts; the specific embodiments of the present application do not limit the number of broadcasts.) In the distribution sending mode, the distribution data is selectively sent to some of the basic processing circuits.
When distributing data, the control circuit of the main processing circuit transmits data to some or all of the basic processing circuits, and the data may be the same or different. Specifically, if data is sent in distribution mode, the data received by each receiving basic processing circuit may be different, and naturally the data received by some basic processing circuits may also be the same. When broadcasting data, the control circuit of the main processing circuit transmits data to some or all of the basic processing circuits, and each receiving basic processing circuit receives the same data. That is, broadcast data may include data that all basic processing circuits need to receive, while distribution data may include data that some of the basic processing circuits need to receive. The main processing circuit may send the broadcast data to all branch processing circuits by one or more broadcasts, and the branch processing circuits forward the broadcast data to all basic processing circuits.
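The difference between the two transfer modes described above can be sketched as follows. This is an illustrative simplification, not the circuit implementation: "broadcast" delivers an identical copy to every basic processing circuit, while "distribute" splits the data so each circuit may receive a different piece.

```python
def broadcast(data, circuits):
    # Every basic processing circuit receives an identical copy.
    return {c: data for c in circuits}

def distribute(data, circuits):
    # The data is split; each circuit may receive a different piece.
    n = len(circuits)
    chunk = (len(data) + n - 1) // n
    return {c: data[i * chunk:(i + 1) * chunk] for i, c in enumerate(circuits)}

circuits = ["basic0", "basic1", "basic2"]
print(broadcast([1, 2, 3, 4, 5, 6], circuits))   # same list for all three
print(distribute([1, 2, 3, 4, 5, 6], circuits))  # [1, 2], [3, 4], [5, 6]
```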
Optionally, the vector operator circuit of the main processing circuit can perform vector operations, including but not limited to: addition, subtraction, multiplication and division of two vectors; addition, subtraction, multiplication and division of a vector and a constant; or an arbitrary operation on each element of a vector. Continuous operations may specifically be addition, subtraction, multiplication or division of a vector and a constant, activation operations, accumulation operations, and the like.
Each basic processing circuit may include a basic register and/or a basic on-chip cache circuit, and may further include one or any combination of an inner-product operator circuit, a vector operator circuit, an accumulator circuit, and the like. The inner-product operator circuit, the vector operator circuit and the accumulator circuit may be integrated circuits, or may be separately arranged circuits.
The connection structure of the branch processing circuits and the basic circuits may be arbitrary and is not limited to the H-shaped structure of Fig. 1b. Optionally, the path from the main processing circuit to the basic circuits is a broadcast or distribution structure, and the path from the basic circuits to the main processing circuit is a gather structure. Broadcast, distribution and gather are defined as follows.

The data transfer manner from the main processing circuit to the basic circuits may include the following:

The main processing circuit is connected to multiple branch processing circuits respectively, and each branch processing circuit is in turn connected to multiple basic circuits respectively.

The main processing circuit is connected to one branch processing circuit, which in turn connects to another branch processing circuit, and so on, with multiple branch processing circuits connected in series; each branch processing circuit is then connected to multiple basic circuits respectively.

The main processing circuit is connected to multiple branch processing circuits respectively, and each branch processing circuit is connected in series to multiple basic circuits.

The main processing circuit is connected to one branch processing circuit, which in turn connects to another branch processing circuit, and so on, with multiple branch processing circuits connected in series; each branch processing circuit is then connected in series to multiple basic circuits.
When distributing data, the main processing circuit transmits data to some or all of the basic circuits, and the data received by each receiving basic circuit may be different.

When broadcasting data, the main processing circuit transmits data to some or all of the basic circuits, and each receiving basic circuit receives the same data.

When gathering data, some or all of the basic circuits transmit data to the main processing circuit. It should be noted that the computing unit shown in Fig. 1a may be an independent physical chip; in practical applications, the computing unit may also be integrated in another chip (such as a CPU or GPU), and the specific embodiments of the present application do not limit the physical presentation form of the chip.
Referring to Fig. 1c, a schematic diagram of data distribution in the computing unit: the arrows in Fig. 1c indicate the distribution direction of the data. As shown in Fig. 1c, after the main processing circuit receives external data, it splits the data and distributes it to the multiple branch processing circuits, and the branch processing circuits send the split data to the basic processing circuits.

Referring to Fig. 1d, a schematic diagram of data return in the computing unit: the arrows in Fig. 1d indicate the return direction of the data. As shown in Fig. 1d, the basic processing circuits return data (for example, inner-product results) to the branch processing circuits, and the branch processing circuits return the data to the main processing circuit.
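The split-compute-return path of Figs. 1c and 1d can be sketched numerically. This is a minimal illustration under simplifying assumptions (one matrix row per basic circuit, branch circuits elided): the main circuit splits the data, each basic circuit computes a partial inner product, and the partial results travel back up.

```python
def basic_circuit(row, vector):
    # Inner product computed by one basic processing circuit.
    return sum(a * b for a, b in zip(row, vector))

def main_circuit(matrix_rows, vector):
    # Split: one row per basic processing circuit (a simplification),
    # then gather the partial results back (the path of Fig. 1d).
    partials = [basic_circuit(row, vector) for row in matrix_rows]
    return partials

print(main_circuit([[1, 2], [3, 4]], [1, 1]))  # [3, 7]
```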
The input data may specifically be a vector, a matrix, or multidimensional (three-dimensional, four-dimensional or higher) data; a specific value of the input data may be called an element of the input data.
An embodiment of the present disclosure also provides a calculation method of the computing unit shown in Fig. 1a. The calculation method is applied to neural network computation; specifically, the computing unit may be used to perform operations on the input data and weight data of one or more layers of a multilayer neural network.

Specifically, the computing unit described above is used to perform operations on the input data and weight data of one or more layers of a trained multilayer neural network, or on the input data and weight data of one or more layers of a multilayer neural network in forward operation.
The above operations include, but are not limited to, one or any combination of: convolution operation, matrix-matrix multiplication, matrix-vector multiplication, bias operation, fully connected operation, GEMM operation, GEMV operation and activation operation.
GEMM computation refers to the matrix-matrix multiplication operation in the BLAS library. Its usual representation is C = alpha*op(S)*op(P) + beta*C, where S and P are the two input matrices, C is the output matrix, alpha and beta are scalars, and op represents some operation on matrix S or P. In addition, some auxiliary integers serve as parameters describing the width and height of matrices S and P.

GEMV computation refers to the matrix-vector multiplication operation in the BLAS library. Its usual representation is C = alpha*op(S)*P + beta*C, where S is the input matrix, P is the input vector, C is the output vector, alpha and beta are scalars, and op represents some operation on matrix S.
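The two BLAS-style forms above can be illustrated with a small NumPy sketch; here op is taken to be either the identity or a transpose, passed in as a function, which matches the usual GEMM/GEMV parameterization without reproducing any particular library's API.

```python
# Sketch of C = alpha*op(S)*op(P) + beta*C (GEMM) and
#           c = alpha*op(S)*p + beta*c (GEMV).
import numpy as np

def gemm(alpha, S, P, beta, C, op_s=lambda x: x, op_p=lambda x: x):
    return alpha * op_s(S) @ op_p(P) + beta * C

def gemv(alpha, S, p, beta, c, op_s=lambda x: x):
    return alpha * op_s(S) @ p + beta * c

S = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.eye(2)
C = np.zeros((2, 2))
print(gemm(1.0, S, P, 0.0, C))                     # alpha=1, beta=0: equals S
print(gemv(2.0, S, np.ones(2), 0.0, np.zeros(2)))  # 2 * row sums: [6., 14.]
```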
The present application does not limit the connection relationship between the computing carriers in a computing device, which may be homogeneous or heterogeneous computing carriers, nor does it limit the connection relationship between the computing units in a computing carrier. Executing parallel tasks with the above heterogeneous computing carriers or computing units can improve operating efficiency.

The computing device shown in Fig. 1 includes at least one computing carrier, and each computing carrier in turn includes at least one computing unit. That is, in the present application, the selected target computing device depends on the connection relationship between the computing devices, on the specific physical hardware support situation of each computing device (such as the deployed neural network models and network resources), and on the attribute information of the operation request. Computing carriers of the same type can be deployed in the same computing device; for example, the computing carriers used for forward propagation can be deployed on the same computing device rather than on different computing devices, which effectively reduces the overhead of communication between computing devices and helps improve operating efficiency. A specific neural network model can also be deployed on a specific computing carrier; that is, when the server receives an operation request directed at a specified neural network, it calls the computing carrier corresponding to the specified neural network to execute the operation request, saving the time of determining the processing task and improving operating efficiency.
In the present application, publicly available and widely used neural network models serve as the specified neural network models (for example, among convolutional neural networks (CNN): LeNet, AlexNet, ZFNet, GoogLeNet, VGG and ResNet).
Optionally, the operation demand of each specified neural network model in a specified neural network model set and the hardware attributes of each of the multiple computing devices are obtained, yielding multiple operation demands and multiple hardware attributes; according to the multiple operation demands and the multiple hardware attributes, each specified neural network model in the specified neural network model set is deployed on its corresponding specified computing device.
The specified neural network model set includes multiple specified neural network models. The hardware attributes of a computing device include the network bandwidth, memory capacity, processor clock frequency and the like of the computing device itself, as well as the hardware attributes of the computing carriers or computing units in the computing device. That is, selecting the computing device corresponding to the operation demand of a specified neural network model according to the hardware attributes of each computing device can avoid server failures caused by untimely processing and improve the computation support capability of the server.
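The deployment step described above can be sketched as matching each model's operation demand against device attributes. The attribute names ("memory", "bandwidth") and the first-fit rule are assumptions for illustration; the patent does not fix a particular matching policy.

```python
# Place each specified model on a device whose attributes cover its demand.
def deploy(models, devices):
    placement = {}
    for name, demand in models.items():
        # First-fit: choose the first device meeting every demanded attribute.
        for dev, attrs in devices.items():
            if all(attrs.get(k, 0) >= v for k, v in demand.items()):
                placement[name] = dev
                break
    return placement

models = {"AlexNet": {"memory": 2, "bandwidth": 1},
          "VGG": {"memory": 8, "bandwidth": 4}}
devices = {"dev0": {"memory": 4, "bandwidth": 2},
           "dev1": {"memory": 16, "bandwidth": 8}}
print(deploy(models, devices))  # {'AlexNet': 'dev0', 'VGG': 'dev1'}
```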
The input neurons and output neurons mentioned in the present application do not mean the neurons in the input layer and output layer of the entire neural network. Rather, for any two adjacent layers of the network, the neurons in the lower layer of the network feed-forward operation are the input neurons, and the neurons in the upper layer of the network feed-forward operation are the output neurons. Taking a convolutional neural network as an example: suppose a convolutional neural network has L layers, and K = 1, 2, ..., L-1. For layer K and layer K+1, layer K is called the input layer, whose neurons are the input neurons, and layer K+1 is called the output layer, whose neurons are the output neurons. That is, except for the top layer, each layer can serve as an input layer, and the next layer is the corresponding output layer.
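The adjacent-layer convention above can be made concrete with a one-line sketch: for an L-layer network, every pair (K, K+1) with K = 1, ..., L-1 is an input-layer/output-layer pair.

```python
# Enumerate the (input layer, output layer) pairs of an L-layer network.
def layer_pairs(num_layers):
    # Every layer except the top one acts as an input layer exactly once.
    return [(k, k + 1) for k in range(1, num_layers)]

print(layer_pairs(4))  # [(1, 2), (2, 3), (3, 4)]
```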
The operations mentioned above are all operations of one layer of the neural network. For a multilayer neural network, the implementation process is shown in Fig. 1e, in which the dashed arrows indicate the backward operation and the solid arrows indicate the forward operation. In the forward operation, after the execution of the previous layer of the artificial neural network is completed, the output neurons obtained by the previous layer are used as the input neurons of the next layer for computation (or certain operations are performed on those output neurons before they become the input neurons of the next layer), and at the same time the weights are replaced with the weights of the next layer. In the backward operation, after the backward operation of the previous layer of the artificial neural network is completed, the input-neuron gradients obtained by the previous layer are used as the output-neuron gradients of the next layer for computation (or certain operations are performed on those input-neuron gradients before they become the output-neuron gradients of the next layer), and at the same time the weights are replaced with the weights of the next layer.
The forward operation of a neural network is the computing process from the input data to the final output data. The propagation direction of the backward operation is opposite to that of the forward operation: it is the computing process, in reverse of the forward operation, for the loss between the final output data and the expected output data, or for the loss function corresponding to that loss. By cyclically performing the forward operation and the backward operation on the information, and correcting the weights of each layer by gradient descent on the loss or the loss-function gradient, the weights of each layer are adjusted; this is the learning and training process of the neural network, and it reduces the loss of the network output.
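The training cycle just described can be sketched numerically. This is a minimal illustration, not the patented circuit's algorithm: a 1-D "network" of scalar weights, where the forward pass feeds each layer's output to the next layer as input, and the backward pass propagates the loss gradient layer by layer while each weight is corrected by gradient descent.

```python
def forward(weights, x):
    activations = [x]
    for w in weights:                            # the output of one layer is
        activations.append(w * activations[-1])  # the input of the next layer
    return activations

def backward(weights, activations, target, lr=0.1):
    # d(loss)/d(output) for loss = 0.5 * (output - target)^2
    grad = activations[-1] - target
    new_weights = []
    for i in reversed(range(len(weights))):
        dw = grad * activations[i]   # gradient w.r.t. this layer's weight
        grad = grad * weights[i]     # gradient passed down to the layer below
        new_weights.append(weights[i] - lr * dw)
    return list(reversed(new_weights))

weights = [0.5, 0.5]
for _ in range(50):                  # forward and backward in cycles
    acts = forward(weights, 1.0)
    weights = backward(weights, acts, target=1.0)
print(round(forward(weights, 1.0)[-1], 3))  # output approaches the target 1.0
```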
Please refer to Fig. 2, which is a schematic flowchart of a scheduling method provided by an embodiment of the present application. As shown in Fig. 2, the method is applied to the server shown in Fig. 1 and involves the electronic equipment allowed to access the server. The electronic equipment may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
201: Receive M operation requests.

In the present application, M is a positive integer. The server receives the M operation requests sent by the electronic equipment allowed to access it. Neither the number of electronic equipments nor the number of operation requests sent by each electronic equipment is limited; that is, the M operation requests may be sent by one electronic equipment or by multiple electronic equipments.
An operation request includes attribute information such as the processing task (a training task or a test task) and the target neural network model involved in the computation. A training task is used to train the target neural network model, that is, to perform forward operation and backward operation on the neural network model until training is completed; a test task is used to perform one forward operation according to the target neural network model.

The target neural network model may be a neural network model uploaded by the user through the electronic equipment when sending the operation request, or a neural network model stored in the server, and so on. The present application does not limit the number of target neural network models either; that is, each operation request may correspond to at least one target neural network model.
202: Select at least one target computing device corresponding to the M operation requests from the multiple computing devices.

The present application does not limit how the target computing devices are chosen; they may be chosen according to the number of operation requests and the number of target neural network models. For example, if there is one operation request corresponding to one target neural network model, the operation instructions corresponding to the request can be classified into parallel instructions and serial instructions, the parallel instructions distributed to different target computing devices for computation, and the serial instructions distributed to the target computing devices good at processing them, improving the operating efficiency of each instruction and thereby the overall operating efficiency. If there are multiple operation requests corresponding to one target neural network model, a target computing device containing the target neural network model can batch-process the operation data corresponding to the multiple operation requests, avoiding the time wasted by repeated computation and the overhead generated by communication between different computing devices, thereby improving operating efficiency. If there are multiple operation requests corresponding to multiple target neural network models, the computing devices good at processing each target neural network model, or on which the target neural network model was previously deployed, can be looked up to complete the operation requests, eliminating the time of network initialization and improving operating efficiency.
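The batching case above (multiple requests, one model) can be sketched as grouping requests by their target model so that each group is handed to one device holding that model. The request and model names are assumptions for illustration.

```python
# Group operation requests by target model so each model is initialized once
# and requests sharing a model can be batch-processed on one device.
from collections import defaultdict

def group_requests(requests):
    # requests: list of (request_id, model_name)
    batches = defaultdict(list)
    for req_id, model in requests:
        batches[model].append(req_id)
    return dict(batches)

requests = [(1, "ResNet"), (2, "VGG"), (3, "ResNet"), (4, "ResNet")]
print(group_requests(requests))  # {'ResNet': [1, 3, 4], 'VGG': [2]}
```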
Optionally, if the processing task of a target operation request is a test task, a computing device that includes the forward operation of the target neural network model corresponding to that task is chosen from the multiple computing devices, obtaining a first target computing device; if the processing task of the target operation request is a training task, a computing device that includes both the forward operation and the backward training of the corresponding target neural network model is chosen from the multiple computing devices, obtaining the first target computing device.
Here, the target operation request is any one of the M operation requests, and the first target computing device is the target computing device, among the at least one target computing device, that corresponds to the target operation request.
That is, if the processing task of the target operation request is a test task, the first target computing device is a computing device capable of executing the forward operation of the target neural network model; when the processing task is a training task, the first target computing device is a computing device capable of executing both the forward operation and the backward training of the target neural network model. By handling each operation request with a dedicated computing device, the accuracy and efficiency of the computation can be improved.
For example, suppose the server includes a first computing device and a second computing device, where the first computing device supports only the forward operation of a specified neural network model, while the second computing device can execute both the forward operation and the backward training of that model. When the target neural network model in a received target operation request is the specified neural network model and the processing task is a test task, the first computing device is selected to execute the target operation request.
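The capability check in the example above can be sketched minimally; all names here (`pick_device`, the capability sets) are illustrative assumptions, not from the patent:

```python
def pick_device(task_type, model, devices):
    """A device qualifies for a test task if it supports forward
    inference for the model, and for a training task only if it also
    supports backward training. devices: list of {model: capability set}."""
    for dev in devices:
        caps = dev.get(model, set())
        if task_type == "test" and "forward" in caps:
            return dev  # forward-only support is enough for testing
        if task_type == "train" and {"forward", "backward"} <= caps:
            return dev  # training needs forward and backward support
    return None

# The first device supports only the forward operation of the specified
# model; the second supports both forward operation and backward training.
first = {"specified_model": {"forward"}}
second = {"specified_model": {"forward", "backward"}}
```

With these two devices, a test task for the specified model selects the first device, while a training task skips it and selects the second.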
Optionally, an auxiliary scheduling algorithm is chosen from an auxiliary scheduling algorithm set according to the attribute information of each of the M operation requests; the at least one target computing device is then chosen from the multiple computing devices according to the auxiliary scheduling algorithm.
The auxiliary scheduling algorithm set includes, but is not limited to: the round-robin scheduling algorithm, the weighted round-robin algorithm, the least-connections algorithm, the weighted least-connections algorithm, the locality-based least-connections algorithm, the locality-based least-connections with replication algorithm, the destination hashing algorithm, and the source hashing algorithm.
The application does not limit how the auxiliary scheduling algorithm is chosen according to the attribute information. For example, if multiple target computing devices handle the same kind of operation request, the auxiliary scheduling algorithm can be the round-robin scheduling algorithm. If different target computing devices differ in their capacity to absorb load, more operation requests should be distributed to target computing devices with higher configurations and lower loads, so the auxiliary scheduling algorithm can be the weighted round-robin algorithm. If the workloads assigned to the target computing devices differ, the auxiliary scheduling algorithm can be the least-connections scheduling algorithm, which dynamically selects the target computing device with the fewest backlogged connections to handle the current request, maximizing device utilization; the weighted least-connections scheduling algorithm can also be used.
That is, on the basis of the scheduling method of the above embodiment, the computing device that finally executes an operation request is chosen in combination with an auxiliary scheduling algorithm, further improving the operation efficiency of the server.
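Two of the auxiliary scheduling algorithms named above can be sketched in a few lines; the function names and data shapes are assumptions for illustration, not part of the patent:

```python
import itertools

def weighted_round_robin(devices, weights):
    """Yield devices in proportion to their integer weights (a higher
    weight means more requests), one simple form of weighted round-robin."""
    ring = [d for d, w in zip(devices, weights) for _ in range(w)]
    return itertools.cycle(ring)

def least_connections(devices, active):
    """Pick the device with the fewest outstanding (backlogged)
    connections, as in least-connections scheduling."""
    return min(devices, key=lambda d: active[d])
```

For instance, with weights 2 and 1 the weighted round-robin cycle sends two requests to the first device for every one sent to the second, matching the higher-configuration, lower-load device receiving more requests.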
203: Execute, by each target computing device of the at least one target computing device, the operation of its corresponding operation request, obtaining M final operation results.
The application does not limit the operational data corresponding to each operation request; it can be image data for image recognition, voice data for speech recognition, and so on. When the processing task is a test task, the operational data is data uploaded by the user; when the processing task is a training task, the operational data can be a training set uploaded by the user or a training set corresponding to the target neural network model stored in the server.
Multiple intermediate results may be produced while executing the operational instructions, and the final operation result of each operation request can be obtained from these intermediate results. The embodiments of the application do not limit the computation method of the target computing device; the computation method of the computing unit shown in Figs. 1a to 1d can be used.
204: Send each final operation result of the M final operation results to the corresponding electronic equipment.
It can be appreciated that the target computing devices for executing the M operation requests are chosen from the multiple computing devices included in the server based on the received operation requests; each target computing device performs the computation for its corresponding operation request, and each final operation result is sent to the corresponding electronic equipment. In other words, computing resources are allocated in a unified way according to the operation requests, so that the multiple computing devices in the server cooperate effectively, improving the operation efficiency of the server.
Optionally, the method further includes: waiting a first preset duration, and detecting whether each target computing device of the at least one target computing device has obtained its corresponding final operation result; if not, treating each target computing device that has not obtained a final operation result as a delayed computing device; choosing, from the idle computing devices among the multiple computing devices, a spare computing device corresponding to the operation request of the delayed computing device; and executing the operation of the delayed computing device's operation request on the spare computing device.
That is, when the first preset duration expires, any computing device that has not finished its operational instructions is treated as a delayed computing device; a spare computing device is chosen from the idle computing devices in the server according to the delayed device's operation request, and the spare device completes the operation of that request, improving operation efficiency.
Optionally, after executing the operation of the delayed computing device's operation request on the spare computing device, the method further includes: obtaining the final operation result produced first between the delayed computing device and the spare computing device; and sending a pause instruction to whichever of the two has not yet obtained a final operation result.
The pause instruction instructs the computing device, between the delayed computing device and the spare computing device, that has not returned a final operation result to pause execution of its operational instructions. That is, the spare computing device executes the operation of the delayed device's request; whichever result arrives first, from the spare device or the delayed device, is taken as the final operation result of the request, and a pause instruction is sent to the device that has not yet obtained a result, pausing the computation of the device that did not complete the request and thereby saving power.
Optionally, the method further includes: waiting a second preset duration, and detecting whether the delayed computing device has obtained its corresponding final operation result; if not, treating the delayed computing device as a faulty computing device and sending a fault notification.
The fault notification informs operations and maintenance personnel that the faulty computing device has failed; the second preset duration is greater than the first preset duration. That is, when the second preset duration expires, if the delayed computing device has not produced a final operation result, it is judged to have failed and the corresponding operations personnel are informed, improving the server's fault-handling capability.
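The timeout handling described above (first preset duration, then a spare device; first result wins and the other device is paused; second preset duration, then a fault notification) can be sketched as follows. Everything here is an illustrative assumption: `submit`, `poll` (a non-blocking check returning the result or `None`), and `pause` are hypothetical device methods.

```python
import time

def run_with_fallback(device, spare_pool, request, t1, t2):
    """Sketch of the delay/spare/fault flow; t2 > t1 per the method."""
    device.submit(request)
    deadline1 = time.time() + t1
    while time.time() < deadline1:
        res = device.poll()
        if res is not None:
            return res
        time.sleep(0.01)
    # First duration expired: the device becomes a "delayed" device;
    # the same request is dispatched to an idle spare device.
    spare = spare_pool.pop()
    spare.submit(request)
    deadline2 = time.time() + (t2 - t1)
    while time.time() < deadline2:
        for winner, loser in ((device, spare), (spare, device)):
            res = winner.poll()
            if res is not None:
                loser.pause()  # pause the slower device to save power
                return res
        time.sleep(0.01)
    # Second duration expired without a result: treat as a fault and
    # notify operations staff (fault notification).
    raise RuntimeError("fault: notify operations and maintenance personnel")
```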
Optionally, the method further includes: updating the hash table of the multiple computing devices at every target time threshold.
A hash table is a data structure accessed directly according to a key value. In this application, the IP addresses of the multiple computing devices serve as key values, and a hash function (mapping function) maps each address to a position in the hash table; once a target computing device is determined, the hardware resources allocated to it can thus be found quickly. The application does not limit the concrete form of the hash table: it can be a static hash table configured manually, or the hardware resources allocated according to IP address. Updating the hash table of the multiple computing devices at every target time threshold improves the accuracy and efficiency of lookups.
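As a minimal sketch under assumed names (the patent does not fix a data layout), the device table keyed by IP address and its periodic refresh might look like this; Python's `dict` is itself a hash table, so the IP string acts as the hash key:

```python
def build_device_table(devices):
    """devices: iterable of (ip, resources) pairs. Returns a hash table
    mapping each device's IP address to its allocated hardware resources."""
    return {ip: resources for ip, resources in devices}

def refresh_loop(get_devices, ticks):
    """Rebuild the table periodically (here simulated by `ticks`
    iterations) so lookups stay accurate as allocations change."""
    table = {}
    for _ in range(ticks):
        table = build_device_table(get_devices())
    return table
```

Once a target computing device is determined, `table[ip]` finds its allocated resources in expected constant time, which is the lookup-speed benefit the text describes.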
Consistent with the embodiment of Fig. 2 above, please refer to Fig. 3, which is a schematic structural diagram of another server provided by the application; the server includes multiple computing devices. As shown in Fig. 3, the server 300 includes:
a receiving unit 301, configured to receive M operation requests, M being a positive integer;
a scheduling unit 302, configured to choose, from the multiple computing devices, at least one target computing device corresponding to the M operation requests;
an arithmetic unit 303, configured to execute, by each target computing device of the at least one target computing device, the operation of its corresponding operation request, obtaining M final operation results;
a transmission unit 304, configured to send each final operation result of the M final operation results to the corresponding electronic equipment.
Optionally, the scheduling unit 302 is specifically configured to: if the processing task of a target operation request is a test task, choose from the multiple computing devices a computing device that includes the forward operation of the target neural network model corresponding to that task, obtaining a first target computing device, where the target operation request is any one of the M operation requests, and the first target computing device is the target computing device, among the at least one target computing device, that corresponds to the target operation request; and if the processing task of the target operation request is a training task, choose from the multiple computing devices a computing device that includes both the forward operation and the backward training of the corresponding target neural network model, obtaining the first target computing device.
Optionally, the scheduling unit 302 is specifically configured to choose an auxiliary scheduling algorithm from an auxiliary scheduling algorithm set according to the attribute information of each of the M operation requests, the auxiliary scheduling algorithm set including at least one of: the round-robin scheduling algorithm, the weighted round-robin algorithm, the least-connections algorithm, the weighted least-connections algorithm, the locality-based least-connections algorithm, the locality-based least-connections with replication algorithm, the destination hashing algorithm, and the source hashing algorithm; and to choose the at least one target computing device from the multiple computing devices according to the auxiliary scheduling algorithm.
Optionally, the server further includes a detection unit 306, configured to wait a first preset duration and detect whether each target computing device of the at least one target computing device has obtained its corresponding final operation result, and if not, to treat each target computing device that has not obtained a final operation result as a delayed computing device; the scheduling unit 302 chooses, from the idle computing devices among the multiple computing devices, a spare computing device corresponding to the operation request of the delayed computing device; and the arithmetic unit 303 executes, on the spare computing device, the operation of the delayed computing device's operation request.
Optionally, the acquiring unit 305 is further configured to obtain the final operation result produced first between the delayed computing device and the spare computing device; the transmission unit 304 sends a pause instruction to whichever of the delayed computing device and the spare computing device has not obtained a final operation result.
Optionally, the detection unit 306 is further configured to wait a second preset duration and detect whether the delayed computing device has obtained its corresponding final operation result, and if not, to treat the delayed computing device, which has not returned a final operation result, as a faulty computing device; the transmission unit 304 sends a fault notification, the fault notification informing operations and maintenance personnel that the faulty computing device has failed, and the second preset duration being greater than the first preset duration.
Optionally, the server further includes an updating unit 307, configured to update the hash table of the server at every target time threshold.
Optionally, the acquiring unit 305 is further configured to obtain the operation requirement of each specified neural network model in a specified neural network model set and the hardware attributes of each of the multiple computing devices, obtaining multiple operation requirements and multiple hardware attributes;
the server further includes a deployment unit 308, configured to deploy, according to the multiple operation requirements and the multiple hardware attributes, each specified neural network model in the set on its corresponding specified computing device.
Optionally, the computing device includes at least one computing carrier, and the computing carrier includes at least one computing unit.
Optionally, the computing device includes at least one computing carrier, the computing carrier includes at least one computing unit, and the computing unit executes operations on the input data and weight data of one or more layers of a multilayer neural network being trained, or on the input data and weight data of one or more layers of a multilayer neural network performing forward operation; the operations include one of, or any combination of: convolution, matrix-multiply-matrix, matrix-multiply-vector, bias, fully connected, GEMM, GEMV, and activation operations.
Optionally, the computing unit includes a main processing circuit, branch processing circuits, and basic processing circuits; the main processing circuit is connected to the branch processing circuits, and the basic processing circuits are connected to the branch processing circuits, wherein:
the main processing circuit is configured to obtain data from outside the computing unit, divide the data into broadcast data and distribution data, send the broadcast data in broadcast mode to all branch processing circuits, and selectively distribute the distribution data to different branch processing circuits;
the branch processing circuits are configured to forward data between the main processing circuit and the basic processing circuits;
the basic processing circuits are configured to receive the broadcast data and distribution data forwarded by the branch processing circuits, execute operations on the broadcast data and distribution data to obtain operation results, and send the operation results to the branch processing circuits;
the main processing circuit is further configured to receive the operation results of the basic processing circuits forwarded by the branch processing circuits, and process the operation results to obtain the calculated result.
Optionally, the main processing circuit is specifically configured to broadcast the broadcast data to all branch processing circuits in a single broadcast or in multiple broadcasts.
Optionally, the basic processing circuits are specifically configured to execute an inner product operation, a product operation, or a vector operation on the broadcast data and distribution data to obtain the operation result.
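The broadcast/distribute pattern of the circuits above can be illustrated with a matrix-vector product: the main circuit broadcasts the vector to every branch and scatters the matrix rows as distribution data; each basic circuit computes one inner product per row; the main circuit gathers the results. This is a hedged software sketch of the data flow only, not the hardware itself, and `matvec` and `n_branches` are assumed names:

```python
def matvec(matrix, vector, n_branches=2):
    # Main circuit: the vector is broadcast data (sent whole to every
    # branch); matrix rows are distribution data, scattered round-robin.
    branches = [matrix[b::n_branches] for b in range(n_branches)]
    # Each basic circuit executes an inner product on its broadcast
    # data (the vector) and its distributed row.
    partials = [[sum(x * y for x, y in zip(row, vector)) for row in rows]
                for rows in branches]
    # Main circuit: gather the operation results forwarded by the
    # branches and restore the original row order.
    out = [0.0] * len(matrix)
    for b, vals in enumerate(partials):
        for i, val in enumerate(vals):
            out[b + i * n_branches] = val
    return out
```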
It can be appreciated that the target computing devices for executing the M operation requests are chosen from the multiple computing devices included in the server based on the received operation requests; each target computing device performs the computation for its corresponding operation request, and each final operation result is sent to the corresponding electronic equipment. In other words, computing resources are allocated in a unified way according to the operation requests, so that the multiple computing devices in the server cooperate effectively, improving the operation efficiency of the server.
In one embodiment, as shown in Fig. 4, this application discloses another server 400, including a processor 401, a memory 402, a communication interface 403, and one or more programs 404, where the one or more programs 404 are stored in the memory 402 and configured to be executed by the processor; the programs 404 include instructions for executing some or all of the steps described in the above scheduling method.
Another embodiment of the invention provides a computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to execute the implementations described in the scheduling method.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the invention.
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the terminals and units described above, reference can be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed terminals and methods can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The foregoing storage media include various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that implementations not depicted or described in the drawings or the text of the specification are forms known to those of ordinary skill in the art and are not described in detail. In addition, the above definitions of the elements and methods are not limited to the specific structures, shapes, or modes mentioned in the embodiments, which those of ordinary skill in the art may simply change or replace.
The specific embodiments above further describe the purpose, technical solutions, and beneficial effects of the application in detail. It should be understood that the above are only specific embodiments of the application and are not intended to limit the application; any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the application shall be included within the protection scope of the application.
Claims (15)
1. A scheduling method, characterized in that the method is based on a server comprising multiple computing devices, the method comprising:
receiving M operation requests, M being a positive integer;
choosing, from the multiple computing devices, at least one target computing device corresponding to the M operation requests;
executing, by each target computing device of the at least one target computing device, the operation of its corresponding operation request, obtaining M final operation results;
sending each final operation result of the M final operation results to the corresponding electronic equipment.
2. The method according to claim 1, characterized in that choosing, from the multiple computing devices, the at least one target computing device corresponding to the M operation requests comprises:
if the processing task of a target operation request is a test task, choosing from the multiple computing devices a computing device that includes the forward operation of the target neural network model corresponding to the task, obtaining a first target computing device, wherein the target operation request is any one of the M operation requests, and the first target computing device is the target computing device, among the at least one target computing device, that corresponds to the target operation request;
if the processing task of the target operation request is a training task, choosing from the multiple computing devices a computing device that includes both the forward operation and the backward training of the target neural network model corresponding to the task, obtaining the first target computing device.
3. The method according to claim 1, characterized in that choosing, from the multiple computing devices, the at least one target computing device corresponding to the M operation requests comprises:
choosing an auxiliary scheduling algorithm from an auxiliary scheduling algorithm set according to the attribute information of each of the M operation requests, the auxiliary scheduling algorithm set including at least one of: the round-robin scheduling algorithm, the weighted round-robin algorithm, the least-connections algorithm, the weighted least-connections algorithm, the locality-based least-connections algorithm, the locality-based least-connections with replication algorithm, the destination hashing algorithm, and the source hashing algorithm;
choosing the at least one target computing device from the multiple computing devices according to the auxiliary scheduling algorithm.
4. The method according to any one of claims 1-3, characterized in that the method further comprises:
waiting a first preset duration, and detecting whether each target computing device of the at least one target computing device has obtained its corresponding final operation result; if not, treating each target computing device that has not obtained a final operation result as a delayed computing device;
choosing, from the idle computing devices among the multiple computing devices, a spare computing device corresponding to the operation request of the delayed computing device;
executing, on the spare computing device, the operation of the operation request corresponding to the delayed computing device.
5. The method according to claim 4, characterized in that, after executing, on the spare computing device, the operation of the operation request corresponding to the delayed computing device, the method further comprises:
obtaining the final operation result produced first between the delayed computing device and the spare computing device;
sending a pause instruction to whichever of the delayed computing device and the spare computing device has not obtained a final operation result.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
waiting a second preset duration, and detecting whether the delayed computing device has obtained its corresponding final operation result; if not, treating the delayed computing device as a faulty computing device and sending a fault notification, the fault notification informing operations and maintenance personnel that the faulty computing device has failed, and the second preset duration being greater than the first preset duration.
7. The method according to any one of claims 1-6, characterized in that the method further comprises:
updating the hash table of the server at every target time threshold.
8. The method according to any one of claims 1-7, characterized in that the method further comprises:
obtaining the operation requirement of each specified neural network model in a specified neural network model set and the hardware attributes of each of the multiple computing devices, obtaining multiple operation requirements and multiple hardware attributes;
deploying, according to the multiple operation requirements and the multiple hardware attributes, each specified neural network model in the set on its corresponding specified computing device.
9. The method according to any one of claims 1-8, characterized in that the computing device includes at least one computing carrier, the computing carrier includes at least one computing unit, and the computing unit executes operations on the input data and weight data of one or more layers of a multilayer neural network being trained, or on the input data and weight data of one or more layers of a multilayer neural network performing forward operation; the operations include one of, or any combination of: convolution, matrix-multiply-matrix, matrix-multiply-vector, bias, fully connected, GEMM, GEMV, and activation operations.
10. The method according to claim 9, characterized in that the computing unit includes a main processing circuit, branch processing circuits, and basic processing circuits, the main processing circuit being connected to the branch processing circuits and the basic processing circuits being connected to the branch processing circuits, wherein:
the main processing circuit obtains data from outside the computing unit, divides the data into broadcast data and distribution data, sends the broadcast data in broadcast mode to all branch processing circuits, and selectively distributes the distribution data to different branch processing circuits;
the branch processing circuits forward data between the main processing circuit and the basic processing circuits;
the basic processing circuits receive the broadcast data and distribution data forwarded by the branch processing circuits, execute operations on the broadcast data and distribution data to obtain operation results, and send the operation results to the branch processing circuits;
the main processing circuit receives the operation results of the basic processing circuits forwarded by the branch processing circuits, and processes the operation results to obtain the calculated result.
11. The method according to claim 10, characterized in that sending, by the main processing circuit, the broadcast data in broadcast mode to all branch processing circuits comprises:
broadcasting, by the main processing circuit, the broadcast data to all branch processing circuits in a single broadcast or in multiple broadcasts.
12. The method according to claim 10, characterized in that executing, by the basic processing circuits, operations on the broadcast data and distribution data to obtain operation results comprises:
executing, by the basic processing circuits, an inner product operation, a product operation, or a vector operation on the broadcast data and distribution data to obtain the operation result.
13. a kind of server, which is characterized in that the server includes multiple computing devices, the server further include: be used for
Execute the unit such as the described in any item methods of claim 1-12.
14. A server, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of the method according to any one of claims 1-12.
15. A computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711467783.4A CN109978149B (en) | 2017-12-28 | 2017-12-28 | Scheduling method and related device |
PCT/CN2018/098324 WO2019128230A1 (en) | 2017-12-28 | 2018-08-02 | Scheduling method and related apparatus |
EP18895350.9A EP3731089B1 (en) | 2017-12-28 | 2018-08-02 | Scheduling method and related apparatus |
US16/767,415 US11568269B2 (en) | 2017-12-28 | 2018-08-02 | Scheduling method and related apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711467783.4A CN109978149B (en) | 2017-12-28 | 2017-12-28 | Scheduling method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109978149A true CN109978149A (en) | 2019-07-05 |
CN109978149B CN109978149B (en) | 2020-10-09 |
Family
ID=67075511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711467783.4A Active CN109978149B (en) | 2017-12-28 | 2017-12-28 | Scheduling method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109978149B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112712456A (en) * | 2021-02-23 | 2021-04-27 | 中天恒星(上海)科技有限公司 | GPU processing circuit structure |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102360309A (en) * | 2011-09-29 | 2012-02-22 | 中国科学技术大学苏州研究院 | Scheduling system and scheduling execution method of multi-core heterogeneous system on chip |
US20130179485A1 (en) * | 2012-01-06 | 2013-07-11 | International Business Machines Corporation | Distributed parallel computation with acceleration devices |
US20140032457A1 (en) * | 2012-07-27 | 2014-01-30 | Douglas A. Palmer | Neural processing engine and architecture using the same |
CN107018184A (en) * | 2017-03-28 | 2017-08-04 | 华中科技大学 | Distributed deep neural network cluster packet synchronization optimization method and system |
CN107239829A (en) * | 2016-08-12 | 2017-10-10 | 北京深鉴科技有限公司 | Method for optimizing an artificial neural network
Also Published As
Publication number | Publication date |
---|---|
CN109978149B (en) | 2020-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107196869B (en) | Adaptive load balancing method, apparatus and system based on actual host load | |
CN107734558A (en) | Mobile edge computing control and resource scheduling method based on multiple servers | |
CN108667878A (en) | Server load balancing method and device, storage medium, electronic equipment | |
CN107172187A (en) | Load balancing system and method | |
CN105208133B (en) | Server, load balancer, and server load balancing method and system | |
CN103516744A (en) | Data processing method, application server and application server cluster | |
CN109474681A (en) | Resource allocation method and system for mobile edge computing servers, and server system | |
CN105518625A (en) | Computation hardware with high-bandwidth memory interface | |
CN102685237A (en) | Method for session maintenance and request scheduling in a cluster environment | |
TWI786527B (en) | User code operation method of programming platform, electronic equipment and computer-readable storage medium | |
CN104461748A (en) | Optimal localized task scheduling method based on MapReduce | |
CN109189571A (en) | Computing task scheduling method and system, edge node, storage medium and terminal | |
US11568269B2 (en) | Scheduling method and related apparatus | |
CN109978129A (en) | Scheduling method and related apparatus | |
TWI768167B (en) | Integrated circuit chip device and related products | |
CN109978149A (en) | Scheduling method and related apparatus | |
CN110505168A (en) | NI interface controller and data transmission method | |
CN109976809A (en) | Scheduling method and related apparatus | |
CN109976887A (en) | Scheduling method and related apparatus | |
CN110019243A (en) | Data transmission method and apparatus, device, and storage medium in the Internet of Things | |
JP7081529B2 (en) | Application placement device and application placement program | |
CN109617960A (en) | Web AR data presentation method based on attribute separation | |
CN114579311B (en) | Method, device, equipment and storage medium for executing distributed computing task | |
CN104468379B (en) | Virtual Hadoop cluster node selection method and device based on shortest logical distance | |
CN106408793B (en) | Service component sharing method and system suitable for ATM services | |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences. Applicant after: Zhongke Cambrian Technology Co., Ltd. Address before: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences. Applicant before: Beijing Zhongke Cambrian Technology Co., Ltd. |
| GR01 | Patent grant | |