CN108491253A - Computing task processing method and edge computing device - Google Patents

Computing task processing method and edge computing device

Info

Publication number
CN108491253A
CN108491253A
Authority
CN
China
Prior art keywords
task
sub
calculating
container
calculating task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810090281.2A
Other languages
Chinese (zh)
Inventor
于静
张雁鹏
于治楼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Inspur Hi Tech Investment and Development Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd filed Critical Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority to CN201810090281.2A priority Critical patent/CN108491253A/en
Publication of CN108491253A publication Critical patent/CN108491253A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources

Abstract

The present invention provides a computing task processing method and an edge computing device. The method includes: setting up a chained task container; receiving a computing task sent by an external terminal device; decomposing the computing task into at least two sub-tasks; and performing parallel computation on the at least two sub-tasks using the chained task container to obtain the computation result of the computing task. The scheme provided by the invention can therefore improve the processing speed of computing tasks.

Description

Computing task processing method and edge computing device
Technical field
The present invention relates to the field of computer technology, and in particular to a computing task processing method and an edge computing device.
Background technology
With the development of technology, edge computing devices are used more and more widely, because they can provide computing-task processing services for nearby terminal devices (such as cameras, micro base station equipment, and smart mobile devices).
At present, when an edge computing device receives a computing task, it first determines the sub-tasks corresponding to the task and then computes each sub-task in turn. However, computing tasks usually involve large amounts of data, so computing the sub-tasks one after another takes a substantial amount of time. The existing approach therefore processes computing tasks slowly.
Summary of the invention
Embodiments of the present invention provide a computing task processing method and an edge computing device that can improve the processing speed of computing tasks.
In a first aspect, an embodiment of the present invention provides a computing task processing method, including:
setting up a chained task container;
receiving a computing task sent by an external terminal device;
decomposing the computing task into at least two sub-tasks;
performing parallel computation on the at least two sub-tasks using the chained task container to obtain the computation result of the computing task.
Preferably,
setting up the chained task container includes:
building an algorithm library containing at least one algorithm model;
building multiple container nodes, where the container nodes are connected in a chain;
establishing a connection between each container node and the algorithm library.
Preferably,
performing parallel computation on the at least two sub-tasks using the chained task container to obtain the computation result of the computing task includes:
inputting each sub-task into a separate target container node among the multiple container nodes;
executing simultaneously with each target container: calling at least one target algorithm model from the algorithm library to compute the input sub-task, obtaining a corresponding sub-result;
aggregating the sub-results obtained by the target containers to obtain the computation result of the computing task.
Preferably,
calling at least one target algorithm model from the algorithm library to compute the input sub-task and obtain the corresponding sub-result includes:
determining at least one sub-task parameter corresponding to the input sub-task;
finding and calling, in the algorithm library, at least one target algorithm model that matches the at least one sub-task parameter;
computing with the called target algorithm model(s) and the at least one sub-task parameter to obtain the sub-result.
Preferably,
aggregating the sub-results obtained by the target containers to obtain the computation result of the computing task includes:
selecting one of the target containers as the aggregation container;
gathering the sub-results of the target containers into the aggregation container, and determining the relationships between the sub-results using a preset aggregation tree structure;
using the aggregation container to retrieve, from the algorithm models in the algorithm library, an algorithm model that meets the aggregation requirements;
aggregating the sub-results with the retrieved algorithm model according to the determined relationships, obtaining the computation result.
Preferably,
the computing task includes at least one task type, and each task type has a corresponding required duration and task parameters;
decomposing the computing task into at least two sub-tasks includes:
decomposing the computing task layer by layer using a preset decomposition tree, according to the at least one task type and the required duration and task parameters of each task type, obtaining multiple nodes, where each node corresponds to a task to be computed;
taking the nodes that have no successor nodes among the multiple nodes as target nodes;
determining the task to be computed corresponding to each target node as a sub-task.
Preferably,
the method further includes:
sending the computing task to an external cloud, so that the cloud obtains a learning result model from the computing task;
receiving the learning result model fed back by the cloud;
updating the chained task container according to the learning result model.
In a second aspect, an embodiment of the present invention provides an edge computing device, including:
a setup module for setting up a chained task container;
a receiving module for receiving a computing task sent by an external terminal device;
a decomposition module for decomposing the computing task received by the receiving module into at least two sub-tasks;
a computing module for performing parallel computation on the at least two sub-tasks decomposed by the decomposition module, using the chained task container set up by the setup module, to obtain the computation result of the computing task.
Preferably,
the setup module includes a construction unit and a connection unit;
the construction unit builds the algorithm library containing at least one algorithm model, and builds multiple container nodes connected in a chain;
the connection unit establishes a connection between each container node and the algorithm library.
Preferably,
the computing module includes an input unit, a computing unit, and an aggregation unit;
the input unit inputs each sub-task into a separate target container node among the multiple container nodes;
the computing unit executes simultaneously with each target container: calling at least one target algorithm model from the algorithm library to compute the input sub-task, obtaining the corresponding sub-result;
the aggregation unit aggregates the sub-results obtained by the target containers, obtaining the computation result of the computing task.
Preferably,
the computing unit determines at least one sub-task parameter corresponding to the input sub-task; finds and calls, in the algorithm library, at least one target algorithm model that matches the at least one sub-task parameter; and computes with the called target algorithm model(s) and the at least one sub-task parameter, obtaining the sub-result.
Preferably,
the aggregation unit includes a determination subunit and an aggregation subunit;
the determination subunit selects one of the target containers as the aggregation container, gathers the sub-results of the target containers into it, and determines the relationships between the sub-results using the preset aggregation tree structure;
the aggregation subunit uses the aggregation container to retrieve, from the algorithm models in the algorithm library, an algorithm model that meets the aggregation requirements, and aggregates the sub-results with the retrieved algorithm model according to the determined relationships, obtaining the computation result.
Preferably,
the computing task includes at least one task type, and each task type has a corresponding required duration and task parameters;
the decomposition module includes a tree decomposition unit and a task determination unit;
the tree decomposition unit decomposes the computing task layer by layer using the preset decomposition tree, according to the at least one task type and the required duration and task parameters of each task type, obtaining multiple nodes, where each node corresponds to a task to be computed;
the task determination unit takes the nodes without successor nodes among the multiple nodes as target nodes, and determines the task to be computed corresponding to each target node as a sub-task.
Preferably,
the device further includes an update module;
the update module sends the computing task received by the receiving module to the external cloud, so that the cloud obtains a learning result model from the computing task; receives the learning result model fed back by the cloud; and updates the chained task container according to the learning result model.
Embodiments of the present invention provide a computing task processing method and an edge computing device. The method is applied to the edge computing device: a chained task container is set up first; then, when a computing task sent by a terminal device is received, it is decomposed into two or more sub-tasks, and the sub-tasks are computed in parallel by the chained task container to obtain the computation result of the computing task. As shown above, in this scheme the computing task is decomposed and its sub-tasks are computed in parallel; because parallel computation saves a large amount of computing time, the scheme provided by the embodiments of the present invention can improve the processing speed of computing tasks.
Description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a computing task processing method applied to an edge computing device, provided by one embodiment of the present invention;
Fig. 2 is a schematic diagram of the structure among multiple container nodes, provided by one embodiment of the present invention;
Fig. 3 is a schematic diagram of a decomposition tree, provided by one embodiment of the present invention;
Fig. 4 is a flow chart of a computing task processing method applied to an edge computing device, provided by another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an edge computing device, provided by one embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an edge computing device, provided by another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an edge computing device including a tree decomposition unit and a task determination unit, provided by one embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an edge computing device including an update module, provided by one embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides a computing task processing method applied to an edge computing device. The method may include the following steps:
Step 101: set up a chained task container;
Step 102: receive a computing task sent by an external terminal device;
Step 103: decompose the computing task into at least two sub-tasks;
Step 104: perform parallel computation on the at least two sub-tasks using the chained task container, obtaining the computation result of the computing task.
According to the embodiment shown in Fig. 1, the method is applied to an edge computing device. A chained task container is set up first; then, when a computing task sent by a terminal device is received, it is decomposed into two or more sub-tasks, which are computed in parallel by the chained task container to obtain the computation result of the computing task. As shown above, in this scheme the computing task is decomposed and its sub-tasks are computed in parallel; because parallel computation saves a large amount of computing time, the scheme provided by the embodiment can improve the processing speed of computing tasks.
In one embodiment of the invention, the terminal device involved in step 102 of the flow chart shown in Fig. 1 can be determined according to business needs: for example, a camera, a micro base station device, a smart mobile device (smartphone, tablet computer), a radar, and so on.
In one embodiment of the invention, step 101 of the flow chart shown in Fig. 1, setting up the chained task container, may include:
building an algorithm library containing at least one algorithm model;
building multiple container nodes, where the container nodes are connected in a chain;
establishing a connection between each container node and the algorithm library.
In this embodiment, the algorithm models can be determined according to the specific type of service. For example, when the service type is route planning, the at least one algorithm model may include, but is not limited to, one or more of a speed algorithm model, a displacement algorithm model, and an angle algorithm model.
In this embodiment, after the multiple container nodes are built, they are connected in a chain as shown in Fig. 2. In Fig. 2, 201 denotes a container node (only 6 container nodes are shown in the figure). In addition, the storage space corresponding to each container node can be determined according to business needs; note, however, that the storage space of a container node must be large enough to accommodate one sub-task.
In this embodiment, each container node is connected to the algorithm library, so that when a sub-task is input into a container node, the node can retrieve algorithm models from the library to compute the sub-task.
According to the above embodiment, because the chained task container contains multiple container nodes and an algorithm library, it can not only accept multiple sub-tasks at the same time but also provide the algorithm models needed to execute each sub-task, so computing tasks can be processed promptly.
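The chained task container described above can be pictured as linked container nodes that all share one algorithm library. The sketch below is a minimal illustration under that reading; the names (`AlgorithmLibrary`, `ContainerNode`, `build_chained_container`) are hypothetical, not from the patent.

```python
# Minimal sketch of a chained task container: container nodes linked in a
# chain, each holding a reference to one shared algorithm library.

class AlgorithmLibrary:
    def __init__(self):
        self.models = {}            # model name -> callable algorithm model

    def register(self, name, model):
        self.models[name] = model

class ContainerNode:
    def __init__(self, library):
        self.library = library      # every node is connected to the library
        self.next = None            # link to the next node in the chain
        self.prev = None            # link to the previous node
        self.subtask = None         # must accommodate one sub-task

def build_chained_container(library, n_nodes):
    """Build n_nodes container nodes and connect them in a chain."""
    nodes = [ContainerNode(library) for _ in range(n_nodes)]
    for a, b in zip(nodes, nodes[1:]):
        a.next, b.prev = b, a
    return nodes

lib = AlgorithmLibrary()
lib.register("speed", lambda p: p["distance"] / p["time"])
chain = build_chained_container(lib, 6)   # 6 nodes, as in the Fig. 2 example
```

With this layout, any node can reach the shared library directly and its neighbours through the `next`/`prev` links.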
In one embodiment of the invention, the computing task involved in the flow chart shown in Fig. 1 includes at least one task type, and each task type has a corresponding required duration and task parameters.
Then step 103 of the flow chart shown in Fig. 1, decomposing the computing task into at least two sub-tasks, may include:
decomposing the computing task layer by layer using a preset decomposition tree, according to the at least one task type and the required duration and task parameters of each task type, obtaining multiple nodes, where each node corresponds to a task to be computed;
taking the nodes that have no successor nodes among the multiple nodes as target nodes;
determining the task to be computed corresponding to each target node as a sub-task.
In this embodiment, the required duration is the time within which a result must be returned. For example, if the required duration corresponding to task type 1 is 3 milliseconds, the computation result of task type 1 must be returned within 3 milliseconds of receiving it.
In this embodiment, the task parameters may include at least one input parameter and at least one output parameter, so that the subsequently called algorithm model can compute the value of each output parameter from the at least one input parameter.
In this embodiment, the computing task may include one or more task types. For example, when the computing task is the driving task of a self-driving vehicle, the included task types may be a headlight setting task type and a route planning task type.
In this embodiment, the layer-by-layer decomposition of a computing task with a decomposition tree is illustrated below, taking the driving task of a self-driving vehicle as the computing task, with the headlight setting task type and the route planning task type as its task types. As shown in Fig. 3, the driving task of the self-driving vehicle is the root node 30, which is decomposed into the first-layer node 301 corresponding to the headlight setting task type and the first-layer node 302 corresponding to the route planning task type. Then, for each first-layer node, the decomposition continues layer by layer according to the required duration and task parameters of its task type, until no further decomposition is possible. For example, in Fig. 3 the last layer of nodes may include 3011, 3012, and 302, where node 3011 corresponds to a displacement computation task, node 3012 to a driving direction computation task, and node 302 to a headlight brightness computation task. The displacement computation task, the driving direction computation task, and the headlight brightness computation task are then taken as the sub-tasks.
According to the above embodiment, the sub-tasks are determined by layer-by-layer decomposition with a tree, according to the task types and each task type's required duration and task parameters. Because the logic of the decomposition is clear, the probability of missing a sub-task is low, and the sub-tasks are determined more accurately.
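The layer-by-layer decomposition can be sketched as a walk over a preset decomposition tree whose leaf nodes (nodes with no successors) become the sub-tasks. The tree below loosely mirrors the Fig. 3 example; the exact grouping of tasks under each first-layer node is an assumption made for illustration only.

```python
# Illustrative sketch: decompose a computing task via a tree and take the
# leaf nodes (no successor nodes) as the sub-tasks.

class TreeNode:
    def __init__(self, task, children=None):
        self.task = task
        self.children = children or []

def leaf_tasks(node):
    """Collect the tasks of nodes without successor nodes (the sub-tasks)."""
    if not node.children:
        return [node.task]
    tasks = []
    for child in node.children:
        tasks.extend(leaf_tasks(child))
    return tasks

# Root: the driving task; first layer: the two task types; leaves: sub-tasks.
root = TreeNode("driving task", [
    TreeNode("headlight setting", [TreeNode("headlight brightness")]),
    TreeNode("route planning", [TreeNode("displacement"),
                                TreeNode("driving direction")]),
])
subtasks = leaf_tasks(root)
# → ['headlight brightness', 'displacement', 'driving direction']
```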
In one embodiment of the invention, step 104 of the flow chart shown in Fig. 1, performing parallel computation on the at least two sub-tasks using the chained task container to obtain the computation result of the computing task, may include:
A1: input each sub-task into a separate target container node among the multiple container nodes;
A2: execute simultaneously with each target container: call at least one target algorithm model from the algorithm library to compute the input sub-task, obtaining a corresponding sub-result;
A3: aggregate the sub-results obtained by the target containers, obtaining the computation result of the computing task.
In this embodiment, step A1 may include: selecting multiple target container nodes among the container nodes. The number of target container nodes equals the number of sub-tasks; the target container nodes are consecutively connected and currently unoccupied. Each sub-task is then assigned to a target container node and input into it. The advantage of selecting consecutively connected target container nodes is that when a target container node calls an algorithm model to compute its sub-task and finds part of the data missing, it can obtain the missing data from the container nodes connected to it, so that the sub-task can execute smoothly.
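Steps A1 and A2 can be sketched as: find a run of consecutively connected, currently unoccupied container nodes equal in number to the sub-tasks, input one sub-task into each, and execute them simultaneously. The thread-based parallelism and the minimal `Node` class below are illustrative assumptions, not the patent's implementation.

```python
# Sketch of A1/A2: assign sub-tasks to consecutive free container nodes and
# compute them in parallel.

from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self):
        self.subtask = None          # unoccupied while None

def pick_consecutive_free(nodes, count):
    """Return `count` consecutively connected, currently unoccupied nodes."""
    for i in range(len(nodes) - count + 1):
        window = nodes[i:i + count]
        if all(n.subtask is None for n in window):
            return window
    raise RuntimeError("no free run of container nodes")

def run_parallel(nodes, subtasks, compute):
    targets = pick_consecutive_free(nodes, len(subtasks))
    for node, sub in zip(targets, subtasks):
        node.subtask = sub           # A1: input each sub-task into its node
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        return list(pool.map(compute, subtasks))   # A2: compute simultaneously

chain = [Node() for _ in range(6)]
chain[0].subtask = "busy"            # first node is already occupied
results = run_parallel(chain, [1, 2, 3], lambda x: x * x)
# → [1, 4, 9]
```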
In this embodiment, step A2 may include:
determining at least one sub-task parameter corresponding to the input sub-task;
finding and calling, in the algorithm library, at least one target algorithm model that matches the at least one sub-task parameter;
computing with the called target algorithm model(s) and the at least one sub-task parameter, obtaining the sub-result.
Specifically, the at least one sub-task parameter may consist of at least one input parameter and at least one output parameter of the sub-task. For example, when the sub-task is a speed computation task, the sub-task parameters may be the current speed, the current position, the current time, and the target speed.
Specifically, finding in the algorithm library at least one target algorithm model matching the at least one sub-task parameter may include: determining at least one sample parameter corresponding to each algorithm model in the library; and comparing the sub-task parameter(s) with the sample parameters of each algorithm model to determine the target algorithm model(s). A target algorithm model is one whose sample parameters match the sub-task parameters with a matching degree that reaches a set matching value.
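The parameter-matching step can be sketched as follows, assuming a simple overlap score as the matching degree and 0.75 as the set matching value; both the scoring rule and the threshold are assumptions, since the patent does not fix them.

```python
# Illustrative sketch: match algorithm models to a sub-task by comparing the
# sub-task parameters against each model's sample parameters.

def matching_degree(sample_params, task_params):
    """Fraction of the sub-task parameters covered by the sample parameters."""
    task_params = set(task_params)
    return len(task_params & set(sample_params)) / len(task_params)

def find_target_models(library, task_params, threshold=0.75):
    """Return the models whose matching degree reaches the set value."""
    return [name for name, sample in library.items()
            if matching_degree(sample, task_params) >= threshold]

library = {
    "speed":      ["current speed", "current position",
                   "current time", "target speed"],
    "brightness": ["ambient light", "headlight level"],
}
targets = find_target_models(
    library,
    ["current speed", "current position", "current time", "target speed"])
# → ['speed']
```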
Specifically, computing with the called target algorithm model(s) and the at least one sub-task parameter to obtain the sub-result may be done in at least the following two ways:
Method one: for each target algorithm model, call the model to compute the at least one sub-task parameter, obtaining an intermediate result; then combine the intermediate results to obtain the sub-result.
Method two: determine a calling order for the at least one target algorithm model, then:
B1: call the next target algorithm model in order as the current target algorithm model;
B2: determine whether a previously called target algorithm model exists; if so, go to B4; otherwise, go to B3;
B3: compute the at least one sub-task parameter with the current target algorithm model, obtaining the current model's intermediate result, and go to B5;
B4: compute the at least one sub-task parameter with the current target algorithm model and the intermediate result of the previously called target algorithm model, obtaining the current model's intermediate result;
B5: determine whether the current target algorithm model is the last one; if so, go to B6; otherwise, go to B1;
B6: take the intermediate result of the current target algorithm model as the sub-result.
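Method two can be sketched as a fold over the ordered target algorithm models: the first model computes from the sub-task parameters alone (B3), each later model also receives the previous intermediate result (B4), and the last intermediate result becomes the sub-result (B6). The two stand-in models below are hypothetical examples.

```python
# Sketch of method two: call the target algorithm models in order, feeding
# each model the previous model's intermediate result.

def chained_invoke(models, task_params):
    intermediate = None
    for model in models:                       # B1: call in calling order
        if intermediate is None:               # B2/B3: no previous call yet
            intermediate = model(task_params)
        else:                                  # B4: reuse previous result
            intermediate = model(task_params, intermediate)
    return intermediate                        # B6: last result = sub-result

# Hypothetical models: displacement first, then average speed from it.
displacement = lambda p: p["end"] - p["start"]
avg_speed = lambda p, prev: prev / p["time"]

result = chained_invoke([displacement, avg_speed],
                        {"start": 10.0, "end": 40.0, "time": 5.0})
# → 6.0
```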
In this embodiment, step A3 may include:
selecting one of the target containers as the aggregation container;
gathering the sub-results of the target containers into the aggregation container, and determining the relationships between the sub-results using a preset aggregation tree structure;
using the aggregation container to retrieve, from the algorithm models in the algorithm library, an algorithm model that meets the aggregation requirements;
aggregating the sub-results with the retrieved algorithm model according to the determined relationships, obtaining the computation result.
Specifically, selecting one of the target containers as the aggregation container may include: determining the remaining storage space of each target container, and taking the target container with the most remaining storage space as the aggregation container.
Specifically, after the sub-results of the target containers are gathered in the aggregation container, the relationships between the sub-results are determined according to the aggregation tree structure; these relationships may include parent-child relationships and sibling relationships. An algorithm model that meets the aggregation requirements is then called from the algorithm library and used to aggregate the sub-results according to their relationships, obtaining the computation result corresponding to the computing task.
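Step A3 can be sketched as: pick the target container with the most remaining storage space as the aggregation container, then aggregate the sub-results bottom-up along the parent-child relationships of the aggregation tree. The summing aggregation model and the sample data below are assumptions for illustration only.

```python
# Sketch of A3: choose the aggregation container and aggregate sub-results
# along an aggregation tree's parent-child relationships.

def pick_aggregation_container(containers):
    """containers: list of (name, remaining_storage_space) pairs."""
    return max(containers, key=lambda c: c[1])[0]

def aggregate(sub_results, tree, node, model):
    """Combine a node's sub-result with its children's results, bottom-up."""
    children = tree.get(node, [])
    values = [sub_results[node]] + [aggregate(sub_results, tree, c, model)
                                    for c in children]
    return model(values)

containers = [("c1", 10), ("c2", 40), ("c3", 25)]
agg = pick_aggregation_container(containers)   # → 'c2' (most free space)

tree = {"root": ["a", "b"]}                    # parent-child relationships
sub_results = {"root": 1, "a": 2, "b": 3}
total = aggregate(sub_results, tree, "root", sum)
# → 6
```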
In one embodiment of the invention, the computing task processing method applied to the edge computing device may further include the following steps:
sending the computing task to an external cloud, so that the cloud obtains a learning result model from the computing task;
receiving the learning result model fed back by the cloud;
updating the chained task container according to the learning result model.
In this embodiment, the computing task is sent to the cloud so that the cloud can perform deep training on it with a deep learning algorithm, obtaining a learning result model that better meets the demands of the computing task. The learning result model may include an algorithm model.
In the present embodiment, when receiving learning outcome model, chain task can be held according to learning outcome model Device is updated.For example, being carried out more to the algorithm model in chain task container using the algorithm model in learning outcome model Newly.
According to above-described embodiment, calculating task is sent to high in the clouds, so that high in the clouds can be learnt according to calculating task Achievement model.When receiving the learning outcome model of high in the clouds feedback, using learning outcome model to chain task container It is updated.So that chain task container more can quickly handle calculating task.
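The update step can be pictured as replacing models in the container's algorithm library with those carried by the learning result model. The dictionary layout and the `version` field below are illustrative assumptions only:

```python
def update_algorithm_library(library, learning_result):
    """Replace or insert the algorithm models carried by a learning result
    model fed back from the cloud, keyed by model name."""
    for name, model in learning_result["algorithm_models"].items():
        library[name] = model
    return library

# Hypothetical library held by the chained task container.
library = {"light_intensity": {"version": 1}, "displacement": {"version": 1}}
# Hypothetical feedback: the cloud retrained only the light-intensity model.
feedback = {"algorithm_models": {"light_intensity": {"version": 2}}}
update_algorithm_library(library, feedback)
```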
The calculation task processing method is illustrated below by taking the driving task of an autonomous vehicle as an example. As shown in Fig. 4, the calculation task processing method applied to the edge computing device may include the following steps:
Step 401: build an algorithm library containing at least one algorithm model.
In this step, the at least one algorithm model may include a light-intensity algorithm model, a displacement algorithm model, a direction algorithm model and a speed algorithm model.
Step 402: build multiple container nodes, where the multiple container nodes are connected in a chain.
In this step, the six container nodes shown in Fig. 2 are built.
Step 403: connect each container node to the algorithm library.
Step 404: receive the calculation task sent by the external terminal device, where the calculation task includes at least one task type and the demand duration and task parameters corresponding to each task type; then execute step 405 and step 417.
In this step, the driving task of the autonomous vehicle includes a headlight-setting task type and a route-planning task type, together with the demand duration and task parameters corresponding to the headlight-setting task type and the demand duration and task parameters corresponding to the route-planning task type.
Step 405: according to the at least one task type and the demand duration and task parameters corresponding to each task type, decompose the calculation task layer by layer using a preset decomposition tree to obtain multiple nodes, where each node corresponds to a task to be calculated.
In this step, the calculation task is decomposed layer by layer to obtain the decomposition result shown in Fig. 3.
Step 406: take the nodes that have no successor nodes among the multiple nodes as target nodes.
In this step, since nodes 3011, 3012 and 302 have no successor nodes, they are taken as target nodes.
Step 407: determine the task to be calculated corresponding to each target node as a sub-calculation task.
In this step, the task to be calculated corresponding to node 3011 is a displacement calculation task, the task corresponding to node 3012 is a travel-direction calculation task, and the task corresponding to node 302 is a headlight opening-brightness calculation task. The displacement calculation task, the travel-direction calculation task and the headlight opening-brightness calculation task are therefore taken as the sub-calculation tasks.
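Steps 405 to 407 can be sketched as collecting the leaf nodes of a decomposition tree. The leaf node numbers 3011, 3012 and 302 follow the Fig. 3 example; the intermediate node numbers 300 and 301 and the dict-based tree encoding are assumptions for illustration:

```python
# Assumed decomposition tree: driving task -> route planning + headlight
# setting; route planning -> displacement + travel direction.
decomposition_tree = {
    "300": ["301", "302"],
    "301": ["3011", "3012"],
}
tasks = {
    "3011": "displacement calculation",
    "3012": "travel-direction calculation",
    "302": "headlight opening-brightness calculation",
}

def leaf_nodes(tree, root):
    """Collect the nodes that have no successor nodes (the target nodes)."""
    children = tree.get(root)
    if not children:
        return [root]
    leaves = []
    for child in children:
        leaves.extend(leaf_nodes(tree, child))
    return leaves

# The tasks corresponding to the target nodes become the sub-calculation tasks.
sub_tasks = [tasks[n] for n in leaf_nodes(decomposition_tree, "300")]
```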
Step 408: input each sub-calculation task into one target container node among the multiple container nodes.
In this step, the displacement calculation task, the travel-direction calculation task and the headlight opening-brightness calculation task are placed into the first three container nodes shown in Fig. 2, respectively.
Step 409: simultaneously take each target container as the current target container.
Step 410: determine at least one sub-calculation task parameter corresponding to the sub-calculation task input into the current target container.
In this step, the target container into which the headlight opening-brightness calculation task has been input is used as the current target container for illustration: the sub-calculation task parameters of the headlight opening-brightness calculation task are determined to be the current time, the current light level and the target opening brightness.
Step 411: find at least one target algorithm model matching the at least one sub-calculation task parameter in the algorithm library, and call it.
In this step, with the same current target container as above: the target algorithm model found in the algorithm library to match the current time, the current light level and the target opening brightness is the light-intensity algorithm model, which is then called.
Step 412: perform the calculation using the called at least one target algorithm model and the at least one sub-calculation task parameter to obtain a sub-calculation result.
In this step, with the same current target container as above: the called light-intensity algorithm model is applied to the current time, the current light level and the target opening brightness, and the obtained sub-calculation result includes the target opening brightness.
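Steps 410 to 412 amount to matching a sub-task's parameters against the models declared in the algorithm library and running the matching model. The parameter sets, the library layout and the toy brightness formula below are illustrative assumptions, not the patent's actual models:

```python
algorithm_library = {
    "light_intensity": {
        "parameters": {"current_time", "current_light_level",
                       "target_opening_brightness"},
        # Toy model: dim the headlights in proportion to ambient light.
        "run": lambda p: {"target_opening_brightness":
                          round(1.0 - p["current_light_level"], 2)},
    },
    "displacement": {
        "parameters": {"speed", "duration"},
        "run": lambda p: {"displacement": p["speed"] * p["duration"]},
    },
}

def find_and_run(task_params):
    """Find the model whose declared parameters match the sub-task's
    parameters, call it, and return its name and sub-calculation result."""
    for name, model in algorithm_library.items():
        if model["parameters"] <= set(task_params):
            return name, model["run"](task_params)
    raise LookupError("no matching target algorithm model")

name, sub_result = find_and_run(
    {"current_time": "21:30", "current_light_level": 0.2,
     "target_opening_brightness": None})
```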
Step 413: select one target container among the target containers as the aggregation container.
In this step, the first target container shown in Fig. 2 is selected as the aggregation container.
Step 414: converge the sub-calculation results of the target containers into the aggregation container, and determine the correlations among the sub-calculation results using the preset aggregation tree structure.
In this step, the correlations among the sub-calculation results obtained by the target containers, namely the target opening brightness, the target travel direction and the target displacement, are determined.
Step 415: using the aggregation container, retrieve an algorithm model that meets the aggregation requirement from the at least one algorithm model in the algorithm library.
Step 416: according to the correlations, aggregate the sub-calculation results using the retrieved algorithm model to obtain the calculation result, and end the current process.
In this step, the calculation result is obtained from the correlations among the target opening brightness, the target travel direction and the target displacement together with those three values; this calculation result is the driving data for the driving task of the autonomous vehicle.
Step 417: send the calculation task to the external cloud, so that the cloud obtains a learning result model according to the calculation task.
In this step, the calculation task is sent to the cloud so that the cloud can perform deep training on it with a deep learning algorithm and thereby obtain a learning result model that better meets the demands of the calculation task. The learning result model may include an algorithm model.
Step 418: when the learning result model fed back by the cloud is received, update the chained container according to the learning result model.
In this step, when the learning result model is received, the chained task container can be updated according to it, for example by updating an algorithm model in the chained task container with the algorithm model in the learning result model.
As shown in Fig. 5, an embodiment of the invention provides an edge computing device, which includes:
a setup module 501 for setting up a chained task container;
a receiving module 502 for receiving a calculation task sent by an external terminal device;
a decomposing module 503 for decomposing the calculation task received by the receiving module 502 into at least two sub-calculation tasks;
a computing module 504 for performing parallel computation on the at least two sub-calculation tasks decomposed by the decomposing module 503 using the chained task container set up by the setup module 501, obtaining the calculation result of the calculation task.
According to the embodiment shown in Fig. 5, when the receiving module receives a calculation task sent by a terminal device, the decomposing module decomposes it into two or more sub-calculation tasks, and the computing module performs parallel computation on each decomposed sub-calculation task using the chained task container set up by the setup module, obtaining the calculation result of the calculation task. It can be seen that in this scheme the calculation task is decomposed and the resulting sub-calculation tasks are computed in parallel; since parallel computation saves a great deal of computing time, the scheme provided by the embodiments of the invention can improve the processing speed of calculation tasks.
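The decompose-then-compute-in-parallel flow of these modules can be sketched as follows, assuming each container node simply runs a Python callable; real container orchestration is outside this illustration, and all names and values are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(task):
    """Stand-in for the decomposing module: split a task into sub-tasks."""
    return task["sub_tasks"]

def run_in_container(sub_task):
    """Stand-in for one container node computing a sub-calculation task."""
    return sub_task["compute"]()

task = {"sub_tasks": [
    {"name": "displacement", "compute": lambda: 12.5},
    {"name": "direction", "compute": lambda: 90.0},
    {"name": "brightness", "compute": lambda: 0.8},
]}

# One worker per container node; map preserves sub-task order.
with ThreadPoolExecutor(max_workers=3) as pool:
    sub_results = list(pool.map(run_in_container, decompose(task)))

result = dict(zip([t["name"] for t in task["sub_tasks"]], sub_results))
```

Because the sub-tasks run concurrently, the wall-clock time approaches that of the slowest sub-task rather than the sum of all of them, which is the speed-up the scheme relies on.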
In an embodiment of the invention, the edge computing device can be placed on a carrier that meets business needs, for example in a driverless car or on a smart street lamp.
In an embodiment of the invention, as shown in Fig. 6, the setup module 501 may include a construction unit 5011 and a connection unit 5012;
the construction unit 5011 is used to build an algorithm library containing at least one algorithm model and to build multiple container nodes, where the multiple container nodes are connected in a chain;
the connection unit 5012 is used to connect each container node to the algorithm library.
In an embodiment of the invention, as shown in Fig. 6, the computing module 504 may include an input unit 5041, a computing unit 5042 and an aggregation unit 5043;
the input unit 5041 is used to input each sub-calculation task into one target container node among the multiple container nodes;
the computing unit 5042 is used to perform simultaneously, with each target container: calling at least one target algorithm model in the algorithm library to calculate the input sub-calculation task, obtaining the corresponding sub-calculation result;
the aggregation unit 5043 is used to aggregate the sub-calculation results obtained by the target containers, obtaining the calculation result of the calculation task.
In an embodiment of the invention, the computing unit 5042 is used to determine at least one sub-calculation task parameter corresponding to the input sub-calculation task; to find and call at least one target algorithm model matching the at least one sub-calculation task parameter in the algorithm library; and to perform the calculation using the called at least one target algorithm model and the at least one sub-calculation task parameter, obtaining the sub-calculation result.
In an embodiment of the invention, as shown in Fig. 6, the aggregation unit 5043 may include a determination sub-unit 5043A and an aggregation sub-unit 5043B;
the determination sub-unit 5043A is used to select one target container among the target containers as the aggregation container, to converge the sub-calculation results of the target containers into the aggregation container, and to determine the correlations among the sub-calculation results using the preset aggregation tree structure;
the aggregation sub-unit 5043B is used to retrieve, using the aggregation container, an algorithm model that meets the aggregation requirement from the at least one algorithm model in the algorithm library, and to aggregate the sub-calculation results using the retrieved algorithm model according to the correlations, obtaining the calculation result.
In an embodiment of the invention, as shown in Fig. 7, when the calculation task includes at least one task type and the demand duration and task parameters corresponding to each task type,
the decomposing module 503 may include a tree decomposition unit 5031 and a task determination unit 5032;
the tree decomposition unit 5031 is used to decompose the calculation task layer by layer using a preset decomposition tree according to the at least one task type and the demand duration and task parameters corresponding to each task type, obtaining multiple nodes, where each node corresponds to a task to be calculated;
the task determination unit 5032 is used to take the nodes that have no successor nodes among the multiple nodes as target nodes, and to determine the task to be calculated corresponding to each target node as a sub-calculation task.
In an embodiment of the invention, as shown in Fig. 8, the edge computing device may further include an update module 601;
the update module 601 is used to send the calculation task received by the receiving module 502 to the external cloud, so that the cloud obtains a learning result model according to the calculation task; to receive the learning result model fed back by the cloud; and to update the chained container according to the learning result model.
An embodiment of the invention provides a readable medium including execution instructions; when the processor of a storage controller executes the execution instructions, the storage controller executes the calculation task processing method of any of the above embodiments.
An embodiment of the invention provides a storage controller including a processor, a memory and a bus; the memory is used to store execution instructions, and the processor is connected to the memory through the bus; when the storage controller runs, the processor executes the execution instructions stored in the memory, so that the storage controller executes the calculation task processing method of any of the above embodiments.
The information exchange among the units of the above device and their execution processes are based on the same concept as the method embodiments of the invention, so the details can be found in the description of the method embodiments and are not repeated here.
In conclusion following advantageous effect at least may be implemented in each embodiment of the present invention:
1, in embodiments of the present invention, chain task container is provided first, is then receiving terminal device transmission When calculating task, calculating task is decomposed into two or more sub- calculating tasks.And using the chain task container of setting to dividing Each sub- calculating task solved carries out parallel computation, obtains the result of calculation of calculating task.By above-mentioned it is found that in this programme In calculating task is decomposed, and parallel computation has been carried out to the multiple calculating subtasks decomposited, due to parallel computation The a large amount of calculating time can be saved, therefore, scheme provided in an embodiment of the present invention can improve the processing speed of calculating task.
2, in embodiments of the present invention, due to including multiple containers node and algorithms library in chain task container, because This, chain task container can not only input multiple sub- calculating tasks simultaneously, but also can also provide and execute each height calculating The algorithm model of task, so as to carry out timely processing to calculating task.
3, in embodiments of the present invention, each sub- calculating task is corresponding according to task type, each task type Demand duration and task parameters, and decomposition determination layer by layer is carried out using tree.Due to decomposable process clear logic, because The probability that this occurs omitting sub- calculating task is relatively low, and the determination of sub- calculating task is more accurate.
4, in embodiments of the present invention, calculating task is sent to high in the clouds, so that high in the clouds can be obtained according to calculating task Learning outcome model.When receiving the learning outcome model of high in the clouds feedback, using learning outcome model to chain task Container is updated.So that chain task container more can quickly handle calculating task.
It should be noted that relational terms such as first and second are used herein only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include" and "comprise" and any variants thereof are intended to be non-exclusive, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes it.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware under the instruction of a program, and the program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The storage medium includes various media that can store program code, such as ROM, RAM, magnetic disk or optical disk.
Finally, it should be noted that the above are merely preferred embodiments of the invention, intended only to illustrate its technical scheme and not to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is included within its scope of protection.

Claims (10)

1. A calculation task processing method, characterized in that it is applied to an edge computing device in which
a chained task container is set up;
the method further includes:
receiving a calculation task sent by an external terminal device;
decomposing the calculation task into at least two sub-calculation tasks;
performing parallel computation on the at least two sub-calculation tasks using the chained task container to obtain the calculation result of the calculation task.
2. The method according to claim 1, characterized in that
setting up the chained task container includes:
building an algorithm library containing at least one algorithm model;
building multiple container nodes, wherein the multiple container nodes are connected in a chain;
connecting each container node to the algorithm library.
3. The method according to claim 2, characterized in that
performing parallel computation on the at least two sub-calculation tasks using the chained task container to obtain the calculation result of the calculation task includes:
inputting each sub-calculation task into one target container node among the multiple container nodes;
performing simultaneously, with each target container: calling at least one target algorithm model in the algorithm library to calculate the input sub-calculation task, obtaining the corresponding sub-calculation result;
aggregating the sub-calculation results obtained by the target containers to obtain the calculation result of the calculation task.
4. The method according to claim 3, characterized in that
calling at least one target algorithm model in the algorithm library to calculate the input sub-calculation task and obtain the corresponding sub-calculation result includes:
determining at least one sub-calculation task parameter corresponding to the input sub-calculation task;
finding and calling at least one target algorithm model matching the at least one sub-calculation task parameter in the algorithm library;
performing the calculation using the called at least one target algorithm model and the at least one sub-calculation task parameter to obtain the sub-calculation result;
and/or
aggregating the sub-calculation results obtained by the target containers to obtain the calculation result of the calculation task includes:
selecting one target container among the target containers as an aggregation container;
converging the sub-calculation results of the target containers into the aggregation container, and determining the correlations among the sub-calculation results using a preset aggregation tree structure;
retrieving, using the aggregation container, an algorithm model that meets the aggregation requirement from the at least one algorithm model in the algorithm library;
aggregating the sub-calculation results using the retrieved algorithm model according to the correlations to obtain the calculation result.
5. The method according to any one of claims 1 to 4, characterized in that
the calculation task includes at least one task type and the demand duration and task parameters corresponding to each task type;
decomposing the calculation task into at least two sub-calculation tasks includes:
decomposing the calculation task layer by layer using a preset decomposition tree according to the at least one task type and the demand duration and task parameters corresponding to each task type, obtaining multiple nodes, wherein each node corresponds to a task to be calculated;
taking the nodes that have no successor nodes among the multiple nodes as target nodes;
determining the task to be calculated corresponding to each target node as the sub-calculation task;
and/or
the method further includes:
sending the calculation task to an external cloud, so that the cloud obtains a learning result model according to the calculation task;
receiving the learning result model fed back by the cloud;
updating the chained task container according to the learning result model.
6. An edge computing device, characterized by including:
a setup module for setting up a chained task container;
a receiving module for receiving a calculation task sent by an external terminal device;
a decomposing module for decomposing the calculation task received by the receiving module into at least two sub-calculation tasks;
a computing module for performing parallel computation on the at least two sub-calculation tasks decomposed by the decomposing module using the chained task container set up by the setup module, obtaining the calculation result of the calculation task.
7. The edge computing device according to claim 6, characterized in that
the setup module includes a construction unit and a connection unit;
the construction unit is used to build an algorithm library containing at least one algorithm model and to build multiple container nodes, wherein the multiple container nodes are connected in a chain;
the connection unit is used to connect each container node to the algorithm library.
8. The edge computing device according to claim 7, characterized in that
the computing module includes an input unit, a computing unit and an aggregation unit;
the input unit is used to input each sub-calculation task into one target container node among the multiple container nodes;
the computing unit is used to perform simultaneously, with each target container: calling at least one target algorithm model in the algorithm library to calculate the input sub-calculation task, obtaining the corresponding sub-calculation result;
the aggregation unit is used to aggregate the sub-calculation results obtained by the target containers, obtaining the calculation result of the calculation task.
9. The edge computing device according to claim 8, characterized in that
the computing unit is used to determine at least one sub-calculation task parameter corresponding to the input sub-calculation task; to find and call at least one target algorithm model matching the at least one sub-calculation task parameter in the algorithm library; and to perform the calculation using the called at least one target algorithm model and the at least one sub-calculation task parameter, obtaining the sub-calculation result;
and/or
the aggregation unit includes a determination sub-unit and an aggregation sub-unit;
the determination sub-unit is used to select one target container among the target containers as an aggregation container, to converge the sub-calculation results of the target containers into the aggregation container, and to determine the correlations among the sub-calculation results using a preset aggregation tree structure;
the aggregation sub-unit is used to retrieve, using the aggregation container, an algorithm model that meets the aggregation requirement from the at least one algorithm model in the algorithm library, and to aggregate the sub-calculation results using the retrieved algorithm model according to the correlations, obtaining the calculation result.
10. The edge computing device according to any one of claims 6 to 9, characterized in that
the calculation task includes at least one task type and the demand duration and task parameters corresponding to each task type;
the decomposing module includes a tree decomposition unit and a task determination unit;
the tree decomposition unit is used to decompose the calculation task layer by layer using a preset decomposition tree according to the at least one task type and the demand duration and task parameters corresponding to each task type, obtaining multiple nodes, wherein each node corresponds to a task to be calculated;
the task determination unit is used to take the nodes that have no successor nodes among the multiple nodes as target nodes, and to determine the task to be calculated corresponding to each target node as a sub-calculation task;
and/or
the edge computing device further includes an update module;
the update module is used to send the calculation task received by the receiving module to an external cloud, so that the cloud obtains a learning result model according to the calculation task; to receive the learning result model fed back by the cloud; and to update the chained task container according to the learning result model.
CN201810090281.2A 2018-01-30 2018-01-30 A kind of calculating task processing method and edge calculations equipment Pending CN108491253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810090281.2A CN108491253A (en) 2018-01-30 2018-01-30 A kind of calculating task processing method and edge calculations equipment


Publications (1)

Publication Number Publication Date
CN108491253A true CN108491253A (en) 2018-09-04

Family

ID=63343961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810090281.2A Pending CN108491253A (en) 2018-01-30 2018-01-30 A kind of calculating task processing method and edge calculations equipment

Country Status (1)

Country Link
CN (1) CN108491253A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109491790A (en) * 2018-11-02 2019-03-19 中山大学 Industrial Internet of Things edge calculations resource allocation methods and system based on container
CN109491793A (en) * 2018-11-15 2019-03-19 郑州云海信息技术有限公司 Method and device for business processing in cloud computing system
CN109829130A (en) * 2019-01-29 2019-05-31 武汉轻工大学 Double integral calculation method, device, terminal device and readable storage medium
CN109947339A (en) * 2019-03-28 2019-06-28 武汉轻工大学 Drawing method, device, equipment and storage medium for a parabolic cylinder
CN109948107A (en) * 2019-03-26 2019-06-28 武汉轻工大学 Area curved-surface integral calculation method, device, equipment and storage medium
CN109947398A (en) * 2019-03-25 2019-06-28 武汉轻工大学 Triple integral solving method, device, terminal device and readable storage medium
CN110022300A (en) * 2019-03-06 2019-07-16 百度在线网络技术(北京)有限公司 Edge calculations implementation method, equipment, system and storage medium
CN110580019A (en) * 2019-07-24 2019-12-17 浙江双一智造科技有限公司 edge calculation-oriented equipment calling method and device
CN110839220A (en) * 2019-10-28 2020-02-25 无锡职业技术学院 Distributed computing method and system based on wireless ad hoc network
CN110968051A (en) * 2018-09-28 2020-04-07 西门子股份公司 Method and engineering system for planning an automation system
CN112199196A (en) * 2020-10-21 2021-01-08 上海交通大学 Resource allocation method, medium and server

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070226743A1 (en) * 2006-03-27 2007-09-27 Fujitsu Limited Parallel-distributed-processing program and parallel-distributed-processing system
CN101308468A (en) * 2008-06-13 2008-11-19 南京邮电大学 Grid calculation environment task cross-domain control method
US20110041136A1 (en) * 2009-08-14 2011-02-17 General Electric Company Method and system for distributed computation
US8209703B2 (en) * 2006-12-08 2012-06-26 SAP France S.A. Apparatus and method for dataflow execution in a distributed environment using directed acyclic graph and prioritization of sub-dataflow tasks
CN102541880A (en) * 2010-12-17 2012-07-04 金蝶软件(中国)有限公司 Gantt chart generating method and system
CN106127678A (en) * 2016-06-23 2016-11-16 北京天文馆 The cluster platform for making system of digital ball-screen cinema and method of work
CN106886503A (en) * 2017-02-08 2017-06-23 无锡十月中宸科技有限公司 heterogeneous system, data processing method and device
CN107436806A (en) * 2016-05-27 2017-12-05 苏宁云商集团股份有限公司 A kind of resource regulating method and system
CN107608795A (en) * 2017-09-19 2018-01-19 百度在线网络技术(北京)有限公司 cloud computing method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070226743A1 (en) * 2006-03-27 2007-09-27 Fujitsu Limited Parallel-distributed-processing program and parallel-distributed-processing system
US8209703B2 (en) * 2006-12-08 2012-06-26 SAP France S.A. Apparatus and method for dataflow execution in a distributed environment using directed acyclic graph and prioritization of sub-dataflow tasks
CN101308468A (en) * 2008-06-13 2008-11-19 南京邮电大学 Grid calculation environment task cross-domain control method
US20110041136A1 (en) * 2009-08-14 2011-02-17 General Electric Company Method and system for distributed computation
CN102541880A (en) * 2010-12-17 2012-07-04 金蝶软件(中国)有限公司 Gantt chart generating method and system
CN107436806A (en) * 2016-05-27 2017-12-05 苏宁云商集团股份有限公司 A kind of resource regulating method and system
CN106127678A (en) * 2016-06-23 2016-11-16 北京天文馆 The cluster platform for making system of digital ball-screen cinema and method of work
CN106886503A (en) * 2017-02-08 2017-06-23 无锡十月中宸科技有限公司 heterogeneous system, data processing method and device
CN107608795A (en) * 2017-09-19 2018-01-19 Baidu Online Network Technology (Beijing) Co., Ltd. Cloud computing method and device

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110968051A (en) * 2018-09-28 2020-04-07 西门子股份公司 Method and engineering system for planning an automation system
CN109491790B (en) * 2018-11-02 2021-08-27 中山大学 Container-based industrial Internet of things edge computing resource allocation method and system
CN109491790A (en) * 2018-11-02 2019-03-19 中山大学 Industrial Internet of Things edge calculations resource allocation methods and system based on container
CN109491793A (en) * 2018-11-15 2019-03-19 郑州云海信息技术有限公司 Method and device for business processing in cloud computing system
CN109829130A (en) * 2019-01-29 2019-05-31 武汉轻工大学 Double integral calculation method, device, terminal device and readable storage medium storing program for executing
CN110022300A (en) * 2019-03-06 2019-07-16 百度在线网络技术(北京)有限公司 Edge calculations implementation method, equipment, system and storage medium
CN109947398A (en) * 2019-03-25 2019-06-28 武汉轻工大学 Triple integral method for solving, device, terminal device and readable storage medium storing program for executing
CN109947398B (en) * 2019-03-25 2020-12-25 武汉轻工大学 Triple integral solving method and device, terminal equipment and readable storage medium
CN109948107A (en) * 2019-03-26 2019-06-28 武汉轻工大学 Area calculation of curved surface integral method, apparatus, equipment and storage medium
CN109948107B (en) * 2019-03-26 2023-05-12 武汉轻工大学 Area curved surface integral calculation method, device, equipment and storage medium
CN109947339A (en) * 2019-03-28 2019-06-28 武汉轻工大学 Method for drafting, device, equipment and the storage medium of parabolic cylinder
CN109947339B (en) * 2019-03-28 2020-10-23 武汉轻工大学 Drawing method, device and equipment of parabolic cylinder and storage medium
CN110580019A (en) * 2019-07-24 2019-12-17 浙江双一智造科技有限公司 edge calculation-oriented equipment calling method and device
CN110580019B (en) * 2019-07-24 2021-03-02 湖州因迈科技有限公司 Edge calculation-oriented equipment calling method and device
CN110839220A (en) * 2019-10-28 2020-02-25 无锡职业技术学院 Distributed computing method and system based on wireless ad hoc network
CN110839220B (en) * 2019-10-28 2022-12-20 无锡职业技术学院 Distributed computing method based on wireless ad hoc network
CN112199196B (en) * 2020-10-21 2022-03-18 上海交通大学 Resource allocation method, medium and server
CN112199196A (en) * 2020-10-21 2021-01-08 上海交通大学 Resource allocation method, medium and server

Similar Documents

Publication Publication Date Title
CN108491253A (en) A computing task processing method and edge computing device
CN108122027B (en) Training method, device and chip of neural network model
US20220363259A1 (en) Method for generating lane changing decision-making model, method for lane changing decision-making of unmanned vehicle and electronic device
CN108122032A (en) A neural network model training method, device, chip and system
CN105897584B (en) Path planning method and controller
Long et al. Modeling and distributed simulation of supply chain with a multi-agent platform
CN111752302B (en) Path planning method and device, electronic equipment and computer readable storage medium
CN107895225A (en) A conflict-free cooperative task allocation method for multi-agent systems
US11295254B2 (en) Flexible product manufacturing planning
CN112100155A (en) Cloud edge cooperative digital twin model assembling and fusing method
CN104217258A (en) Conditional density prediction method for power load
CN109993205A (en) Time series forecasting method, device, readable storage medium and electronic equipment
CN109086936B (en) Production system resource allocation method, device and equipment for intelligent workshop
CN104778254B (en) A distributed system and method for automatic non-parametric topic annotation
CN114693036A (en) Intelligent management method, device and medium for building system based on knowledge graph
CN106228029B (en) Crowdsourcing-based quantification problem solving method and device
CN108873737B (en) Automatic sorting control and decision-making system based on M-HSTPN model
CN116414094A (en) Intelligent scheduling method and system for welding assembly
CN115062868B (en) Pre-polymerization type vehicle distribution path planning method and device
CN116430805A (en) Workshop scheduling control method and device, production line and working machine
CN115471124A (en) Driving scheduling method, system, equipment and medium based on deep reinforcement learning
CN115437321A (en) Micro-service-multi-agent factory scheduling model based on deep reinforcement learning network
CN112613830A (en) Material storage center site selection method
CN111160831B (en) Task generation method and device for intensive warehouse and electronic equipment
CN103345672B (en) Vehicle-mounted task control system for container yard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180904)