CN108540568B - Computing capacity sharing method and intelligent equipment - Google Patents
- Publication number
- CN108540568B (application CN201810367000.3A)
- Authority
- CN
- China
- Prior art keywords
- intelligent
- subtask
- computing power
- events
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
Abstract
The embodiment of the invention relates to the technical field of the Internet and discloses a computing power sharing method and an intelligent device. In the embodiment of the invention, the computing capacity sharing method is applied to an intelligent device, which is an intelligent node in a distributed database; the distributed database comprises M intelligent nodes, and each intelligent node is a blockchain node. The method comprises the following steps: when a task event is received, estimating the processing time required to complete the task event; if the processing time is longer than a preset duration, acquiring N intelligent nodes meeting a preset condition, wherein the preset condition at least includes that the intelligent node is in an idle state, and N is less than or equal to M; and disassembling the task event into N subtask events and distributing the subtask events to the intelligent nodes, the N subtask events corresponding to the N intelligent nodes one to one. By adopting this implementation of the invention, the dependence on a background server when processing large-scale computation task events can be eliminated, and the computation cost is reduced.
Description
Technical Field
The embodiment of the invention relates to the technical field of Internet, in particular to a computing capacity sharing method and intelligent equipment.
Background
With the continuous development of science and communication technology, more and more intelligent devices have entered people's daily lives. To meet growing application requirements, intelligent devices offer ever more functions, and their demand for computing power increases accordingly.
However, the applicant of the present patent application has found that the prior art has at least the following drawbacks:
the computing power of an intelligent device in the prior art is determined by its hardware resources and is therefore relatively limited. In practice, when an intelligent device processes task events requiring large-scale computation, its hardware resources may be occupied for a long time, leaving it unable to respond to other application requirements of the user. To avoid this, technicians usually send such task events to a background server, which computes the result and returns it to the intelligent device. Although this prevents the device's hardware resources from being occupied for a long time, it incurs a certain cost and increases the computation expense. Moreover, the configuration of the background server itself may affect the response speed and the computation result.
Disclosure of Invention
The embodiment of the invention aims to provide a computing power sharing method and intelligent equipment, which can solve the problem of dependence on a background server when processing a task event of large-scale computing and reduce computing cost.
In order to solve the above technical problem, an embodiment of the present invention provides a computing power sharing method applied to an intelligent device, where the intelligent device is an intelligent node in a distributed database, the distributed database includes M intelligent nodes, and each intelligent node is a blockchain node; M is a positive integer greater than 1;
the method comprises the following steps:
when a task event is received, estimating the processing time required for completing the task event;
if the processing time is longer than the preset duration, acquiring N intelligent nodes meeting the preset condition; wherein the preset conditions at least include: the intelligent node is in an idle state; n is less than or equal to M and is a positive integer greater than 0;
the task events are disassembled into N subtask events, and the subtask events are distributed to the intelligent nodes; and the N subtask events correspond to the N intelligent nodes one to one.
An embodiment of the present invention further provides an intelligent device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described computing power sharing method.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the above-described computing power sharing method.
Compared with the prior art, in the embodiment of the invention a plurality of intelligent devices serve as blockchain nodes and form a distributed database. When an intelligent device receives a task event, and the processing time estimated for that task event is longer than the preset duration, the task event is a large-scale computation task for that device. The device receiving the task event then disassembles the task event and distributes the parts to other intelligent nodes in the distributed database that are in an idle state, so that the intelligent devices corresponding to those nodes process them. Thus, when a large-scale computation task event is encountered, other idle intelligent nodes can be called upon to process it, so no background server needs to be provided: the dependence on a background server when processing large-scale computation task events is removed, the computation cost is reduced, and the utilization of hardware resources is improved.
In addition, disassembling the task event into N subtask events and distributing the subtask events to the intelligent nodes specifically includes: acquiring the computing capability of each of the N intelligent nodes; disassembling from the task event, according to each intelligent node's computing capability, a subtask event corresponding to that capability; and distributing the subtask event to that intelligent node. In this way, each idle intelligent node receives a subtask event matched to its own computing capability, and the utilization of hardware resources is high.
In addition, disassembling the task event into N subtask events specifically includes: acquiring the computing capability of each of the N intelligent nodes; taking the lowest of the N computing capabilities and setting a reference computing capability according to it, where the reference computing capability is less than or equal to the lowest computing capability; and disassembling from the task event N subtask events corresponding to the reference computing capability. In this way, a certain margin of computing capability is reserved for the idle intelligent nodes, providing a basis for those nodes to respond to other application requirements of users.
In addition, after the task event is disassembled into N subtask events and the subtask events are distributed to the intelligent nodes, the method further includes: counting the number of subtask events completed by each of the M intelligent nodes; and setting a priority for each intelligent node according to the number of subtask events it has completed, where the greater the number, the higher the priority. In this way, the intelligent nodes in the distributed database can be ranked, providing more reference information for the subsequent distribution of subtask events.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements and in which the figures are not to scale unless otherwise specified.
FIG. 1 is a detailed flow diagram of a computing power sharing method according to a first embodiment;
FIG. 2 is a detailed flow diagram of a computing power sharing method according to a third embodiment;
fig. 3 is a schematic diagram of a smart device according to a fourth embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to provide a better understanding of the present application; the technical solution claimed in the present application can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the invention relates to a computing power sharing method, the specific flow of which is shown in FIG. 1. The computing power sharing method in this embodiment runs on an intelligent device, which may be a mobile terminal such as a mobile phone, a computer, or a tablet, or a smart-home device such as a smart doorbell or a smart refrigerator. The intelligent device in this embodiment forms a distributed database together with a plurality of other intelligent devices; it is an intelligent node in the distributed database, and the intelligent node is a blockchain node.
Step 101: when a task event is received, estimate the processing time required to complete the task event.

Specifically, when receiving a task event, the intelligent device estimates the amount of computation the task event requires and obtains the computing speed available from the hardware resources currently in an idle state. The intelligent device then predicts the processing time required to complete the task event from the estimated amount of computation and the obtained computing speed. For example, the intelligent device may take the ratio of the estimated amount of computation to the obtained computing speed as the processing time required to complete the task event.
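The ratio just described can be sketched as follows; the function name, operation counts, and speed units are illustrative assumptions, not part of the patent:

```python
# Minimal sketch of the processing-time estimate: the node divides the
# estimated amount of computation by the computing speed available from its
# idle hardware resources. Names and units are hypothetical.
def estimate_processing_time(estimated_operations: float, idle_compute_speed: float) -> float:
    """Predicted processing time = amount of computation / computing speed."""
    if idle_compute_speed <= 0:
        raise ValueError("computing speed must be positive")
    return estimated_operations / idle_compute_speed

# e.g. 2e9 operations at 5e8 operations per second -> an estimated 4 seconds
print(estimate_processing_time(2e9, 5e8))  # 4.0
```

The result is then compared against the preset duration in step 102.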
Step 102: determine whether the processing time is longer than the preset duration. If yes, go to step 103; otherwise, end the process.
Specifically, the preset duration may be set in advance by a technician and stored in the intelligent device. More specifically, after the intelligent device obtains the preset duration, it can distribute the stored preset duration to each node in the distributed database, so that the intelligent device of every node obtains it. A user therefore does not need to set and store the preset duration on each intelligent device separately, which simplifies user operation and raises the degree of intelligence.
Step 103: acquire N intelligent nodes meeting the preset condition.
In this embodiment, the preset condition is that the intelligent node is in an idle state. For example, suppose the distributed database includes 6 intelligent nodes: intelligent node A, intelligent node B, intelligent node C, intelligent node D, intelligent node E, and intelligent node F. If the intelligent node receiving the task event is intelligent node A, and the intelligent nodes currently in an idle state are intelligent node C, intelligent node D, and intelligent node E, then step 103 acquires the 3 intelligent nodes meeting the preset condition: intelligent node C, intelligent node D, and intelligent node E.
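The idle-node filter of step 103 can be sketched for the six-node example above; the `nodes` records and the `idle_nodes` helper are illustrative assumptions:

```python
# Illustrative filter for step 103: from the distributed database's node set,
# keep the nodes (other than the requester) that are in an idle state.
nodes = {
    "A": {"idle": False},  # the node that received the task event
    "B": {"idle": False},
    "C": {"idle": True},
    "D": {"idle": True},
    "E": {"idle": True},
    "F": {"idle": False},
}

def idle_nodes(all_nodes: dict, requester: str) -> list:
    """Return the names of idle nodes, excluding the requesting node."""
    return sorted(name for name, info in all_nodes.items()
                  if name != requester and info["idle"])

print(idle_nodes(nodes, "A"))  # ['C', 'D', 'E'], so N = 3
```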
Step 104: disassemble the task event into N subtask events and distribute the subtask events to the intelligent nodes.
Specifically, when disassembling the task event into N subtask events, the intelligent device may obtain the computing capability of each of the N intelligent nodes, take the lowest of the N computing capabilities, set a reference computing capability according to that lowest capability, and disassemble from the task event N subtask events corresponding to the reference computing capability. In this way, the idle intelligent nodes retain a certain margin of computing capability while processing the subtask events, providing a basis for responding to other application requirements of the user.

More specifically, the reference computing capability is less than or equal to the lowest computing capability. When setting the reference computing capability, the intelligent device may obtain a preset ratio and take that ratio of the lowest computing capability as the reference computing capability. If the preset ratio is 80%, the reference computing capability is 80% of the lowest computing capability. The preset ratio may be input in advance by a technician and stored in the intelligent device.
The following illustrates an example of splitting a task event into N subtask events by the intelligent device in this embodiment:
if the intelligent device is an intelligent node a, the 3 intelligent nodes meeting the preset conditions obtained in step 103 are: intelligent node C, intelligent node D, intelligent node E. The computing capacity of the intelligent node C is greater than or equal to the processing capacity of the intelligent node D; the computing power of the intelligent node D is larger than or equal to that of the intelligent node E. The intelligent device obtains the computing power of the 3 intelligent nodes, namely the intelligent node C, the intelligent node D and the intelligent node E, and obtains the lowest computing power of the three intelligent nodes: the computing power of the intelligent node E. The intelligent device may set the reference computation capability to 80% of the computation capability of the intelligent node E, thereby disassembling 3 subtask events corresponding to the reference computation capability from the task events.
Compared with the prior art, in this implementation of the invention a plurality of intelligent devices serve as blockchain nodes and form a distributed database. When an intelligent device receives a task event whose estimated processing time is longer than the preset duration, the task event is a large-scale computation task for that device. The device receiving the task event then disassembles the task event and distributes the parts to other intelligent nodes in the distributed database that are in an idle state, so that the intelligent devices corresponding to those idle nodes can process them. Thus, when a large-scale computation task event is encountered, other idle intelligent nodes can be called upon to process it, so no background server needs to be provided: the dependence on a background server when processing large-scale computation task events is removed, the computation cost is reduced, and the utilization of hardware resources is improved.
A second embodiment of the present invention relates to a computing power sharing method. The second embodiment is substantially the same as the first embodiment, and mainly differs therefrom in that: the second embodiment of the invention provides another specific implementation form for the intelligent device to disassemble the task event into N subtask events, thereby increasing the flexibility of the embodiment of the invention.
In this embodiment, when the intelligent device disassembles the task event into N subtask events, it may obtain the computing capability of each of the N intelligent nodes, disassemble from the task event a subtask event corresponding to each intelligent node's computing capability, and distribute that subtask event to the corresponding node.
Specifically, assume the intelligent device is intelligent node A, and the 3 acquired intelligent nodes meeting the preset condition are intelligent node C, intelligent node D, and intelligent node E, whose computing capabilities are denoted c, d, and e respectively. According to computing capability c, the intelligent device disassembles from the task event a subtask event 1 corresponding to c and distributes subtask event 1 to intelligent node C; according to computing capability d, it disassembles a subtask event 2 corresponding to d and distributes subtask event 2 to intelligent node D; and according to computing capability e, it disassembles a subtask event 3 corresponding to e and distributes subtask event 3 to intelligent node E. In this way, each idle intelligent node receives a subtask event matched to its own computing capability, so the hardware resources of each idle intelligent node are fully utilized.
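The proportional split of this embodiment can be sketched as follows; the capability values and the total-work figure are illustrative assumptions:

```python
# Sketch of the second embodiment's split: each idle node receives a subtask
# event sized in proportion to its own computing capability, so more capable
# nodes receive larger shares of the task event.
def proportional_split(total_work: float, capabilities: dict) -> dict:
    """Split total_work among nodes in proportion to their capabilities."""
    total_capability = sum(capabilities.values())
    return {node: total_work * cap / total_capability
            for node, cap in capabilities.items()}

# Hypothetical capabilities c : d : e = 3 : 2 : 1 and 600 units of work
shares = proportional_split(600.0, {"C": 3.0, "D": 2.0, "E": 1.0})
print(shares)  # {'C': 300.0, 'D': 200.0, 'E': 100.0}
```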
The third embodiment of the invention relates to a computing power sharing method, and the specific flow is shown in fig. 2. The third embodiment of the present invention is an improvement of the first or second embodiment, and the main improvement is that: in the third embodiment of the present invention, each intelligent node in the distributed database is also ranked, so as to provide more reference information for subsequent subtask event distribution. The following is specifically described:
and step 205, counting the number of subtask events completed by each intelligent node in the M intelligent nodes.
Specifically, the number of subtask events completed by the intelligent node is the number of all subtask events completed historically after the intelligent node joins the distributed database.
Step 206: set a priority for the intelligent node according to the number of subtask events it has completed.
Specifically, a correspondence between number intervals and priorities is preset in the intelligent device, and the intelligent device sets, for each intelligent node, the priority corresponding to the number interval into which that node's count of completed subtask events falls. For example, the correspondence between number intervals and priorities may be as in the following table one:
table one:
and if the number of the subtask events completed by the intelligent node is 1, the priority of the intelligent node is the priority I. If the number of the subtask events completed by the intelligent node is 150, the priority of the intelligent node is the priority II. And if the number of the subtask events completed by the intelligent node is 1000, the priority of the intelligent node is the priority III. The table is only an exemplary illustration of the correspondence relationship between the number intervals and the priorities, and the correspondence relationship between the number intervals and the priorities is not limited in this embodiment.
In this embodiment, the greater the number of subtask events a node has completed, the higher its priority, and the preset condition may further include that the node's priority is lower than or equal to the priority of the intelligent node where the intelligent device is located. In this way, task events from intelligent nodes with higher participation and activity in the distributed database are processed preferentially, which can increase the enthusiasm of each intelligent node to participate.
The steps of the above methods are divided as they are for clarity of description. In implementation, they may be combined into one step, or a single step may be split into multiple steps; as long as the same logical relationship is preserved, such variants fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes, without altering the core design also falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to a smart device, as shown in fig. 3, including: at least one processor 301; and a memory 302 communicatively coupled to the at least one processor 301; wherein the memory 302 stores instructions executable by the at least one processor 301, the instructions being executable by the at least one processor 301 to enable the at least one processor 301 to perform the computing power sharing method of the above method embodiments.
The memory 302 and the processor 301 are connected by a bus, which may comprise any number of interconnected buses and bridges linking together various circuits of the processor 301 and the memory 302. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium through an antenna, which further receives data and transmits it to the processor 301.
The processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 302 may be used to store data used by the processor 301 in performing operations.
Compared with the prior art, the implementation mode of the invention can call other idle intelligent nodes to perform calculation processing on the task event when the task event of large-scale calculation is encountered, so that a background server is not required to be arranged, the dependence on the background server when the task event of large-scale calculation is processed is solved, the calculation cost is reduced, and the utilization rate of hardware resources is improved.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
Compared with the prior art, the implementation mode of the invention can call other idle intelligent nodes to perform calculation processing on the task event when the task event of large-scale calculation is encountered, so that a background server is not required to be arranged, the dependence on the background server when the task event of large-scale calculation is processed is solved, the calculation cost is reduced, and the utilization rate of hardware resources is improved.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for practicing the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.
Claims (6)
1. A computing power sharing method, applied to an intelligent device, wherein the intelligent device is an intelligent node in a distributed database, the distributed database comprises M intelligent nodes, and each intelligent node is a blockchain node; M is a positive integer greater than 1;
the method comprises the following steps:
when a task event is received, estimating the processing time required for completing the task event;
if the processing time is longer than the preset time, acquiring N intelligent nodes meeting preset conditions; wherein the preset conditions at least include: the intelligent node is in an idle state; n is less than or equal to M and is a positive integer greater than 0;
decomposing the task event into N subtask events, and distributing the subtask events to the intelligent nodes; the N subtask events correspond to the N intelligent nodes one to one;
counting the number of subtask events completed by each intelligent node in the M intelligent nodes;
setting a priority for the intelligent node according to the number of subtask events completed by the intelligent node, wherein the greater the number of subtask events completed, the higher the priority; and the preset conditions further include: the priority is lower than or equal to the priority of the intelligent node where the intelligent device is located.
2. The computing power sharing method according to claim 1, wherein the disassembling the task event into N subtask events and distributing the subtask events to the intelligent node specifically includes:
acquiring the computing power of each intelligent node in the N intelligent nodes;
according to the computing power of the intelligent node, a subtask event corresponding to the computing power is disassembled from the task events;
and distributing the subtask event to the intelligent node.
3. The computing power sharing method according to claim 1, wherein the splitting the task event into N subtask events specifically includes:
acquiring the computing power of each intelligent node in the N intelligent nodes;
acquiring the lowest computing power of the N computing powers, and setting a reference computing power according to the lowest computing power; wherein the reference computing capacity is less than or equal to the minimum computing capacity;
and N subtask events corresponding to the reference computing capacity are disassembled from the task events.
4. The computing power sharing method according to claim 3, wherein the setting of the reference computing power according to the minimum computing power specifically includes:
acquiring a preset proportion;
taking the lowest computing power of the preset proportion as the reference computing power.
5. A smart device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the computing power sharing method of any of claims 1 to 4.
6. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the computing power sharing method of any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810367000.3A CN108540568B (en) | 2018-04-23 | 2018-04-23 | Computing capacity sharing method and intelligent equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108540568A CN108540568A (en) | 2018-09-14 |
CN108540568B true CN108540568B (en) | 2021-06-01 |
Family
ID=63477444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810367000.3A Active CN108540568B (en) | 2018-04-23 | 2018-04-23 | Computing capacity sharing method and intelligent equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108540568B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109144969A (en) * | 2018-10-09 | 2019-01-04 | 上海点融信息科技有限责任公司 | For the data processing method of block chain network system, device and storage medium |
CN109327529B (en) * | 2018-10-31 | 2022-02-25 | 北京知道创宇信息技术股份有限公司 | Distributed scanning method and system |
CN109873868A (en) * | 2019-03-01 | 2019-06-11 | 深圳市网心科技有限公司 | A kind of computing capability sharing method, system and relevant device |
CN109960575B (en) * | 2019-03-26 | 2023-09-15 | 深圳市网心科技有限公司 | Computing capacity sharing method, system and related equipment |
CN110147278A (en) * | 2019-04-08 | 2019-08-20 | 西安万像电子科技有限公司 | Data processing method and device |
CN110489488B (en) * | 2019-08-21 | 2021-06-15 | 腾讯科技(深圳)有限公司 | Data processing method and device |
CN113406894A (en) * | 2021-07-22 | 2021-09-17 | 深圳市伟峰科技有限公司 | Intelligent household control system, method, equipment and storage medium based on cloud computing |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104243579A (en) * | 2014-09-12 | 2014-12-24 | 清华大学 | Computational node control method and system applied to water conservancy construction site |
CN105912399A (en) * | 2016-04-05 | 2016-08-31 | 杭州嘉楠耘智信息科技有限公司 | Task processing method, device and system |
CN106033371A (en) * | 2015-03-13 | 2016-10-19 | 杭州海康威视数字技术股份有限公司 | Method and system for dispatching video analysis task |
CN106844018A (en) * | 2015-12-07 | 2017-06-13 | 阿里巴巴集团控股有限公司 | Task processing method, apparatus and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130081027A1 (en) * | 2011-09-23 | 2013-03-28 | Elwha LLC, a limited liability company of the State of Delaware | Acquiring, presenting and transmitting tasks and subtasks to interface devices |
- 2018-04-23: CN application CN201810367000.3A filed; patent CN108540568B (en), status active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104243579A (en) * | 2014-09-12 | 2014-12-24 | 清华大学 | Computational node control method and system applied to water conservancy construction site |
CN106033371A (en) * | 2015-03-13 | 2016-10-19 | 杭州海康威视数字技术股份有限公司 | Method and system for dispatching video analysis task |
CN106844018A (en) * | 2015-12-07 | 2017-06-13 | 阿里巴巴集团控股有限公司 | Task processing method, apparatus and system |
CN105912399A (en) * | 2016-04-05 | 2016-08-31 | 杭州嘉楠耘智信息科技有限公司 | Task processing method, device and system |
Also Published As
Publication number | Publication date |
---|---|
CN108540568A (en) | 2018-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108540568B (en) | Computing capacity sharing method and intelligent equipment | |
WO2021142609A1 (en) | Information reporting method, apparatus and device, and storage medium | |
CN107241281B (en) | Data processing method and device | |
CN104243405A (en) | Request processing method, device and system | |
CN106131185B (en) | Video data processing method, device and system | |
CN111078404B (en) | Computing resource determining method and device, electronic equipment and medium | |
CN112445857A (en) | Resource quota management method and device based on database | |
CN108924203B (en) | Data copy self-adaptive distribution method, distributed computing system and related equipment | |
CN104202305B (en) | Transcoding processing method, device and server | |
CN111131841A (en) | Live indirect access method and device, electronic equipment and storage medium | |
CN114816738A (en) | Method, device and equipment for determining calculation force node and computer readable storage medium | |
CN103731281A (en) | Frequency channel processing method and device | |
CN114155026A (en) | Resource allocation method, device, server and storage medium | |
CN109800261A (en) | Dynamic control method, device and the relevant device of double data library connection pool | |
CN115033352A (en) | Task scheduling method, device and equipment for multi-core processor and storage medium | |
CN112465615A (en) | Bill data processing method, device and system | |
CN113242149B (en) | Long connection configuration method, apparatus, device, storage medium, and program product | |
CN114546646A (en) | Processing method and processing apparatus | |
CN112615726B (en) | Low-power-consumption processing method and device with variable wake-up time | |
CN112817753A (en) | Task processing method and device, storage medium and electronic device | |
CN109308219B (en) | Task processing method and device and distributed computer system | |
CN104065684A (en) | Information processing method, electronic device and terminal device | |
CN112966005B (en) | Timing message sending method, device, computer equipment and storage medium | |
CN112770358B (en) | Multi-rate mode data transmission control method and device based on service data | |
CN114598662A (en) | Message queue cluster federal management system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||