CN115562824A - Computing resource cooperative scheduling system, method, device and storage medium - Google Patents
Computing resource cooperative scheduling system, method, device and storage medium
- Publication number
- CN115562824A
- Authority
- CN
- China
- Prior art keywords
- task
- computing power
- computing
- scheduling
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44594—Unloading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to the technical field of data processing and discloses a computing resource cooperative scheduling system, method, device, and storage medium. The system comprises a cloud server, edge nodes, and terminal equipment; the cloud server comprises a computing cluster and a scheduling module. The scheduling module monitors the remaining computing resources of the computing cluster and the edge nodes, and also receives task processing requests from the terminal equipment; a task processing request includes the total amount of data to be processed. The scheduling module estimates the computing demand from this total and judges whether the remaining computing resources of the edge node meet it. If they do, the task is offloaded to the edge node and the terminal equipment is scheduled to send the data to be processed to the edge node; if they do not, the task is offloaded to the computing cluster and the terminal equipment is scheduled to send the data to be processed to the computing cluster. The technical scheme of the invention improves task processing efficiency.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular to a computing resource cooperative scheduling system, method, device, and storage medium.
Background
With the continuous improvement of software and hardware and the vigorous development of artificial intelligence, application fields such as electric power operation inspection, safety supervision, and marketing have strong demand for data applications involving images, video, speech, language, and text. For example, in power transmission and transformation inspection, operation and maintenance are still performed mainly by hand; image recognition and related technologies improve inspection efficiency and keep equipment running reliably. Safety supervision currently depends on manual oversight, and video and image processing improve the safety of on-site operators. Customer service relies mainly on human agents to handle customers' electricity-consumption problems; speech recognition and natural language processing are needed to realize intelligent customer service, improving the user experience and reducing operating costs.
Artificial intelligence needs the support of business drivers and data, as well as professional algorithms and large-scale computing power. Only by integrating company-level AI sample resources, planning heterogeneous computing power comprehensively, and coordinating the relationships between AI capability and the business systems, data center, business center, and Internet-of-Things management platform can AI be applied effectively across business fields, supporting the artificial intelligence image-technology requirements of different units, business domains, and scenarios, so that each unit can apply AI at minimum cost and with maximum convenience.
However, when the large amount of data generated by various devices is uploaded to the cloud and processed there by AI models, great pressure is placed on the cloud. Both the upload of data and the delivery of processing results are strongly affected by network conditions: delay grows with the physical distance between the cloud and the devices, and communication costs rise accordingly.
Therefore, a computing resource cooperative scheduling system, method, apparatus, and storage medium that improve task processing efficiency are needed.
Disclosure of Invention
One object of the present invention is to provide a computing resource cooperative scheduling system capable of improving task processing efficiency.
To solve this technical problem, the present application provides the following technical solution:
a computing resource cooperative scheduling system comprises a cloud server, edge nodes, and terminal equipment; the cloud server comprises a computing cluster and a scheduling module;
the scheduling module is used for monitoring the remaining computing resources of the computing cluster and the edge nodes, and is further used for receiving task processing requests from the terminal equipment; a task processing request includes the total amount of data to be processed;
the scheduling module is used for estimating the computing demand from the total amount of data; judging whether the remaining computing resources of the edge node meet the demand, and, if they do, offloading the task to the edge node and scheduling the terminal equipment to send the data to be processed to the edge node; and, if they do not, offloading the task to the computing cluster and scheduling the terminal equipment to send the data to be processed to the computing cluster.
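The basic offload decision above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; the demand-estimation factor, the resource units, and the function names are assumptions.

```python
# Illustrative sketch of the scheduling module's offload decision.
# The ops-per-unit factor and resource units are assumptions, not
# values taken from the patent.

def estimate_demand(total_data: float, ops_per_unit: float = 2.5) -> float:
    """Estimate the computing demand from the total amount of data."""
    return total_data * ops_per_unit

def dispatch(total_data: float, edge_free: float) -> str:
    """Offload to the edge node when its remaining computing resources
    cover the estimated demand; otherwise offload to the cloud cluster."""
    demand = estimate_demand(total_data)
    return "edge" if demand <= edge_free else "cluster"
```

For example, a task with 10 units of data (estimated demand 25) would run on an edge node with 100 free units, but would be offloaded to the computing cluster if the edge node had only 20 free units.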
The principle and beneficial effects of this basic scheme are as follows:
by estimating each task's computing demand, computation tasks are allocated sensibly to the edge nodes or the cloud server. The low transmission cost at the edge side is fully exploited to complete part of the computation, reducing the amount of data transmitted and saving bandwidth and communication cost; meanwhile the strong computing power of the cloud server completes the remaining computation. This achieves terminal-edge-cloud three-level cooperation, efficient execution of the overall computing task, and load balancing across multiple computing devices.
Further, the scheduling module is also used for setting priorities for different terminal equipment. When the remaining computing resources of the edge node do not meet the computing demand, it judges whether the priority of the corresponding terminal equipment is high; if the priority is high, the task is offloaded to the computing cluster, and if it is low, the task is added to the processing queue of the current edge node.
Processing low-priority tasks at the edge nodes saves bandwidth and communication cost.
Further, the scheduling module is also used for monitoring the tasks being processed at the edge nodes; if the computing resources required by a running task exceed the total computing resources of the edge node, the task is offloaded to the computing cluster for execution.
This compensates for inaccurate estimates of computing demand and ensures that tasks are processed normally.
Further, the cloud server also comprises a model library and a service module;
a plurality of AI models are deployed in the model library;
the service module is used for receiving a task processing request from the terminal equipment and forwarding it to the scheduling module; the task processing request also includes a task type;
the scheduling module is further used for matching an AI model according to the task type; when the task is allocated to an edge node, the matched AI model is sent to the edge node through the service module.
The AI models are stored in the model library of the cloud server and delivered to the edge node only when needed, which helps ensure the models' security.
Further, the model library is also used for generating public and private keys and sending the private key to the edge node through the service module; the public key corresponding to the edge node that will process the task is determined, the matched AI model is encrypted with that public key, and the encrypted AI model is sent to the edge node through the service module.
This secures the AI model during transmission.
A second object of the present invention is to provide a computing resource cooperative scheduling method, which comprises the following steps:
s1, monitoring and calculating residual computing resources of a cluster and edge nodes in real time; receiving a task processing request of terminal equipment; the task processing request comprises the total data processing amount;
s2, estimating computing power demand according to the data processing total amount; judging whether the residual computing power resources of the edge nodes meet the computing power requirements, if so, jumping to S3, and if not, jumping to S5;
s3, unloading the task to the edge node;
s4, monitoring the tasks processed at the edge nodes, judging whether the computing power resource required by the current task is greater than the total computing power resource of the edge nodes, and if so, jumping to S5; if the number is not larger than the preset value, keeping the task running at the edge node;
and S5, unloading the task to the computing cluster.
In this scheme, by estimating each task's computing demand, computation tasks are allocated sensibly to the edge nodes or the cloud server. The low transmission cost at the edge side is fully exploited to complete part of the computation, reducing the amount of data transmitted and saving bandwidth and communication cost; meanwhile the strong computing power of the cloud server completes the remaining computation, achieving terminal-edge-cloud three-level cooperation, efficient execution of the overall computing task, and load balancing across multiple computing devices.
Further, in step S2, priorities are set for different terminal equipment. When the remaining computing resources of the edge node do not meet the computing demand, it is judged whether the priority of the corresponding terminal equipment is high; if so, jump to S5, and if the priority is low, add the task to the processing queue of the current edge node and jump to S3.
Processing low-priority tasks at the edge nodes saves bandwidth and communication cost.
Further, the method comprises step S0: deploying a plurality of AI models in a model library, generating public and private keys by the model library, and sending the private key to the edge node.
In step S3, the task processing request also includes a task type. Whether an AI model is needed is judged from the task type; if so, an AI model is matched according to the task type, the public key corresponding to the edge node that will process the task is determined, the matched AI model is encrypted with that public key, and the encrypted AI model is sent to the edge node. If no AI model is needed, jump directly to S4.
The AI models are stored in the model library of the cloud server and delivered to the edge node only when needed, which helps ensure the models' security.
A third object of the present invention is to provide a computing resource cooperative scheduling apparatus that uses the above system.
A fourth object of the present invention is to provide a storage medium storing a computer program which, when executed by a processor, implements the steps of the method described above.
Drawings
Fig. 1 is a flowchart of a computational resource cooperative scheduling method according to an embodiment.
Detailed Description
The following describes the invention in further detail through specific embodiments.
Embodiment 1
The computing resource cooperative scheduling system comprises a cloud server, edge nodes, and terminal equipment.
The cloud server comprises a model library, a service module, and a heterogeneous resource cluster; the heterogeneous resource cluster includes the computing cluster and the scheduling module. The computing cluster includes a number of CPUs and a number of GPUs.
The model library is deployed with a plurality of trained AI models, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), and support vector machines (SVMs). The model library is also used for generating public and private keys and sending the private key to the edge node through the service module.
The service module is used for receiving a task processing request from the terminal equipment and forwarding it to the scheduling module; the task processing request includes a task type and the total amount of data to be processed.
The scheduling module is used for monitoring the remaining computing resources of the computing cluster and the edge nodes, and for setting priorities for different terminal equipment; in this embodiment the priorities are high, medium, and low.
The scheduling module is also used for estimating the computing demand from the total amount of data. A given terminal device usually needs to process one or a few fixed task types, so from the task type and past processing experience a correspondence between the total amount of data and an interval of computing demand can be obtained and used as the basis for the estimate. The scheduling module further judges whether the remaining computing resources of the edge node meet the demand; if they do, the task is offloaded to the edge node and the terminal equipment is scheduled to send the data to be processed to the edge node. If they do not, the priority of the corresponding terminal equipment is judged: if the priority is low or medium and the estimated demand does not exceed the total computing resources of the edge node, the task is added to the processing queue of the current edge node;
if the priority is high, or the estimated demand exceeds the total computing resources of the edge node, the task is offloaded to the computing cluster and the terminal equipment is scheduled to send the data to be processed to the computing cluster.
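The embodiment's full decision path can be sketched as a single function. This is a minimal sketch under the assumption of abstract resource units; it is not the patent's implementation.

```python
def schedule(demand: float, edge_free: float, edge_total: float,
             priority: str) -> str:
    """Sketch of the embodiment's scheduling decision.
    priority is one of "high", "medium", "low"; resource values are in
    illustrative abstract units."""
    if demand <= edge_free:
        return "edge"            # enough spare capacity at the edge node
    if priority == "high" or demand > edge_total:
        return "cluster"         # offload to the cloud computing cluster
    return "edge_queue"          # low/medium priority waits in the edge queue
```

For instance, a medium-priority task whose demand exceeds the edge node's free capacity but not its total capacity joins the edge queue, while the same task at high priority goes straight to the cluster.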
The scheduling module is also used for monitoring the tasks being processed at the edge nodes; if the computing resources required by a running task exceed the total computing resources of the edge node, the task is offloaded to the computing cluster for execution.
The scheduling module is further used for matching an AI model according to the task type. When the task is allocated to an edge node, the model library determines the public key corresponding to that edge node, encrypts the matched AI model with the public key, and sends the encrypted model to the edge node through the service module. For example, if the task type is image recognition of foreign objects on a power line, a convolutional neural network model is matched. The edge node decrypts the AI model with its private key and runs it. The edge node also deletes the AI model at a set time after the model finishes running, for example at 24:00. This on the one hand protects the security of the AI model, and on the other hand means that once a newer AI model is deployed in the model library, the latest model will be synchronized to the edge.
This embodiment also provides a computing resource cooperative scheduling device that uses the above system.
As shown in Fig. 1, this embodiment also provides a computing resource cooperative scheduling method comprising the following steps:
s0, deploying a plurality of AI models in a model library, generating a public key and a private key by the model library, and sending the private key to an edge node;
s1, monitoring and calculating residual computing power resources of a cluster and edge nodes in real time; receiving a task processing request of terminal equipment; the task processing request comprises the total data processing amount and the task type;
s2, setting priorities for different terminal devices, and estimating computing power requirements according to the total data processing amount; judging whether the residual computing power resources of the edge nodes meet the computing power requirements or not, if so, jumping to S3, if not, judging whether the priority of the corresponding terminal equipment is high, and if so, jumping to S5; if the priority is low or medium, judging whether the estimated calculation power demand is less than or equal to the total calculation power resource of the edge node, if so, adding the task into a processing queue of the current edge node, and jumping to S3; if the estimated calculation power demand is larger than the total calculation power of the edge nodes, jumping to S5;
s3, unloading the task to the edge node; judging whether an AI model needs to be used according to the task type, if so, matching the AI model according to the task type, determining a public key according to an edge node of task processing, encrypting the matched AI model by using the public key, and sending the encrypted AI model to the edge node; if not, jumping to S4;
s4, monitoring the tasks processed at the edge nodes, judging whether the computing power resource required by the current task is greater than the total computing power resource of the edge nodes, and if so, jumping to S5; if not, keeping the task running at the edge node;
and S5, unloading the task to the computing cluster.
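The monitoring step S4 can be sketched as a pass over the edge node's running tasks. Task names and resource units below are illustrative assumptions.

```python
def monitor_edge_tasks(tasks, edge_total: float):
    """S4 sketch: split edge tasks into those that keep running at the
    edge and those whose required resources exceed the edge node's
    total and must be re-offloaded to the computing cluster.
    tasks is a list of (name, required_resources) tuples."""
    keep, offload = [], []
    for name, required in tasks:
        (offload if required > edge_total else keep).append(name)
    return keep, offload
```

A task that turns out to need more than the edge node's total capacity (an underestimated demand) is thus caught at runtime rather than left to stall.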
If the computing resource cooperative scheduling method is implemented as a software functional unit and sold or used as an independent product, it can be stored in a storage medium. On this understanding, all or part of the flow of the above method embodiments may be implemented by a computer program that instructs the relevant hardware; the program may be stored in a storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program code may be in source form, object form, an executable file, or some intermediate form. The readable medium may include any entity or device capable of carrying the program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random-access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The above are merely embodiments of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail; a person skilled in the art is aware of the common technical knowledge and prior art in this field before the application or priority date, can apply the conventional experimental means of that time, and can perfect and implement this scheme in light of the teaching of this application, so well-known structures or methods should not become an obstacle to implementation. Several changes and modifications may be made without departing from the structure of the invention; these also fall within the scope of protection and do not affect the effect or practicability of the patent. The scope of protection of this application shall be determined by the content of the claims, and the description of the embodiments in the specification may be used to interpret the claims.
Claims (10)
1. A computing resource cooperative scheduling system, characterized by comprising a cloud server, edge nodes, and terminal equipment, the cloud server comprising a computing cluster and a scheduling module;
the scheduling module is used for monitoring the remaining computing resources of the computing cluster and the edge nodes, and is further used for receiving a task processing request from the terminal equipment, the task processing request comprising the total amount of data to be processed;
the scheduling module is used for estimating the computing demand from the total amount of data; judging whether the remaining computing resources of the edge node meet the demand, and, if they do, offloading the task to the edge node and scheduling the terminal equipment to send the data to be processed to the edge node; and, if they do not, offloading the task to the computing cluster and scheduling the terminal equipment to send the data to be processed to the computing cluster.
2. The computing resource cooperative scheduling system according to claim 1, characterized in that the scheduling module is further used for setting priorities for different terminal equipment; when the remaining computing resources of the edge node do not meet the computing demand, judging whether the priority of the corresponding terminal equipment is high; if the priority is high, offloading the task to the computing cluster, and if the priority is low, adding the task to the processing queue of the current edge node.
3. The computing resource cooperative scheduling system according to claim 2, characterized in that the scheduling module is also used for monitoring tasks being processed at the edge nodes, and if the computing resources required by a running task exceed the total computing resources of the edge node, offloading the task to the computing cluster for execution.
4. The computing resource cooperative scheduling system according to claim 3, characterized in that the cloud server also comprises a model library and a service module;
a plurality of AI models are deployed in the model library;
the service module is used for receiving a task processing request from the terminal equipment and forwarding it to the scheduling module, the task processing request also comprising a task type;
the scheduling module is also used for matching an AI model according to the task type and, when the task is allocated to an edge node, sending the matched AI model to the edge node through the service module.
5. The computing resource cooperative scheduling system according to claim 4, characterized in that the model library is also used for generating public and private keys and sending the private key to the edge node through the service module; determining the public key corresponding to the edge node that will process the task, encrypting the matched AI model with that public key, and sending the encrypted AI model to the edge node through the service module.
6. A computing resource cooperative scheduling method, characterized by comprising the following steps:
S1, monitoring in real time the remaining computing resources of the computing cluster and the edge nodes; receiving a task processing request from the terminal equipment, the task processing request comprising the total amount of data to be processed;
S2, estimating the computing demand from the total amount of data; judging whether the remaining computing resources of the edge node meet the demand; if so, jumping to S3, and if not, jumping to S5;
S3, offloading the task to the edge node;
S4, monitoring the tasks processed at the edge node and judging whether the computing resources required by the current task exceed the total computing resources of the edge node; if so, jumping to S5, and if not, keeping the task running at the edge node;
S5, offloading the task to the computing cluster.
7. The computing resource cooperative scheduling method according to claim 6, characterized in that in step S2, priorities are set for different terminal equipment; when the remaining computing resources of the edge node do not meet the computing demand, it is judged whether the priority of the corresponding terminal equipment is high; if so, jump to S5, and if the priority is low, add the task to the processing queue of the current edge node and jump to S3.
8. The computing resource cooperative scheduling method according to claim 7, characterized by further comprising step S0: deploying a plurality of AI models in a model library, generating public and private keys by the model library, and sending the private key to the edge node;
in step S3, the task processing request also comprises a task type; whether an AI model is needed is judged from the task type; if so, an AI model is matched according to the task type, the public key corresponding to the edge node that will process the task is determined, the matched AI model is encrypted with that public key, and the encrypted model is sent to the edge node; if no AI model is needed, jump directly to S4.
9. A computing resource cooperative scheduling apparatus, characterized by using the system of any one of claims 1 to 5.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the steps of the method of any one of claims 6 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211160591.XA CN115562824A (en) | 2022-09-22 | 2022-09-22 | Computing resource cooperative scheduling system, method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115562824A true CN115562824A (en) | 2023-01-03 |
Family
ID=84741434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211160591.XA Pending CN115562824A (en) | 2022-09-22 | 2022-09-22 | Computing resource cooperative scheduling system, method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115562824A (en) |
Application Events
- 2022-09-22: Application CN202211160591.XA filed (CN); published as CN115562824A, status Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115981874A (en) * | 2023-03-20 | 2023-04-18 | 天津大学四川创新研究院 | Decentralized AI analysis and data storage method and system based on cloud edge cooperation |
CN115981874B (en) * | 2023-03-20 | 2023-06-13 | 天津大学四川创新研究院 | Decentralised AI analysis and data storage method and system based on cloud edge cooperation |
CN117851023A (en) * | 2023-03-29 | 2024-04-09 | 广州纳指数据智能科技有限公司 | Conversion method and system for computing power of high-performance computer group and local resources |
CN116385857A (en) * | 2023-06-02 | 2023-07-04 | 山东协和学院 | Calculation power distribution method based on AI intelligent scheduling |
CN116385857B (en) * | 2023-06-02 | 2023-08-18 | 山东协和学院 | Calculation power distribution method based on AI intelligent scheduling |
CN116662021A (en) * | 2023-08-01 | 2023-08-29 | 鹏城实验室 | Collaborative scheduling system and method based on end-edge cloud architecture |
CN116708445A (en) * | 2023-08-08 | 2023-09-05 | 北京智芯微电子科技有限公司 | Distribution method, distribution network system, device and storage medium for edge computing task |
CN116708445B (en) * | 2023-08-08 | 2024-05-28 | 北京智芯微电子科技有限公司 | Distribution method, distribution network system, device and storage medium for edge computing task |
CN117611096A (en) * | 2023-12-06 | 2024-02-27 | 广州市烨兴融集团有限公司 | Office data management method and system based on edge calculation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115562824A (en) | Computing resource cooperative scheduling system, method, device and storage medium | |
Xiao et al. | Distributed optimization for energy-efficient fog computing in the tactile internet | |
US10572828B2 (en) | Transfer learning and domain adaptation using distributable data models | |
US10140572B2 (en) | Memory bandwidth management for deep learning applications | |
CN108536650B (en) | Method and device for generating gradient lifting tree model | |
CN110856018B (en) | Rapid transcoding method and system in monitoring system based on cloud computing | |
CN110383245B (en) | Secure intelligent networking architecture with dynamic feedback | |
US20210042578A1 (en) | Feature engineering orchestration method and apparatus | |
CN111935140B (en) | Abnormal message identification method and device | |
CN110781180B (en) | Data screening method and data screening device | |
US20230132116A1 (en) | Prediction of impact to data center based on individual device issue | |
CN110570075A (en) | Power business edge calculation task allocation method and device | |
Li et al. | A novel genetic service function deployment management platform for edge computing | |
CN114422322A (en) | Alarm compression method, device, equipment and storage medium | |
CN108833588B (en) | Session processing method and device | |
Bulkan et al. | On the load balancing of edge computing resources for on-line video delivery | |
CN113014649A (en) | Cloud Internet of things load balancing method, device and equipment based on deep learning | |
CN110442786A (en) | A kind of method, apparatus, equipment and the storage medium of prompt information push | |
CN110782014A (en) | Neural network increment learning method and device | |
CN115550236A (en) | Data protection method for routing optimization of security middlebox resource pool | |
CN115543582A (en) | Method, system and equipment for unified scheduling of super computing power network | |
Huang et al. | Digital twin-assisted collaborative transcoding for better user satisfaction in live streaming | |
Naik et al. | ARMPC-ARIMA based prediction model for Adaptive Bitrate Scheme in Streaming | |
CN117076057B (en) | AI service request scheduling method, device, equipment and medium | |
CN116528255B (en) | Network slice migration method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||