CN112532751A - Method and system for scheduling distributed heterogeneous computing power of urban brain AI computing center - Google Patents

Info

Publication number
CN112532751A
Authority
CN
China
Prior art keywords
pod
computing
database
scheduling
distributed heterogeneous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110173976.9A
Other languages
Chinese (zh)
Other versions
CN112532751B (en)
Inventor
梅一多
何彬
谷雨明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongguancun Smart City Co Ltd
Original Assignee
Zhongguancun Smart City Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongguancun Smart City Co Ltd filed Critical Zhongguancun Smart City Co Ltd
Priority to CN202110173976.9A
Publication of CN112532751A
Application granted
Publication of CN112532751B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/104Peer-to-peer [P2P] networks
    • H04L67/1074Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a method for scheduling the distributed heterogeneous computing power of an urban brain AI computing center, comprising the following steps: processing a task request for AI computation submitted by a client and storing the Pod data in etcd, where, in some alternative embodiments, the task request conforms to the Restful API specification; monitoring resource changes and reacting to them; checking the database for changes and creating the desired number of Pod instances; calling the customized scheduling algorithm of the Kubernetes extended scheduler, checking the database again, assigning Pods not yet bound to a specific node to target nodes according to set rules, and updating the database records; and monitoring the database for changes and managing the subsequent Pod life cycle. The invention also discloses a scheduling system for the distributed heterogeneous computing power of an urban brain AI computing center. The scheduling method aims to solve the problems of insufficient centralized computing power, insufficient network transmission capability, and weak data privacy protection.

Description

Method and system for scheduling distributed heterogeneous computing power of urban brain AI computing center
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a method and a system for scheduling distributed heterogeneous computing power of an urban brain AI computing center.
Background
Taking new infrastructure construction as its entry point, the urban brain builds a city-level neural sensing network on technologies such as cloud computing, big data, and intelligent sensors, aggregates the various data resources a city generates, and uses artificial intelligence and blockchain technology to support application scenarios such as urban traffic control, public safety, emergency management, grid-based prevention and control, medical and health services, cultural tourism, environmental protection, and fine-grained urban management, thereby achieving refined, dynamic city management, relieving "big city diseases", and improving citizens' quality of life.
Digital technologies such as cloud computing and big data are the cornerstones of the urban brain. The traditional application scheme stores and deploys data and computing power on the government affairs network and the internet, and runs AI (Artificial Intelligence) training and inference tasks on a centralized computing cluster. This centralized cloud computing mode is feasible in the initial stage of urban brain construction, especially while the number of video cameras, voice circuits, and other intelligent sensors is small; but as construction advances and the volume of accessed data grows, insufficient network transmission capacity and insufficient centralized computing power inevitably follow.
With the advance of urban brain construction and the increase of data access amount, the centralized cloud computing mode faces the following problems:
1. problem of insufficient centralized computing power
As the volume of accessed data grows, so does the computing power demand on the AI cluster; when computing power falls short, the cluster must be expanded, which raises construction costs. Centralized computing power alone is far from sufficient to meet the huge computing demands of the intelligent era; computing power must instead diffuse from the cloud and the end devices toward the network edge.
2. Data privacy security issues
Because the urban brain is built to serve the public and society, it will access internet data, and protecting data privacy while transmitting data securely is a key problem to be solved.
3. Insufficient network transmission capability
Because the centralized cloud computing mode must transmit the data of every access device back to the AI computing cluster, the pressure on network transmission grows with the number of access devices, leading to insufficient network bandwidth at the computing cluster.
In view of the above, those skilled in the art need to provide a method and a system for scheduling distributed heterogeneous computing power of an urban brain AI computing center to solve the above problems.
Disclosure of Invention
Technical problem to be solved
The invention aims to solve the technical problems of insufficient centralized computing power, insufficient network transmission capability, and weak data privacy protection.
(II) technical scheme
The invention provides a method for scheduling distributed heterogeneous computing power of an urban brain AI computing center, which comprises the following steps:
processing a task request of AI calculation submitted by a client, and storing Pod data to the etcd; wherein the task request conforms to Restful API specifications;
monitoring resource changes and reacting;
checking the change of the database, and creating a desired number of Pod instances;
calling a customized scheduling algorithm of a Kubernetes extended scheduler, checking the change of the database again, allocating the Pod which is not allocated to the specific node to the target node according to a set rule, and updating the record of the database;
and monitoring the change of the database and managing the subsequent Pod life cycle.
Further, the data types supported by the task request include JSON and YAML.
Further, the task request is a training task or an inference task.
Further, after the updating the records of the database, the method further comprises:
the allocation of the Pod is recorded.
Further, the monitoring the change of the database and managing the subsequent Pod life cycle specifically includes:
a Pod assigned to run on the node it is in is discovered, and when a new Pod is discovered, the new Pod is run on that node.
Further, the nodes use a blockchain P2P network to form the heterogeneous computing power into a virtual computing power network; following a decentralized design, each heterogeneous computing power node becomes an edge computing device with resource and job scheduling capabilities, these edge computing devices form a distributed heterogeneous computing network with AI computing capability, and the computation of the AI computing center is extended to the edge computing devices.
Furthermore, the IPFS (InterPlanetary File System) communication protocol is adopted among the nodes of the blockchain P2P network.
Further, after the monitoring the change of the database and managing the subsequent Pod life cycle, the method further includes:
managing network communications, service discovery, load balancing.
The second aspect of the present invention provides a scheduling system based on the above scheduling method for distributed heterogeneous computation power of an urban brain AI computation center, where the system includes:
the API Server module is used for processing a task request of AI calculation submitted by a client and storing Pod data to the etcd; wherein the task request conforms to Restful API specifications;
the Controller component module is used for monitoring resource change and making a reaction;
the Replica Set module is used for checking the change of the database and creating a desired number of Pod instances;
the Scheduler module is used for calling a customized scheduling algorithm of the Kubernetes extended Scheduler, checking the change of the database again, allocating the Pod which is not allocated to the specific node to the target node according to a set rule, and updating the record of the database;
and the Kubelet module is used for monitoring the change of the database and managing the subsequent Pod life cycle.
Further, the system further comprises:
and the kubbeproxy module is used for managing network communication, service discovery and load balancing.
(III) advantageous effects
The technical scheme of the invention has the following advantages:
the invention provides a method for scheduling distributed heterogeneous computing power of an urban brain AI computing center, which comprises the following steps: processing a task request of AI calculation submitted by a client, and storing Pod data to the etcd; wherein, in some alternative embodiments, the task request conforms to the Restful API specification; monitoring resource changes and reacting; checking the change of the database, and creating a desired number of Pod instances; calling a customized scheduling algorithm of the Kubernetes extended scheduler, checking the change of the database again, allocating the Pod which is not allocated to the specific node to the target node according to a set rule, and updating the record of the database; and monitoring the change of the database and managing the subsequent Pod life cycle. The scheduling method is mainly used for an AI (artificial intelligence) computing center, and when AI training and reasoning tasks are executed, based on a decentralized heterogeneous computational power network and a block chain technology, massive applications can call computational power resources in different places in real time according to needs, so that the global optimization of connection and computational power in the network is realized, the optimal utilization rate of the computational resources, the optimal network efficiency and the optimal user experience are realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a scheduling method for distributed heterogeneous computational power of an urban brain AI computing center according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an execution of a pod by a Kubernetes extended scheduler in the scheduling method for distributed heterogeneous computing power of an urban brain AI computing center according to the embodiment of the present invention;
fig. 3 is a block diagram of a distributed heterogeneous computation power scheduling system of an urban brain AI computation center according to an embodiment of the present invention.
In the figure:
100-API Server module; 200-Controller component module; 300-Replica Set module; 400-Scheduler module; 500-Kubelet module; 600-kube-proxy module.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to a first aspect of the embodiments of the present invention, there is provided a distributed heterogeneous computation power scheduling method for an urban brain AI computation center, as shown in fig. 1, the scheduling method includes the following steps:
s1, processing the task request of AI calculation submitted by the client, and storing the Pod data to the etcd; wherein the task request conforms to Restful API specifications;
s2, monitoring resource change and making a response;
s3, checking the change of the database, and creating a desired number of Pod instances;
s4, calling a customized scheduling algorithm of the Kubernetes extended scheduler, checking the change of the database again, allocating the Pod which is not allocated to the specific node to the target node according to a set rule, and updating the record of the database;
and S5, monitoring the change of the database and managing the subsequent Pod life cycle.
In the above embodiment, the scheduling method builds, on a new heterogeneous computing power network structure, a trusted distributed computing power network that is open, targeted, compatible, interactive, and secure. It interconnects dynamically distributed computing power resources over ubiquitous network connections, and through unified management and collaborative scheduling of multidimensional resources such as network, storage, and computing power, it lets massive numbers of applications call computing power resources in different places in real time and on demand. This achieves global optimization of connectivity and computing power in the network, with optimal utilization of computing resources, optimal network efficiency, and optimal user experience; it solves the problem of insufficient centralized computing power, relieves the problem of insufficient network transmission capability, and addresses data privacy security.
Specifically, in step S1, the client submits a task request for AI computation to the server. The request conforms to the Restful API specification; the supported data types include JSON (JavaScript Object Notation) and YAML, and the requested task may be a training task or an inference task. The request is sent to the API Server via kubectl.
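As an illustration of the JSON form such a task request might take, the sketch below builds a Pod-shaped request body. The field names follow standard Kubernetes Pod manifest conventions; the concrete values (task name, container image, GPU limit) are hypothetical, and the exact schema accepted by the patent's API Server is an assumption.

```python
import json

# Hypothetical AI-computation task request shaped like a Kubernetes Pod
# manifest. All concrete values here are illustrative assumptions.
task_request = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "pedestrian-structuring-inference",
        "labels": {"task-type": "inference"},  # or "training"
    },
    "spec": {
        "containers": [{
            "name": "ai-worker",
            "image": "registry.example.com/ai/pedestrian:1.0",  # hypothetical image
            "resources": {"limits": {"nvidia.com/gpu": "1"}},
        }],
    },
}

# The request body may be submitted as JSON or YAML; the JSON form is shown.
body = json.dumps(task_request)
```

A YAML submission would carry the same structure with a `Content-Type` indicating YAML; the API Server then persists the resulting Pod object to etcd.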
In step S4, the Scheduler calls the customized scheduling algorithm of the Kubernetes extended Scheduler and checks the database again for changes: it finds the Pods that have not yet been assigned to a specific node, assigns them to nodes that can run them according to the customized rules, updates the database records, and records the Pod assignments. At startup, the Scheduler can specify a scheduling policy file through the --policy-config-file parameter; the Predicates and Priority functions are assembled according to the specific requirements of the AI computing center by selecting different filtering functions and priority functions, controlling the weights of the priority functions, and adjusting the order of the filtering functions.
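The assembly of filtering (Predicates) and weighted priority functions described above can be sketched as follows. The node attributes, rules, and weights are illustrative assumptions, not the patent's actual policy file.

```python
# Minimal sketch of assembled Predicates/Priorities. Attributes and weights
# are illustrative assumptions.

def has_gpu(node, pod):          # predicate: drop nodes lacking enough GPUs
    return node["gpus"] >= pod["gpus_needed"]

def enough_memory(node, pod):    # predicate: drop nodes short on memory
    return node["mem_free"] >= pod["mem_needed"]

def score_free_cpu(node):        # priority: prefer idle CPU
    return node["cpu_free"]

def score_locality(node):        # priority: prefer nodes near the data source
    return 10 if node["edge_domain"] == "traffic-cam-7" else 0

PREDICATES = [has_gpu, enough_memory]                     # order is adjustable
PRIORITIES = [(score_free_cpu, 1), (score_locality, 3)]   # (function, weight)

def schedule(pod, nodes):
    """Filter with predicates, then pick the highest weighted-priority node."""
    feasible = [n for n in nodes if all(p(n, pod) for p in PREDICATES)]
    if not feasible:
        return None
    return max(feasible, key=lambda n: sum(w * f(n) for f, w in PRIORITIES))

nodes = [
    {"name": "edge-1", "gpus": 1, "mem_free": 8, "cpu_free": 2, "edge_domain": "traffic-cam-7"},
    {"name": "edge-2", "gpus": 1, "mem_free": 8, "cpu_free": 6, "edge_domain": "other"},
    {"name": "edge-3", "gpus": 0, "mem_free": 16, "cpu_free": 8, "edge_domain": "traffic-cam-7"},
]
pod = {"gpus_needed": 1, "mem_needed": 4}
best = schedule(pod, nodes)  # locality weight 3 makes edge-1 win despite less free CPU
```

Raising or lowering a weight changes which feasible node wins, which is exactly the tuning knob the policy file exposes.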
Fig. 2 shows the flow in which the Kubernetes extended scheduler executes a Pod.
in step S5, the Kubelet monitors the database changes, manages the subsequent Pod lifecycle, and discovers those pods that are assigned to run on the node where it is located. If a new Pod is found, the new Pod will be run on the node. The computing power node integrates idle computing power into a standardized computing unit VCU, forms a virtual computing power network by heterogeneous computing power through a block chain P2P network, changes heterogeneous computing power nodes (including a PC, a mobile phone, an intelligent device, a router and the like) into edge computing equipment with resource and job scheduling capabilities through a decentralized idea, forms the edge computing equipment into a distributed heterogeneous computing network with AI computing capabilities, expands the computing of an AI computing center to the edge computing equipment, and is also a powerful complement of the existing cloud computing. Through a blockchain P2P network among nodes, nodes in the same local area network communicate with each other through broadcasting, other peer points which are not in the same network penetrate through NAT to construct a decentralized heterogeneous computing network, the communication cost is reduced, a fixed public network IP is not needed, a node is uniquely marked through a node id, the node network is uniform, the local part is an autonomous network, through a balance strategy, each point cannot receive the connection of a large number of nodes, and the transverse expansion capability is strong. All schedulers in the network are addressed in a Distributed Hash Table (DHT) mode, each domain scheduler is responsible for a certain number of computing nodes and can achieve resource requests across the schedulers, and all scheduling nodes are monitored by a plurality of standby scheduling nodes (local consensus). 
The IPFS (InterPlanetary File System) communication protocol is adopted among the nodes of the blockchain P2P network, which can save 60% of bandwidth and reduce the pressure of data transmission between networks.
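A toy model of why content addressing, the core idea behind IPFS, cuts transfer volume: blocks are named by the hash of their content, so a block a peer already holds is never sent again. This is not the IPFS protocol itself; the block contents and counts are illustrative.

```python
import hashlib

def cid(block: bytes) -> str:
    """Content identifier: hash of the block's bytes."""
    return hashlib.sha256(block).hexdigest()

class Peer:
    def __init__(self):
        self.store = {}           # cid -> block

    def want(self, block_cid, block, counter):
        """Fetch a block only if it is not already stored locally."""
        if block_cid not in self.store:
            counter[0] += len(block)   # bytes actually transferred
            self.store[block_cid] = block

blocks = [b"frame-0001", b"frame-0002", b"frame-0001"]  # duplicate frame
peer = Peer()
transferred = [0]
for b in blocks:
    peer.want(cid(b), b, transferred)
# 30 raw bytes requested, but only 20 transferred: the duplicate is skipped.
```

The more duplication there is across the network (repeated video frames, shared model weights), the larger the bandwidth saving from addressing data by content rather than by location.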
Meanwhile, blockchain encryption technology protects data privacy, and the blockchain value network lets resource sharers earn incentives within the mutually trusted network.
In some alternative embodiments, the data types supported by the task request include JSON and YAML.
In some alternative embodiments, the task request is a training task or an inference task.
In some optional embodiments, after updating the record of the database, the method further includes:
the allocation of the Pod is recorded.
In some optional embodiments, the change of the database is monitored, and the subsequent Pod life cycle is managed, specifically:
a Pod assigned to run on the node it is in is discovered and when a new Pod is discovered, the new Pod is run on that node.
In some optional embodiments, the nodes use the blockchain P2P network to form the heterogeneous computing power into a virtual computing power network, turn the heterogeneous computing power nodes into edge computing devices with resource and job scheduling capability following a decentralized design, form the edge computing devices into a distributed heterogeneous computing network with AI computing capability, and extend the computation of the AI computing center to the edge computing devices.
In some alternative embodiments, the IPFS (InterPlanetary File System) communication protocol is used between nodes of the blockchain P2P network.
In some optional embodiments, after monitoring the change of the database and managing the subsequent Pod life cycle, the method further includes:
managing network communications, service discovery, load balancing.
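The load-balancing duty above can be sketched as round-robin selection over a service's Pod endpoints, in the spirit of kube-proxy. The service name and endpoint addresses are hypothetical.

```python
import itertools

# Toy round-robin load balancer: a service name maps to several Pod
# endpoints, and successive requests cycle through them.
endpoints = {"ai-inference": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]}
_cyclers = {svc: itertools.cycle(eps) for svc, eps in endpoints.items()}

def route(service):
    """Pick the next endpoint for a service, round-robin."""
    return next(_cyclers[service])

picked = [route("ai-inference") for _ in range(4)]  # wraps back to the first
```

Service discovery in this sketch is just the `endpoints` lookup; in a real cluster that table is kept current as Pods come and go.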
The key points of the technology of the invention are as follows:
based on the decentralized idea, the block chain P2P network supports heterogeneous computational power of a PC, a mobile phone, intelligent equipment, a router and the like;
encrypting and protecting data exchange by using an encryption technology and an intelligent contract of a block chain technology, and rewarding resource sharers by using a value network of the block chain;
and AI training and reasoning tasks are executed on the edge computing equipment nodes, so that the transmission quantity of data is reduced.
The following will be described in detail by taking a specific application scenario as an example:
example 1
Take the intelligent traffic scene "pedestrian structurization" as an example:
step 1, collecting pedestrian videos by a high-definition camera at a traffic intersection;
step 2, transmitting the video stream to nearby matched edge computing equipment in real time;
step 3, filtering AI computational power nodes of the edge domain according to a self-defined rule by a Kubernets scheduler deployed by the edge domain;
step 4, executing the Pod by the kubel of the selected node according to the scheduling result;
step 5, storing the structured result in the edge computing device, pushing the result to an AI computing center, and supporting subsequent intelligent analysis and decision making;
and 6, synchronously updating the Pod state information.
Example 2
Take enforcement on the traffic police scene as an example:
step 1, collecting evidence images by a traffic police at a traffic accident scene by using a mobile phone camera;
step 2, storing the acquired image in a mobile terminal (a mobile phone or a vehicle-mounted intelligent device);
step 3, filtering AI computational force nodes of the mobile terminal (the mobile phone or the vehicle-mounted intelligent equipment) according to a user-defined rule by a Kubernets scheduler deployed by the mobile terminal (the mobile phone or the vehicle-mounted intelligent equipment);
step 4, executing the Pod by the kubel of the selected node according to the scheduling result;
step 5, storing the primary recognition result of the acquired image in a mobile terminal (a mobile phone or a vehicle-mounted intelligent device) and pushing the primary recognition result to an AI (Artificial intelligence) calculation center so as to support subsequent intelligent analysis and study and judgment;
and 6, synchronously updating the Pod state information.
The scheduling method provided by the embodiments of the invention is intended mainly for an AI computing center. When AI training and inference tasks are executed, the decentralized heterogeneous computing power network and blockchain technology allow massive numbers of applications to call computing power resources in different places in real time and on demand, achieving global optimization of connectivity and computing power in the network, optimal utilization of computing resources, optimal network efficiency, and optimal user experience. The application effects of the invention mainly comprise the following aspects:
1. scheduling the computing power of heterogeneous nodes (including mobile phones, intelligent devices, PCs, routers, and the like) meets the urban brain's demand for large computing power and reduces construction costs;
2. the safe, trusted distributed computing power network protects data privacy;
3. the blockchain incentive mechanism encourages owners of scattered idle computing resources to actively contribute computing, storage, and bandwidth resources in exchange for rewards, so that idle resources are used effectively and decentralized shared computing is realized;
4. AI training and inference tasks are executed on the edge side, realizing cloud-edge-device cooperation; this breaks the traditional centralized cloud computing layout, improves the internet data transmission mode, and relieves network transmission pressure.
According to a second aspect of the embodiments of the present invention, there is provided a scheduling system based on the above scheduling method for distributed heterogeneous computing power of urban brain AI computing centers, as shown in fig. 3, the system includes:
the API Server module 100 is used for processing a task request of AI calculation submitted by a client and storing Pod data to the etcd; wherein the task request conforms to Restful API specifications;
a Controller component module 200 for monitoring resource changes and reacting;
a Replica Set module 300, configured to check a change of the database, and create a desired number of Pod instances;
the Scheduler module 400 is configured to invoke a customized scheduling algorithm of the Kubernetes extended Scheduler, check a change of the database again, allocate Pod that is not allocated to a specific node to a target node according to a set rule, and update a record of the database;
and the Kubelet module 500 is used for monitoring the change of the database and managing the subsequent Pod life cycle.
In some optional embodiments, the system further comprises:
and a kubbeproxy module 600 for managing network communication, service discovery, and load balancing.
In the above embodiment, the Controller component includes the Scheduler, the Replication Controller, and the Endpoints Controller.
It should be clear that the embodiments in this specification are described in a progressive manner, and the same or similar parts in the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The present invention is not limited to the specific steps and structures described above and shown in the drawings. Also, a detailed description of known process techniques is omitted herein for the sake of brevity.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and alterations to this application will become apparent to those skilled in the art without departing from the scope of this invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A scheduling method of distributed heterogeneous computing power of an urban brain AI computing center is characterized by comprising the following steps:
processing a task request of AI calculation submitted by a client, and storing Pod data to the etcd; wherein the task request conforms to Restful API specifications;
monitoring resource changes and reacting;
checking the change of the database, and creating a desired number of Pod instances;
calling a customized scheduling algorithm of a Kubernetes extended scheduler, checking the change of the database again, allocating the Pod which is not allocated to the specific node to the target node according to a set rule, and updating the record of the database;
and monitoring the change of the database and managing the subsequent Pod life cycle.
2. The method for scheduling distributed heterogeneous computing power of an urban brain AI computing center according to claim 1, wherein the data types supported by the task request include JSON and YAML.
3. The method for scheduling distributed heterogeneous computation power of an urban brain AI computation center according to claim 1, wherein said task request is a training task or an inference task.
4. The method for scheduling distributed heterogeneous computation power of an AI computation center in a city according to claim 1, further comprising, after the updating of the records in the database:
the allocation of the Pod is recorded.
5. The method for scheduling distributed heterogeneous computing power of an urban brain AI computing center according to claim 1, wherein the monitoring of the database for changes and the management of subsequent Pod life cycles comprises:
a Pod assigned to run on the node it is in is discovered, and when a new Pod is discovered, the new Pod is run on that node.
6. The method for scheduling distributed heterogeneous computing power of an urban brain AI computing center according to claim 5, wherein the nodes form the heterogeneous computing power into a virtual computing power network through a blockchain P2P network, the heterogeneous computing power nodes are changed into edge computing devices with resource and job scheduling capabilities through a decentralized concept, the edge computing devices are formed into a distributed heterogeneous computing network with AI computing power, and computing of the AI computing center is expanded to the edge computing devices.
7. The method for scheduling distributed heterogeneous computing power of an urban brain AI computing center according to claim 6, wherein the IPFS (InterPlanetary File System) communication protocol is adopted among nodes of said blockchain P2P network.
8. The method of claim 1, wherein after monitoring the database for changes and managing the subsequent Pod life cycle, the method further comprises:
managing network communication, service discovery, and load balancing.
9. A scheduling system based on the method for scheduling distributed heterogeneous computing power of an urban brain AI computing center according to any one of claims 1 to 8, characterized in that the system comprises:
the API Server module, which is used for processing a task request for AI computation submitted by a client and storing Pod data to etcd; wherein the task request conforms to the RESTful API specification;
the Controller component module, which is used for monitoring resource changes and reacting to them;
the ReplicaSet module, which is used for monitoring changes of the database and creating the desired number of Pod instances;
the Scheduler module, which is used for invoking a customized scheduling algorithm of the extended Kubernetes Scheduler, monitoring changes of the database, allocating Pods not yet assigned to a specific node to target nodes according to a set rule, and updating the records in the database;
and the Kubelet module, which is used for monitoring changes of the database and managing the subsequent Pod life cycle.
10. The scheduling system based on the method for scheduling distributed heterogeneous computing power of an urban brain AI computing center according to claim 9, further comprising:
the kube-proxy module, which is used for managing network communication, service discovery, and load balancing.
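The "set rule" in the Scheduler module of claim 9 can be illustrated with a minimal sketch. This is an illustrative assumption, not the patented algorithm: it filters candidate nodes that can satisfy a Pod's resource request (including the accelerator type of a heterogeneous node), then prefers the node with the most free capacity remaining after placement. All names (`assign_pod`, the node and Pod fields) are hypothetical.

```python
# Sketch of a custom placement rule for an extended Kubernetes scheduler:
# filter nodes that can satisfy the Pod's request, then score feasible
# nodes by the free capacity left after placement (a spread strategy).
# Field names and the scoring rule are illustrative assumptions.

def assign_pod(pod, nodes):
    """Return the name of the best node for `pod`, or None if none fits."""
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"]
        and n["free_gpu"] >= pod["gpu"]
        # A Pod may pin a heterogeneous accelerator type (e.g. "gpu", "npu").
        and pod.get("accelerator") in (None, n.get("accelerator"))
    ]
    if not feasible:
        return None

    def score(n):
        # Free resources remaining on the node after the Pod is placed.
        return (n["free_cpu"] - pod["cpu"]) + (n["free_gpu"] - pod["gpu"])

    return max(feasible, key=score)["name"]

nodes = [
    {"name": "edge-1", "free_cpu": 4,  "free_gpu": 0, "accelerator": None},
    {"name": "edge-2", "free_cpu": 8,  "free_gpu": 2, "accelerator": "npu"},
    {"name": "dc-1",   "free_cpu": 32, "free_gpu": 8, "accelerator": "gpu"},
]
pod = {"cpu": 2, "gpu": 1, "accelerator": "gpu"}
print(assign_pod(pod, nodes))  # prints "dc-1" (the only GPU-equipped node)
```

In a real deployment this rule would run inside a scheduler extender or a custom scheduler watching the database (etcd) for unassigned Pods, and the chosen node name would be written back as the Pod binding, after which the Kubelet on that node observes the change and starts the Pod.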
CN202110173976.9A 2021-02-09 2021-02-09 Method and system for scheduling distributed heterogeneous computing power of urban brain AI computing center Active CN112532751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110173976.9A CN112532751B (en) 2021-02-09 2021-02-09 Method and system for scheduling distributed heterogeneous computing power of urban brain AI computing center


Publications (2)

Publication Number Publication Date
CN112532751A true CN112532751A (en) 2021-03-19
CN112532751B CN112532751B (en) 2021-05-07

Family

ID=74975562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110173976.9A Active CN112532751B (en) 2021-02-09 2021-02-09 Method and system for scheduling distributed heterogeneous computing power of urban brain AI computing center

Country Status (1)

Country Link
CN (1) CN112532751B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113094246A (en) * 2021-03-30 2021-07-09 之江实验室 Edge heterogeneous computing environment simulation system
CN113094246B (en) * 2021-03-30 2022-03-25 之江实验室 Edge heterogeneous computing environment simulation system
CN113259359A (en) * 2021-05-21 2021-08-13 重庆紫光华山智安科技有限公司 Edge node capability supplementing method, system, medium and electronic terminal
CN113852794A (en) * 2021-09-27 2021-12-28 中关村科学城城市大脑股份有限公司 Edge calculation sharing monitoring rod system based on urban brain
CN116599966A (en) * 2023-05-09 2023-08-15 天津大学 Edge cloud service parallel resource allocation method based on block chain sharing
CN116599966B (en) * 2023-05-09 2024-05-24 天津大学 Edge cloud service parallel resource allocation method based on block chain sharing
Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461740A (en) * 2014-12-12 2015-03-25 国家电网公司 Cross-domain colony computing resource gathering and distributing method
CN109508238A (en) * 2019-01-05 2019-03-22 咪付(广西)网络技术有限公司 A kind of resource management system and method for deep learning
CN110780998A (en) * 2019-09-29 2020-02-11 武汉大学 Kubernetes-based dynamic load balancing resource scheduling method
US20200073655A1 (en) * 2018-09-05 2020-03-05 Nanum Technologies Co., Ltd. Non-disruptive software update system based on container cluster
CN111327681A (en) * 2020-01-21 2020-06-23 北京工业大学 Cloud computing data platform construction method based on Kubernetes
CN111522639A (en) * 2020-04-16 2020-08-11 南京邮电大学 Multidimensional resource scheduling method under Kubernetes cluster architecture system
CN111949395A (en) * 2020-07-16 2020-11-17 广州玖的数码科技有限公司 Block chain-based shared computing power data processing method, system and storage medium



Also Published As

Publication number Publication date
CN112532751B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112532751B (en) Method and system for scheduling distributed heterogeneous computing power of urban brain AI computing center
Bittencourt et al. The internet of things, fog and cloud continuum: Integration and challenges
Hu et al. Survey on fog computing: architecture, key technologies, applications and open issues
Jararweh et al. SDIoT: a software defined based internet of things framework
CN112583861B (en) Service deployment method, resource allocation method, system, device and server
CN105426245A (en) Dynamically composed compute nodes comprising disaggregated components
Guerrero-Contreras et al. A context-aware architecture supporting service availability in mobile cloud computing
Dautov et al. Stream processing on clustered edge devices
Meneguette et al. Vehicular clouds leveraging mobile urban computing through resource discovery
WO2022001941A1 (en) Network element management method, network management system, independent computing node, computer device, and storage medium
WO2016095524A1 (en) Resource allocation method and apparatus
Suri et al. Enforcement of communications policies in software agent systems through mobile code
Wang et al. Container orchestration in edge and fog computing environments for real-time iot applications
Dustdar et al. Towards distributed edge-based systems
Kyryk et al. Load balancing method in edge computing
CN109413117B (en) Distributed data calculation method, device, server and computer storage medium
CN115134421B (en) Multi-source heterogeneous data cross-system collaborative management system and method
Verma et al. Hbi-lb: A dependable fault-tolerant load balancing approach for fog based internet-of-things environment
Buzachis et al. An innovative MapReduce-based approach of Dijkstra’s algorithm for SDN routing in hybrid cloud, edge and IoT scenarios
CN116260824A (en) Service data transmission method, system, storage medium and related equipment
KR20140097717A (en) Resource Dependency Service Method for M2M Resource Management
Fadlallah et al. Layered architectural model for collaborative computing in peripheral autonomous networks of mobile devices
CN100438472C (en) Photon grid middleware and its control based on optical network resource allocation on demand
Ye et al. Virtual infrastructure mapping in software-defined elastic optical networks
Alahmadi et al. Energy efficient processing allocation in opportunistic cloud-fog-vehicular edge cloud architectures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant