CN110753107B - Resource scheduling system, method and storage medium under space-based cloud computing architecture - Google Patents

Resource scheduling system, method and storage medium under space-based cloud computing architecture

Info

Publication number
CN110753107B
CN110753107B (application CN201911000888.8A)
Authority
CN
China
Prior art keywords
scheduling
space
cloud
resource
scheduling system
Prior art date
Legal status
Active
Application number
CN201911000888.8A
Other languages
Chinese (zh)
Other versions
CN110753107A (en)
Inventor
赵诣
曹素芝
闫蕾
Current Assignee
Technology and Engineering Center for Space Utilization of CAS
Original Assignee
Technology and Engineering Center for Space Utilization of CAS
Priority date
Filing date
Publication date
Application filed by Technology and Engineering Center for Space Utilization of CAS
Priority to CN201911000888.8A
Publication of CN110753107A
Application granted
Publication of CN110753107B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

The invention provides a resource scheduling system, a resource scheduling method and a readable storage medium under a space-based cloud computing architecture. The scheduling system comprises: a scheduler, a network topology database, a hardware information database and an affinity rule database. The scheduling objects of the scheduling system are space-based services deployed in the form of containers. The action nodes of the scheduling system comprise heterogeneous resource nodes in the space-based edge cloud and the fog satellite cluster. For the physical location of the scheduling system, the invention designs three deployment schemes and selects one of them. The invention can realize unified scheduling of space-based cloud and fog heterogeneous resources; the scheduling can utilize historical scheduling data, adapts to a dynamic network, and can meet the deployment requirements of various space-based applications.

Description

Resource scheduling system, method and storage medium under space-based cloud computing architecture
Technical Field
The invention relates to the technical field of computers, in particular to a resource scheduling system, a resource scheduling method and a storage medium under a space-based cloud computing architecture.
Background
Cloud computing is a computing and storage resource sharing model based on virtualization technology, and is the mainstream computing architecture in current networks. Its advantage is that complicated computation on the user side is offloaded to the cloud (that is, the data center) for processing and the result is sent back to the user from the cloud, which alleviates the limited storage and computing resources of user terminal devices. However, the rapid development of the Internet of Things and mobile applications in recent years has produced massive amounts of data; the resulting communication between users and the cloud places a heavy burden on network bandwidth on the one hand, and imposes intolerable transmission delays and degraded quality of service on users on the other. In addition, cloud computing lacks support for mobility and geographic distribution.
To address the above challenges, Cisco put forward the concept of fog computing in 2014. Unlike cloud computing, which is supported by a powerful, resource-centralized data center, fog computing consists of various computing resources that are weak in performance, distributed and heterogeneous. It is located closer to the edge of the network, between cloud computing and personal computing. Fog computing is a new generation of distributed computing that conforms to the "decentralized" character of the Internet, and it can support and facilitate applications that are unsuitable for the cloud: (1) delay-sensitive applications (such as online gaming and video conferencing); (2) applications closely tied to geographical distribution (such as sensor networks); (3) fast-moving applications (such as intelligent vehicle networking); (4) large-scale distributed control applications (such as intelligent traffic control). Since Cisco proposed the concept of fog computing, technology companies such as ARM, Dell, Intel and Microsoft, together with Princeton University, have joined in shaping the concept and established the non-profit OpenFog Consortium, which aims to popularize and accelerate fog computing and promote the development of the Internet of Things.
It is worth noting that fog computing is not intended to replace cloud computing, but to supplement it, reducing the bandwidth burden and the transmission delay. On one side, a fog node can connect to user terminals and provide computing services for them; on the other side, it can connect to the cloud so as to take advantage of the cloud's rich functionality and application tools. This two-tier computing architecture in which cloud and fog collaborate is called cloud-fog computing (see fig. 1), and it combines the advantages of both. At present, practical application deployment based on the cloud-fog computing architecture has become an important subject of academic research.
In research on space-based computing architectures, in order to resolve the contradiction between delay-sensitive, data-intensive space applications and the limitation of satellite bandwidth, a space-based cloud computing system has been proposed, supported by technologies such as software-defined satellites, virtualization and space networking. The system mainly comprises: the user terminal, which issues a service request to the space-based edge cloud and transmits the data information that needs to be computed to the space-based edge cloud and/or the fog satellite cluster; the space-based edge cloud, which acquires resource conditions through a scheduling module and executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and which computes and processes the data information when the service nodes are deployed on the space-based edge cloud; and the fog satellite cluster, which acquires the data information and performs the computation on it when the service nodes are deployed in the fog satellite cluster. Such a computing system is capable of performing edge computation on the data information.
The particularity of space-based computing resources is reflected in: (1) heterogeneity: the computing resources on a satellite include CPUs, FPGAs, GPUs, memory and the like; (2) dispersion: satellite computing resources are scattered at various locations in space; (3) dynamism: satellites are in motion, so the topology of the space information network varies over time.
Designing a scheduling algorithm suited to the space-based cloud computing application background while guaranteeing efficient and reliable scheduling is a key and difficult problem worth studying. The main goals of a centralized system for unified allocation and scheduling of cloud and fog resources include: efficiently utilizing heterogeneous resources, meeting the requirements of delay-sensitive and data-intensive space-based applications, adapting to dynamic network connections, ensuring the reliability of service flows, and achieving load balancing of the system.
Disclosure of Invention
In order to solve at least one of the above technical problems, the invention provides a resource scheduling system, a resource scheduling method and a storage medium under a space-based cloud computing architecture.
In order to achieve the above object, a first aspect of the present invention provides a resource scheduling system under a space-based cloud computing architecture, where the scheduling system includes: a scheduler, a network topology database, a hardware information database and an affinity rule database;
the scheduler is used for scheduling the computing resources corresponding to each application;
the network topology database is electrically connected with the scheduler and used for recording the connection state and the connection duration information among all the nodes;
the hardware information database is electrically connected with the scheduler and used for recording the hardware information of each node;
and the affinity rule database is electrically connected with the scheduler and used for recording the affinity rules of the service and the nodes.
In this scheme, the scheduling system operates in an architecture consisting of a user terminal, a space-based edge cloud, a ground remote cloud and a fog satellite cluster;
the user terminal is used for making a service request to the space-based edge cloud and/or fog satellite cluster and transmitting data information needing to be calculated and processed to the space-based edge cloud and/or fog satellite cluster;
the space-based edge cloud is provided with a scheduling system, acquires resource conditions through the scheduling system, executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and performs the computation on the data information when the service nodes are deployed on the space-based edge cloud;
the ground remote cloud is provided with an affinity rule learning system, which takes historical scheduling data as input and new affinity rules as output; the ground remote cloud also provides other computing and service functions;
the fog satellite cluster is provided with a scheduling system, acquires resource conditions through the scheduling system, executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and performs the computation on the data information when the service nodes are deployed on the fog satellite cluster.
In this scheme, the resource conditions are one or more of task information, fog node information, cloud node information, network connection information and affinity rule information.
In this scheme, the action nodes of the scheduling system comprise the heterogeneous resource nodes in the space-based edge cloud and the fog satellite cluster, and the heterogeneous resources are one or more of CPUs, GPUs and FPGAs.
In this scheme, the scheduling objects of the scheduling system are various space-based services deployed in the form of containers.
The second aspect of the present invention further provides a resource scheduling method under the space-based cloud computing architecture, which is applied to the resource scheduling system under the space-based cloud computing architecture, and the method includes:
a scheduling system receives a service scheduling request of a user;
the scheduling system acquires real-time cloud and fog resource conditions and database contents;
the scheduling system executes a scheduling algorithm according to the real-time cloud and fog resource conditions and the database contents so as to select the optimal deployment node;
deploying an application on the node.
In this scheme, the scheduling system is located in the space-based edge cloud and/or the fog satellite cluster, and it forms part of the overall space-based container cluster management system.
In this scheme, the resource conditions are one or more of task information, fog node information, cloud node information, network connection information and affinity rule information.
In this scheme, the action nodes of the scheduling system are heterogeneous resource nodes, and the heterogeneous resources are one or more of CPUs, GPUs and FPGAs.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a program of a resource scheduling method under the space-based cloud computing architecture, and when the program is executed by a processor, the steps of the resource scheduling method under the space-based cloud computing architecture according to any one of claims 6 to 9 are implemented.
The scheduling system of the present invention comprises: a scheduler, a network topology database, a hardware information database and an affinity rule database. The scheduling objects of the scheduling system are space-based services deployed in the form of containers. The action nodes of the scheduling system comprise heterogeneous resource nodes in the space-based edge cloud and the fog satellite cluster. For the physical location of the scheduling system, the invention designs three deployment schemes and selects one of them. The invention can realize unified scheduling of space-based cloud and fog heterogeneous resources; the scheduling can utilize historical scheduling data, adapts to a dynamic network, and can meet the deployment requirements of various space-based applications.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 illustrates a schematic diagram of a cloud-fog computing architecture in the prior art;
FIG. 2 is a block diagram of a resource scheduling system under a space-based cloud computing architecture according to the present invention;
FIG. 3 is a flow chart of a resource scheduling method under the space-based cloud computing architecture according to the present invention;
FIG. 4 is a schematic diagram of a resource scheduling system in a first location according to the space-based cloud computing architecture of the present invention;
FIG. 5 is a schematic diagram of a resource scheduling system in a second location according to the space-based cloud computing architecture of the present invention;
FIG. 6 is a schematic diagram of a resource scheduling system in a third location according to the space-based cloud computing architecture of the present invention;
fig. 7 shows a flowchart of a space-based cloud computing method according to an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention, taken in conjunction with the accompanying drawings and detailed description, is set forth below. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
The invention designs a resource scheduling system, a resource scheduling method and a computer-readable storage medium under a space-based cloud computing architecture. The scheduling system comprises: a scheduler, a network topology database, a hardware information database and an affinity rule database. The scheduling objects of the scheduling system are space-based services deployed in the form of containers. The action nodes of the scheduling system comprise heterogeneous resource nodes in the space-based edge cloud and the fog satellite cluster. There are three deployment schemes for the physical location of the scheduling system.
Fig. 2 shows a block diagram of a resource scheduling system under a space-based cloud computing architecture according to the present invention.
As shown in fig. 2, a first aspect of the present invention provides a resource scheduling system under a space-based cloud computing architecture, where the scheduling system includes: a scheduler, a network topology database, a hardware information database and an affinity rule database;
the scheduler is used for scheduling the computing resources corresponding to each application;
the network topology database is electrically connected with the scheduler and used for recording the connection state and the connection duration information among all the nodes;
the hardware information database is electrically connected with the scheduler and used for recording the hardware information of each node;
and the affinity rule database is electrically connected with the scheduler and used for recording the affinity rules of the service and the nodes.
According to the embodiment of the invention, the scheduling system operates in an architecture consisting of a user terminal, a space-based edge cloud, a ground remote cloud and a fog satellite cluster;
the user terminal is used for making a service request to the space-based edge cloud and/or fog satellite cluster and transmitting data information needing to be calculated and processed to the space-based edge cloud and/or fog satellite cluster;
the space-based edge cloud is provided with a scheduling system, acquires resource conditions through the scheduling system, executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and performs the computation on the data information when the service nodes are deployed on the space-based edge cloud;
the ground remote cloud is provided with an affinity rule learning system, which takes historical scheduling data as input and new affinity rules as output; the ground remote cloud also provides other computing and service functions;
the fog satellite cluster is provided with a scheduling system, acquires resource conditions through the scheduling system, executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and performs the computation on the data information when the service nodes are deployed on the fog satellite cluster.
According to the embodiment of the invention, the resource conditions are one or more of task information, fog node information, cloud node information, network connection information and affinity rule information.
Furthermore, a scheduler in the control node of the space-based edge cloud obtains resource conditions such as task information, fog node information, cloud node information, network connection information and the like through a communication interface or a database.
According to the embodiment of the invention, the action nodes of the scheduling system comprise the heterogeneous resource nodes (cloud computing nodes and fog computing nodes) in the space-based edge cloud and the fog satellite cluster, and the heterogeneous resources are one or more of CPUs, GPUs and FPGAs.
As a general-purpose processor, the CPU balances computation and control: about 70% of its transistors are used to build caches, and a further portion forms control units that handle complex logic and improve instruction execution efficiency. The CPU therefore offers high generality and can handle highly complex processing, but its raw computing performance is only moderate.
GPUs are mainly used for parallel computation such as image processing. Graphics processing is characterized by high-density computation with little correlation between the data items involved; a GPU provides a large number of compute units (up to thousands) and a large amount of high-speed memory, and can process many pixels in parallel at the same time.
GPU design is based on the premise that the GPU is better suited to computation that is highly intensive and highly parallel. Compared with a CPU, a GPU therefore devotes more of its transistors to compute units and fewer to data caches and flow control. The rationale is that, in parallel computing, every data element executes the same program, so complicated flow control is unnecessary; what is needed is high arithmetic capability rather than a large cache.
The FPGA, as a high-performance, low-power programmable chip, can be designed around a customized algorithm. Therefore, when processing massive data, the FPGA has the following advantages over the CPU and the GPU: higher computational efficiency and closer proximity to the I/O.
According to the embodiment of the invention, the scheduling objects of the scheduling system are various space-based services deployed in the form of a Docker container.
It should be noted that Docker is an open-source container engine based on Linux container (LXC) technology, which creates lightweight, portable, self-sufficient containers for any application. The basis of the Docker engine is that LXC containers effectively partition the resources managed by a single operating system into isolated groups, so as to better balance conflicting resource usage demands between those groups. Compared with virtualization, a container can run instructions directly on the CPU without any special interpretation mechanism, thus avoiding instruction-level emulation and just-in-time compilation, and requiring neither para-virtualization nor complex system call replacement. The appearance of the Docker container has, to a certain extent, solved the problems of difficult application deployment and poor portability. The Docker source code is written in the Go language and was published by its developer dotCloud on the GitHub platform.
It should be noted that, because the design of the scheduling algorithm involves the connection status of the space-based network and the hardware information of the nodes, three modules, namely, a network topology database, a hardware information database and an affinity rule database, need to be added in addition to the scheduler modification.
The network topology database records the connection status and connection duration information between each node, and can be represented by an adjacency list or a linked list.
The hardware information database records the hardware information of each node, such as the frequency of a CPU (central processing unit) of the node, the video memory and bit width of a GPU (graphics processing unit), the speed grade of an FPGA (field programmable gate array), and the like.
The affinity rule database records affinity rules of some services and nodes, and records the contents of the nodes, the services, the affinity rules, the affinity weights and the like by adopting a key-value mode. These affinity rules include not only history information but also new rules obtained by machine learning using the history information as input. In this embodiment, the rule-based machine learning module is located in a ground remote cloud.
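For illustration only (this is not part of the claimed design), the record layouts of the three databases might be sketched in Python as follows; all node identifiers, field names and values are assumptions.

```python
# Illustrative sketch only: possible in-memory record layouts for the three
# databases described above. All node identifiers, field names and values
# are assumptions, not part of the patent.
from dataclasses import dataclass

# Network topology database: an adjacency structure keyed by node id; each
# entry records the connection state and the remaining connection duration.
topology_db = {
    "fog-sat-1": {"edge-cloud": {"connected": True, "duration_s": 540},
                  "fog-sat-2": {"connected": True, "duration_s": 120}},
    "fog-sat-2": {"edge-cloud": {"connected": False, "duration_s": 0}},
}

# Hardware information database: static per-node hardware description,
# e.g. CPU frequency, GPU video memory and bit width, FPGA speed grade.
@dataclass
class NodeHardware:
    cpu_freq_ghz: float
    gpu_mem_gb: float
    gpu_bus_width_bits: int
    fpga_speed_grade: int   # 0 means no FPGA on the node

hardware_db = {
    "fog-sat-1": NodeHardware(1.2, 4.0, 256, 2),
    "edge-cloud": NodeHardware(2.0, 16.0, 384, 0),
}

# Affinity rule database: key-value records of node, service, affinity rule
# and affinity weight, including rules learned from historical scheduling data.
affinity_db = {
    ("image-compression", "fog-sat-1"): {"rule": "prefer-fpga", "weight": 0.8},
    ("video-inference", "edge-cloud"): {"rule": "prefer-gpu", "weight": 0.9},
}
```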
Machine learning is an interdisciplinary field involving probability theory, statistics, algorithmic complexity and other disciplines. It studies how computers can simulate or implement human learning behavior, and it can discover and mine the latent value contained in data. As a branch of artificial intelligence, machine learning discovers and mines the latent patterns in data through self-learning algorithms, so as to make predictions about unknown data. Machine learning has been widely used in computer science research, natural language processing, machine vision, speech, games and other fields.
Machine learning algorithms come in many kinds and fall mainly into three categories: supervised learning, unsupervised learning and reinforcement learning. The algorithm adopted in this embodiment is logistic regression, a supervised learning method that predicts the probability of a future outcome from the patterns in historical data. The formula used by the algorithm is as follows:
$$P(y=1 \mid x) = \frac{1}{1 + e^{-\theta^{T} x}}$$
it is understood that other machine learning algorithms, such as naive bayes algorithm, K nearest neighbor algorithm, association rule algorithm, K-means algorithm, etc., can be used in other embodiments of the present invention.
The three databases are not directly connected to the communication interface module, but are connected to the scheduler.
The specific reasons are as follows:
1. The database contents are relatively independent of the containers and the container cluster management system: Docker and the container cluster management system can operate without being affected by changes to this information, and the network topology database and the hardware information database can even serve other applications in the cloud.
2. The database contents are relatively stable: apart from fault conditions, the contents of the network topology database can be obtained through mathematical modeling and calculation; the contents of the hardware information database are determined once the satellite enters orbit; only the affinity rule database needs to be updated regularly, and the update frequency is low.
In summary, connecting the three databases directly to the scheduler is the preferable solution, and it also reduces the communication delay inside the module.
In a space-based information network with a cloud and fog computing architecture, the fog satellite nodes and the space-based edge cloud adopt the Docker lightweight virtualization technology, and a container cluster management system is used to manage the Docker container cluster.
The scheduler is the module of the container cluster management system that performs the scheduling function and is responsible for scheduling the computing resources corresponding to an application. Its input is the information of the computing resources corresponding to the application to be scheduled together with the information of all computing nodes; after processing by the internal scheduling algorithm and policies, it outputs the optimal node, and the computing resources corresponding to the application are then scheduled onto that node.
The particularity of space-based computing resources is reflected in: (1) heterogeneity: the computing resources on a satellite include CPUs, FPGAs, GPUs, memory and the like; (2) dispersion: satellite computing resources are scattered at various locations in space; (3) dynamism: satellites are in motion, so the topology of the space information network varies over time. A resource interface and a scheduling algorithm are designed for the heterogeneity requirement to complete the scheduling of GPU and FPGA resources, and a scheduling algorithm oriented to network topology and connection is designed for the dispersion and dynamism requirements.
The particularity of space-based applications is reflected in the extremely stringent latency requirements of certain applications. When a task with extremely high priority appears, a load-balancing scheduling algorithm is no longer suitable; instead, a scheduling algorithm that takes computing performance as its optimization target needs to be designed.
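Purely as an illustration of the design considerations above, the following sketch shows how a filter-and-score scheduler could combine connection stability, heterogeneous hardware matching and affinity weights, and switch to a performance-first objective for high-priority tasks; it is not the patented algorithm, and all field names and data are assumptions.

```python
# Illustrative sketch only, not the patented algorithm: a filter-and-score
# scheduler over heterogeneous nodes that switches to a performance-first
# objective for high-priority tasks. All field names and data are assumptions.

def schedule(task, nodes, topology, affinity):
    feasible = []
    for node in nodes:
        link = topology.get((task["user_node"], node["id"]))
        # Filter step: the node must be reachable and offer the requested resource type.
        if link is None or not link["connected"]:
            continue
        if task.get("needs_gpu") and node["gpu_mem_gb"] == 0:
            continue
        if task.get("needs_fpga") and not node["has_fpga"]:
            continue
        feasible.append((node, link))

    def score(entry):
        node, link = entry
        aff = affinity.get((task["service"], node["id"]), 0.0)
        if task.get("high_priority"):
            # Performance-first objective for latency-critical tasks.
            return node["cpu_freq_ghz"] + node["gpu_mem_gb"] + aff
        # Default objective: favour stable links and affinity, penalize load (balance).
        return link["duration_s"] / 1000.0 + aff - node["load"]

    best_node, _ = max(feasible, key=score)
    return best_node["id"]

# Minimal usage example with made-up data.
nodes = [
    {"id": "fog-sat-1", "cpu_freq_ghz": 1.2, "gpu_mem_gb": 0, "has_fpga": True, "load": 0.4},
    {"id": "edge-cloud", "cpu_freq_ghz": 2.0, "gpu_mem_gb": 16, "has_fpga": False, "load": 0.7},
]
topology = {("user-1", "fog-sat-1"): {"connected": True, "duration_s": 540},
            ("user-1", "edge-cloud"): {"connected": True, "duration_s": 900}}
affinity = {("image-compression", "fog-sat-1"): 0.8}
task = {"service": "image-compression", "user_node": "user-1", "needs_fpga": True}
print(schedule(task, nodes, topology, affinity))  # prints "fog-sat-1"
```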
The resource scheduling system under the space-based cloud computing architecture of the present invention has the following technical effects: it can realize unified scheduling of space-based cloud and fog heterogeneous resources; the scheduling can utilize historical scheduling data; it adapts to a dynamic network; and it can meet the deployment requirements of various space-based applications.
Fig. 3 shows a flowchart of a resource scheduling method under a space-based cloud computing architecture according to the present invention.
As shown in fig. 3, a second aspect of the present invention further provides a resource scheduling method under a space-based cloud computing architecture, which is applied to the resource scheduling system under the space-based cloud computing architecture, and the method includes:
s302, a scheduling system receives a service scheduling request of a user;
s304, the scheduling system acquires real-time cloud and fog resource conditions and database contents;
s306, the scheduling system executes a scheduling algorithm according to the real-time cloud and fog resource conditions and the database contents so as to select the optimal deployment node;
s308, deploying the application on the node.
According to the embodiment of the invention, the scheduling system can be located in the space-based edge cloud and/or the fog satellite cluster, and it forms part of the overall space-based container cluster management system.
With reference to ground-based cloud and fog computing systems, there are three schemes for the position of the scheduler in the scheduling system:
the first scheme is as follows: placing in a fog agent or gateway; (see FIG. 4)
Scheme two is as follows: placing in a cloud; (see FIG. 5)
And a third scheme is as follows: schedulers are placed in both cloud and fog. (see FIG. 6)
The first scheme is the one most often adopted in ground-based cloud and fog computing systems, because the ground network topology is stable and the fog proxy node is geographically located between the cloud and the fog cluster, at a suitable distance from the cloud, from the other fog nodes and from the users; this makes it convenient to receive requests, obtain resource conditions and deploy services, and keeps the various communication delays low. In a space-based cloud computing system, however, there is no absolute edge and no absolute far end for a given user node, which is determined by the dynamic nature of space-based network connections. A fog satellite node may be available to a user at one moment and lose the connection at the next. Yet the scheduler must be deployed on a node whose connection with the user node is relatively stable, and in a network whose topology changes rapidly it is very difficult to single out stable fog proxy nodes within the fog cluster.
The second scheme deploys the scheduler in the space-based edge cloud. This scheme is suitable because the space-based edge cloud is deployed in a high orbit and its wireless communication range is wider than that of other satellites, so it can establish stable connections with satellites over a wider area.
The third scheme is an alternative for realizing distributed scheduling in the space-based computing system, and may be used to address the scalability of the space-based network scheduling system.
According to the embodiment of the invention, the resource conditions are one or more of task information, fog node information, cloud node information, network connection information and affinity rule information.
According to the embodiment of the invention, the action node of the scheduling system is a heterogeneous resource node, and the heterogeneous resource is one or more of a CPU, a GPU and an FPGA.
In order to better illustrate the resource scheduling method under the space-based cloud computing architecture of the present invention, the following will describe in detail through an embodiment.
As shown in fig. 7, in the first step, when a user applies for a certain type of service, a request is first sent to the space-based edge cloud in the form of a yaml file; in the second step, a scheduler in a control node of the space-based edge cloud obtains task information, fog node information, cloud node information and network connection information through a communication interface or a database; in the third step, the scheduler executes the scheduling algorithm to select the optimal node; in the fourth step, the scheduler binds the service to the selected node through the communication interface, and the node can be in the cloud or in the fog.
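For concreteness, the following sketch shows what such a yaml request might look like and how the control node could parse it; the field names are hypothetical and are not specified by the patent.

```python
# Illustrative sketch only: a hypothetical yaml service request as mentioned in
# the first step, parsed with PyYAML. All field names are assumptions and are
# not specified by the patent.
import yaml

request_yaml = """
service: image-compression
priority: high        # latency-critical task
resources:
  cpu_cores: 2
  gpu: false
  fpga: true          # request FPGA acceleration
data_size_mb: 512
"""

request = yaml.safe_load(request_yaml)
print(request["service"], request["resources"]["fpga"])  # image-compression True
```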
Once the service binding is completed, the basic flow of service scheduling is finished. The control node may then feed back a message to the user node informing it of the service deployment location and other content (for example, content related to security). The user then transmits the data to the cloud and/or the fog (depending on the scheduling result) for computation and processing.
It should be noted that if the scheduling result deploys the service in the fog, the subsequent computation-related communication takes place entirely between the user node and the fog satellite node and does not involve the space-based edge cloud, so that edge computing is realized.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a program of the resource scheduling method under the space-based cloud computing architecture, and when the program is executed by a processor, the steps of the resource scheduling method under the space-based cloud computing architecture are implemented.
The scheduling system of the present invention comprises: a scheduler, a network topology database, a hardware information database and an affinity rule database. The scheduling objects of the scheduling system are space-based services deployed in the form of containers. The action nodes of the scheduling system comprise heterogeneous resource nodes in the space-based edge cloud and the fog satellite cluster. For the physical location of the scheduling system, the invention designs three deployment schemes and selects one of them. The invention can realize unified scheduling of space-based cloud and fog heterogeneous resources; the scheduling can utilize historical scheduling data, adapts to a dynamic network, and can meet the deployment requirements of various space-based applications.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only one logical function division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (2)

1. A resource scheduling method under a space-based cloud computing architecture is applied to a resource scheduling system under the space-based cloud computing architecture, and is characterized in that:
the resource scheduling system under the space-based cloud computing architecture comprises: the system comprises a scheduler, a network topology database, a hardware information database and an affinity rule database; the scheduler is used for scheduling the computing resources corresponding to each application; the network topology database is electrically connected with the scheduler and used for recording the connection state and the connection duration information among all the nodes; the hardware information database is electrically connected with the scheduler and used for recording the hardware information of each node; the affinity rule database is electrically connected with the scheduler and is used for recording the affinity rules of the service and the nodes;
the scheduling system operates in an architecture consisting of a user terminal, a space-based edge cloud, a ground remote cloud and a fog satellite cluster; the user terminal is used for making a service request to the space-based edge cloud and/or fog satellite cluster and transmitting the data information needing to be computed to the space-based edge cloud and/or fog satellite cluster; the space-based edge cloud is provided with a scheduling system, acquires resource conditions through the scheduling system, executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and performs the computation on the data information when the service nodes are deployed on the space-based edge cloud; the ground remote cloud is provided with an affinity rule learning system, which takes historical scheduling data as input and new affinity rules as output, and the ground remote cloud also provides other computing and service functions; the fog satellite cluster is provided with a scheduling system, acquires resource conditions through the scheduling system, executes a scheduling algorithm according to the resource conditions so as to deploy service nodes, and performs the computation on the data information when the service nodes are deployed on the fog satellite cluster;
the machine learning module based on the rules is located in a ground remote cloud, and the probability prediction formula adopted by the machine learning module is as follows:
$$P(y=1 \mid x) = \frac{1}{1 + e^{-\theta^{T} x}}$$
the scheduling objects of the scheduling system are various space-based services deployed in a container form;
the method comprises the following steps: the scheduling system receives a service scheduling request from a user; the scheduling system acquires real-time cloud and fog resource conditions and database contents; the scheduling system executes a scheduling algorithm according to the real-time cloud and fog resource conditions and the database contents so as to select an optimal deployment node; and the application is deployed on the node;
the position of the scheduling system is in a space-based edge cloud and/or fog satellite cluster, and the scheduling system belongs to one part of the whole space-based container cluster management system;
the resource conditions are one or more of task information, fog node information, cloud node information, network connection information and affinity rule information;
the action nodes of the scheduling system are heterogeneous resource nodes, and the heterogeneous resources are one or more of a CPU, a GPU and an FPGA.
2. A computer-readable storage medium, wherein the computer-readable storage medium includes a program for resource scheduling under a space-based cloud computing architecture, and when the program for resource scheduling under the space-based cloud computing architecture is executed by a processor, the method for resource scheduling under the space-based cloud computing architecture recited in claim 1 is implemented.
CN201911000888.8A 2019-10-21 2019-10-21 Resource scheduling system, method and storage medium under space-based cloud computing architecture Active CN110753107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911000888.8A CN110753107B (en) 2019-10-21 2019-10-21 Resource scheduling system, method and storage medium under space-based cloud computing architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911000888.8A CN110753107B (en) 2019-10-21 2019-10-21 Resource scheduling system, method and storage medium under space-based cloud computing architecture

Publications (2)

Publication Number Publication Date
CN110753107A CN110753107A (en) 2020-02-04
CN110753107B (en) 2022-12-20

Family

ID=69279148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911000888.8A Active CN110753107B (en) 2019-10-21 2019-10-21 Resource scheduling system, method and storage medium under space-based cloud computing architecture

Country Status (1)

Country Link
CN (1) CN110753107B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611071B (en) * 2020-04-21 2021-09-07 中国人民解放军军事科学院国防科技创新研究院 Satellite system of satellite-cloud-edge-end architecture and data processing method thereof
CN112394945B (en) * 2020-10-28 2022-05-24 浙江大学 System verification method for complex edge calculation
CN112965800A (en) * 2021-03-09 2021-06-15 上海焜耀网络科技有限公司 Distributed computing task scheduling system
CN113271137A (en) * 2021-04-16 2021-08-17 中国电子科技集团公司电子科学研究院 Cooperative processing method and storage medium for space-based network heterogeneous computational power resources
CN114371938B (en) * 2022-01-10 2024-02-02 中国人民解放军国防科技大学 Space-based intelligent networking edge computing framework
CN114090303B (en) * 2022-01-14 2022-05-03 杭州义益钛迪信息技术有限公司 Software module scheduling method and device, electronic equipment, storage medium and product
CN115408329B (en) * 2022-08-26 2023-07-25 上海玫克生储能科技有限公司 Plug-and-play type edge computing terminal hardware system architecture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534318A (en) * 2016-11-15 2017-03-22 浙江大学 OpenStack cloud platform resource dynamic scheduling system and method based on flow affinity
CN108597599A (en) * 2018-04-28 2018-09-28 厦门理工学院 A kind of health monitoring system and method based on the scheduling of cloud and mist resource low latency
CN109117247A (en) * 2018-07-18 2019-01-01 上海交通大学 A kind of virtual resource management system and method based on heterogeneous polynuclear topology ambiguity
CN109936619A (en) * 2019-01-18 2019-06-25 中国科学院空间应用工程与技术中心 A kind of Information Network framework, method and readable storage medium storing program for executing calculated based on mist

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117265A (en) * 2018-07-12 2019-01-01 北京百度网讯科技有限公司 The method, apparatus, equipment and storage medium of schedule job in the cluster

Also Published As

Publication number Publication date
CN110753107A (en) 2020-02-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant