CN110730138A - Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture - Google Patents


Info

Publication number
CN110730138A
Authority
CN
China
Prior art keywords
network
traffic
flow
space
resource allocation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911000850.0A
Other languages
Chinese (zh)
Inventor
韦君勇
曹素芝
闫蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technology and Engineering Center for Space Utilization of CAS
Original Assignee
Technology and Engineering Center for Space Utilization of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technology and Engineering Center for Space Utilization of CAS filed Critical Technology and Engineering Center for Space Utilization of CAS
Priority to CN201911000850.0A priority Critical patent/CN110730138A/en
Publication of CN110730138A publication Critical patent/CN110730138A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/2483 Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
    • H04L47/29 Flow control; Congestion control, using a combination of thresholds
    • H04L47/70 Admission control; Resource allocation
    • H04L47/78 Architectures of resource allocation
    • H04L47/82 Miscellaneous aspects
    • H04L47/823 Prediction of resource usage
    • H04L47/829 Topology based
    • H04L47/83 Admission control; Resource allocation based on usage prediction

Abstract

The invention provides a dynamic resource allocation method, system and storage medium for a space-based cloud computing architecture. The method comprises the following steps: sensing the state parameters of the current network to generate a network state awareness result; according to the network state awareness result, detecting the traffic of each node and link, the user access volume and the traffic consumed by each application under the current network topology, and classifying and identifying the traffic to generate a traffic classification result; according to the traffic classification result, predicting the traffic by geographic location, time distribution and access intensity to generate a traffic prediction result; and according to the traffic prediction result, generating a resource allocation scheme through visualization on a big data analysis platform. Through dynamic resource allocation, the invention makes efficient use of heterogeneous resources, meets the requirements of delay-sensitive and big-data space-based applications, adapts to dynamic network connectivity, ensures the reliability of service flows, and achieves load balancing of the system.

Description

Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture
Technical Field
The invention relates to the technical field of computers, in particular to a dynamic resource allocation method, a dynamic resource allocation system and a storage medium for a space-based cloud computing architecture.
Background
With the development of technology and the diversification of application requirements, traditional satellites developed in a stovepipe fashion suffer from single-purpose functions, mutual isolation and heavy dependence on ground operation, and can no longer meet the urgent needs of China's military, economic and social development. The space-ground integrated information network comprehensively applies new information network technologies, fully exploits the respective advantages of space-based and ground-based networks, supports major national strategic actions, and promotes the transmission and sharing of multi-source information. The satellite network is an important component of the space-ground integrated information network and offers the advantages of a high vantage point, long reach and wide coverage.
The major "space-ground integrated information network" project launched under China's Science and Technology Innovation 2030 initiative aims to build an information network system with global coverage and on-demand service. The project adopts a "space network plus ground network" architecture: space-based nodes are networked among themselves, space and ground segments are interconnected, and the whole is interconnected and interoperable with the terrestrial Internet and mobile communication networks. The space-based backbone network is mainly formed by space-based backbone nodes deployed in GEO orbit and interconnected at high speed through inter-satellite links, giving it global coverage capability; the space-based access network is mainly composed of constellations deployed in LEO orbit, provides global seamless random access and mobile broadband communication, and is also called the low-orbit constellation of the space-based access network. Satellite optical networks based on space laser communication offer high data rates and large information capacity and are an important technical means for future satellite relay and satellite networking.
While the concept of the space-ground integrated network continues to mature, advances in terrestrial networking, in particular the emergence of Software Defined Networking (SDN), Network Function Virtualization (NFV) and edge computing, combined with the rapid development of Artificial Intelligence (AI) in recent years, are driving satellite networks toward intelligence, collaboration and identification, and toward a transition from a host-centric to an information-centric model.
Research on applying emerging terrestrial network technologies to space-based networks mainly includes: space-ground integrated networks based on SDN/NFV/MEC, integration of satellite networks with terrestrial 5G networks, AI-based satellite network resource management architectures, space-ground integrated network protocols, and satellite network links. SDN, NFV and MEC are three key technologies of 5G mobile communication, and AI plays an important role in intelligent network applications; research on these emerging network technologies therefore benefits the development of the terrestrial Internet and mobile communication networks and plays a significant role in building the space-ground integrated information network.
Alongside theoretical research and simulation, experimental prototypes are also being developed. On 20 November 2018, Tianzhi-1, China's first test satellite dedicated to verifying key software-defined satellite technologies, developed by the Institute of Software of the Chinese Academy of Sciences, was successfully launched from the Jiuquan Satellite Launch Center. The successful launch of Tianzhi-1 provides an open test platform for developing the Tianzhi series of satellites, supports commercial aerospace and the construction of an aerospace ecosystem, and promotes the evolution of traditional satellites toward platform-based, software-defined, intelligent and virtualized designs.
In the Cloud Computing (CC) mode, User Equipment (UE) accesses and uses the computing and storage resources of a powerful, remote, centralized CC center through an operator and the Core Network (CN), as shown in fig. 1. In recent years, the number of UEs accessing the Internet has grown explosively, and in the emerging Internet of Things (IoT) model countless heterogeneous devices with diverse computing capabilities will be interconnected. This means that network load will keep increasing, and linearly growing centralized CC capacity cannot match the exponentially growing volume of edge data. The massive data transmitted from edge UEs to the CC center burdens network transmission bandwidth and introduces high latency; moreover, this data transmission consumes a large amount of power on energy-limited UEs.
To relieve the high network load and satisfy users' requirements for ultra-low latency and ultra-high bandwidth, CC services should be moved close to the UE in the network topology, i.e. to the edge of the network, as proposed by the emerging Edge Computing (EC) paradigms. These EC paradigms, such as cloudlets, mobile ad-hoc clouds, Fog Computing (FC) and Mobile Edge Computing/Multi-access Edge Computing (MEC), place computing and storage capabilities at the edge of the network, achieving lower latency, saving UE energy, supporting real-time radio-network information and location-aware computing, alleviating network congestion, and enhancing the privacy and security of mobile applications.
With the continuous development of the aerospace industry, on-orbit satellites with various functions keep increasing in number and generate massive real-time data. In remote sensing analysis in particular, both GEO satellites and medium/low orbit satellites collect remote sensing data and produce large volumes of real-time data. Conventionally, these data are downlinked to ground stations for processing, i.e. the CC mode is adopted; this resource-intensive big-data processing mode provides fast processing capability and economies of scale. However, as with terrestrial networks, this approach exposes the same problems as the number of on-orbit devices and the volume of network traffic grow.
This requires adding computing capability on the on-orbit device side so that data can be processed in real time, avoiding functional degradation caused by limited network bandwidth. EC technology, deployed at the edge of the network as an open platform integrating network, computing, storage and application capabilities, provides intelligent edge processing services nearby and thereby meets satellite equipment's key requirements for real-time service, data optimization, security and privacy protection. For the huge volume of on-orbit data, combining space-based edge computing improves the on-orbit processing capability for remote sensing data, enables on-orbit real-time processing and rapid on-demand distribution, improves the timeliness of decisions, and reduces processing time.
In the research of space-based computing architecture, in order to solve the contradiction between the space application of time delay sensitivity and big data and the limitation of satellite bandwidth, a space-based cloud computing system is proposed under the support of technologies such as software defined satellite, virtualization and space network. The system mainly comprises: a spatial edge node, a spatial edge cloud, and a ground-based far-end cloud.
The spatial edge nodes can be divided into user nodes (satellites or aircraft) and fog satellite nodes. A user node undertakes no or only limited computing and mainly offloads its services to the edge cloud or the remote cloud. A fog satellite node has a certain amount of computing and storage capacity: as a user, it can offload its own computing tasks to the edge cloud or remote cloud for assistance; as a computing resource, it can receive tasks dispatched from the cloud and execute them independently or form a fast service cluster with other fog satellite nodes. Fog satellite nodes use a general-purpose virtualization platform and can host different types of services on demand.
The space edge cloud is a space-based information port infrastructure and takes on the functions of a space-based edge data center. Compared with a fog satellite, the space-based edge cloud integrates heterogeneous resources such as a CPU (central processing unit), a GPU (graphic processing unit), an FPGA (field programmable gate array) and the like, and has stronger computing and storing capabilities. Under the support of virtualization technology, the system can not only complete various applications unloaded from user nodes, but also complete functions of data fusion, task analysis, intelligent distribution, construction of a fog satellite node rapid service cluster and the like.
The ground remote cloud is foundation information port infrastructure and takes charge of the functions of a large-scale cloud computing center. Compared with a space-based edge cloud, a ground cloud has more computing storage resources and stronger computing power.
The fog satellite nodes, the spatial edge cloud and the ground remote cloud form a three-layer computing model of the spatial information fog network, with computing power increasing layer by layer from the edge to the cloud. In terms of computation placement, low-complexity computation can be completed on the fog satellite nodes; computation of higher complexity with strict real-time requirements is best completed on the space-based edge cloud; and computation with low real-time requirements but large volume and high complexity is placed on the ground remote cloud.
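The tiered placement logic described above can be summarized in a short sketch. The following Python snippet is illustrative only; the Task fields and the 0.3 complexity threshold are assumptions made for the example and are not values specified by this application.

```python
# Toy placement rule for the three-tier model:
# fog satellite node -> space-based edge cloud -> ground remote cloud.
from dataclasses import dataclass

@dataclass
class Task:
    complexity: float        # assumed relative compute complexity, 0..1
    realtime_required: bool  # whether the result is needed on-orbit in real time

def place_task(task: Task) -> str:
    """Return the tier that should execute the task."""
    if task.complexity < 0.3:
        return "fog_satellite_node"      # low-complexity work stays at the edge node
    if task.realtime_required:
        return "space_based_edge_cloud"  # higher complexity with strict latency needs
    return "ground_remote_cloud"         # heavy, latency-tolerant computation

print(place_task(Task(complexity=0.8, realtime_required=False)))  # ground_remote_cloud
```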
The particularity of space-based computing resources is reflected in: (1) heterogeneity: computing resources on a satellite include CPUs, FPGAs, GPUs, memory and so on; (2) dispersion: satellite computing resources are scattered across various locations in space; (3) dynamics: satellites are in motion, so the topology of the spatial information network is time-varying.
Therefore, designing a dynamic resource allocation workflow suited to a space-based cloud computing architecture, i.e. how to allocate the computing/storage resources on satellites to meet the corresponding target requirements, is a key and difficult research problem.
Disclosure of Invention
In order to solve at least one technical problem, the invention provides a dynamic resource allocation method, a system and a storage medium for a space-based cloud computing architecture.
In order to achieve the above object, a first aspect of the present invention provides a dynamic resource allocation method for a space-based cloud computing architecture, where the dynamic resource allocation method includes:
sensing the state parameters of the current network by using the existing network resources and the historical network data to generate a network state sensing result;
according to the network state perception result, detecting the traffic condition of each node and each link, the user access amount and the consumption of each application on the traffic under the current network topology structure, and carrying out classification and identification on the traffic to generate a traffic classification and identification result;
summarizing and inducing the flow characteristics of each node and each link under the current network topology structure according to the flow classification recognition result, predicting the flow condition based on the geographic position, the time distribution and the access intensity degree, and generating a flow prediction result;
and according to the flow prediction result, generating a resource configuration scheme through visualization of a big data analysis platform.
In the scheme, the method for sensing the state parameters of the current network by using the existing network resources and the historical network data further comprises the following steps:
and sensing the state parameters of the current network by an in-band network telemetry technology, wherein the state parameters are one or more of network physical topology, queue capacity and single-hop time delay.
In the scheme, detecting the traffic condition of each node and link, the user access amount and the consumption of each application on the traffic under the current network topology structure, and classifying and identifying the traffic, further comprises:
the method for indexing the data stream by the Hash function is adopted to detect the passing elephant stream on line at high speed, when the data stream arrives, the Hash index is utilized to carry out fast statistics on the scale of the data stream, and the data stream with the scale exceeding a certain threshold value is judged to be the elephant stream; and/or
And classifying and identifying the flow through a machine learning method based on the flow characteristics.
Further, based on the flow characteristics, the flow is classified and identified by a machine learning method, and the method further comprises the following steps:
defining characteristics by which traffic can be identified and differentiated;
training an ML classifier which can associate the feature set of the flow with known classes, applying an ML algorithm and classifying unknown flow according to a rule model which is trained well in the prior learning.
In the scheme, according to the traffic classification recognition result, the traffic characteristics of each node and each link under the current network topology structure are summarized and summarized, the traffic condition based on the geographical position, the time distribution and the access intensity degree is predicted, and the method further comprises the following steps:
and predicting the traffic condition based on the geographic position, the time distribution and the access intensity degree by a regression learning method and/or a reinforcement learning method.
Preferably, the resources include computing resources and storage resources, the computing resources are one or more of a CPU, a GPU and an FPGA, and the storage resources are one or more of a mechanical hard disk, a solid state hard disk, a liquid state hard disk and an optical hard disk.
The second aspect of the present invention further provides a dynamic resource allocation system of a space-based cloud computing architecture, where the dynamic resource allocation system of the space-based cloud computing architecture includes: a memory and a processor, the memory storing a dynamic resource allocation method program of the space-based cloud computing architecture, and the program, when executed by the processor, implementing the following steps:
sensing the state parameters of the current network by using the existing network resources and the historical network data to generate a network state sensing result;
according to the network state perception result, detecting the traffic condition of each node and each link, the user access amount and the consumption of each application on the traffic under the current network topology structure, and carrying out classification and identification on the traffic to generate a traffic classification and identification result;
summarizing and inducing the flow characteristics of each node and each link under the current network topology structure according to the flow classification recognition result, predicting the flow condition based on the geographic position, the time distribution and the access intensity degree, and generating a flow prediction result;
and according to the flow prediction result, generating a resource configuration scheme through visualization of a big data analysis platform.
In the scheme, the method for sensing the state parameters of the current network by using the existing network resources and the historical network data further comprises the following steps:
and sensing the state parameters of the current network by an in-band network telemetry technology, wherein the state parameters are one or more of network physical topology, queue capacity and single-hop time delay.
In the scheme, detecting the traffic condition of each node and link, the user access amount and the consumption of each application on the traffic under the current network topology structure, and classifying and identifying the traffic, further comprises:
the method for indexing the data stream by the Hash function is adopted to detect the passing elephant stream on line at high speed, when the data stream arrives, the Hash index is utilized to carry out fast statistics on the scale of the data stream, and the data stream with the scale exceeding a certain threshold value is judged to be the elephant stream; and/or
And classifying and identifying the flow through a machine learning method based on the flow characteristics.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a program of a dynamic resource allocation method for a space-based cloud computing architecture, and when the program of the dynamic resource allocation method for the space-based cloud computing architecture is executed by a processor, the steps of the method for dynamically allocating resources for a space-based cloud computing architecture as described above are implemented.
By dynamically configuring computing and storage resources, the invention can meet the requirements of a 5G network, achieve global seamless coverage, and raise the number of supportable user connections to one million per square kilometer. While achieving global seamless coverage, the invention further optimizes resource allocation so that resource nodes do not interfere with each other, the cost of resource configuration is reduced, and resource utilization is maximized under limited resource provisioning. By introducing edge computing, the invention gives satellites on-orbit computing capability and shortens propagation delay to the millisecond level, so that data can be computed and processed in real time.
The invention also uses high-throughput communication technologies such as laser communication to increase the amount of data transmitted; at the same time, on-orbit computation lets satellites occupy less transmission bandwidth, so the limited bandwidth carries more useful data. In addition, after the initial resource allocation, resources can be re-adjusted according to user quality-of-service evaluation, continuously optimizing the resource configuration.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 illustrates a prior art cloud computing mode network architecture diagram;
FIG. 2 shows a workflow of a resource allocation system under a space-based cloud computing architecture according to the present invention;
FIG. 3 is a flow chart illustrating a method for dynamic resource allocation for space-based cloud computing architecture in accordance with the present invention;
FIG. 4 is a flow diagram illustrating a method for dynamic resource allocation in accordance with one embodiment of the invention;
FIG. 5 illustrates an in-band network monitoring architecture diagram in accordance with the present invention;
FIG. 6 is a flow chart illustrating an elephant flow detection and prediction method of the present invention;
fig. 7 is a block diagram of a dynamic resource allocation system under the space-based cloud computing architecture of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
As shown in fig. 2, the workflow of the resource allocation system under the space-based cloud computing architecture includes basic resource configuration, dynamic resource configuration and satellite constellation design. The basic resource configuration comprises the computing/storage resources provisioned in the underlying hardware of every node in the space-based cloud computing architecture; it can satisfy the most basic tasks, which in general do not require a particularly large amount of resources. The dynamic resource configuration comprises a number of dedicated computing/storage resources: the computing resources can be heterogeneous (such as GPU, CPU and FPGA), and the storage resources can be of different types (such as mechanical hard disk, solid state hard disk, liquid state hard disk and optical hard disk). Because of the highly dynamic nature of the space-based network, the corresponding satellite constellation orbits need to be designed according to the configuration requirements of the resource allocation system under the space-based cloud computing architecture in order to complete the resource configuration.
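To make the split between basic and dynamic resource configuration concrete, the sketch below models the two pools as simple data structures. The field names and example capacities are illustrative assumptions, not values taken from this application.

```python
from dataclasses import dataclass, field

@dataclass
class BasicResources:
    # baseline compute/storage provisioned in every node's underlying hardware
    cpu_cores: int = 4
    storage_gb: int = 256

@dataclass
class DynamicResources:
    # heterogeneous accelerators and storage media attached on demand
    accelerators: dict = field(default_factory=lambda: {"GPU": 0, "FPGA": 0})
    storage: dict = field(default_factory=lambda: {"ssd_gb": 0, "hdd_gb": 0})

@dataclass
class NodeConfiguration:
    node_id: str
    basic: BasicResources = field(default_factory=BasicResources)
    dynamic: DynamicResources = field(default_factory=DynamicResources)

node = NodeConfiguration(node_id="fog-sat-01")
node.dynamic.accelerators["GPU"] = 2   # dynamically attach extra GPU capacity
print(node)
```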
Fig. 3 shows a flowchart of a dynamic resource allocation method of the space-based cloud computing architecture according to the present invention.
As shown in fig. 3, a first aspect of the present invention provides a dynamic resource allocation method for space-based cloud computing architecture, where the dynamic resource allocation method includes:
s302, sensing the state parameters of the current network by using the existing network resources and the historical network data to generate a network state sensing result;
s304, detecting the traffic condition of each node and link, the user access amount and the consumption of each application on the traffic under the current network topology structure according to the network state sensing result, and carrying out classification and identification on the traffic to generate a traffic classification and identification result;
s306, summarizing and inducing the flow characteristics of each node and each link under the current network topology structure according to the flow classification recognition result, predicting the flow condition based on the geographic position, time distribution and access intensity, and generating a flow prediction result;
and S308, generating a resource configuration scheme through visualization of the big data analysis platform according to the flow prediction result.
It should be noted that the resources include computing resources and storage resources, the computing resources may be one or more of a CPU, a GPU, and an FPGA, and the storage resources may be one or more of a mechanical hard disk, a solid state hard disk, a liquid state hard disk, and an optical hard disk.
It should be noted that the dynamic resource allocation method follows the logic of "network state awareness + traffic identification + traffic prediction + resource allocation scheme generation". Network state awareness mainly uses existing network resources and historical network data to sense the state parameters of the current network; the main parameters include network topology, link traffic, link delay, node user access volume, access-network-side traffic and so on. Preferably, network state awareness of the overall network state can be achieved through in-band network telemetry.
Traffic identification mainly detects, according to the network state awareness result, the traffic conditions of each node and link (such as elephant flows), the user access volume and the traffic consumed by each application under the current network topology, and classifies and identifies the traffic. Preferably, traffic identification can be performed with data-stream indexing and/or machine learning techniques.
Traffic prediction mainly summarizes the traffic characteristics of each node and link under the current network topology according to the traffic classification results, and predicts traffic conditions by geographic location, time distribution and access intensity. Preferably, traffic prediction can be performed with regression learning and/or reinforcement learning.
The resource allocation scheme is generated through visualization on a big data analysis platform according to the traffic prediction result, and includes the resource allocation at geographic or spatial positions as well as periodic, service-migration-based resource allocation over different time periods (such as day and night, or the four seasons). Preferably, the resource allocation scheme can be generated jointly from the results of network state awareness, traffic identification and traffic prediction on the big data analysis platform.
As shown in fig. 4, a resource allocation scheme is generated through network state awareness, traffic identification and traffic prediction. If the resource allocation scheme can meet the target requirements, a satellite constellation can be designed according to it. If the target requirements cannot be met, network sensing, traffic identification and traffic prediction are carried out again, and a new resource allocation scheme is generated and evaluated, forming a closed-loop system that iterates on network historical data and continuously optimizes the resource allocation scheme.
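The closed loop of fig. 4 can be expressed as a simple control loop. The sketch below is a self-contained skeleton of the "sense, identify, predict, generate, evaluate" cycle; every stage is a stub standing in for the real component described in the text, and all numbers are placeholder assumptions.

```python
def sense_network_state(history):        # stand-in for in-band network telemetry
    return {"load": 0.7 if not history else min(1.0, history[-1]["load"] + 0.1)}

def classify_traffic(state):             # stand-in for elephant-flow detection / ML classification
    return {"elephant_ratio": 0.1, "load": state["load"]}

def predict_traffic(traffic):            # stand-in for regression / reinforcement learning
    return {"expected_load": traffic["load"] * 1.2}

def generate_scheme(forecast):           # stand-in for the big-data analysis platform
    return {"provisioned_capacity": forecast["expected_load"] * 1.5}

def meets_requirements(scheme, target):  # evaluation against the target requirement
    return scheme["provisioned_capacity"] >= target

def allocation_loop(target=1.0, max_iters=10):
    history = []
    for _ in range(max_iters):
        state = sense_network_state(history)
        traffic = classify_traffic(state)
        forecast = predict_traffic(traffic)
        scheme = generate_scheme(forecast)
        if meets_requirements(scheme, target):
            return scheme                 # accepted: proceed to constellation design
        history.append(state)             # otherwise iterate with updated history
    return None                           # no feasible scheme within the iteration budget

print(allocation_loop())
```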
According to the embodiment of the invention, the sensing of the state parameters of the current network by using the existing network resources and the historical data of the network further comprises the following steps:
and sensing the state parameters of the current network by an in-band network telemetry technology, wherein the state parameters are one or more of network physical topology, queue capacity and single-hop time delay.
It should be noted that the SDN-based in-band network telemetry of the present invention senses the performance of nodes and links across the whole network; specifically, the state information of network devices (including network physical topology, queue occupancy, single-hop delay, etc.) is monitored in a fine-grained and efficient way. Built on the existing programmable network hardware plane, the aim is to use a new hardware monitoring scheme, in-band network telemetry (INT), to achieve real-time network monitoring that has no hardware dependence, adds no extra traffic, and provides packet-level, millisecond-level awareness. In-band network telemetry is a data-plane sensing technique whose core idea is to write network state data directly into packet headers by means of programmable hardware.
As shown in fig. 5, the SDN controller only needs to issue a monitoring instruction to the network devices. When network traffic passes through a node, the programmable network device writes the requested state directly into the packet header, and the monitoring data keeps being appended to the header as the packet is forwarded along its path. Finally, at the terminal node, the telemetry data is stripped from the packet and uploaded directly to the data analysis platform.
In-band network monitoring collects data directly from the data plane, writes the network state into packet headers without controller involvement, and uploads the monitoring data to the big data analysis platform through a standard message queue. Packet-level, fine-grained network monitoring is thus achieved while avoiding massive probe traffic and computational load on the controller. This monitoring technique has the following advantages: (1) No additional hardware dependency: INT does not depend on specific network hardware, and there is no hardware compatibility problem across vendors, which favours adoption. (2) No extra traffic: INT writes the sensed network state into packet headers instead of generating additional state packets, so the data volume is very small, essentially no extra traffic is injected into the network, and link stability is preserved. (3) Packet-level awareness: INT senses the network at the granularity of individual packets and can obtain fine-grained state such as the delay and congestion experienced by each packet. (4) Millisecond-level awareness: unlike traditional monitoring that relies on sampling or statistics, INT carries the network state in every packet and delivers data to applications such as network analysis in real time and efficiently, making it suitable for sensing delay-sensitive services.
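The per-hop behaviour described above can be mimicked with a toy simulation: each node appends its own state to the packet's telemetry metadata as the packet is forwarded, and the terminal node strips the accumulated records for upload. This is a conceptual Python sketch under stated assumptions, not a real P4/INT implementation; all field names are assumptions.

```python
import time

def forward_with_int(packet: dict, path: list) -> tuple:
    """Simulate INT: each hop appends its state to the packet header metadata."""
    packet.setdefault("int_metadata", [])
    for node in path:
        record = {
            "node_id": node["id"],
            "queue_depth": node["queue_depth"],      # queue occupancy at this hop
            "hop_latency_ms": node["hop_latency_ms"],
            "timestamp": time.time(),
        }
        packet["int_metadata"].append(record)        # written in-band, no extra probe packets
    telemetry = packet.pop("int_metadata")           # terminal node strips telemetry for upload
    return packet, telemetry

path = [
    {"id": "sat-A", "queue_depth": 12, "hop_latency_ms": 0.8},
    {"id": "sat-B", "queue_depth": 3,  "hop_latency_ms": 0.5},
]
pkt, records = forward_with_int({"payload": b"user data"}, path)
print(records)  # per-hop state ready to push to the big data analysis platform
```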
According to the embodiment of the present invention, detecting traffic conditions of each node and link, user access amount, and consumption amount of each application on traffic under the current network topology, and classifying and identifying the traffic, further comprising:
the method for indexing the data stream by the Hash function is adopted to detect the passing elephant stream on line at high speed, when the data stream arrives, the Hash index is utilized to carry out fast statistics on the scale of the data stream, and the data stream with the scale exceeding a certain threshold value is judged to be the elephant stream; and/or
And classifying and identifying the flow through a machine learning method based on the flow characteristics.
Further, based on the flow characteristics, the flow is classified and identified by a machine learning method, and the method further comprises the following steps:
defining characteristics by which traffic can be identified and differentiated;
training an ML classifier which can associate the feature set of the flow with known classes, applying an ML algorithm and classifying unknown flow according to a rule model which is trained well in the prior learning.
It can be understood that big data poses huge challenges to communication, where its characteristics of large scale, high velocity and wide variety are fully evident. For example, in a mobile backbone network the traffic bandwidth is 40 to 100 Gbps, and in large data centers the traffic bandwidth can reach the 1 Tbps scale. In a communication network, a large data flow is called an elephant flow and a small data flow a mouse flow. Elephant flows appear in a low proportion, yet they occupy network bandwidth and cause congestion. Detecting and predicting elephant flows in time and taking the necessary measures is therefore an important task of network management.
It should be noted that elephant flow detection means counting the size of a data stream while it passes and determining whether it is an elephant flow (i.e. contains a large number of packets). The invention adopts hash-function indexing of data streams to detect passing elephant flows online at high speed: the data stream is indexed by a hash function, and when the stream arrives, the hash index is used to count its size quickly; a stream whose size exceeds a certain threshold is an elephant flow. In this process, small-scale data streams are discarded to prevent data overflow.
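A minimal version of the hash-indexed counting described above can be sketched as follows. The byte threshold and the use of a plain Python dictionary (in place of a fixed-size hash table with eviction of small flows) are simplifying assumptions for illustration.

```python
ELEPHANT_THRESHOLD_BYTES = 10 * 1024 * 1024   # assumed threshold: 10 MB

class ElephantDetector:
    def __init__(self, threshold=ELEPHANT_THRESHOLD_BYTES):
        self.threshold = threshold
        self.counters = {}                     # hash-indexed per-flow byte counters

    def observe(self, five_tuple, packet_len):
        """Update the flow's counter and report whether it is now an elephant flow."""
        key = hash(five_tuple)                 # hash index over the flow identifier
        self.counters[key] = self.counters.get(key, 0) + packet_len
        return self.counters[key] > self.threshold

det = ElephantDetector()
flow = ("10.0.0.1", "10.0.0.2", 40000, 443, "TCP")
for _ in range(10000):                         # a long-lived, high-volume flow
    is_elephant = det.observe(flow, 1500)
print(is_elephant)                             # True once the byte count crosses the threshold
```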
Further, the present invention also provides a classification method based on traffic characteristics, which classifies traffic by recognizing statistical patterns in externally observable attributes of traffic without deep inspection of packet contents to collect information and infer semantics. The method may eventually cluster the flows in the network into groups with similar traffic patterns, or one or more related application categories. The flow characteristics include: packet inter-arrival time (mean, variance, etc.), packet size (maximum, minimum, mean), total number of bytes of the stream, duration of the stream, etc. In order to better integrate these flow characteristics, it is preferable to use a Machine Learning (ML) algorithm for flow classification.
The invention applies machine learning techniques to traffic classification based on traffic characteristics, involving the following steps: first, it is necessary to define characteristics that traffic can be identified and distinguished, which are traffic attributes calculated from a plurality of packets (e.g., maximum or minimum packet length in each direction, traffic duration, packet inter-arrival time, etc.); then, an ML classifier is trained that can associate a feature set of traffic with known classes (created from business requirements), and an ML algorithm is applied to classify unknown traffic using a previously learned trained rule model.
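The two-step workflow above (define flow features, then train and apply an ML classifier) can be sketched with scikit-learn. The feature vector layout, the synthetic training data and the choice of a random forest are assumptions for illustration; the application does not mandate a specific classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed per-flow feature vector:
# [mean inter-arrival time (ms), mean packet size (bytes), total bytes, flow duration (s)]
X_train = np.array([
    [0.5,  1400, 5e7, 120.0],   # bulk transfer
    [0.7,  1350, 8e7, 300.0],   # bulk transfer
    [20.0,  200, 4e4,  30.0],   # interactive / control traffic
    [15.0,  180, 2e4,  10.0],   # interactive / control traffic
])
y_train = ["bulk", "bulk", "interactive", "interactive"]   # classes from business requirements

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)                                  # train on flows with known classes

unknown_flow = np.array([[0.6, 1450, 6e7, 200.0]])
print(clf.predict(unknown_flow))                           # classify an unknown flow -> ['bulk']
```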
It should be noted that each ML algorithm sorts and optimizes feature sets differently, so different ML algorithms behave differently during training and classification. The criteria for evaluating traffic classification include False Negatives (FN), False Positives (FP), True Positives (TP), True Negatives (TN), recall, accuracy, flow accuracy and byte accuracy, among others. Traffic classification with ML generally uses supervised or unsupervised learning; preferably, supervised algorithms such as Nearest Neighbor (NN), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), naive Bayes and genetic algorithms, and unsupervised algorithms such as Expectation Maximization (EM), AutoClass and K-Means, are used for traffic classification. It can be understood that different ML algorithms differ in classification accuracy, modeling time, classification speed and so on.
It should be noted that, when applying machine learning to traffic classification and performing traffic classification in combination with specific services, it is necessary to consider that factors such as packet loss, delay jitter, packet fragmentation, direction of a flow unknown in advance, and consumption of CPU and memory resources have an influence on the classification performance of the ML algorithm.
Further, according to the traffic classification results, the traffic characteristics of each node and link under the current network topology are summarized, and the traffic conditions based on geographic location, time distribution and access intensity are predicted, further comprising:
and predicting the traffic condition based on the geographic position, the time distribution and the access intensity degree by a regression learning method and/or a reinforcement learning method.
It should be noted that elephant flow prediction means that when a data stream arrives, the system can judge whether it is an elephant flow by looking only at its header information. Elephant flow prediction can further be formulated as an online regression learning problem: each data stream is a sample consisting of a feature vector and a flow size, as shown in fig. 6. The feature vector carries information such as the addresses and timing of the data stream. The online learning system continuously learns a regression model from the data and, for a newly arriving sample, predicts its size from the features. Preferably, the model is a Gaussian Process Regression model learned online from the flow data; the learned function follows a Gaussian process, and the expectation and variance of the sample to be predicted are given by the joint Gaussian distribution of the training samples and the sample to be predicted, which constitutes the prediction.
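The regression formulation above can be illustrated, in an offline batch form, with scikit-learn's Gaussian process regressor: header-derived features act as inputs, the observed flow size is the target, and the model returns both a predictive mean and an uncertainty. The features, kernel choice and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed header-derived features: [destination port, hour of day]
X = np.array([[443, 2], [443, 14], [8080, 3], [8080, 15], [22, 9]], dtype=float)
y = np.array([5e7, 8e7, 1e6, 2e6, 3e4])        # observed flow sizes in bytes

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)                                   # learn the regression model from past flows

x_new = np.array([[443, 13]], dtype=float)      # a newly arriving flow, header information only
mean, std = gpr.predict(x_new, return_std=True) # joint-Gaussian predictive mean and uncertainty
print(mean[0], std[0])                          # a large predicted size suggests an elephant flow
```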
It can be understood that, for AI-based traffic prediction, network state awareness provides abundant, massive, fine-grained network state and traffic features, realizing global monitoring and awareness of the whole network. However, as the network scale and the number of services grow, the links face a massive network state space, and the agent must predict the traffic state from this huge space, which is very difficult for traditional machine learning. The deep reinforcement learning algorithm of the invention combines deep learning with reinforcement learning to compress the state space. Deep neural networks have strong function-fitting capability; deep learning not only brings the convenience of end-to-end optimization to reinforcement learning, but also frees reinforcement learning from low-dimensional spaces, greatly extending its range of application. Reinforcement learning defines the optimization goal, and deep learning provides the working mechanism: the way the problem is represented and solved. The combination of the two therefore enables accurate prediction of the traffic state from a massive state space.
It is understood that real-time information of the network is transferred to the AI plane through the controller and serves as the input State of the AI plane. Since the controller holds a global view of the network, the state space of the input states is large. The deep-reinforcement-learning Agent takes this state as input and compresses the network state space. Deep reinforcement learning derives the Agent's policy by training on the observed network data used as state input. It uses a policy π(a|s) that maps the state space to the action space and selects a near-optimal decision Action according to the input state (a minimal sketch of this mapping follows the list below). The AI plane feeds the resulting decision action back to the controller, which issues it to the underlying network for configuration and deployment. Learning network policies through deep reinforcement learning has the following three main advantages:
firstly, due to the black box characteristic of the deep reinforcement learning algorithm, different network decision tasks and optimization targets only need to be designed with action space and reward without redesigning a mathematical model;
secondly, the strong fitting capability of the deep neural network can process a complex environment, and an optimal network strategy is searched from the network state of a massive state space;
thirdly, once the agent of the deep reinforcement learning algorithm is trained, an approximate optimal solution can be given in one-step calculation, and compared with multi-step convergence of a heuristic algorithm, the method has great advantages for a high dynamic network.
Fig. 7 is a block diagram of a dynamic resource allocation system under the space-based cloud computing architecture of the present invention.
As shown in fig. 7, the second aspect of the present invention further provides a dynamic resource allocation system 7 of a space-based cloud computing architecture, where the dynamic resource allocation system 7 of the space-based cloud computing architecture includes: a memory 71 and a processor 72, wherein the memory 71 includes a dynamic resource allocation method program of space-based cloud computing architecture, and when the processor 72 executes the dynamic resource allocation method program of space-based cloud computing architecture, the following steps are implemented:
sensing the state parameters of the current network by using the existing network resources and the historical network data to generate a network state sensing result;
according to the network state perception result, detecting the traffic condition of each node and each link, the user access amount and the consumption of each application on the traffic under the current network topology structure, and carrying out classification and identification on the traffic to generate a traffic classification and identification result;
summarizing and inducing the flow characteristics of each node and each link under the current network topology structure according to the flow classification recognition result, predicting the flow condition based on the geographic position, the time distribution and the access intensity degree, and generating a flow prediction result;
and according to the flow prediction result, generating a resource configuration scheme through visualization of a big data analysis platform.
According to the embodiment of the invention, the sensing of the state parameters of the current network by using the existing network resources and the historical data of the network further comprises the following steps:
and sensing the state parameters of the current network by an in-band network telemetry technology, wherein the state parameters are one or more of network physical topology, queue capacity and single-hop time delay.
According to the embodiment of the present invention, detecting traffic conditions of each node and link, user access amount, and consumption amount of each application on traffic under the current network topology, and classifying and identifying the traffic, further comprising:
the method for indexing the data stream by the Hash function is adopted to detect the passing elephant stream on line at high speed, when the data stream arrives, the Hash index is utilized to carry out fast statistics on the scale of the data stream, and the data stream with the scale exceeding a certain threshold value is judged to be the elephant stream; and/or
And classifying and identifying the flow through a machine learning method based on the flow characteristics.
The third aspect of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a program of a dynamic resource allocation method for a space-based cloud computing architecture, and when the program of the dynamic resource allocation method for the space-based cloud computing architecture is executed by a processor, the steps of the method for dynamically allocating resources for a space-based cloud computing architecture as described above are implemented.
By dynamically configuring computing and storage resources, the invention can meet the requirements of a 5G network, achieve global seamless coverage, and raise the number of supportable user connections to one million per square kilometer. While achieving global seamless coverage, the invention further optimizes resource allocation so that resource nodes do not interfere with each other, the cost of resource configuration is reduced, and resource utilization is maximized under limited resource provisioning. By introducing edge computing, the invention gives satellites on-orbit computing capability and shortens propagation delay to the millisecond level, so that data can be computed and processed in real time.
The invention also uses high-throughput communication technologies such as laser communication to increase the amount of data transmitted; at the same time, on-orbit computation lets satellites occupy less transmission bandwidth, so the limited bandwidth carries more useful data. In addition, after the initial resource allocation, resources can be re-adjusted according to user quality-of-service evaluation, continuously optimizing the resource configuration.
The invention can efficiently utilize heterogeneous resources, meet the requirements of space-based time delay sensitivity and big data application, adapt to dynamic network connection, ensure the reliability of service flow and realize the load balance of the system.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A dynamic resource allocation method for a space-based cloud computing architecture, characterized by comprising the following steps:
sensing state parameters of the current network by using existing network resources and historical network data, and generating a network state sensing result;
according to the network state sensing result, detecting the traffic condition of each node and each link, the user access volume, and the traffic consumed by each application under the current network topology, classifying and identifying the traffic, and generating a traffic classification and identification result;
according to the traffic classification and identification result, summarizing and generalizing the traffic characteristics of each node and each link under the current network topology, predicting traffic conditions based on geographic location, time distribution and access intensity, and generating a traffic prediction result;
and according to the traffic prediction result, generating a resource allocation scheme through visualization on a big data analysis platform.
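For illustration only, and not as part of the claims, the following Python sketch outlines the four stages recited in claim 1 as a simple pipeline of placeholder functions; the function names, arguments and return structures are assumptions of this sketch rather than terms defined by the patent.

```python
# Illustrative skeleton of the four-stage pipeline in claim 1:
# sense -> classify -> predict -> allocate. All names and return
# structures are assumptions of this sketch.
def sense_network_state(existing_resources, historical_data):
    """Stage 1: sense current network state parameters."""
    return {"topology": {}, "queue_capacity": {}, "single_hop_delay": {}}

def classify_traffic(network_state):
    """Stage 2: detect per-node/per-link traffic, user access volume and
    per-application consumption, and classify the traffic."""
    return {"per_link_traffic": {}, "traffic_classes": {}}

def predict_traffic(classification_result):
    """Stage 3: summarize traffic features and predict traffic by
    geographic location, time distribution and access intensity."""
    return {"per_node_forecast": {}}

def build_allocation_scheme(traffic_forecast):
    """Stage 4: derive a resource allocation scheme from the forecast
    (visualization on an analysis platform is out of scope here)."""
    return {"compute": {}, "storage": {}}

def dynamic_resource_allocation(existing_resources, historical_data):
    state = sense_network_state(existing_resources, historical_data)
    classes = classify_traffic(state)
    forecast = predict_traffic(classes)
    return build_allocation_scheme(forecast)

print(dynamic_resource_allocation(existing_resources={}, historical_data=[]))
```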
2. The dynamic resource allocation method for a space-based cloud computing architecture according to claim 1, wherein sensing the state parameters of the current network by using existing network resources and historical network data further comprises:
sensing the state parameters of the current network through in-band network telemetry, wherein the state parameters are one or more of network physical topology, queue capacity and single-hop delay.
3. The dynamic resource allocation method for a space-based cloud computing architecture according to claim 1, wherein detecting the traffic condition of each node and each link, the user access volume, and the traffic consumed by each application under the current network topology, and classifying and identifying the traffic, further comprises:
detecting passing elephant flows online at high speed by indexing data flows with a hash function, wherein when a data flow arrives, its size is rapidly counted by means of the hash index, and a data flow whose size exceeds a given threshold is judged to be an elephant flow; and/or
classifying and identifying the traffic through a machine learning method based on traffic characteristics.
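The hash-indexed elephant-flow detection recited in claim 3 could, for example, be sketched as follows; the five-tuple flow key, counter-table size and 10 MiB threshold are illustrative assumptions, and a production design would typically use a collision-resistant sketch (for example a count-min sketch) rather than a single hash-indexed counter array.

```python
# Illustrative hash-indexed elephant-flow detection (claim 3). The flow
# key (5-tuple), table size and byte threshold are assumptions.
ELEPHANT_THRESHOLD_BYTES = 10 * 1024 * 1024   # assumed: flows above 10 MiB are elephants
TABLE_SIZE = 1 << 16                          # assumed counter-table size

flow_bytes = [0] * TABLE_SIZE                 # hash-indexed byte counters
elephant_flows = set()

def on_packet(five_tuple, packet_length):
    """Update this flow's hash-indexed counter; flag it once it crosses the threshold."""
    index = hash(five_tuple) % TABLE_SIZE
    flow_bytes[index] += packet_length
    if flow_bytes[index] >= ELEPHANT_THRESHOLD_BYTES:
        elephant_flows.add(five_tuple)

# Example usage: feed packets of one flow until it is flagged.
flow = ("10.0.0.1", "10.0.0.2", 5001, 443, "TCP")
for _ in range(7000):
    on_packet(flow, 1500)
print(flow in elephant_flows)   # True: 7000 * 1500 B = 10.5 MB exceeds the 10 MiB threshold
```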
4. The dynamic resource allocation method for a space-based cloud computing architecture according to claim 3, wherein classifying and identifying the traffic through a machine learning method based on traffic characteristics further comprises:
defining features by which traffic can be identified and differentiated;
training an ML classifier capable of associating the feature set of a flow with known classes, applying the ML algorithm, and classifying unknown traffic according to the rule model obtained from the prior training.
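Below is a minimal sketch of the feature-based machine-learning classification in claim 4, using a generic random-forest classifier from scikit-learn; the per-flow feature set, the traffic classes and the toy training data are assumptions of the sketch rather than choices made by the patent.

```python
# Illustrative feature-based traffic classification (claim 4). The feature
# set, traffic classes and toy training data are assumptions of this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed per-flow features:
# [mean packet size (B), flow duration (s), packets/s, bytes/s]
X_train = np.array([
    [120.0,   0.5,  40.0, 4.8e3],   # e.g. control/telemetry traffic
    [1400.0, 30.0, 800.0, 1.1e6],   # e.g. bulk payload downlink
    [300.0,   5.0, 120.0, 3.6e4],   # e.g. interactive traffic
])
y_train = ["control", "bulk", "interactive"]   # known classes

classifier = RandomForestClassifier(n_estimators=50, random_state=0)
classifier.fit(X_train, y_train)               # train the rule model

# Classify an unknown flow using the trained rule model.
unknown_flow = np.array([[1350.0, 25.0, 760.0, 1.0e6]])
print(classifier.predict(unknown_flow))
```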
5. The dynamic resource allocation method for a space-based cloud computing architecture according to claim 1, wherein summarizing and generalizing the traffic characteristics of each node and each link under the current network topology according to the traffic classification and identification result, and predicting traffic conditions based on geographic location, time distribution and access intensity, further comprises:
predicting the traffic conditions based on geographic location, time distribution and access intensity through a regression learning method and/or a reinforcement learning method.
6. The dynamic resource allocation method for a space-based cloud computing architecture according to claim 1, wherein the resources comprise computing resources and storage resources, the computing resources being one or more of a CPU, a GPU and an FPGA, and the storage resources being one or more of a mechanical hard disk, a solid-state hard disk, a liquid-state hard disk and an optical hard disk.
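Purely as an illustration of how an allocation scheme could draw on the resource types listed in claim 6, the sketch below maps a predicted per-node load to CPU/GPU/FPGA and storage assignments; every threshold and sizing rule here is an assumption of the sketch.

```python
# Illustrative mapping from a predicted per-node load to the resource types
# listed in claim 6. All thresholds and sizing rules are assumptions.
def allocate_for_node(predicted_mbps: float, dominant_workload: str) -> dict:
    plan = {
        "cpu_cores": max(2, int(predicted_mbps // 100)),   # scale CPUs with load
        "gpu": 0,
        "fpga": 0,
        "storage": {"ssd_gb": 64, "hdd_gb": 0},
    }
    if dominant_workload == "image_processing":      # GPU-friendly workload
        plan["gpu"] = 1
    elif dominant_workload == "packet_processing":   # line-rate task: offload to FPGA
        plan["fpga"] = 1
    if predicted_mbps > 500:                          # bulk data: add bulk storage
        plan["storage"]["hdd_gb"] = 512
    return plan

print(allocate_for_node(750.0, "image_processing"))
```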
7. A dynamic resource allocation system for a space-based cloud computing architecture, characterized in that the system comprises a processor and a program of the dynamic resource allocation method for a space-based cloud computing architecture, wherein the program, when executed by the processor, implements the following steps:
sensing state parameters of the current network by using existing network resources and historical network data, and generating a network state sensing result;
according to the network state sensing result, detecting the traffic condition of each node and each link, the user access volume, and the traffic consumed by each application under the current network topology, classifying and identifying the traffic, and generating a traffic classification and identification result;
according to the traffic classification and identification result, summarizing and generalizing the traffic characteristics of each node and each link under the current network topology, predicting traffic conditions based on geographic location, time distribution and access intensity, and generating a traffic prediction result;
and according to the traffic prediction result, generating a resource allocation scheme through visualization on a big data analysis platform.
8. The dynamic resource allocation system for a space-based cloud computing architecture according to claim 7, wherein sensing the state parameters of the current network by using existing network resources and historical network data further comprises:
sensing the state parameters of the current network through in-band network telemetry, wherein the state parameters are one or more of network physical topology, queue capacity and single-hop delay.
9. The dynamic resource allocation system for a space-based cloud computing architecture according to claim 7, wherein detecting the traffic condition of each node and each link, the user access volume, and the traffic consumed by each application under the current network topology, and classifying and identifying the traffic, further comprises:
detecting passing elephant flows online at high speed by indexing data flows with a hash function, wherein when a data flow arrives, its size is rapidly counted by means of the hash index, and a data flow whose size exceeds a given threshold is judged to be an elephant flow; and/or
classifying and identifying the traffic through a machine learning method based on traffic characteristics.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a program of a dynamic resource allocation method for a space-based cloud computing architecture, and when the program is executed by a processor, the steps of the dynamic resource allocation method for a space-based cloud computing architecture according to any one of claims 1 to 7 are implemented.
CN201911000850.0A 2019-10-21 2019-10-21 Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture Pending CN110730138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911000850.0A CN110730138A (en) 2019-10-21 2019-10-21 Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911000850.0A CN110730138A (en) 2019-10-21 2019-10-21 Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture

Publications (1)

Publication Number Publication Date
CN110730138A true CN110730138A (en) 2020-01-24

Family

ID=69220428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911000850.0A Pending CN110730138A (en) 2019-10-21 2019-10-21 Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture

Country Status (1)

Country Link
CN (1) CN110730138A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160105364A1 (en) * 2014-10-13 2016-04-14 Nec Laboratories America, Inc. Network traffic flow management using machine learning
CN107683597A (en) * 2015-06-04 2018-02-09 思科技术公司 Network behavior data collection and analysis for abnormality detection
US20170155557A1 (en) * 2015-11-30 2017-06-01 International Business Machines Corporation Monitoring Dynamic Networks
CN108259367A (en) * 2018-01-11 2018-07-06 重庆邮电大学 A kind of Flow Policy method for customizing of the service-aware based on software defined network
CN108989099A (en) * 2018-07-02 2018-12-11 北京邮电大学 Federated resource distribution method and system based on software definition Incorporate network
CN109936619A (en) * 2019-01-18 2019-06-25 中国科学院空间应用工程与技术中心 A kind of Information Network framework, method and readable storage medium storing program for executing calculated based on mist
CN109995583A (en) * 2019-03-15 2019-07-09 清华大学深圳研究生院 A kind of scalable appearance method and system of NFV cloud platform dynamic of delay guaranteed

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113315700B (en) * 2020-02-26 2022-06-28 中国电信股份有限公司 Computing resource scheduling method, device and storage medium
CN113315700A (en) * 2020-02-26 2021-08-27 中国电信股份有限公司 Computing resource scheduling method, device and storage medium
CN113382032A (en) * 2020-03-10 2021-09-10 阿里巴巴集团控股有限公司 Cloud node changing, network expanding and service providing method, device and medium
CN111526096A (en) * 2020-03-13 2020-08-11 北京交通大学 Intelligent identification network state prediction and congestion control system
CN111245664A (en) * 2020-03-23 2020-06-05 上海理工大学 GPU edge computing cluster communication system facing large-scale data stream processing
CN111490817A (en) * 2020-04-08 2020-08-04 北京邮电大学 Satellite network transmission method and device and electronic equipment
CN111490817B (en) * 2020-04-08 2021-04-02 北京邮电大学 Satellite network transmission method and device and electronic equipment
CN111800352A (en) * 2020-06-30 2020-10-20 中国联合网络通信集团有限公司 Service function chain deployment method and storage medium based on load balancing
CN111800352B (en) * 2020-06-30 2023-02-17 中国联合网络通信集团有限公司 Service function chain deployment method and storage medium based on load balancing
CN111867104A (en) * 2020-07-15 2020-10-30 中国科学院上海微系统与信息技术研究所 Power distribution method and power distribution device for low earth orbit satellite downlink
CN111867104B (en) * 2020-07-15 2022-11-29 中国科学院上海微系统与信息技术研究所 Power distribution method and power distribution device for low earth orbit satellite downlink
WO2022105642A1 (en) * 2020-11-18 2022-05-27 中兴通讯股份有限公司 Single service resource configuration method and apparatus, computer device and medium
CN113015196A (en) * 2021-02-23 2021-06-22 重庆邮电大学 Network slice fault healing method based on state perception
CN113015196B (en) * 2021-02-23 2022-05-06 重庆邮电大学 Network slice fault healing method based on state perception
CN113114335A (en) * 2021-03-18 2021-07-13 中国电子科技集团公司第五十四研究所 Software-defined space-based network networking architecture based on artificial intelligence
CN113114335B (en) * 2021-03-18 2021-11-19 中国电子科技集团公司第五十四研究所 Software-defined space-based network networking architecture based on artificial intelligence
CN113589837A (en) * 2021-05-18 2021-11-02 国网辽宁省电力有限公司朝阳供电公司 Electric power real-time inspection method based on edge cloud
CN114050928A (en) * 2021-11-10 2022-02-15 湖南大学 SDN flow table overflow attack detection and mitigation method based on machine learning
CN114301791A (en) * 2021-12-29 2022-04-08 中国电信股份有限公司 Data transmission method and device, storage medium and electronic equipment
CN114640383A (en) * 2022-01-26 2022-06-17 北京邮电大学 Satellite network service establishing method and device, electronic equipment and storage medium
CN114640383B (en) * 2022-01-26 2023-04-11 北京邮电大学 Satellite network service establishing method and device, electronic equipment and storage medium
CN114531448B (en) * 2022-02-21 2024-02-27 联想(北京)有限公司 Calculation force determining method and device and calculation force sharing system
CN114531448A (en) * 2022-02-21 2022-05-24 联想(北京)有限公司 Calculation force determination method and device and calculation force sharing system
CN115776445A (en) * 2023-02-10 2023-03-10 中科南京软件技术研究院 Node identification method, device, equipment and storage medium for traffic migration
CN115988574B (en) * 2023-03-15 2023-08-04 阿里巴巴(中国)有限公司 Data processing method, system, equipment and storage medium based on flow table
CN115988574A (en) * 2023-03-15 2023-04-18 阿里巴巴(中国)有限公司 Data processing method, system, device and storage medium based on flow table
CN115981876A (en) * 2023-03-21 2023-04-18 国家体育总局体育信息中心 Cloud framework-based fitness data processing method, system and device
CN116938322A (en) * 2023-09-15 2023-10-24 中国兵器科学研究院 Networking communication method, system and storage medium of space-based time-varying topology
CN116938322B (en) * 2023-09-15 2024-02-02 中国兵器科学研究院 Networking communication method, system and storage medium of space-based time-varying topology

Similar Documents

Publication Publication Date Title
CN110730138A (en) Dynamic resource allocation method, system and storage medium for space-based cloud computing architecture
Huda et al. Survey on computation offloading in UAV-Enabled mobile edge computing
Khan et al. Digital-twin-enabled 6G: Vision, architectural trends, and future directions
Zhang et al. Deep learning empowered task offloading for mobile edge computing in urban informatics
Hu et al. UAV-assisted vehicular edge computing for the 6G internet of vehicles: Architecture, intelligence, and challenges
Zhou et al. Machine learning-based offloading strategy for lightweight user mobile edge computing tasks
Xu et al. Uav-assisted task offloading for iot in smart buildings and environment via deep reinforcement learning
Pinto et al. A framework for analyzing fog-cloud computing cooperation applied to information processing of UAVs
Wei et al. Reinforcement learning-empowered mobile edge computing for 6G edge intelligence
Hussain et al. CODE-V: Multi-hop computation offloading in Vehicular Fog Computing
Ashraf A proactive role of IoT devices in building smart cities
Gu et al. Coded storage-and-computation: A new paradigm to enhancing intelligent services in space-air-ground integrated networks
Alsulami et al. A federated deep learning empowered resource management method to optimize 5G and 6G quality of services (QoS)
Peng et al. High concurrency massive data collection algorithm for IoMT applications
Fazel et al. Unlocking the power of mist computing through clustering techniques in IoT networks
Ostrowski et al. Mobility-aware fog computing in dynamic networks with mobile nodes: A survey
Samiayya et al. An optimal model for enhancing network lifetime and cluster head selection using hybrid snake whale optimization
Gu et al. AI-Enhanced Cloud-Edge-Terminal Collaborative Network: Survey, Applications, and Future Directions
Kurniawan et al. Mobile computing and communications-driven fog-assisted disaster evacuation techniques for context-aware guidance support: A survey
Alghamdi et al. Optimized Contextual Data Offloading in Mobile Edge Computing
Zier et al. Firp: Firefly inspired routing protocol for future internet of things
Langpoklakpam et al. Review on Machine Learning for Intelligent Routing, Key Requirement and Challenges Towards 6G
Mahmoudian et al. The Intelligent Mechanism for Data Collection and Data Mining in the Vehicular Ad-Hoc Networks (VANETs) Based on Big-Data-Driven
Bousbaa et al. GTSS-UC: a game theoretic approach for services' selection in UAV clouds
Shahraki et al. A distributed fog node assessment model by using fuzzy rules learned by XGBoost

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200124)