CN112799829A - Knowledge-driven network resource arrangement method - Google Patents

Knowledge-driven network resource arrangement method

Info

Publication number
CN112799829A
CN112799829A
Authority
CN
China
Prior art keywords
resources
resource
network
edge
knowledge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110021492.2A
Other languages
Chinese (zh)
Inventor
张培颖
庞雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202110021492.2A priority Critical patent/CN112799829A/en
Publication of CN112799829A publication Critical patent/CN112799829A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/502Proximity

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a knowledge-driven network resource arrangement method, relating to the field of computer technology. The method comprises the following steps: using edge computing and knowledge-driven techniques, the computation and processing of the network are transferred to the network edge, and the edge nodes are partitioned according to the differences in resource demand among them; network node virtualization, in which the partitioned edge nodes are abstracted into different virtual networks to realize network virtualization; and network resource arrangement, in which the resources are cleaned and converted, the arranged resources are stored in the cloud server, and the edge nodes arrange the resources into each network infrastructure through data fusion. Because the method applies edge computing, knowledge-driven, and related techniques during resource arrangement, the computation and processing of data in the network can be moved to the network edge, avoiding centralized processing and therefore data redundancy, improving data-processing efficiency, and improving the rationality and effectiveness of resource arrangement.

Description

Knowledge-driven network resource arrangement method
Technical Field
The invention relates to the technical field of computers, in particular to a knowledge-driven network resource arranging method.
Background
A network is a large-scale interconnected and interworking whole in which middleware can interact through different gateways. With the advent of the Internet era and the rapid spread of wireless networks, however, the number of edge devices making up the network has increased sharply, generating a large amount of data. Although the traditional centralized public-cloud service architecture has achieved some success, it still faces serious challenges: the concentration of resources causes latency and cycle jitter on the network platform, while real-time applications and other delay-sensitive applications impose strict requirements. The growth in the number of network devices and the demands of various emerging applications are therefore key issues that network resource arrangement must address.
With the development of intelligent technology, the knowledge economy has developed rapidly and by leaps, with knowledge reflected as its core value and making a great contribution to that development. Knowledge-driven computing is a more intelligent and efficient computing paradigm that follows data-driven computing. In the knowledge-driven setting, knowledge refers to a commonly recognized set of network services derived from the state and information of the network; specifically, the set comprises the computation methods, code, data, catalogs, and knowledge graphs required for processing and computing data.
In a traditional network, data processing is centralized: data generated by devices in the network is transmitted to a central cloud server for processing and computation. This not only causes inefficiency and data redundancy but also reduces resource-arrangement efficiency, so that resources cannot be used fully and reasonably, and resources are wasted.
Many resource arrangement methods have been proposed in the prior art to improve the rational arrangement of network resources, but existing methods optimize along a single dimension and, while addressing resource arrangement, easily introduce increased overhead and reduced time efficiency. In other words, prior-art network resource arrangement methods suffer from technical problems such as low efficiency.
Disclosure of Invention
In view of the problems in the prior art, the present invention provides a knowledge-driven network resource arrangement method, aiming to solve the technical problems of low efficiency, low security, and the like in prior-art network resource arrangement methods.
The resource arrangement method is characterized by comprising: transferring the processing of a network to the network edge using edge computing technology, and partitioning the network edge nodes, wherein the network edge nodes comprise the network devices located at the edge of the network and are partitioned according to the differences in resource demand among the nodes; network node virtualization, wherein the partitioned edge nodes are abstracted into different virtual networks to realize network virtualization; and network resource arrangement, wherein the various resources to be arranged are managed by the cloud server, all resources are cleaned and converted, the arranged resources are stored in the cloud server, and the edge nodes arrange the resources into each network infrastructure through data fusion.
In an optional embodiment, determining the partitioning of the edge nodes in the at least one network by an edge-node virtualization method comprises: using edge computing technology, taking the management devices located at the edge of the network as the network's edge nodes, where each edge node is responsible for managing the facilities closest to it in the network, evaluating, updating, and optimizing the learned knowledge, and establishing a knowledge-fusion framework and knowledge driving. After the edge nodes are determined, they are classified by resource-demand type: edge nodes with high delay requirements are classed as delay-sensitive, edge nodes with high bandwidth requirements as bandwidth-sensitive, and edge nodes with high cost-awareness requirements as cost-sensitive. After this partitioning, edge nodes distributed at different positions in the network are grouped together because of their similar resource demands.
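The classification step above can be sketched in Python; the node names, demand fields, and scores below are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: classify each edge node by its dominant resource
# demand (delay, bandwidth, or cost), as described in the embodiment above.

def classify_edge_node(demands):
    """Return the sensitivity class of a node given its demand scores.

    `demands` maps 'delay', 'bandwidth', and 'cost' to relative demand
    scores; the largest score decides the class.
    """
    dominant = max(demands, key=demands.get)
    return {
        "delay": "delay-sensitive",
        "bandwidth": "bandwidth-sensitive",
        "cost": "cost-sensitive",
    }[dominant]

# Nodes at different network positions with similar demands end up in the
# same class, matching the grouping described above.
nodes = {
    "node_a": {"delay": 0.9, "bandwidth": 0.3, "cost": 0.2},
    "node_b": {"delay": 0.1, "bandwidth": 0.8, "cost": 0.4},
    "node_c": {"delay": 0.2, "bandwidth": 0.3, "cost": 0.9},
}
classes = {name: classify_edge_node(d) for name, d in nodes.items()}
```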
In an optional embodiment, the partitioned edge nodes are virtualized and edge nodes of the same kind are abstracted into different virtual networks. Each virtual network is responsible for distributing and filtering different resource and service requests; that is, a knowledge-requesting edge node sits between the cloud server and the infrastructure in the network. The abstracted virtual network sits between the edge nodes and the cloud server and has two specific interfaces: the upper interface links to the cloud server and receives and processes downlink data from it, and the lower interface links to the edge nodes and receives and processes uplink data from them.
In an alternative embodiment, the abstracted virtual network may be represented by a virtual network model, and the virtual network model may be represented by a weighted undirected graph.
In an alternative embodiment, the network infrastructure may be represented using a physical network model, which may be represented using a weighted undirected graph.
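Both the virtual network model and the physical network model are described as weighted undirected graphs. A minimal adjacency-dict sketch follows; the representation choice and the names used are assumptions for illustration, not details from the patent:

```python
# Minimal weighted undirected graph, usable for both the virtual network
# model (vertices = virtual nodes) and the physical network model
# (vertices = physical facilities); edge weights stand for link attributes
# such as bandwidth.

class WeightedUndirectedGraph:
    def __init__(self):
        self.adj = {}  # vertex -> {neighbor: weight}

    def add_edge(self, u, v, weight):
        # Undirected: record the weight in both directions.
        self.adj.setdefault(u, {})[v] = weight
        self.adj.setdefault(v, {})[u] = weight

    def weight(self, u, v):
        return self.adj[u][v]

g = WeightedUndirectedGraph()
g.add_edge("facility_1", "facility_2", 100)  # e.g. link bandwidth in Mbps
```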
In an alternative embodiment, the network resource arrangement is performed based on the different demands of the physical network facilities on the resources. In the process, the commonly approved network service set obtained according to the state and the information of the network is used as knowledge, and resources are reasonably arranged to different physical facilities through knowledge driving.
In an optional embodiment, for the resources that need to be allocated, on the premise that the resource arrangement condition is satisfied, the resource arrangement is performed on a case-by-case basis. The resource arrangement condition comprises: the amount of CPU resources contained in the resources to be scheduled must be greater than or equal to the maximum amount of CPU resources required by the infrastructure, and the amount of bandwidth resources contained in the resources to be scheduled must be greater than or equal to the maximum amount of bandwidth resources required by the infrastructure.
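The resource arrangement condition above can be sketched as a simple feasibility check; the dictionary field names are illustrative assumptions:

```python
# Feasibility check for the resource arrangement condition: a candidate
# resource is eligible only if its CPU amount and bandwidth amount are at
# least the maximum amounts required by the infrastructure.

def satisfies_arrangement_condition(resource, infrastructure):
    return (resource["cpu"] >= infrastructure["max_cpu"]
            and resource["bandwidth"] >= infrastructure["max_bandwidth"])

infra = {"max_cpu": 4, "max_bandwidth": 50}
ok = satisfies_arrangement_condition({"cpu": 8, "bandwidth": 100}, infra)
bad = satisfies_arrangement_condition({"cpu": 2, "bandwidth": 100}, infra)
```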
In an optional embodiment, for the case-by-case resource scheduling processing, when the physical facility has a high requirement for time delay, the time delay attributes of the resources to be scheduled are sorted first, and in the sorted attributes to be scheduled, scheduling processing is performed starting from the resource to be scheduled with the smallest time delay attribute, and the whole process needs to meet the resource scheduling condition first.
In an optional embodiment, for the case-by-case resource scheduling processing, when the physical facility has a high demand for bandwidth, the bandwidth attributes of the resources to be scheduled are sorted first, and in the sorted attributes to be scheduled, scheduling processing is performed starting from the resource to be scheduled with the largest bandwidth attribute, and the whole process needs to meet the resource scheduling condition first.
In an optional embodiment, for the case-by-case resource scheduling processing, when the physical facility has a high cost requirement, the cost attributes of the resources to be scheduled are sorted first, and in the sorted attributes to be scheduled, scheduling processing is performed starting from the resource to be scheduled with the smallest cost attribute, and the whole process needs to meet the resource scheduling condition first.
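The three case-by-case orderings in the preceding embodiments can be sketched together as one dispatch function; the resource records and field names are illustrative assumptions:

```python
# Case-by-case ordering of candidate resources: delay-sensitive facilities
# take candidates in ascending delay order, bandwidth-sensitive facilities
# in descending bandwidth order, cost-sensitive facilities in ascending
# cost order, matching the three embodiments above.

def order_candidates(resources, facility_demand):
    if facility_demand == "delay":
        return sorted(resources, key=lambda r: r["delay"])  # smallest first
    if facility_demand == "bandwidth":
        return sorted(resources, key=lambda r: r["bandwidth"],
                      reverse=True)                          # largest first
    if facility_demand == "cost":
        return sorted(resources, key=lambda r: r["cost"])    # smallest first
    raise ValueError(f"unknown demand type: {facility_demand}")

resources = [
    {"name": "r1", "delay": 5, "bandwidth": 80, "cost": 3},
    {"name": "r2", "delay": 2, "bandwidth": 40, "cost": 7},
]
by_delay = order_candidates(resources, "delay")
```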
In an alternative embodiment, for the above three cases, if multiple resources to be scheduled satisfy the resource scheduling requirement at the same time, the Priority of each resource to be scheduled can be calculated by a formula that appears only as an image in the original filing. The greater the Priority value, the higher the priority of the resource to be scheduled, and higher-priority resources are allocated first according to the requirement. In the formula, one term denotes the virtual node, i.e. the carrier of the resource to be scheduled, and the formula combines the bandwidth attribute value, the delay attribute value, and the cost attribute value of the resource to be scheduled, weighted by α, β, and ε, the weights of bandwidth, delay, and cost respectively, whose relationship can be expressed as α + β + ε = 1.
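The patent's priority formula appears only as an image, so its exact form is not recoverable here. The sketch below assumes one plausible reading — a weighted combination in which larger bandwidth raises priority while larger delay and cost lower it, with the weights summing to 1 as the text states — and should not be read as the patent's actual formula:

```python
# ASSUMED form of the priority calculation: the real formula is an image
# in the filing and may differ. Only the weight constraint (alpha + beta +
# epsilon = 1) and the "larger Priority wins" rule come from the text.

ALPHA, BETA, EPSILON = 0.5, 0.3, 0.2  # illustrative weights, summing to 1
assert abs(ALPHA + BETA + EPSILON - 1.0) < 1e-9

def priority(bandwidth, delay, cost):
    # Bandwidth contributes positively; delay and cost are assumed to
    # reduce priority, since smaller values are preferred in the text.
    return ALPHA * bandwidth - BETA * delay - EPSILON * cost

# Ties among eligible resources are broken by descending priority.
candidates = {"r1": priority(80, 5, 3), "r2": priority(40, 2, 7)}
winner = max(candidates, key=candidates.get)
```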
The invention has the beneficial effects that:
Compared with the prior art, the present invention provides a new network resource arrangement method. With the newly designed knowledge-driven method, once the resources to be arranged have been set and computed with the help of edge computing and knowledge-driven techniques, they can be effectively arranged into the required physical facilities according to demand, which effectively improves resource-arrangement efficiency and reduces resource redundancy and waste.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. It is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flowchart of a knowledge-driven network resource orchestration method according to an embodiment of the present invention;
FIG. 2 is a diagram of a network resource orchestration architecture using edge computing techniques according to an embodiment of the present invention;
fig. 3 is a diagram of a virtual network model after virtualization of an edge node according to an embodiment of the present invention;
fig. 4 is a diagram of a physical network model after abstraction of a network physical facility according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
With the advent of the interconnected age and the rapid spread of wireless networks, the number of edge devices constituting the network has increased dramatically while generating a large amount of data information. In a traditional network, a data processing mode is centralized, that is, data generated by devices in the network are all transmitted to a central cloud server for processing and calculation, which not only causes low efficiency and data redundancy, but also reduces the resource arrangement efficiency, so that resources cannot be applied sufficiently and reasonably, and resource waste is caused. According to the problems in the prior art, the invention provides a knowledge-driven network resource arranging method, and aims to solve the technical problems of low efficiency, low safety and the like of the network resource arranging method in the prior art.
The embodiment of the invention provides a knowledge-driven network resource arranging method, which specifically comprises the following steps as shown in figure 1:
the method comprises the following steps: the computation and processing is transferred to the network edge using edge computing techniques.
Specifically, the network resource arrangement method provided by the present invention uses edge computing technology to transfer the computation and processing in the network to the network edge, as shown in fig. 2. Determining the partitioning of the edge nodes in the at least one network by an edge-node virtualization method comprises: using edge computing technology, taking the management devices located at the edge of the network as the network's edge nodes, where each edge node is responsible for managing the facilities closest to it in the network. After the edge nodes are determined, they are classified by resource-demand type: edge nodes with high delay requirements are classed as delay-sensitive, edge nodes with high bandwidth requirements as bandwidth-sensitive, and edge nodes with high cost-awareness requirements as cost-sensitive. After this partitioning, edge nodes distributed at different positions in the network are grouped together because of their similar resource demands.
Step two: virtualizing edge nodes
As shown in fig. 2, the partitioned edge nodes are virtualized and edge nodes of the same kind are abstracted into different virtual networks. The abstracted virtual network can be represented by a virtual network model, and as shown in fig. 3 the virtual network model can be represented by a weighted undirected graph. Each virtual network is responsible for distributing and filtering different resource and service requests; that is, a knowledge-requesting edge node sits between the cloud server and the infrastructure in the network. The abstracted virtual network sits between the edge nodes and the cloud server and has two specific interfaces: the upper interface connects to the cloud server and receives and processes downlink data from it, and the lower interface connects to the edge nodes and receives and processes uplink data from them.
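The two-interface structure of the abstracted virtual network described above might be sketched as follows; the class and method names are illustrative assumptions, not identifiers from the patent:

```python
# Sketch of a virtual network with an upper interface toward the cloud
# server (downlink data) and a lower interface toward the edge nodes
# (uplink data), as described in step two.

class VirtualNetwork:
    def __init__(self, name):
        self.name = name
        self.downlink_log = []  # data received from the cloud server
        self.uplink_log = []    # data received from edge nodes

    def upper_interface(self, data_from_cloud):
        # Receives and processes downlink data from the cloud server.
        self.downlink_log.append(data_from_cloud)

    def lower_interface(self, data_from_edge):
        # Receives and processes uplink data from an edge node.
        self.uplink_log.append(data_from_edge)

vn = VirtualNetwork("delay-sensitive")
vn.upper_interface("orchestration rule")
vn.lower_interface("node status report")
```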
Step three: resource arrangement is carried out on resources to be arranged according to requirements through knowledge drive
The network resources are arranged based on the different demands of the physical network facilities for resources. The various resources to be arranged are first managed by the cloud server; all resources are then cleaned and converted, the arranged resources are stored in the cloud server, and the edge nodes arrange the resources into each network infrastructure through data fusion. The network infrastructure can be represented by a physical network model, which, as shown in fig. 4, can be represented by a weighted undirected graph.
In the process, firstly, bandwidth data, experimental data, CPU data and other data of resources to be scheduled are collected; and then extracting the resource instances, the relationship among the resources and the attributes of the resources through knowledge extraction.
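The knowledge-extraction step described above — reducing collected data to resource instances, inter-resource relations, and resource attributes — might look like this in outline; the record layout is an illustrative assumption:

```python
# Sketch of knowledge extraction: from collected raw per-resource records,
# derive the resource instances, the relations between resources, and the
# per-resource attributes, as described in step three.

raw_records = [
    {"resource": "r1", "bandwidth": 80, "cpu": 4, "linked_to": "r2"},
    {"resource": "r2", "bandwidth": 40, "cpu": 2, "linked_to": None},
]

instances = [rec["resource"] for rec in raw_records]
relations = [(rec["resource"], rec["linked_to"])
             for rec in raw_records if rec["linked_to"]]
attributes = {rec["resource"]: {"bandwidth": rec["bandwidth"],
                                "cpu": rec["cpu"]}
              for rec in raw_records}
```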
The commonly approved network service set obtained according to the state and information of the network is used as knowledge, and resources are reasonably arranged to different physical facilities through knowledge driving.
And for the resources needing to be allocated, the resources are arranged according to the condition on the premise of meeting the resource arrangement condition. The resource arrangement condition comprises: the amount of CPU resources contained in the resources to be scheduled must be greater than or equal to the maximum amount of CPU resources required by the infrastructure, and the amount of bandwidth resources contained in the resources to be scheduled must be greater than or equal to the maximum amount of bandwidth resources required by the infrastructure.
For the above resource arrangement processing according to the situation, when the physical facility has a high requirement for time delay, the time delay attributes of the resources to be arranged are firstly sequenced, in the sequenced attributes to be arranged, the arrangement processing is performed from the resource to be arranged with the smallest time delay attribute, and the whole process firstly meets the above resource arrangement condition.
For the resource arrangement processing according to the above situations, when the physical facility has a high demand for bandwidth, the bandwidth attributes of the resources to be arranged are sorted first, and in the sorted attributes to be arranged, the arrangement processing is performed starting from the resource to be arranged with the largest bandwidth attribute, and the whole process needs to meet the resource arrangement condition first.
For the resource arrangement processing according to the above situations, when the physical facility has a high cost requirement, the cost attributes of the resources to be arranged are sorted first, and in the sorted attributes to be arranged, the arrangement processing is performed starting from the resource to be arranged with the smallest cost attribute, and the whole process needs to meet the resource arrangement condition first.
For the above three cases, if multiple resources to be scheduled satisfy the resource scheduling requirement at the same time, the Priority of each resource to be scheduled can be calculated by a formula that appears only as an image in the original filing. The greater the Priority value, the higher the priority of the resource to be scheduled, and higher-priority resources are allocated first according to the requirement. In the formula, one term denotes the virtual node, i.e. the carrier of the resource to be scheduled, and the formula combines the bandwidth attribute value, the delay attribute value, and the cost attribute value of the resource to be scheduled, weighted by α, β, and ε, the weights of bandwidth, delay, and cost respectively, whose relationship can be expressed as α + β + ε = 1.

Claims (9)

1. A knowledge-driven network resource orchestration method, comprising:
transferring computation and processing to the network edge using edge computing techniques;
virtualizing edge nodes;
and performing resource arrangement on the resources to be arranged according to the requirements through knowledge driving.
2. The method of claim 1, wherein transferring computation and processing to the network edge using an edge computation technique comprises:
using edge computing technology, taking management equipment located at the edge in the network as edge nodes of the network, wherein each edge node is responsible for managing facilities closest to the edge node in the network, evaluating, updating and optimizing learned knowledge, and establishing a knowledge fusion framework and knowledge drive; after the edge nodes are determined, the edge nodes are classified according to different resource demand types, all the edge nodes with high demands on time delay are divided into time delay sensitive types, all the edge nodes with high demands on bandwidth are divided into bandwidth sensitive types, all the edge nodes with high demands on cost perception are divided into cost sensitive types, and after the division, the edge nodes distributed at different positions of a network are divided together due to similar resource demands.
3. The method of claim 1, wherein virtualizing the edge node comprises:
and abstracting the divided edge nodes of the same kind into different virtual networks, wherein the abstracted virtual networks are represented by using a virtual network model, and the virtual network model can be represented by using a weighted undirected graph.
4. The method of claim 3, comprising:
each virtual network is responsible for distributing and filtering different resource and service requests, namely a request edge node of knowledge is positioned between a cloud server and infrastructure in the network, and the abstracted virtual network is positioned between the edge node and the cloud server.
5. The method of claim 3, comprising:
the system comprises two specific interfaces, wherein an upper interface is connected with the cloud server and used for receiving and processing downlink data from the cloud server, and a lower interface is connected with an edge node and used for receiving and processing uplink data from the edge node.
6. The method of claim 1, wherein resource scheduling the resource to be scheduled according to the requirement comprises:
firstly, various resources to be arranged are managed through a cloud server, then all the resources are cleaned and converted, the arranged resources are stored in the cloud server, and the resources are arranged into each network infrastructure through data fusion by an edge node; in the process, firstly, bandwidth data, experimental data, CPU data and other data of resources to be scheduled are collected; then extracting the resource instances, the relation among the resources and the attributes of the resources through knowledge extraction; according to the state and information of the network, the obtained commonly approved network service set is used as knowledge, and resources are reasonably arranged to different physical facilities through knowledge driving.
7. The method of claim 6, comprising:
for the resources needing to be distributed, on the premise of meeting the resource arrangement conditions, the resource arrangement is carried out according to the conditions, wherein the resource arrangement conditions comprise: the amount of CPU resources contained in the resources to be scheduled must be greater than or equal to the maximum amount of CPU resources required by the infrastructure, and the amount of bandwidth resources contained in the resources to be scheduled must be greater than or equal to the maximum amount of bandwidth resources required by the infrastructure.
8. The method of claim 6, comprising:
when the physical facility has a high requirement for time delay, the delay attributes of the resources to be arranged are sorted first, and in the sorted attributes, arrangement is performed starting from the resource with the smallest delay attribute, the whole process first satisfying the resource arrangement condition; when the physical facility has a high requirement for bandwidth, the bandwidth attributes of the resources to be arranged are sorted first, and arrangement is performed starting from the resource with the largest bandwidth attribute, the whole process first satisfying the resource arrangement condition; when the physical facility has a high requirement for cost, the cost attributes of the resources to be arranged are sorted first, and arrangement is performed starting from the resource with the smallest cost attribute, the whole process first satisfying the resource arrangement condition.
9. The method of claim 6, comprising:
for the three resource arrangement cases above, if multiple resources to be scheduled satisfy the resource scheduling requirement at the same time, calculating the Priority of each resource to be scheduled by a formula that appears only as an image in the original filing, wherein the greater the Priority value, the higher the priority of the resource to be scheduled and the earlier it is allocated according to the requirement; in the formula, one term denotes the virtual node, i.e. the carrier of the resource to be scheduled, and the formula combines the bandwidth attribute value, the delay attribute value, and the cost attribute value of the resource to be scheduled, weighted by α, β, and ε, the weights of bandwidth, delay, and cost respectively, whose relationship can be expressed as α + β + ε = 1.
CN202110021492.2A 2021-01-08 2021-01-08 Knowledge-driven network resource arrangement method Pending CN112799829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110021492.2A CN112799829A (en) 2021-01-08 2021-01-08 Knowledge-driven network resource arrangement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110021492.2A CN112799829A (en) 2021-01-08 2021-01-08 Knowledge-driven network resource arrangement method

Publications (1)

Publication Number Publication Date
CN112799829A true CN112799829A (en) 2021-05-14

Family

ID=75809216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110021492.2A Pending CN112799829A (en) 2021-01-08 2021-01-08 Knowledge-driven network resource arrangement method

Country Status (1)

Country Link
CN (1) CN112799829A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113162967A (en) * 2021-01-18 2021-07-23 电子科技大学 Method for serving video in video Internet of things, storage device and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170164239A1 (en) * 2009-11-24 2017-06-08 Huawei Technologies Co., Ltd. Base station, network system, and implementation method
CN109525626A (en) * 2017-09-20 2019-03-26 中兴通讯股份有限公司 The management method of CDN network virtualization of function, apparatus and system
CN110505101A (en) * 2019-09-05 2019-11-26 无锡北邮感知技术产业研究院有限公司 A kind of network slice method of combination and device
CN110737442A (en) * 2019-09-24 2020-01-31 厦门网宿有限公司 edge application management method and system
CN111147398A (en) * 2019-12-09 2020-05-12 中国科学院计算机网络信息中心 Communication computing joint resource allocation method and system in delay sensitive network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PEIYING ZHANG ET AL.: "STEC-IoT: A Security Tactic by Virtualizing Edge Computing on IoT", IEEE Xplore *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113162967A (en) * 2021-01-18 2021-07-23 电子科技大学 Method for serving video in video Internet of things, storage device and server
CN113162967B (en) * 2021-01-18 2022-02-18 电子科技大学 Method for serving video in video Internet of things, storage device and server

Similar Documents

Publication Publication Date Title
CN108829494B (en) Container cloud platform intelligent resource optimization method based on load prediction
CN106033476B (en) A kind of increment type figure calculation method under distributed computation mode in cloud computing environment
CN106534318B (en) A kind of OpenStack cloud platform resource dynamic scheduling system and method based on flow compatibility
CN105677486A (en) Data parallel processing method and system
CN102281290B (en) Emulation system and method for a PaaS (Platform-as-a-service) cloud platform
CN104915407A (en) Resource scheduling method under Hadoop-based multi-job environment
CN105610715B (en) A kind of cloud data center multi-dummy machine migration scheduling method of planning based on SDN
CN111880911A (en) Task load scheduling method, device and equipment and readable storage medium
TWI725744B (en) Method for establishing system resource prediction and resource management model through multi-layer correlations
CN111930511A (en) Identifier resolution node load balancing device based on machine learning
CN108920153A (en) A kind of Docker container dynamic dispatching method based on load estimation
CN111221624A (en) Container management method for regulation cloud platform based on Docker container technology
Song et al. Gaia scheduler: A kubernetes-based scheduler framework
CN103414767A (en) Method and device for deploying application software on cloud computing platform
CN115134371A (en) Scheduling method, system, equipment and medium containing edge network computing resources
CN113157459A (en) Load information processing method and system based on cloud service
CN114666335B (en) Distributed system load balancing device based on data distribution service DDS
CN114138488A (en) Cloud-native implementation method and system based on elastic high-performance computing
CN105468756A (en) Design and realization method for mass data processing system
CN113703984B (en) Cloud task optimization strategy method based on SOA (service oriented architecture) under 5G cloud edge cooperative scene
CN112799829A (en) Knowledge-driven network resource arrangement method
CN104346220A (en) Task scheduling method and system
Chen Design of computer big data processing system based on genetic algorithm
CN110958192B (en) Virtual data center resource allocation system and method based on virtual switch
Ren et al. Balanced allocation method of physical education distance education resources based on linear prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210514