CN114996025A - Network defense method based on edge technology - Google Patents

Network defense method based on edge technology

Info

Publication number
CN114996025A
CN114996025A
Authority
CN
China
Prior art keywords
task
network
data center
subtasks
data volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210915405.2A
Other languages
Chinese (zh)
Inventor
杨涛
罗思睿
漆彬
邓力为
徐远双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Public Project Consulting Management Co ltd
Original Assignee
Sichuan Public Project Consulting Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Public Project Consulting Management Co ltd filed Critical Sichuan Public Project Consulting Management Co ltd
Priority to CN202210915405.2A priority Critical patent/CN114996025A/en
Publication of CN114996025A publication Critical patent/CN114996025A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 - Partitioning or combining of resources
    • G06F9/5072 - Grid computing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/004 - Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/20 - Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the technical field of network security and discloses a network defense method based on edge technology, which comprises the following steps: step S1, determining the location of the micro data center in the edge network according to the data volume and the aggregation time of the task; and step S2, performing network defense by adopting a single-task scheduling and optical path configuration algorithm according to the location of the micro data center. By determining the location of the micro data center in the edge network, the invention enhances the network defense effect while edge computing is being used. The invention provides a network defense method that uses a task grouping scheduling strategy taking the correlation among tasks into account and a configuration strategy based on the overlap of the paths of the grouped tasks.

Description

Network defense method based on edge technology
Technical Field
The invention relates to the technical field of network security, in particular to a network defense method based on an edge technology.
Background
Edge computing uses the wireless access network to provide, close to the user, the IT services and cloud computing capabilities that telecommunication users require, creating a carrier-grade service environment with high performance, low delay and high bandwidth; it accelerates the delivery of content, services and applications in the network so that consumers enjoy an uninterrupted, high-quality network experience. In mobile edge computing, however, computing tasks are offloaded from mobile devices to nearby edge servers to reduce energy consumption and task completion delay, and this offloading process creates network security hazards. With mobile edge computing, data near the edge of the network can be processed at the edge instead of relying on centralized processing in a data center; when code is run at the edge, it is not inside a controlled stack or a secure environment, the edge sometimes still needs back-end queries from the application, and because endpoint security is weak, a single user who connects several devices to the network at the same time is also exposed to attacks by malicious viruses.
In summary, although mobile edge computing, and edge computing in general, covers a variety of environments for remote management and monitoring, it faces security and reliability issues without providing the visibility of a private cloud. A network defense method based on edge technology is therefore needed to enhance the network defense effect while edge computing is being used.
Disclosure of Invention
The invention aims to provide a network defense method based on edge technology, which enhances the network defense effect while edge computing is being used.
The invention is realized by the following technical scheme: a network defense method based on edge technology comprises the following steps:
step S1, determining the location of the micro data center in the edge network according to the data volume and the aggregation time of the task;
and step S2, performing network defense by adopting a single-task scheduling and optical path configuration algorithm according to the location of the micro data center.
In order to better implement the present invention, the method for acquiring the data volume and the aggregation time of the task in step S1 further includes:
collecting the data volume of the task;
allocating data volumes to the subtasks with the ant colony algorithm according to the data volume of the task;
determining the network bandwidth of each subtask according to the data volume allocated to it;
calculating the time required to transfer the data volume of each subtask from its data volume and network bandwidth;
and obtaining the aggregation time of the task from the start times and end times of the subtasks.
In order to better implement the present invention, the method of allocating data volumes to the subtasks by combining the ant colony algorithm with load-balanced allocation further includes:
a first step of initializing the number of ants, attaching the required resources to each task in the order of task submission, and loading the number of iterations, the heuristic factor and the related parameters;
a second step of randomly distributing n ants carrying task requirements over random nodes, calculating the probability that the k-th ant allocates task i to node j, selecting by roulette-wheel selection a node from the nodes that satisfy the conditions as the allocation node of the task, and deploying the task on that node;
a third step of locally updating the allocated nodes after the k-th ant has completed all task allocations;
a fourth step of calculating the load imbalance N of the allocation scheme once the ants have been deployed, comparing it with the historical records, and recording the optimal allocation scheme and the minimum load imbalance;
a fifth step of judging whether all ants have finished; if not, returning to the second step; if all ants have finished the iteration, calculating and saving the global optimal solution;
and a sixth step of judging whether the number of iterations or the preset load balance degree N has been reached, and determining the data volume allocated to each subtask from the load balance degree N with the load-balanced allocation method; if so, the algorithm iteration ends and the optimal path solution is returned; if not, return to the second step.
In order to better implement the present invention, the method for determining the location of the micro data center in the edge network in step S1 further includes:
generating a micro data center index according to the data volume of the task, the micro data center where each subtask is located, and the node in-degree, within the network, of the micro data center node where the subtask is located;
and determining the location of the micro data center according to the micro data center index.
In order to better implement the present invention, step S2 further includes:
calculating the urgency of each task in combination with the current network resources and using it as the scheduling priority;
and processing the tasks locally according to their priority, and performing optical path configuration on the processed intermediate results through a delayed transmission strategy.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) by determining the location of the micro data center in the edge network, the method enhances the network defense effect while edge computing is being used;
(2) the invention provides a network defense method that uses a task grouping scheduling strategy taking the correlation among tasks into account and a configuration strategy based on the overlap of the paths of the grouped tasks.
Drawings
The invention is further described below with reference to the following figures and examples, all of which are illustrative and fall within the scope of protection of the invention.
Fig. 1 is a flowchart of a network defense method based on edge technology according to the present invention.
Fig. 2 is a schematic structural diagram of a classical scheduling provided by the present invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and therefore should not be considered as a limitation to the scope of protection. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Example 1:
in the network defense method based on edge technology of this embodiment, as shown in fig. 1, the security awareness of enterprises using edge technology is often low. Edge computing, as currently understood, stores and processes data at the edge of the network close to the data source and cooperates with the cloud to provide users with low-delay, efficient services. Task scheduling is an important link in implementing edge computing: a good scheduling algorithm makes full use of the computing and storage resources on the devices, minimizes data transmission, maximizes application execution performance, and creates better service for users out of limited resources. At the same time, the password used to access a device is often a simple or default one, so not all trust should be placed in perimeter defense. Although mobile edge computing, and edge computing in general, can cover a variety of environments for remote management and monitoring, it faces security and reliability issues without providing the visibility of a private cloud; and because the edge micro data centers deployed in an edge network have limited storage and processing capabilities, the collaboration of multiple edge micro data centers distributed across the optical network is the primary form that edge computing takes. Therefore, the invention further strengthens the network defense of the edge micro data centers by using encryption equipment, firewalls, and intrusion detection and prevention systems.
Regarding the task scheduling problem, the most classical scheduling currently in use is shown in fig. 2: there are different edge nodes in an edge cloud, and different tasks are allocated to different nodes. The computing resources of each edge node differ, and the requirements of the different tasks that must execute at the same time also differ, so during task allocation the relationship between edge nodes and task placement is currently configured along several dimensions. The tasks received by an edge node at a given moment number n and are denoted T_n (n = 0, 1, 2, 3, 4, ...); the demand of task i for CPU resources is denoted d_i^cpu (i = 0, 1, 2, 3, 4, ...); the number of allocable nodes in the edge network is m, denoted P_m (m = 0, 1, 2, 3, 4, ...); and the available resources of node j are denoted r_j^cpu (j = 0, 1, 2, 3, 4, ...). The problem can thus be described as allocating n mutually independent tasks to m edge nodes of different computing power, with n typically greater than m. The computational resources of the edge nodes vary widely, and the demands of the n tasks also take different values. A matching relationship between tasks and edge nodes is established according to the optimization target to be achieved, so as to realize the optimal task allocation.
The allocation of tasks T_n to nodes P_m is expressed by a matrix X = (x_ij), where x_ij represents the correspondence between task T_i and node P_j, x_ij ∈ {0, 1}, i ∈ [0, n], j ∈ [0, m]; x_ij = 1 indicates that task i is allocated to node j.
The allocation between tasks and nodes is a many-to-one mapping. Task allocation must ensure that the demand of the tasks for each resource is less than the idle resources of the node; by comparing the CPU, GPU and memory demands of the tasks with the available resources of the nodes, tasks are prevented from being allocated to nodes with insufficient resources, namely:
Σ_i x_ij · u_ij^cpu ≤ r_j^cpu,  Σ_i x_ij · u_ij^gpu ≤ r_j^gpu,  Σ_i x_ij · u_ij^mem ≤ r_j^mem    (1)
wherein u_ij^cpu, u_ij^gpu and u_ij^mem respectively represent the amount of CPU, GPU and memory used by task T_i on edge node P_j, r_j^cpu, r_j^gpu and r_j^mem respectively represent the available amounts of CPU, GPU and memory on edge node P_j, and i indexes the sent tasks.
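As a concrete illustration of the constraint in equation (1), the following sketch (with hypothetical task demands, node capacities and an assignment dictionary standing in for the x_ij entries; none of these values come from the patent) checks that an allocation never places more combined CPU, GPU or memory demand on a node than the node has free:

```python
# Minimal sketch of the feasibility check behind equation (1).
# Task demands and node capacities are hypothetical illustration values.

tasks = {  # per-task demand: (cpu cores, gpu units, memory GB)
    0: (2, 0, 4),
    1: (1, 1, 2),
    2: (4, 0, 8),
}
nodes = {  # per-node available resources: (cpu, gpu, mem)
    0: (4, 1, 8),
    1: (8, 0, 16),
}

def feasible(assignment, tasks, nodes):
    """assignment[i] = j means task i is placed on node j (the x_ij = 1 entries)."""
    for j, (r_cpu, r_gpu, r_mem) in nodes.items():
        placed = [tasks[i] for i, node in assignment.items() if node == j]
        used_cpu = sum(t[0] for t in placed)
        used_gpu = sum(t[1] for t in placed)
        used_mem = sum(t[2] for t in placed)
        # Equation (1): total demand placed on node j must not exceed its free resources
        if used_cpu > r_cpu or used_gpu > r_gpu or used_mem > r_mem:
            return False
    return True

print(feasible({0: 0, 1: 0, 2: 1}, tasks, nodes))  # True: demands fit on both nodes
print(feasible({0: 1, 1: 1, 2: 1}, tasks, nodes))  # False: node 1 has no GPU for task 1
```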
The load balance of the edge nodes is measured by the load imbalance degree, which takes a value between 0 and 1: the smaller the load imbalance, the more evenly the tasks are spread over the edge nodes and the higher the overall performance of the system. The load imbalance value N is expressed by equation (2), which is computed from U_j^cpu, U_j^gpu and U_j^mem, the utilization of CPU, GPU and memory on edge node P_j, and from Ū^cpu, Ū^gpu and Ū^mem, the average utilization of CPU, GPU and memory over all edge nodes; i indexes the sent tasks and n is the number of tasks received by the edge node.
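To make the role of N concrete, the sketch below computes one plausible imbalance measure, the root-mean-square deviation of each node's resource utilization from the network-wide average. The exact formula of equation (2) is not reproduced here, so this particular form and the utilization values are assumptions used for illustration only:

```python
import math

# Hypothetical per-node utilizations (cpu, gpu, mem), each in [0, 1].
node_util = {
    0: (0.80, 0.10, 0.60),
    1: (0.40, 0.20, 0.50),
    2: (0.60, 0.15, 0.55),
}

def load_imbalance(node_util):
    """One plausible reading of equation (2): RMS deviation of each node's
    resource utilization from the average utilization across all nodes."""
    m = len(node_util)
    avg = [sum(u[k] for u in node_util.values()) / m for k in range(3)]
    sq = sum((u[k] - avg[k]) ** 2 for u in node_util.values() for k in range(3))
    return math.sqrt(sq / (3 * m))  # stays within [0, 1] for utilizations in [0, 1]

print(round(load_imbalance(node_util), 4))
```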
Example 2:
in this embodiment, further optimization is performed on the basis of embodiment 1. In this embodiment, the method for acquiring the data volume and the aggregation time of the task includes: collecting the data volume of the task; allocating data volumes to the subtasks with the ant colony algorithm according to the data volume of the task; determining the network bandwidth of each subtask according to the data volume allocated to it; calculating the time required to transfer the data volume of each subtask from its data volume and network bandwidth; and obtaining the aggregation time of the task from the start times and end times of the subtasks.
When edge computing is used, task scheduling is needed: a computing task is written to a remote server for processing, the task is first collected and then scheduled, and when the task to be processed is allocated to edge servers as subtasks, the data volume allocated to each subtask needs to be determined according to the data volume of the task.
The specific steps of acquiring the data volume a of a task based on redis are as follows: first collect the crawling source-code module of the task; depending on the website, different resources are needed, for example, forum Cookie resources are needed to collect forums, Snowball Cookie resources are needed to collect the Snowball network, IP proxy pool resources are needed to collect the Autohome site, and so on. The collection tasks are pushed to different resource Lists according to the resources they need; a unified task dispatcher traverses each Redis List to match resources and crawlers and distributes the collection tasks based on Netty. A collection task is read, the task parameters are looked up in the task's hash according to the designated Redis key jobid + taskid carried by the collection task, the collection resources are applied for, and once all collection resources have been obtained successfully the data volume a of the task is acquired.
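A minimal sketch of this collection flow, assuming hypothetical Redis key names and task identifiers (the list names, the jobid:taskid format and the hash layout are illustrative and not taken from the patent):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Push collection tasks onto per-resource lists (hypothetical key names).
r.lpush("tasks:forum_cookie", "job1:task1")
r.lpush("tasks:ip_proxy_pool", "job1:task2")

def dispatch(resource_lists):
    """Traverse each resource list, pop a task, look up its parameters in a
    hash keyed by jobid:taskid, and yield the task's data volume a."""
    for key in resource_lists:
        task_id = r.rpop(key)          # take one collection task from this list
        if task_id is None:
            continue
        params = r.hgetall(f"taskparams:{task_id}")  # task parameters stored as a hash
        data_volume = int(params.get("data_volume", 0))
        yield task_id, data_volume

# Parameters for job1:task1 would have been written elsewhere, for example:
# r.hset("taskparams:job1:task1", mapping={"data_volume": 1024, "site": "forum"})
for tid, a in dispatch(["tasks:forum_cookie", "tasks:ip_proxy_pool"]):
    print(tid, a)
```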
Other parts of this embodiment are the same as embodiment 1, and thus are not described again.
Example 3:
this embodiment is further optimized on the basis of the above embodiment 1 or 2. In this embodiment, a heuristic factor is first determined; the heuristic factor represents the expected intensity with which task i is placed on edge node j and is denoted η_ij. It is based on the cosine similarity between the resources required by the task awaiting assignment and the idle resources of the node, which is equivalent to expressing the similarity of task i and node j by the angle between them: the smaller the angle, the higher the similarity between task i and node j and the higher the probability that the task is allocated to that node. With Q denoting a constant and M denoting the cosine similarity between the required resources of the task awaiting assignment and the idle resources of the node, η_ij = Q/M; the larger the value of η_ij, the more likely the task is to be placed at that node.
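The cosine-similarity part of the heuristic factor can be sketched as follows; the demand and free-resource vectors and the value of the constant Q are hypothetical, while η_ij = Q/M follows the description above:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between a task's demand vector and a node's free-resource vector."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

task_demand = (2, 1, 4)     # hypothetical (cpu, gpu, mem) demand of task i
node_free   = (4, 2, 8)     # hypothetical free resources of node j

Q = 1.0                                  # constant from the description above
M = cosine_similarity(task_demand, node_free)
eta_ij = Q / M                           # heuristic factor eta_ij = Q / M
print(round(M, 4), round(eta_ij, 4))
```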
The process of using the ant colony algorithm to allocate the data volume b to the subtasks according to the data volume of the tasks comprises the following steps:
In the first step, the number of ants is initialized, the required resources are attached to each task in the order of task submission, and the number of iterations x, the heuristic factor η_ij and the related parameters are loaded.
In the second step, n ants carrying task requirements are randomly distributed over random nodes, the probability that the k-th ant allocates task i to node j is calculated, a node is then selected by roulette-wheel selection from the nodes that satisfy the conditions as the allocation node of the task, and the task is deployed on that node. Here n is the number of tasks received by the edge node, and in this step it also denotes the n ants carrying task requirements.
In the third step, after the k-th ant has completed all task allocations, the allocated nodes are locally updated according to formula (1).
In the fourth step, once the ants have been deployed, the load imbalance N of the allocation scheme is calculated according to formula (2), compared with the historical records, and the optimal allocation scheme and the minimum load imbalance are recorded.
In the fifth step, it is judged whether all ants have finished; if not, the process returns to the second step; if all ants have finished the iteration, the global optimal solution is calculated and saved.
In the sixth step, it is judged whether the number of iterations or the preset load balance degree N has been reached, and the data volume b allocated to each subtask is determined from the load balance degree N; if so, the algorithm iteration ends and the optimal path solution is returned; if not, the process returns to the second step.
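The following sketch walks through this loop on a toy problem. The pheromone update rule, the parameter values and the scoring function are simplifying assumptions rather than the patent's exact formulas, and the final derivation of the data volume b from the allocation is omitted:

```python
import math
import random

# Toy problem (hypothetical values): task demands and node capacities as (cpu, gpu, mem).
TASKS = [(2, 0, 4), (1, 1, 2), (3, 0, 6), (2, 1, 2)]
NODES = [(6, 1, 10), (5, 1, 8)]
N_ANTS, ITERS, ALPHA, BETA, RHO, Q = 8, 30, 1.0, 2.0, 0.1, 1.0

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def imbalance(assign):
    """Stand-in for formula (2): RMS deviation of per-node CPU load from the average."""
    loads = [sum(TASKS[i][0] for i, j in enumerate(assign) if j == n) for n in range(len(NODES))]
    avg = sum(loads) / len(loads)
    return math.sqrt(sum((l - avg) ** 2 for l in loads) / len(loads))

# Pheromone tau[i][j] and heuristic eta[i][j] (cosine similarity, as described above).
tau = [[1.0] * len(NODES) for _ in TASKS]
eta = [[cosine(t, n) for n in NODES] for t in TASKS]

best_assign, best_N = None, float("inf")
for _ in range(ITERS):                      # sixth step: iteration limit
    for _ in range(N_ANTS):                 # second step: each ant builds an allocation
        assign = []
        for i in range(len(TASKS)):
            weights = [(tau[i][j] ** ALPHA) * (eta[i][j] ** BETA) for j in range(len(NODES))]
            assign.append(random.choices(range(len(NODES)), weights=weights)[0])  # roulette wheel
        for i, j in enumerate(assign):      # third step: local pheromone update
            tau[i][j] = (1 - RHO) * tau[i][j] + RHO * 1.0
        N = imbalance(assign)               # fourth step: score the allocation scheme
        if N < best_N:                      # record the best scheme and minimum imbalance
            best_assign, best_N = assign, N
    for i in range(len(TASKS)):             # fifth step: global update toward the best solution
        for j in range(len(NODES)):
            tau[i][j] *= (1 - RHO)
        tau[i][best_assign[i]] += Q / (1 + best_N)

print("best allocation:", best_assign, "load imbalance:", round(best_N, 4))
```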
The task volume received by the edge nodes is split in a distributed manner into the allocated volumes of the subtasks, which are executed on different machines and completed cooperatively, improving concurrency. To avoid the situation where the failure of a single node prevents the whole service from executing normally, a load-balanced allocation method is selected: after the load-balancing server receives the subtask data volume b allocated according to the load balance degree N, it forwards the traffic to the servers according to the weights configured in the configuration file. After the data volume b allocated to a subtask is confirmed, the network bandwidth B of the subtask is determined from the data volume allocated to it, and the time t required to transfer the subtask's data volume is calculated from its data volume and network bandwidth as t = b/B. The aggregation time is determined by the start time t0 and the end time t1 of the subtasks and is denoted max(t0, t1).
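A short sketch of this timing calculation with hypothetical data volumes, bandwidths and start times; the weight table stands in for the configuration file mentioned above:

```python
# Hypothetical subtasks: allocated data volume b (MB), assigned bandwidth B (MB/s), start time t0 (s).
subtasks = [
    {"b": 120.0, "B": 40.0, "t0": 0.0},
    {"b": 300.0, "B": 50.0, "t0": 1.0},
    {"b": 80.0,  "B": 20.0, "t0": 0.5},
]

# Weighted forwarding: traffic is sent to servers in proportion to configured weights.
server_weights = {"edge-a": 3, "edge-b": 1}   # stands in for the configuration file
total_w = sum(server_weights.values())
shares = {s: w / total_w for s, w in server_weights.items()}

for st in subtasks:
    st["t"] = st["b"] / st["B"]        # t = b / B, transfer time of the subtask
    st["t1"] = st["t0"] + st["t"]      # end time of the subtask

aggregation_time = max(max(st["t0"], st["t1"]) for st in subtasks)  # max(t0, t1) over subtasks
print(shares, aggregation_time)
```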
Other parts of this embodiment are the same as those of embodiment 1 or 2, and thus are not described again.
Example 4:
this embodiment is further optimized on the basis of any of the above embodiments 1-3. Edge computing is a distributed platform deployed at the network edge, close to the terminal or the data source, that combines network, computing and storage capabilities to provide services nearby and meet the key requirements of user services in terms of real-time transmission, fast response, large-scale processing and security protection. However, because of the limited resources and cost constraints at the network edge, deploying multiple micro data centers across different locations is the basic mode of edge computing, and data is stored in a distributed manner in the local micro data centers; handling tasks under edge computing, especially in multi-concurrent-task scenarios, therefore requires the collaboration of the micro data centers to be fully considered.
The data volume of the task is denoted a, the node in-degree, within the network, of the micro data center node where the subtask is located is denoted d, and the index Y of the micro data center is computed from a, d and the start time t0 of the subtask. Finally, the micro data center with the largest performance index value, that is, the one whose task has a large data volume, a long local processing time and a relatively high node in-degree, is selected as the destination micro data center of the task. The subtasks that are likely to consume more time are placed in the destination micro data center, which matches the practical goal of optimizing the task blocking rate.
The start time of the aggregation stage of a task is determined by the time at which the last subtask reaches the destination micro data center. If the local processing time of one subtask is very long, then once the transmission time is added the start of the aggregation stage of the whole task may be late, and the probability of task blocking rises. Therefore, the micro data center hosting the subtask with the longer local processing time is chosen as the destination micro data center: this avoids retransmitting the task with the long local processing time and optimizes the blocking rate to a certain extent. Taking these factors together, the micro data center index of each task is related to the micro data centers where the task's subtasks are located, to the total data volume to be transmitted within the micro data center, and to the node in-degree of the micro data center node in the network.
The micro data center index of a task may be denoted Z_Xii, where Xii denotes the micro data center that hosts the subtask of the task's data volume a, and the local processing time is denoted tv_i, with tv_i = V_ii / S_ii, where V_ii is the amount of data transferred by subtask ii of the task's data volume a and S_ii is the bandwidth resource allocated in the network to subtask ii of the task's data volume a. The transmission data volume of a subtask determines its transmission time, so for a single task, using the micro data center that hosts the subtasks with large transmission data volumes as the destination micro data center achieves a better optimization of the transmission time. Meanwhile, in a multi-concurrent-task scenario, realizing the joint optimization of several tasks requires taking into account the possible influence of one task's choice of destination micro data center on other tasks; the node in-degree of the micro data center node in the whole network is therefore also an important factor, because a micro data center with a high node in-degree can satisfy more requests and thus optimize the task blocking rate of the whole system.
The micro data center index Z_Xii of each task is thus computed from the local processing time tv_i, the node in-degree of the micro data center node in the network, and the data volume to be transmitted; the influence of the individual factors on the whole is weakened or strengthened by applying square-root and square operations to the variables.
Finally, the micro data center with the largest index value, that is, the one with a large transmission data volume, a long local processing time and a relatively high node in-degree, is selected as the destination micro data center of the task. The subtasks that are likely to consume more time are placed in the destination micro data center, which matches the practical goal of optimizing the task blocking rate.
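A sketch of this destination selection is given below. The way the three factors are combined into a single index is a simplifying assumption (the patent's equation applies square-root and square operations whose exact arrangement is not reproduced here), and the candidate values are hypothetical:

```python
import math

# Hypothetical candidate micro data centers, one per subtask of the task:
# V = data volume to be transmitted by the subtask hosted there (MB)
# S = bandwidth allocated to that subtask (MB/s)
# d = node in-degree of the micro data center node in the network
candidates = {
    "mdc-1": {"V": 500.0, "S": 100.0, "d": 3},
    "mdc-2": {"V": 200.0, "S": 25.0,  "d": 5},
    "mdc-3": {"V": 800.0, "S": 80.0,  "d": 2},
}

def index(c):
    """Assumed index: grows with data volume, local processing time tv = V/S and in-degree,
    with square-root / square weighting as a stand-in for the patent's equation."""
    tv = c["V"] / c["S"]                  # local processing time tv = V / S
    return math.sqrt(c["V"]) * tv * c["d"] ** 2

destination = max(candidates, key=lambda name: index(candidates[name]))
print({name: round(index(c), 1) for name, c in candidates.items()}, "->", destination)
```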
Other parts of this embodiment are the same as any of embodiments 1 to 3, and thus are not described again.
Example 5:
this embodiment is further optimized on the basis of any of embodiments 1 to 4. After the destination micro data center of the task has been determined, a single-task scheduling and optical path configuration algorithm is proposed by jointly designing the scheduling priority and the optical path configuration optimization strategy. First, the urgency of a task is calculated in combination with the current network resources and used as the scheduling priority; then, the tasks are processed locally according to their priority, and optical path configuration is performed on the processed intermediate results through a delayed transmission strategy.
Single-task scheduling means that a destination micro data center is set for every task request according to the performance index of the micro data centers. For the tasks that have not yet been scheduled, their urgency is calculated; tasks whose local processing time exceeds their deadline are marked as blocked and are not processed, and polling then yields the target task to be scheduled. Next, optical paths are configured for the subtasks of the target task: the optical paths are configured in order of the urgency of the subtasks, the optical path configuration problem of each subtask is solved according to the delayed transmission strategy, and the network is updated at the same time, until all subtasks have completed their optical path configuration. Finally, if the target task can complete scheduling and optical path configuration within its deadline constraint, it is placed into the set of tasks whose scheduling is complete, and the queuing status of the micro data centers under the task is updated. Otherwise, if some subtask of the target task has no available path that allows the task to be completed within the deadline, the target task is placed back into the set of unscheduled tasks, the resources of the subtasks that had already been given optical path configurations are released, the system time is updated, the resources of tasks whose transmission has finished are released, and the network is updated; this cycle repeats until all tasks have been scheduled or blocked. In this process, the optical path configuration of the target task in the optical path configuration optimization strategy uses a shortest-path algorithm; further, the Floyd (Floyd-Warshall) algorithm can be selected.
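For reference, a compact Floyd-Warshall all-pairs shortest-path sketch over a hypothetical optical network topology (node names and link costs are illustrative):

```python
INF = float("inf")

# Hypothetical optical network: adjacency matrix of link costs between micro data center nodes.
nodes = ["mdc-1", "mdc-2", "mdc-3", "mdc-4"]
dist = [
    [0,   2,   INF, 7],
    [2,   0,   3,   INF],
    [INF, 3,   0,   1],
    [7,   INF, 1,   0],
]

# Floyd-Warshall: repeatedly relax every pair (i, j) through an intermediate node k.
n = len(nodes)
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[0][3])  # shortest cost from mdc-1 to mdc-4: 2 + 3 + 1 = 6
```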
In order to realize network defense of the edge computing network with the single-task scheduling and optical path configuration algorithm according to the location of the micro data center, the urgency of each task is calculated in combination with the current network resources and used as the scheduling priority; the tasks are processed locally according to their priority, optical path configuration is performed on the processed intermediate results through the delayed transmission strategy, all subtasks that have completed optical path configuration are input into a redundant alarm model based on K-means clustering, the minimum set of network defense execution points is computed through alarm association, and the optimal security defense target is analyzed.
The K-means algorithm is also called the k-means algorithm: the k in K-means denotes k cluster classes, and "means" denotes that the mean of the data values within each cluster is taken as the center of that cluster, also called the centroid; that is, the centroid of each cluster is used to describe the cluster. In the redundant alarm model based on K-means clustering, k objects are randomly selected from the original alarm event data set as the centers of k clusters. Then, for all remaining alarm events, the distance from each alarm event to each cluster center is calculated, and each alarm event is assigned, according to the distances obtained, to the cluster whose center is closest to it. The K-means algorithm then iteratively reduces the intra-cluster error: for each cluster, all alarm events in the cluster are used to calculate a new mean, the new mean is taken as the new cluster center, and all objects are reassigned to the now-nearest cluster. The iteration continues until the intra-cluster error falls below a given value or no longer changes.
The number k of clusters is generally determined according to the actual requirements, or the value of k is given directly when the algorithm is implemented. Assigning an object point to the cluster whose center is closest to it requires a nearest-neighbour distance measure: Euclidean distance is used in Euclidean space, a cosine similarity function is used when processing documents, and Manhattan distance is sometimes used; the actual distance formula differs in different situations. To calculate the new centroids in the K-means redundant alarm model, for each of the k clusters produced by the assignment the point with the smallest mean distance to the other points in the cluster is taken as the centroid, and for clusters with coordinates the centroid is the mean of the coordinates. As to when K-means stops in the redundant alarm model: K-means can stop when the centroid of every cluster no longer changes, or when the number of loops exceeds a given maximum loopLimit; only one of these two conditions needs to be met for K-means to stop. If K-means has not finished, the above steps are repeated; if it has finished, the clusters and centroids are printed (or drawn).
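A minimal K-means sketch over hypothetical two-dimensional alarm feature vectors, stopping when either the centroids no longer change or the loop limit is reached, as described above:

```python
import random

def kmeans(points, k, loop_limit=100):
    """Basic K-means: random initial centers, assign to nearest center, recompute means."""
    centers = random.sample(points, k)
    for _ in range(loop_limit):
        clusters = [[] for _ in range(k)]
        for p in points:                                   # assign each alarm event to nearest center
            idx = min(range(k), key=lambda c: (p[0]-centers[c][0])**2 + (p[1]-centers[c][1])**2)
            clusters[idx].append(p)
        new_centers = [
            (sum(p[0] for p in cl)/len(cl), sum(p[1] for p in cl)/len(cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:                         # stop when centroids no longer change
            break
        centers = new_centers
    return centers, clusters

# Hypothetical alarm events mapped to 2-D feature vectors (e.g. source subnet, alarm type code).
alarms = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
centers, clusters = kmeans(alarms, k=2)
print(centers)
```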
All subtasks that have completed optical path configuration are input into the redundant alarm model based on K-means clustering, the minimum set of network defense execution points is computed through the centroid association of the clusters, and the optimal security defense target is analyzed.
Other parts of this embodiment are the same as any of embodiments 1 to 4, and thus are not described again.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications and equivalent variations of the above embodiments according to the technical spirit of the present invention are included in the scope of the present invention.

Claims (6)

1. A network defense method based on an edge technology is characterized by comprising the following steps:
step S1, determining the location of the micro data center in the edge network according to the data volume and the aggregation time of the task;
and step S2, performing network defense by adopting a single-task scheduling and optical path configuration algorithm according to the location of the micro data center.
2. The method for network defense based on edge technology as claimed in claim 1, wherein the method for acquiring the data volume and the aggregation time of the task in step S1 includes:
collecting the data volume of the task based on redis;
according to the data volume of the tasks, distributing the data volume for the subtasks by using a method combining an ant colony algorithm and load balancing distribution;
confirming the network bandwidth of the subtask according to the data size distributed by the subtask;
calculating the time required by the data volume of the subtasks according to the data volume of the subtasks and the network bandwidth;
and acquiring the task aggregation time according to the start time and the end time of the subtasks.
3. The method for network defense based on edge technology as claimed in claim 2, wherein the method for distributing data volume for subtasks by using the ant colony algorithm and load balancing in combination comprises:
the method comprises the steps of firstly, initializing the number of ants, attaching required resources of tasks to each task according to the order of task submission, and loading iteration times, heuristic factors and relevant parameters;
secondly, randomly distributing n ants carrying task requirements on random nodes, calculating the probability that the kth ant distributes a task i on a node j, randomly selecting a node from nodes meeting conditions in a roulette mode to serve as a distribution node of the task, and deploying the task on the node;
thirdly, after the kth ant completes all task allocation, locally updating the allocated nodes;
fourthly, calculating the load unbalance degree N of the distribution scheme after the placement of the ants is completed, comparing the load unbalance degree N with historical records, and recording the optimal distribution scheme and the minimum load unbalance degree;
fifthly, judging whether all ants are finished or not, and if all ants are not finished, continuing returning to the second step; if all ants finish the iteration, calculating and storing the global optimal solution;
and sixthly, judging whether the number of iterations or the preset load balance degree N has been reached, and determining the data volume allocated to each subtask from the load balance degree N by using the load-balanced allocation method; if so, finishing the iteration of the algorithm and returning the optimal path solution, and if not, returning to the second step.
4. The method for network defense based on edge technology as claimed in claim 3, wherein the method for determining the location of the micro data center in the edge network in step S1 includes:
generating a micro data center index according to the data volume of the task, the micro data center where the subtask is located and the node in-degree of the micro data center node where the subtask is located in the network;
and determining the position of the micro data center according to the index of the micro data center.
5. The method for network defense based on edge technology as claimed in claim 4, wherein the step S2 includes:
calculating the urgency of each task in combination with the current network resources and using it as the scheduling priority;
processing the tasks locally according to their priority, and performing optical path configuration on the processed subtasks through a delayed transmission strategy;
selecting a light path configuration algorithm to complete light path configuration of all the subtasks;
inputting all subtasks for completing the optical path configuration into a redundant alarm model based on Kmeans clustering, calculating a minimum network defense execution point set through alarm correlation, and analyzing an optimal security defense target.
6. The method for network defense based on edge technology as claimed in claim 5, wherein the optical path configuration algorithm is the Floyd algorithm.
CN202210915405.2A 2022-08-01 2022-08-01 Network defense method based on edge technology Pending CN114996025A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210915405.2A CN114996025A (en) 2022-08-01 2022-08-01 Network defense method based on edge technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210915405.2A CN114996025A (en) 2022-08-01 2022-08-01 Network defense method based on edge technology

Publications (1)

Publication Number Publication Date
CN114996025A true CN114996025A (en) 2022-09-02

Family

ID=83021590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210915405.2A Pending CN114996025A (en) 2022-08-01 2022-08-01 Network defense method based on edge technology

Country Status (1)

Country Link
CN (1) CN114996025A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180173866A1 (en) * 2016-12-15 2018-06-21 David H. Williams Systems and methods for providing location-based security and/or privacy for restricting user access
CN111061548A (en) * 2019-11-29 2020-04-24 西安四叶草信息技术有限公司 Safety scanning task scheduling method and scheduler
CN114489925A (en) * 2021-12-09 2022-05-13 广东电网有限责任公司 Containerized service scheduling framework and flexible scheduling algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180173866A1 (en) * 2016-12-15 2018-06-21 David H. Williams Systems and methods for providing location-based security and/or privacy for restricting user access
CN111061548A (en) * 2019-11-29 2020-04-24 西安四叶草信息技术有限公司 Safety scanning task scheduling method and scheduler
CN114489925A (en) * 2021-12-09 2022-05-13 广东电网有限责任公司 Containerized service scheduling framework and flexible scheduling algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张波: "Research on key technologies of edge computing network security defense line coordination and active attack defense", China Doctoral Dissertations Full-text Database, Information Science and Technology *
李亚男: "Research on resource allocation strategies for service-constraint-oriented edge optically interconnected micro data centers", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
Abdulhamid et al. Fault tolerance aware scheduling technique for cloud computing environment using dynamic clustering algorithm
Gill et al. Resource provisioning based scheduling framework for execution of heterogeneous and clustered workloads in clouds: from fundamental to autonomic offering
US10104185B1 (en) Policy-based container cotenancy
US7590623B2 (en) Automated management of software images for efficient resource node building within a grid environment
US7406691B2 (en) Minimizing complex decisions to allocate additional resources to a job submitted to a grid environment
US7793308B2 (en) Setting operation based resource utilization thresholds for resource use by a process
US8880638B2 (en) Distributed image cache for servicing virtual resource requests in the cloud
CN112433808B (en) Network security event detection system and method based on grid computing
Liu et al. Security-aware resource allocation for mobile cloud computing systems
Daniel et al. A novel approach for scheduling service request in cloud with trust monitor
Singh et al. Crow–penguin optimizer for multiobjective task scheduling strategy in cloud computing
CN115914392A (en) Computing power network resource scheduling method and system
Kim et al. Investigating the use of autonomic cloudbursts for high-throughput medical image registration
Alam et al. Security prioritized multiple workflow allocation model under precedence constraints in cloud computing environment
CN113014611A (en) Load balancing method and related equipment
Stavrinides et al. Cost‐aware cloud bursting in a fog‐cloud environment with real‐time workflow applications
Ji et al. Adaptive workflow scheduling for diverse objectives in cloud environments
US11900171B2 (en) Cloud computing capacity management system using automated fine-grained admission control
CN114996025A (en) Network defense method based on edge technology
Sermakani et al. Dynamic provisioning of virtual machine using optimized bit matrix load distribution in federated cloud
Jawade et al. Confinement forest‐based enhanced min‐min and max‐min technique for secure multicloud task scheduling
Kar et al. OMNI: Omni-directional dual cost optimization of two-tier federated cloud-edge systems
Qiao et al. A Novel Method for Resource Efficient Security Service Chain Embedding Oriented to Cloud Datacenter Networks
Sumathi et al. An improved scheduling strategy in cloud using trust based mechanism
Wen et al. Load balancing consideration of both transmission and process responding time for multi-task assignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220902