CN116467088A - Edge computing scheduling management method and system based on deep learning - Google Patents

Edge computing scheduling management method and system based on deep learning

Info

Publication number
CN116467088A
CN116467088A (application CN202310727960.7A; granted as CN116467088B)
Authority
CN
China
Prior art keywords
task
edge
equipment
computing
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310727960.7A
Other languages
Chinese (zh)
Other versions
CN116467088B (en)
Inventor
黄钰群
黄伟群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Borui Tianxia Technology Co ltd
Original Assignee
Shenzhen Borui Tianxia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Borui Tianxia Technology Co ltd filed Critical Shenzhen Borui Tianxia Technology Co ltd
Priority to CN202310727960.7A priority Critical patent/CN116467088B/en
Publication of CN116467088A publication Critical patent/CN116467088A/en
Application granted granted Critical
Publication of CN116467088B publication Critical patent/CN116467088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an edge computing scheduling management method and system based on deep learning, relating to the technical field of data processing. The method comprises the following steps: connecting a resource scheduling center to obtain the edge computing equipment of an edge computing layer; generating a task identification library; when the resource scheduling center receives a first task request, identifying the request based on the task identification library to obtain the corresponding edge device set; acquiring the demand computing features of the request; building load identification models through identification of the edge computing features during task processing on the edge computing equipment; inputting the demand computing features into N load identification models for analysis to obtain the first matched edge device; and inputting the first task request into the first matched edge device for edge processing. The method and system solve the technical problems of low edge computing scheduling efficiency and poor scheduling quality in the prior art, and achieve the technical effect of improving scheduling efficiency.

Description

Edge computing scheduling management method and system based on deep learning
Technical Field
The invention relates to the technical field of data processing, in particular to an edge computing scheduling management method and system based on deep learning.
Background
The traditional cloud computing mode suffers from high latency, network instability, and low bandwidth. By migrating part or all of the processing programs to edge computing nodes close to the user or the data collection point, most of the data can be filtered locally, effectively reducing the load on the cloud. Edge computing is therefore widely used. However, because edge computing sits close to the data-generating end, the task types to be processed are complex and the devices are numerous; existing scheduling management methods cannot meet the usage requirements, causing network delay. In short, edge computing scheduling in the prior art has low efficiency and poor scheduling quality.
Disclosure of Invention
The application provides an edge computing scheduling management method and system based on deep learning, which are used for solving the technical problems of low edge computing scheduling efficiency and poor scheduling quality in the prior art.
In view of the above problems, the present application provides a method and a system for edge computing scheduling management based on deep learning.
In a first aspect of the present application, there is provided a method for edge computing schedule management based on deep learning, the method comprising:
connecting a resource scheduling center to obtain edge computing equipment of an edge computing layer;
generating a task identification library according to the historical task types of each edge device in the edge computing devices;
when the resource scheduling center receives a first task request, identifying the first task request based on the task identification library to obtain an edge device set corresponding to the first task request;
acquiring a demand computing feature of task processing in the first task request;
building a load identification model through edge calculation feature identification when the edge calculation equipment performs task processing;
invoking N load identification models based on the edge equipment set, inputting the demand calculation features into the N load identification models for analysis, and obtaining first matched edge equipment, wherein N is a positive integer less than or equal to the total number of the edge equipment set;
and inputting the first task request into the first matching edge equipment for edge processing.
In a second aspect of the present application, there is provided a deep learning-based edge computing schedule management system, the system comprising:
the edge computing equipment obtaining module is used for connecting a resource scheduling center and obtaining edge computing equipment of an edge computing layer;
the task identification library generation module is used for generating a task identification library according to the historical task types of each edge device in the edge computing devices;
the edge equipment set obtaining module is used for identifying the first task request based on the task identification library when the resource scheduling center receives the first task request, so as to obtain an edge equipment set corresponding to the first task request;
the computing feature obtaining module is used for obtaining the demand computing feature of task processing in the first task request;
the load identification model building module is used for building a load identification model through edge calculation feature identification when the edge calculation equipment is subjected to task processing;
the matching edge equipment obtaining module is used for calling N load identification models based on the edge equipment set, inputting the demand computing features into the N load identification models for analysis, and obtaining first matching edge equipment, wherein N is a positive integer less than or equal to the total number of the edge equipment set;
and the edge processing module is used for inputting the first task request into the first matched edge equipment to perform edge processing.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
according to the method, the edge computing equipment of the edge computing layer is obtained through connecting a resource scheduling center; generating a task identification library according to the historical task types of each edge device in the edge computing devices, then when a resource scheduling center receives a first task request, identifying the first task request based on the task identification library to obtain an edge device set corresponding to the first task request, and further obtaining the demand computing characteristics of task processing in the first task request; and setting up a load recognition model through edge computing feature recognition when task processing is carried out on the edge computing equipment, inputting the required computing features into the N load recognition models for analysis through calling the N load recognition models based on the edge equipment set, obtaining first matched edge equipment, wherein N is a positive integer less than or equal to the total number of the edge equipment set, and then inputting a first task request into the first matched edge equipment for edge processing. The technical effects of improving the edge processing efficiency and the processing quality are achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an edge computing scheduling management method based on deep learning according to an embodiment of the present application;
fig. 2 is a schematic flow chart of model layer optimization by embedding a dynamic network layer into a load identification model in the edge computing scheduling management method based on deep learning according to the embodiment of the present application;
fig. 3 is a schematic flow chart of outputting N load identification models corresponding to an edge device set in the edge computing and dispatching management method based on deep learning according to the embodiment of the present application;
fig. 4 is a schematic structural diagram of an edge computing scheduling management system based on deep learning according to an embodiment of the present application.
Reference numerals illustrate: the system comprises an edge computing device obtaining module 11, a task recognition library generating module 12, an edge device set obtaining module 13, a computing feature obtaining module 14, a load recognition model building module 15, a matching edge device obtaining module 16 and an edge processing module 17.
Detailed Description
The application provides an edge computing scheduling management method and system based on deep learning, which are used for solving the technical problems of low edge computing scheduling efficiency and poor scheduling quality in the prior art.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or modules not expressly listed or inherent thereto.
Example 1
As shown in fig. 1, the present application provides a method for edge computing scheduling management based on deep learning, where the method includes:
step S100: connecting a resource scheduling center to obtain edge computing equipment of an edge computing layer;
step S200: generating a task identification library according to the historical task types of each edge device in the edge computing devices;
in one possible embodiment, the edge computing device of the edge computing layer is obtained by communicatively connecting the data interaction device with a port of a resource retrieval center. The resource scheduling center is a center for performing calculation resource allocation according to calculation tasks received in real time, and the edge calculation layer is a network layer for calculating data generated near the equipment end. The edge computing device is a device which is arranged at a data acquisition end or a system edge end and used for integrating, analyzing and computing feedback of acquired data, and comprises an intelligent sensor (which can acquire state information of the device in real time and classify, analyze and package the data in the sensor), a programmable logic controller (the device with programming, operation, control and output capabilities and designed according to production requirements) and an edge intelligent router (with functions of device monitoring, front-end device control and the like). And obtaining edge computing equipment of the edge computing layer, and providing basic data for subsequent equipment allocation.
In one embodiment, the data interaction device collects the computing task types completed by each edge device within the historical time window, obtaining a plurality of computing task types. Optionally, these include tasks such as data forwarding, single-point control, log generation, and data uploading. An edge-computing-device-to-task-type mapping is then constructed from the correspondence between the task types and the edge computing devices, and the task identification library is generated from this mapping, so that the resource scheduling center can identify the edge computing devices corresponding to a received task type. This achieves the technical effects of improving identification precision and scheduling management efficiency.
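The device-to-task-type mapping described above can be sketched as a simple inverted index. This is a minimal illustration, not the patent's implementation; the record format and device names are assumptions.

```python
from collections import defaultdict

def build_task_identification_library(history):
    """Build a task-type -> edge-device mapping from historical task records.

    `history` is assumed to be an iterable of (device_id, task_type) pairs;
    the field names and device identifiers are illustrative.
    """
    library = defaultdict(set)
    for device_id, task_type in history:
        library[task_type].add(device_id)
    # sort for deterministic lookup results
    return {t: sorted(devs) for t, devs in library.items()}

records = [
    ("sensor-01", "data_uploading"),
    ("plc-02", "single_point_control"),
    ("router-03", "data_forwarding"),
    ("sensor-01", "log_generation"),
    ("router-03", "data_uploading"),
]
lib = build_task_identification_library(records)
print(lib["data_uploading"])  # devices that historically handled this task type
```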
Step S300: when the resource scheduling center receives a first task request, identifying the first task request based on the task identification library to obtain an edge device set corresponding to the first task request;
in one embodiment, after the resource scheduling center receives the first task request, the task type of the first task request is extracted, and the first task type is obtained. And further, taking the first task type as an index, searching the edge computing equipment of which the completed task accords with the first task type based on the mapping relation of the edge computing equipment and the task type in the task identification library, and obtaining the edge equipment set according to a matching result. Therefore, the aim of providing an optimizing object for the subsequent optimizing of the edge computing equipment is fulfilled.
Step S400: acquiring a demand computing feature of task processing in the first task request;
in particular, the demand computation features include task processing timeliness, task processing complexity, and task processing data volume. And extracting the characteristics of the first task request according to the demand calculation characteristics, so as to obtain processing timeliness, task processing complexity and task processing data volume during task processing in the first task request. The task processing timeliness is described for a time period when the task in the task request is processed, if the task needs to be processed within 1 day, the corresponding timeliness is 1 day. The task processing complexity is used for describing the number of operation steps and the operation calculation difficulty in the task processing process in the task request. The task processing data size is used for describing the data size which needs to be processed by the task in the task request. For example, when the current work log is uploaded, the byte size of the work log content is the task processing data size.
Step S500: building a load identification model through edge calculation feature identification when the edge calculation equipment performs task processing;
further, as shown in fig. 2, step S500 in the embodiment of the present application further includes:
step S510: acquiring a real-time task list of the edge computing device;
step S520: dynamically predicting according to the real-time task list to obtain a first calculation degree to be processed;
step S530: generating a dynamic network layer according to the first calculation degree to be processed;
step S540: and embedding the dynamic network layer into the load identification model to perform model layer optimization.
Further, by performing edge computing feature recognition when performing task processing on the edge computing device, a load recognition model is built, and step S500 in the embodiment of the present application further includes:
step S550: processing a sample set by collecting tasks of the edge computing device;
step S560: performing task analysis by using the task processing sample set to obtain task computing characteristics, wherein the task computing characteristics comprise task processing speed, task processing timeliness and task storage space;
step S570: training according to the task processing rate, the task processing time and the task storage space as training data, and obtaining a load identification model when training is converged;
step S580: and outputting a task matching index when the edge computing equipment is in load balance according to the load identification model.
In one possible embodiment, the task processing sample set is obtained by collecting sample data of task processing performed by the edge computing device. Task analysis is performed on the sample set from three dimensions (processing rate, processing timeliness, and storage space) to obtain the task computing features, which reflect the task processing capacity of the edge computing device. The task processing rate is the amount of tasks the edge computing device processes per unit time. The task processing timeliness is the time the edge computing device takes to complete sample task processing. The task storage space is the memory occupied by the edge computing device when processing tasks.
Specifically, the task processing rate, task processing timeliness, and task storage space are used as training data, and the sample task matching indexes observed when the edge device is load-balanced are extracted from the task processing sample set and identified as supervision data. A load identification model built on a BP neural network framework is then trained with the training data, supervising the training process with the identified supervision data until the model output converges, yielding the load identification model. The load identification model intelligently outputs the matching index between an edge computing device and a task request when the device's load is balanced.
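The training step above can be sketched with a tiny backpropagation network in pure Python. This is only an illustration of the BP training idea under stated assumptions: the architecture (one hidden layer), the hyperparameters, and the normalized toy samples are all inventions for the sketch, not the patent's model.

```python
import math
import random

def train_load_model(samples, hidden=4, lr=0.1, epochs=2000, seed=0):
    """Train a small BP (backpropagation) network mapping normalized
    (task processing rate, timeliness, storage space) features to a
    task matching index in [0, 1]. Squared-error loss, sigmoid units."""
    rng = random.Random(seed)
    n_in = 3
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in samples:
            # forward pass
            h = [sig(sum(w1[j][i] * x[i] for i in range(n_in)) + b1[j])
                 for j in range(hidden)]
            out = sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
            # backward pass: gradient of squared error through the sigmoids
            d_out = (out - y) * out * (1 - out)
            for j in range(hidden):
                d_h = d_out * w2[j] * h[j] * (1 - h[j])  # before updating w2[j]
                w2[j] -= lr * d_out * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_out
    def predict(x):
        h = [sig(sum(w1[j][i] * x[i] for i in range(n_in)) + b1[j])
             for j in range(hidden)]
        return sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
    return predict

# toy supervised samples: (rate, timeliness, storage) -> matching index
data = [((0.9, 0.8, 0.2), 0.9), ((0.1, 0.2, 0.9), 0.1),
        ((0.8, 0.7, 0.3), 0.85), ((0.2, 0.3, 0.8), 0.15)]
predict = train_load_model(data)
hi = predict((0.85, 0.75, 0.25))
lo = predict((0.15, 0.25, 0.85))
print(hi > lo)  # the trained model ranks the more capable device higher
```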
Specifically, the real-time task list of the edge computing device is obtained, and the real-time pending workload of the device is dynamically predicted from the task processing data volume in the real-time task list and the device's task processing rate, yielding the first to-be-processed calculation degree. Optionally, this degree is a numerical value quantifying the device's pending computation: the real-time processed data volume is obtained by multiplying the task processing rate by the elapsed processing time; that volume is subtracted from the task processing data volume in the real-time task list, and the result is compared against (divided by) that same data volume to give the first to-be-processed calculation degree. Parameters of the dynamic network layer are then generated from this degree, and the dynamic network layer is embedded into the load identification model for model layer optimization, so that the model is updated and optimized against real-time tasks, achieving the technical effect of improving model processing speed and output accuracy.
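The pending-degree computation described above can be written out directly. This is one reading of the patent's verbal description, not a verbatim formula; the clamping to [0, 1] and the zero-pending guard are added assumptions.

```python
def pending_computation_degree(pending_bytes, rate_bytes_per_s, elapsed_s):
    """First to-be-processed calculation degree: the fraction of the
    real-time task list that remains after subtracting what the device
    processes in the elapsed time, clamped to [0, 1]."""
    if not pending_bytes:
        return 0.0
    processed = rate_bytes_per_s * elapsed_s
    remaining = max(pending_bytes - processed, 0)
    return min(remaining / pending_bytes, 1.0)

# 1000 bytes queued, device processes 50 B/s for 10 s -> half remains
print(pending_computation_degree(1000, 50, 10))  # 0.5
```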
Step S600: invoking N load identification models based on the edge equipment set, inputting the demand calculation features into the N load identification models for analysis, and obtaining first matched edge equipment, wherein N is a positive integer less than or equal to the total number of the edge equipment set;
further, as shown in fig. 3, step S600 in the embodiment of the present application further includes:
step S610: acquiring type information of each edge device in the edge device set;
step S620: judging whether similar edge devices with the same type exist in the edge device set or not according to the type information of each edge device;
step S630: if the similar edge equipment exists, building a first load identification model based on the similar edge equipment, and the like, and outputting N load identification models corresponding to the edge equipment set.
Further, inputting the demand computing features into the N load identification models for analysis, and obtaining a first matched edge device, where step S600 further includes:
step S610: acquiring demand computing features, wherein the demand computing features comprise task processing timeliness, task processing complexity and task processing data volume;
step S620: inputting the task processing timeliness, the task processing complexity and the task processing data amount into the N load identification models to be respectively matched to obtain N task matching indexes;
step S630: and optimizing the N task matching indexes, and outputting first matching edge equipment corresponding to the first task matching index.
Further, the calculation formula of the task matching index combines the following quantities: the task matching index corresponding to the i-th task; the task demand load degree of the i-th task under the corresponding variable; the real-time load degree of the corresponding device under that variable; the load covariance of the i-th task; and the total number of load identification models in the edge device set.
In a possible embodiment, the type information of each edge device in the edge device set is obtained, that is, derived from each device's historical task types. Whether same-type (similar) edge devices exist in the set is then judged from this type information. If so, using the type information as an index, device cluster analysis is performed on the edge device set, edge computing devices of the same class are grouped together, a device is selected at random from each class, and the first load identification model is constructed. The same construction method then yields N load identification models, where N is the number of device classes in the edge device set. Constructing N load identification models improves the accuracy and efficiency of identification.
In one possible embodiment, the demand computation feature for the task processing in the first task request includes a task processing timeliness, a task processing complexity, and a task processing data amount. And respectively inputting the timeliness of task processing, the complexity of task processing and the data quantity of task processing into the N load identification models for matching, and obtaining N task matching indexes through calculation of a calculation formula of the task matching indexes in the models.
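Feeding one demand feature vector through the N per-class models and collecting the N task matching indexes can be sketched as follows. The per-class scoring functions here are stand-in lambdas, not trained models, and the class names are assumptions.

```python
def match_devices(demand, models):
    """Feed one demand feature vector (timeliness, complexity, data volume,
    all normalized) into N per-class load models and collect N task
    matching indexes keyed by device class."""
    return {cls: model(demand) for cls, model in models.items()}

# illustrative scoring functions standing in for trained load models
models = {
    "sensor": lambda d: 0.6 * d[0] + 0.2 * (1 - d[1]),
    "router": lambda d: 0.3 * d[0] + 0.5 * (1 - d[2]),
}
indexes = match_devices((0.8, 0.4, 0.3), models)
best = max(indexes, key=indexes.get)  # class with the highest matching index
print(best)
```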
Specifically, N task matching indexes are input into an optimizing model to perform index optimization, and an optimal result is used as a first task matching index. And further, using the edge equipment corresponding to the first task matching index as the first matching edge equipment.
Specifically, the optimizing model comprises P optimizing nodes and is constructed from a plurality of sample task matching indexes. A sample task matching index is randomly selected as the first optimizing node and assigned a value according to its size, giving the first optimizing node assignment result; the sample task matching indexes are screened against this result, and those with index values superior to it are retained, giving the first optimizing result. A sample task matching index is then randomly selected from the first optimizing result as the second optimizing node and assigned a value in the same way, and the indexes superior to the second optimizing node assignment result are retained, giving the second optimizing result. Proceeding likewise, a sample task matching index is randomly selected from the (P-1)-th optimizing result as the P-th optimizing node and assigned a value according to its corresponding sample task matching index, giving the P-th optimizing node assignment result; the (P-1)-th optimizing result is screened against it, retaining the indexes superior to the P-th optimizing node assignment result, which gives the P-th optimizing result. The best value in the P-th optimizing result is taken as the optimal optimizing result. The optimizing model is generated from the first through P-th optimizing nodes.
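The P-round screening just described can be sketched as follows. Two assumptions are made explicit in the code: "superior" is read as "larger", and when no candidate beats the drawn node, the node itself is kept so the survivor set never empties.

```python
import random

def optimize_matching_indexes(indexes, p=3, seed=42):
    """P-node optimizing sketch: each round draws a survivor as the
    optimizing node and keeps only candidates that beat it; the best
    survivor after P rounds is the optimal optimizing result."""
    rng = random.Random(seed)
    survivors = list(indexes)
    for _ in range(p):
        if len(survivors) <= 1:
            break
        node = rng.choice(survivors)        # optimizing node assignment
        better = [v for v in survivors if v > node]
        survivors = better or [node]        # keep the node if nothing beats it
    return max(survivors)

scores = [0.31, 0.74, 0.52, 0.91, 0.66]
print(optimize_matching_indexes(scores))  # 0.91
```

Note that the global maximum always survives every screening round, so the result equals the best task matching index regardless of the random draws.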
Step S700: and inputting the first task request into the first matching edge equipment for edge processing.
Further, step S700 in the embodiment of the present application further includes:
step S710: when the resource scheduling center receives a plurality of task requests;
step S720: setting a plurality of concurrent channels based on the plurality of task requests, connecting the load identification models according to the concurrent channels, and outputting a plurality of load identification models corresponding to the concurrent channels;
step S730: and carrying out task parallel processing by using the plurality of load identification models, and outputting a plurality of matched edge devices based on the plurality of task requests.
In one possible embodiment, the first task request is input into the first matching edge device for edge processing. When the resource scheduling center receives a plurality of task requests, a plurality of concurrent channels are set so that matching edge devices can be output for all of the task requests. The concurrent channels are connected to the load identification models, and the load identification models corresponding to the concurrent channels are obtained according to the task types of the respective requests. Tasks are then processed in parallel through these load identification models, obtaining the plurality of matched edge devices corresponding to the plurality of task requests.
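The concurrent-channel idea can be sketched with a thread pool, one worker per request. The `identify` callable is a placeholder for the per-request model lookup and matching pipeline; the request names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_parallel(requests, identify):
    """Concurrent-channel sketch: each task request is handled on its own
    channel (worker thread) running the matching pipeline independently.
    Results come back in request order (ThreadPoolExecutor.map preserves
    input ordering)."""
    with ThreadPoolExecutor(max_workers=len(requests)) as pool:
        return list(pool.map(identify, requests))

reqs = ["req-log", "req-upload", "req-forward"]
matched = dispatch_parallel(reqs, lambda r: f"{r}->device")
print(matched)
```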
In summary, the embodiments of the present application have at least the following technical effects:
according to the method, the edge computing equipment of the edge computing layer is obtained, so that a management object is provided for scheduling management, then equipment matching is carried out on a first task request received by a resource scheduling center, a corresponding edge equipment set is obtained, the demand computing characteristics of the first task request are obtained, the demand computing characteristics are input into N load recognition models, a corresponding first matching edge equipment is obtained, and then the first matching edge equipment is used for edge processing. The technical effects of improving the edge computing, dispatching and managing efficiency and the managing quality are achieved.
Example two
Based on the same inventive concept as the deep-learning-based edge computing scheduling management method in the foregoing embodiments, as shown in fig. 4, the present application provides a deep-learning-based edge computing scheduling management system; the system and method embodiments of the present application share the same inventive concept. The system comprises:
the edge computing equipment obtaining module 11 is used for connecting a resource scheduling center to obtain edge computing equipment of an edge computing layer;
a task identification library generating module 12, where the task identification library generating module 12 is configured to generate a task identification library according to historical task types of each edge device in the edge computing devices;
the edge device set obtaining module 13 is configured to, when the resource scheduling center receives a first task request, identify the first task request based on the task identification library, and obtain an edge device set corresponding to the first task request;
a calculation feature obtaining module 14, where the calculation feature obtaining module 14 is configured to obtain a demand calculation feature of task processing in the first task request;
the load identification model building module 15 is used for building a load identification model through edge computing feature identification while the edge computing device performs task processing;
the matched edge device obtaining module 16 is configured to invoke N load identification models based on the edge device set, input the demand computing feature into the N load identification models for analysis, and obtain a first matched edge device, where N is a positive integer less than or equal to the total number of the edge device set;
the edge processing module 17 is configured to input the first task request into the first matching edge device to perform edge processing.
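A minimal sketch of what modules 12 and 13 do — building a task identification library from the historical task types of each edge device, then looking up the edge device set for an incoming request — might look like this; the data shapes (a device-to-history dictionary, string task types) are assumptions.

```python
from collections import defaultdict

def build_task_library(edge_devices):
    # Map each historical task type to the devices that have processed it.
    library = defaultdict(set)
    for device, history in edge_devices.items():
        for task_type in history:
            library[task_type].add(device)
    return library

def identify(library, task_type):
    # Devices whose history covers the request's task type form the
    # edge device set corresponding to the request.
    return library.get(task_type, set())
```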
Further, the matched edge device obtaining module 16 is configured to perform the following method:
acquiring type information of each edge device in the edge device set;
judging whether similar edge devices with the same type exist in the edge device set or not according to the type information of each edge device;
if the similar edge devices exist, building a first load identification model based on the similar edge devices, and so on for the remaining types, until the N load identification models corresponding to the edge device set are output.
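One way to read this grouping step: devices of the same type share one load identification model, so N equals the number of distinct device types in the edge device set. A hedged sketch (the device dictionaries and the `type` field are assumptions):

```python
from collections import defaultdict

def group_by_type(edge_device_set):
    # Devices sharing a type share one load identification model,
    # so N = number of distinct types in the edge device set.
    groups = defaultdict(list)
    for device in edge_device_set:
        groups[device["type"]].append(device)
    return groups

devices = [{"id": "e1", "type": "gpu"},
           {"id": "e2", "type": "gpu"},
           {"id": "e3", "type": "cpu"}]
n_models = len(group_by_type(devices))  # N = 2: one model per type
```

This also shows why N is at most the total number of devices in the set: each group has at least one member.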
Further, the load identification model building module 15 is configured to perform the following method:
acquiring a real-time task list of the edge computing device;
dynamically predicting according to the real-time task list to obtain a first calculation degree to be processed;
generating a dynamic network layer according to the first calculation degree to be processed;
and embedding the dynamic network layer into the load identification model to perform model layer optimization.
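As one hedged reading of these four steps, the real-time task list is reduced to a predicted to-be-processed computation degree, which then sizes the dynamic network layer embedded in the model. The per-task cost field, the linear sizing rule, and the cap are all assumptions for illustration:

```python
def predict_pending_compute(task_list):
    # Naive dynamic prediction: sum the estimated cost of each queued task.
    return sum(task["est_cost"] for task in task_list)

def dynamic_layer_width(pending, base=16, step=8, cap=128):
    # Grow the embedded dynamic layer with the predicted backlog,
    # bounded by a fixed cap so the model cannot grow without limit.
    return min(cap, base + step * int(pending))

tasks = [{"est_cost": 2}, {"est_cost": 3}]
width = dynamic_layer_width(predict_pending_compute(tasks))  # 16 + 8*5 = 56
```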
Further, the load identification model building module 15 is configured to perform the following method:
collecting a task processing sample set of the edge computing device;
performing task analysis by using the task processing sample set to obtain task computing characteristics, wherein the task computing characteristics comprise task processing rate, task processing timeliness and task storage space;
training with the task processing rate, the task processing timeliness and the task storage space as training data, and obtaining the load identification model when training converges;
and outputting a task matching index when the edge computing equipment is in load balance according to the load identification model.
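The training step can be illustrated with a deliberately tiny stand-in model: a linear regressor fit by stochastic gradient descent on the three task computing characteristics (rate, timeliness, storage space), trained until it converges on the task matching index. The patent's actual model is a deep network; everything below is a sketch under that simplification.

```python
def train_load_model(samples, lr=0.01, epochs=5000):
    # samples: list of ((rate, timeliness, storage), match_index) pairs.
    # A linear model stands in for the deep load identification model.
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y                      # squared-error gradient
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
```

Once trained, the returned callable plays the role of outputting a task matching index for a given set of demand features, as in the final step above.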
Further, the matched edge device obtaining module 16 is configured to perform the following method:
acquiring demand computing features, wherein the demand computing features comprise task processing timeliness, task processing complexity and task processing data volume;
inputting the task processing timeliness, the task processing complexity and the task processing data amount into the N load identification models to be respectively matched to obtain N task matching indexes;
and optimizing the N task matching indexes, and outputting first matching edge equipment corresponding to the first task matching index.
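The final optimization over the N task matching indexes reduces to picking the preferred one. Whether "preferred" means largest or smallest is not stated in the text, so largest-is-best is an assumption here:

```python
def pick_first_match(match_indexes):
    # match_indexes: {device_id: task matching index} produced by the
    # N load identification models; the best index wins (assumed: max).
    return max(match_indexes, key=match_indexes.get)

best = pick_first_match({"e1": 0.42, "e2": 0.87, "e3": 0.55})  # "e2"
```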
Further, the matching edge device obtaining module 16 includes the following calculation formula of the task matching index:
wherein the quantities in the formula are, in order: the task matching index corresponding to the i-th task; the task demand load degree of the i-th task under the corresponding variable; the real-time load degree of the corresponding device for the i-th task under that variable; the load covariance of the i-th task; and the total number of load identification models in the edge device set.
Further, the edge processing module 17 is configured to perform the following method:
when the resource scheduling center receives a plurality of task requests;
setting a plurality of concurrent channels based on the plurality of task requests, connecting the load identification models according to the concurrent channels, and outputting a plurality of load identification models corresponding to the concurrent channels;
and carrying out task parallel processing by using the plurality of load identification models, and outputting a plurality of matched edge devices based on the plurality of task requests.
It should be noted that the ordering of the embodiments of the present application is for description only and does not reflect the relative merit of the embodiments. The foregoing description has been directed to specific embodiments of this specification; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The foregoing description presents preferred embodiments of the present application; it is not intended to limit the invention to those particular embodiments, nor to restrict the scope of the invention to them.
The specification and drawings are merely exemplary of the application, which is to be regarded as covering any and all modifications, variations, combinations, or equivalents within its scope. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from its scope; the present application is therefore intended to cover such modifications and variations, provided they fall within the scope of the present application and its equivalents.

Claims (8)

1. An edge computing scheduling management method based on deep learning, which is characterized by comprising the following steps:
connecting a resource scheduling center to obtain edge computing equipment of an edge computing layer;
generating a task identification library according to the historical task types of each edge device in the edge computing devices;
when the resource scheduling center receives a first task request, identifying the first task request based on the task identification library to obtain an edge device set corresponding to the first task request;
acquiring a demand computing feature of task processing in the first task request;
building a load identification model through edge calculation feature identification when the edge calculation equipment performs task processing;
invoking N load identification models based on the edge equipment set, inputting the demand calculation features into the N load identification models for analysis, and obtaining first matched edge equipment, wherein N is a positive integer less than or equal to the total number of the edge equipment set;
and inputting the first task request into the first matching edge equipment for edge processing.
2. The method of claim 1, wherein the method further comprises:
acquiring type information of each edge device in the edge device set;
judging whether similar edge devices with the same type exist in the edge device set or not according to the type information of each edge device;
if the similar edge devices exist, building a first load identification model based on the similar edge devices, and so on for the remaining types, until the N load identification models corresponding to the edge device set are output.
3. The method of claim 1, wherein the method further comprises:
acquiring a real-time task list of the edge computing device;
dynamically predicting according to the real-time task list to obtain a first calculation degree to be processed;
generating a dynamic network layer according to the first calculation degree to be processed;
and embedding the dynamic network layer into the load identification model to perform model layer optimization.
4. A method according to claim 3, wherein the load identification model is built by edge computing feature identification while the edge computing device performs task processing, the method comprising:
collecting a task processing sample set of the edge computing device;
performing task analysis by using the task processing sample set to obtain task computing characteristics, wherein the task computing characteristics comprise task processing rate, task processing timeliness and task storage space;
training with the task processing rate, the task processing timeliness and the task storage space as training data, and obtaining the load identification model when training converges;
and outputting a task matching index when the edge computing equipment is in load balance according to the load identification model.
5. The method of claim 1, wherein inputting the demand computation feature into the N load identification models for analysis to obtain a first matched edge device, the method comprising:
acquiring demand computing features, wherein the demand computing features comprise task processing timeliness, task processing complexity and task processing data volume;
inputting the task processing timeliness, the task processing complexity and the task processing data amount into the N load identification models to be respectively matched to obtain N task matching indexes;
and optimizing the N task matching indexes, and outputting first matching edge equipment corresponding to the first task matching index.
6. The method of claim 5, wherein the task matching index is calculated as:
wherein the quantities in the formula are, in order: the task matching index corresponding to the i-th task; the task demand load degree of the i-th task under the corresponding variable; the real-time load degree of the corresponding device for the i-th task under that variable; the load covariance of the i-th task; and the total number of load identification models in the edge device set.
7. The method of claim 1, wherein the method further comprises:
when the resource scheduling center receives a plurality of task requests;
setting a plurality of concurrent channels based on the plurality of task requests, connecting the load identification models according to the concurrent channels, and outputting a plurality of load identification models corresponding to the concurrent channels;
and carrying out task parallel processing by using the plurality of load identification models, and outputting a plurality of matched edge devices based on the plurality of task requests.
8. An edge computing schedule management system based on deep learning, the system comprising:
the edge computing equipment obtaining module is used for connecting a resource scheduling center and obtaining edge computing equipment of an edge computing layer;
the task identification library generation module is used for generating a task identification library according to the historical task types of each edge device in the edge computing devices;
the edge equipment set obtaining module is used for identifying the first task request based on the task identification library when the resource scheduling center receives the first task request, so as to obtain an edge equipment set corresponding to the first task request;
the computing feature obtaining module is used for obtaining the demand computing feature of task processing in the first task request;
the load identification model building module is used for building a load identification model through edge calculation feature identification when the edge calculation equipment is subjected to task processing;
the matching edge equipment obtaining module is used for calling N load identification models based on the edge equipment set, inputting the demand computing features into the N load identification models for analysis, and obtaining first matching edge equipment, wherein N is a positive integer less than or equal to the total number of the edge equipment set;
and the edge processing module is used for inputting the first task request into the first matched edge equipment to perform edge processing.
CN202310727960.7A 2023-06-20 2023-06-20 Edge computing scheduling management method and system based on deep learning Active CN116467088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310727960.7A CN116467088B (en) 2023-06-20 2023-06-20 Edge computing scheduling management method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN116467088A true CN116467088A (en) 2023-07-21
CN116467088B CN116467088B (en) 2024-03-26

Family

ID=87182872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310727960.7A Active CN116467088B (en) 2023-06-20 2023-06-20 Edge computing scheduling management method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN116467088B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100228819A1 (en) * 2009-03-05 2010-09-09 Yottaa Inc System and method for performance acceleration, data protection, disaster recovery and on-demand scaling of computer applications
CN112311820A (en) * 2019-07-26 2021-02-02 腾讯科技(深圳)有限公司 Edge device scheduling method, connection method, device and edge device
US11297161B1 (en) * 2020-10-08 2022-04-05 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for managing an automotive edge computing environment
CN114356531A (en) * 2022-01-12 2022-04-15 重庆邮电大学 Edge calculation task classification scheduling method based on K-means clustering and queuing theory
CN114816721A (en) * 2022-06-29 2022-07-29 常州庞云网络科技有限公司 Multitask optimization scheduling method and system based on edge calculation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant