CN117370035B - Real-time simulation computing resource dividing system and method - Google Patents

Real-time simulation computing resource dividing system and method

Info

Publication number
CN117370035B
CN117370035B (application CN202311675375.3A)
Authority
CN
China
Prior art keywords
data
server
edge server
real
cloud computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311675375.3A
Other languages
Chinese (zh)
Other versions
CN117370035A (en)
Inventor
谢宇哲
李智
王劭均
姚艳
李元林
冯怿彬
金佳
朱博文
张�浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202311675375.3A priority Critical patent/CN117370035B/en
Publication of CN117370035A publication Critical patent/CN117370035A/en
Application granted granted Critical
Publication of CN117370035B publication Critical patent/CN117370035B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a real-time simulation computing resource dividing system and method, relating to the technical field of computer applications, comprising a user server, an edge server and a cloud computing server. The user server acquires real-time simulation operation data from a plurality of user terminals and obtains user service processing data, edge server processing data and cloud computing processing data according to the real-time simulation operation data. The edge server obtains an edge server data evaluation value according to the edge server processing data and, through that evaluation value, either obtains edge server computing resources or sends the edge server processing data to the cloud computing server. The cloud computing server obtains cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data, based on pre-stored computing resources and a hybrid algorithm. The invention realizes efficient and energy-saving task processing and meets the computing requirements of real-time simulation.

Description

Real-time simulation computing resource dividing system and method
Technical Field
The invention relates to the technical field of computer application, in particular to a real-time simulation computing resource dividing system and method.
Background
Real-time simulation is a computer simulation technique for simulating physical, environmental or system behavior in the real world and interacting with it in real time. It is widely used in many fields, and its key characteristic is that it can respond quickly to user input and generate simulation results in real time. Achieving this requires optimized computation and high-performance computing devices. Cloud computing and edge computing technologies provide a new path and platform architecture for the research and development of real-time simulation, and can supply efficient computing capability to real-time simulation applications in the form of a virtual resource pool.
Edge computing is a distributed computing model that pushes computing power and storage resources closer to data sources and end devices to reduce latency and provide faster, real-time computing, but its data processing capacity is limited. Cloud computing is a centralized computing model in which data and applications are processed and stored on remote cloud servers that users access over a network to consume computing resources and services; however, a cloud server running under high load for a long time is prone to faults such as downtime, so it cannot by itself meet the computing requirements of real-time simulation.
Disclosure of Invention
The invention solves the problem that an unreasonable division of computing resources cannot meet real-time simulation computing requirements.
In order to solve the problems, the invention provides a real-time simulation computing resource dividing system, which comprises a user server, an edge server and a cloud computing server, wherein the user server comprises a plurality of user terminals;
The user server is used for acquiring real-time simulation operation data of a plurality of user terminals, acquiring user service processing data, edge server processing data and cloud computing processing data according to the real-time simulation operation data, adding the user service processing data to a server processing queue, and respectively transmitting the edge server processing data and the cloud computing processing data to the edge server and the cloud computing server;
The edge server is used for obtaining an edge server data evaluation value according to the edge server processing data, obtaining edge server computing resources through the edge server data evaluation value, or sending the edge server processing data to the cloud computing server;
The cloud computing server is used for obtaining the predicted data computing resources of the user side, obtaining pre-stored computing resources through the predicted data computing resources, and obtaining cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm.
Optionally, the obtaining the edge server data evaluation value according to the edge server processing data, obtaining an edge server computing resource by the edge server data evaluation value, or sending the edge server processing data to the cloud computing server includes:
When the edge server processing data is greater than or equal to a load threshold, sending the edge server processing data to the cloud computing server;
When the edge server processing data is smaller than the load threshold, inputting the edge server processing data into a data evaluation model to obtain the edge server data evaluation value;
Obtaining the edge server computing resources through the edge server data evaluation value.
Optionally, the obtaining the predicted data computing resource of the user side includes:
acquiring current time node data;
and inputting the current time node data into a prediction calculation resource model to obtain the prediction data calculation resource.
Optionally, the method for constructing the predictive computing resource model includes:
Acquiring historical cloud computing processing data of the user side, and taking the historical cloud computing processing data as a data set;
preprocessing the data set to obtain a training set;
And training a preset neural network model through the training set to obtain the prediction calculation resource model.
Optionally, the obtaining the user service processing data, the edge server processing data and the cloud computing processing data according to the real-time simulation operation data includes:
when the real-time simulation operation data is non-offloadable data, obtaining the user service processing data according to the real-time simulation operation data;
And when the real-time simulation operation data is offloadable data, obtaining the edge server processing data and the cloud computing processing data according to the real-time simulation operation data.
Optionally, the obtaining the edge server processing data and the cloud computing processing data according to the real-time simulation operation data includes:
When the real-time simulation operation data is larger than or equal to a preset threshold value, the real-time simulation operation data is used as the cloud computing processing data;
and when the real-time simulation operation data is smaller than a preset threshold value, using the real-time simulation operation data as the edge server processing data.
Optionally, the cloud computing server includes a plurality of data processing resource points, and the obtaining, based on the pre-stored computing resources and the hybrid algorithm, the cloud computing server computing resource according to the edge server processing data and/or the cloud computing processing data includes:
Taking the edge server processing data and/or the cloud computing processing data as subtasks;
screening all the data processing resource points according to the pre-stored computing resources to obtain a cloud computing resource collection;
obtaining a task dependency graph according to the subtasks and the cloud computing resource collection, wherein the task dependency graph is used for representing interaction relations between the subtasks and the data processing resource points;
And inputting the task dependency graph into a preset mixed algorithm model to obtain cloud computing server computing resources, wherein the preset mixed algorithm model is used for representing a model constructed based on a genetic algorithm and an ant colony algorithm.
Optionally, inputting the task dependency graph into a preset hybrid algorithm model to obtain the cloud computing server computing resource includes:
Obtaining an original computing resource allocation strategy through the task dependency graph based on the genetic algorithm;
Converting the original computing resource allocation strategy into an original pheromone;
Based on the ant colony algorithm, obtaining a target computing resource allocation strategy through the original pheromone;
and obtaining the cloud computing server computing resources through the target computing resource allocation strategy.
Optionally, the cloud computing server is further configured to obtain a performance index, and obtain an early warning prompt according to the performance index, where the performance index includes a memory margin, a CPU operation count, and a data operation rate.
According to the real-time simulation computing resource dividing system, the user server obtains the user service processing data, the edge server processing data and the cloud computing processing data according to the real-time simulation operation data; by dividing the real-time simulation operation data and distributing it to different terminals for processing, the system can handle large volumes of data while reducing the burden on the cloud server. The edge server obtains an edge server data evaluation value from the edge server processing data and uses that value to judge the data further; if the conditions for edge processing are not met, the edge server processing data is forwarded to the cloud computing server, ensuring that the data is processed effectively. The cloud computing server obtains the predicted data computing resources of the user side and derives pre-stored computing resources from them, reserving the relevant computing resources for clients based on an analysis of user habits. The received data is then allocated reasonably by the hybrid algorithm to obtain the cloud computing server computing resources; this reasonable division of computing resources realizes efficient and energy-saving task processing and meets the computing requirements of real-time simulation.
The invention also provides a method for dividing the real-time simulation computing resources, which comprises the following steps: the user server acquires real-time simulation operation data of a plurality of user terminals, obtains user service processing data, edge server processing data and cloud computing processing data according to the real-time simulation operation data, adds the user service processing data to a server processing queue, and sends the edge server processing data and the cloud computing processing data to an edge server and a cloud computing server respectively;
The edge server obtains an edge server data evaluation value according to the edge server processing data, and obtains edge server computing resources through the edge server data evaluation value or sends the edge server processing data to the cloud computing server;
the cloud computing server obtains the predicted data computing resources of the user side, obtains pre-stored computing resources through the predicted data computing resources, and obtains cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm.
The method for dividing the real-time simulation computing resources has the same advantages as the system for dividing the real-time simulation computing resources compared with the prior art, and is not described in detail herein.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a computing resource partitioning system according to an embodiment of the present invention;
Fig. 2 is a flow chart illustrating a method for computing resource partitioning according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the embodiment provides a real-time simulation computing resource partitioning system, which includes a user server, an edge server and a cloud computing server, wherein the user server includes a plurality of user terminals;
The user server is used for acquiring real-time simulation operation data of a plurality of user terminals, acquiring user service processing data, edge server processing data and cloud computing processing data according to the real-time simulation operation data, adding the user service processing data to a server processing queue, and respectively transmitting the edge server processing data and the cloud computing processing data to the edge server and the cloud computing server;
The edge server is used for obtaining an edge server data evaluation value according to the edge server processing data, obtaining edge server computing resources through the edge server data evaluation value, or sending the edge server processing data to the cloud computing server;
The cloud computing server is used for obtaining the predicted data computing resources of the user side, obtaining pre-stored computing resources through the predicted data computing resources, and obtaining cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm.
Specifically, the edge server and the cloud computing server are connected to each other through a core network, and the edge server includes a plurality of edge devices. Within the edge server, the edge devices are connected to one another through wired links to form an edge network over which they can communicate. The user server includes a plurality of user terminals, which may be computers or any mobile electronic devices. Because the user terminal has the least computing resources, it runs only one computing task at a time, and the remaining tasks allocated to it must wait in a waiting queue. The cloud computing server and the edge server can each process several tasks simultaneously: if the number of tasks allocated to a server exceeds the maximum number it can process at once, the server divides its computing resources equally among the running tasks and the remaining tasks wait in the queue; if the number of allocated tasks is below its maximum load, all of them run simultaneously. To fully exploit the advantages of the cloud and the edge server, tasks must be distributed to different terminals for processing according to their characteristics. The computing resources of an edge server are limited by factors such as geographic location or device size and are smaller than those of a cloud server, so tasks it cannot process must be forwarded to the cloud computing server. The cloud computing server predicts the cloud computing resources a user may need to apply for by obtaining the predicted data computing resources, and reserves the corresponding resources in the cloud computing server as pre-stored computing resources for providing computing services to that user.
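For illustration only, the queueing behaviour described above can be sketched as follows in Python; the class name, the abstract compute units and the max_parallel parameter are assumptions of this sketch rather than features of the embodiment:

from collections import deque

class SimulationServer:
    # Models the queueing behaviour described above: a server runs at most
    # max_parallel tasks at once, splits its compute equally among running tasks,
    # and keeps the excess tasks in a waiting queue.
    def __init__(self, name, total_compute, max_parallel):
        self.name = name
        self.total_compute = total_compute   # abstract compute units (assumption)
        self.max_parallel = max_parallel     # max tasks processed simultaneously
        self.running = []
        self.waiting = deque()

    def submit(self, task):
        if len(self.running) < self.max_parallel:
            self.running.append(task)
        else:
            self.waiting.append(task)        # excess tasks wait in the queue

    def compute_per_task(self):
        # Running tasks share the server's computing resources equally.
        return self.total_compute / len(self.running) if self.running else self.total_compute

# A user terminal is the special case max_parallel = 1; edge and cloud servers
# would be instantiated with larger capacities.
user_terminal = SimulationServer("user", total_compute=1.0, max_parallel=1)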
Specifically, the user terminal may include a real-time simulation system, which must simulate system behavior in real time according to a given time step and clock, completing the computation of the digital simulation model and the interaction of input and output data within each simulation step and responding at the end of the step. Such a system requires defining the object being modeled together with its behavior and characteristics, typically described by a mathematical or physical model. The user server acquires real-time simulation operation data from a plurality of user terminals; real-time simulation data here refers to data generated by simulating real-world events in a real-time environment, used to evaluate the performance of a system, algorithm or model without running tests in the real environment. In real-time simulation systems the volume of generated data is often very large and, because it must be processed under real-time constraints, computing resources are frequently under significant strain. By forwarding related tasks to an edge server or cloud computing server, the availability, scalability and flexibility of the system are improved through the resources and services they provide.
According to the real-time simulation computing resource dividing system, the user server obtains the user service processing data, the edge server processing data and the cloud computing processing data according to the real-time simulation operation data; by dividing the real-time simulation operation data and distributing it to different terminals for processing, the system can handle large volumes of data while reducing the burden on the cloud server. The edge server obtains an edge server data evaluation value from the edge server processing data and uses that value to judge the data further; if the conditions for edge processing are not met, the edge server processing data is forwarded to the cloud computing server, ensuring that the data is processed effectively. The cloud computing server obtains the predicted data computing resources of the user side and derives pre-stored computing resources from them, reserving the relevant computing resources for clients based on an analysis of user habits. The received data is then allocated reasonably by the hybrid algorithm to obtain the cloud computing server computing resources; this reasonable division of computing resources realizes efficient and energy-saving task processing and meets the computing requirements of real-time simulation.
In this embodiment, obtaining the edge server data evaluation value according to the edge server processing data, obtaining edge server computing resources through the edge server data evaluation value, or sending the edge server processing data to the cloud computing server includes:
When the edge server processing data is greater than or equal to a load threshold, sending the edge server processing data to the cloud computing server;
When the edge server processing data is smaller than the load threshold, inputting the edge server processing data into a data evaluation model to obtain the edge server data evaluation value;
Obtaining the edge server computing resources through the edge server data evaluation value.
Specifically, relevant characteristics are extracted from the edge server processing data, including the data size, transmission time, waiting time and processing time. The edge server processing data and the server information are then input into a data evaluation model to obtain the edge server data evaluation value.
When the data evaluation value of the edge server is smaller than or equal to a preset evaluation value, computing resources of the edge server are obtained according to the processing data of the edge server;
And when the data evaluation value of the edge server is larger than a preset evaluation value, transmitting the edge server processing data to the cloud computing server.
In some more specific embodiments, the preset evaluation value may be 0.7.
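For illustration, a minimal Python sketch of this two-stage decision is given below; the load threshold value, the feature layout and the toy evaluate() function are assumptions of the sketch, while 0.7 is the example evaluation value mentioned above:

LOAD_THRESHOLD = 100.0   # illustrative load threshold for the edge server (assumption)
EVAL_THRESHOLD = 0.7     # the preset evaluation value given in this embodiment

def evaluate(features, server_info):
    # Stand-in for the data evaluation model: a toy score that grows with the
    # task's demands relative to the server's free capacity (assumption).
    demand = features["data_size"] + features["processing_time"]
    return demand / (demand + server_info["free_capacity"])

def dispatch_on_edge(task, server_info, send_to_cloud, allocate_edge_resources):
    # Stage 1: if the processing data reaches the load threshold, forward it to the cloud.
    if task["data_size"] >= LOAD_THRESHOLD:
        send_to_cloud(task)
        return
    # Stage 2: score the task with the data evaluation model.
    features = {
        "data_size": task["data_size"],
        "transmission_time": task["tx_time"],
        "waiting_time": task["wait_time"],
        "processing_time": task["proc_time"],
    }
    if evaluate(features, server_info) <= EVAL_THRESHOLD:
        allocate_edge_resources(task)   # edge server keeps and processes the task
    else:
        send_to_cloud(task)             # evaluation says the edge server cannot handle it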
The real-time simulation computing resource dividing system in this embodiment obtains the edge server data evaluation value through the data evaluation model, evaluating both the data characteristics and the current server condition to judge whether the current edge server can process the data; if it cannot, the data is sent to the cloud computing server, ensuring that the data is still processed.
In this embodiment, the obtaining the predicted data computing resource of the user terminal includes:
acquiring current time node data;
and inputting the current time node data into a prediction calculation resource model to obtain the prediction data calculation resource.
In this embodiment, the method for constructing the prediction computing resource model includes:
Acquiring historical cloud computing processing data of the user side, and taking the historical cloud computing processing data as a data set;
preprocessing the data set to obtain a training set;
And training a preset neural network model through the training set to obtain the prediction calculation resource model.
Specifically, the cloud computing server screens the historical data of users in advance, selects important users and users who frequently apply for cloud computing resources as key users, and builds a prediction computing resource model for those clients. Because real-time simulation workloads often come from long-term key users, establishing a prediction computing resource model allows the relevant computing resources to be reserved so that such users' needs are met in time.
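For illustration only, a small fully connected regressor can stand in for the unspecified preset neural network model; the feature layout, library choice and hyperparameters below are assumptions of this sketch:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def build_prediction_model(history):
    # history: list of (time_node_features, cloud_compute_used) pairs taken from
    # the key user's historical cloud computing processing data.
    X = np.array([features for features, _ in history])   # e.g. hour of day, weekday, recent usage
    y = np.array([used for _, used in history])            # cloud compute actually consumed
    scaler = StandardScaler().fit(X)                        # preprocessing step -> training set
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
    model.fit(scaler.transform(X), y)                       # train the preset neural network model
    return scaler, model

def predict_resource(scaler, model, current_time_node):
    # Predicted data computing resource to reserve (pre-stored compute) for this user.
    return float(model.predict(scaler.transform([current_time_node]))[0])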
In this embodiment, the obtaining the user service processing data, the edge server processing data, and the cloud computing processing data according to the real-time simulation operation data includes:
when the real-time simulation operation data is non-offloadable data, obtaining the user service processing data according to the real-time simulation operation data;
And when the real-time simulation operation data is offloadable data, obtaining the edge server processing data and the cloud computing processing data according to the real-time simulation operation data.
Specifically, task offloading refers to a computing mode in which, when a user terminal cannot meet the computing requirements of the tasks it produces, or in order to reduce the user terminal's load and the completion time of all tasks, part or all of the computing tasks are handed to other servers for processing. When a new computing task is generated at the user terminal, it is divided, according to its characteristics, into a non-offloadable part and an offloadable part: the non-offloadable part can only be handled at the user side, while the offloadable part is sent, as edge server processing data or cloud computing processing data, to an edge server or a cloud computing server for processing.
The real-time simulation computing resource dividing system in this embodiment judges whether the real-time simulation operation data is offloadable. Non-offloadable tasks are kept at the user side for processing as user service processing data; offloadable tasks are sent, as edge server processing data or cloud computing processing data, to an edge server or a cloud computing server respectively, thereby realizing a reasonable distribution of computing resources.
In this embodiment, the obtaining the edge server processing data and the cloud computing processing data according to the real-time simulation operation data includes:
When the real-time simulation operation data is larger than or equal to a preset threshold value, the real-time simulation operation data is used as the cloud computing processing data;
and when the real-time simulation operation data is smaller than a preset threshold value, using the real-time simulation operation data as the edge server processing data.
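A minimal sketch of this routing rule (the threshold value and dictionary keys are assumptions of the sketch):

SIZE_THRESHOLD = 50.0   # illustrative preset threshold separating edge from cloud tasks

def partition_task(task):
    # Non-offloadable -> user side; large offloadable -> cloud; small offloadable -> edge.
    if not task["offloadable"]:
        return "user"    # handled locally as user service processing data
    if task["size"] >= SIZE_THRESHOLD:
        return "cloud"   # cloud computing processing data
    return "edge"        # edge server processing data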
In this embodiment, the cloud computing server includes a plurality of data processing resource points, and the obtaining, based on the pre-stored computing resources and the hybrid algorithm, the computing resources of the cloud computing server according to the edge server processing data and/or the cloud computing processing data includes:
Taking the edge server processing data and/or the cloud computing processing data as subtasks;
screening all the data processing resource points according to the pre-stored computing resources to obtain a cloud computing resource collection;
obtaining a task dependency graph according to the subtasks and the cloud computing resource collection, wherein the task dependency graph is used for representing interaction relations between the subtasks and the data processing resource points;
And inputting the task dependency graph into a preset mixed algorithm model to obtain cloud computing server computing resources, wherein the preset mixed algorithm model is used for representing a model constructed based on a genetic algorithm and an ant colony algorithm.
Specifically, the system parameters of the plurality of data processing resource points in the cloud computing server are configured, including resource virtualization parameters, resource limitation parameters and the like; the computing resources held by the physical hosts are converted into virtualized resources through the virtualization parameters, and the previously obtained pre-stored computing resources are removed to obtain the cloud computing resource collection. A task dependency graph is then obtained from the subtasks and the cloud computing resource collection. The task dependency graph is usually represented by a directed acyclic graph (DAG), a graph structure of nodes connected by directed edges in which the direction of an edge specifies the dependency between nodes and no cyclic path exists: a successor task can be executed only after all of its predecessor tasks have finished. The interaction relation between a subtask and a data processing resource point includes the processing time, communication time and the like of that subtask at that resource point. The nodes and edges of a DAG can easily express various complex dependency relationships, laying a foundation for the subsequent data processing.
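For illustration, a directed acyclic task dependency graph of this kind can be sketched as below; the class and method names are assumptions of the sketch:

class TaskDependencyGraph:
    # Directed acyclic graph linking subtasks to cloud data processing resource points.
    # Edge direction encodes dependency; a task is ready only when all predecessors are done.
    def __init__(self):
        self.predecessors = {}   # subtask -> set of subtasks it depends on
        self.costs = {}          # (subtask, resource_point) -> processing + communication time

    def add_task(self, task):
        self.predecessors.setdefault(task, set())

    def add_dependency(self, before, after):
        self.add_task(before)
        self.add_task(after)
        self.predecessors[after].add(before)

    def add_cost(self, task, resource_point, proc_time, comm_time):
        self.costs[(task, resource_point)] = proc_time + comm_time

    def ready_tasks(self, finished):
        # Subtasks whose predecessor set is fully contained in the finished set.
        done = set(finished)
        return [t for t, preds in self.predecessors.items()
                if t not in done and preds <= done]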
In this embodiment, inputting the task dependency graph into a preset hybrid algorithm model to obtain the cloud computing server computing resource includes:
Obtaining an original computing resource allocation strategy through the task dependency graph based on the genetic algorithm;
Converting the original computing resource allocation strategy into an original pheromone;
Based on the ant colony algorithm, obtaining a target computing resource allocation strategy through the original pheromone;
and obtaining the computing resources of the cloud computing server through the target computing resource allocation strategy.
Specifically, solving the resource allocation problem with a genetic algorithm means converting the decision variables of the problem into genes and encoding them; new chromosomes are generated from the encoded chromosomes through crossover, mutation and other operations, better chromosomes are selected under certain conditions, and finally the chromosomes are decoded back into the original decision variables to describe the resource allocation. Common encodings include direct and indirect encoding, and because of the huge resource scale and complex constraint relationships in a cloud environment, the choice of encoding directly affects the algorithm's performance and efficiency.
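As an illustration of direct encoding (gene i is the resource point assigned to subtask i), a minimal genetic-algorithm skeleton might look as follows; the operators, rates and objective are assumptions of this sketch, and the costs table is the one from the TaskDependencyGraph sketch above, with subtasks assumed to be indexed 0..n-1:

import random

def random_chromosome(num_tasks, num_resources):
    # Direct encoding: gene i is the index of the resource point assigned to subtask i.
    return [random.randrange(num_resources) for _ in range(num_tasks)]

def crossover(parent_a, parent_b):
    # Single-point crossover (assumes chromosomes of length >= 2).
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, num_resources, rate=0.05):
    # Each gene is reassigned to a random resource point with a small probability.
    return [random.randrange(num_resources) if random.random() < rate else gene
            for gene in chromosome]

def fitness(chromosome, costs):
    # Illustrative objective: lower total processing + communication time -> higher fitness.
    total = sum(costs.get((task, resource), 0.0) for task, resource in enumerate(chromosome))
    return 1.0 / (1.0 + total)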
A conventional ant colony algorithm often searches inefficiently because it lacks an initial accumulation of pheromone, so how the optimal solutions of the genetic algorithm are converted into the initial pheromone of the ant colony algorithm is critical. If the initial pheromone value is set too small, the algorithm easily falls into a local optimum; if it is set too large, the early iterations are ineffective, because the pheromone deposited by the ants does not start to take effect until the initial pheromone has evaporated to a sufficiently small level. Therefore, when the genetic algorithm terminates, an optimized solution composed of individuals with high fitness values in the population is selected with a certain probability as the initial pheromone: the individuals with the best fitness values are selected with a probability of 10%, the allocation schemes they represent are converted into ant counts on the resource points, and all the ant counts on the resource points are then converted into pheromone through a conversion factor.
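The conversion of the genetic algorithm's final population into the ant colony's initial pheromone could be sketched as follows; the base pheromone level and conversion factor are assumptions of the sketch, and the 10% selection from the text above is interpreted here as taking the best 10% of the population:

def initial_pheromone(population, fitnesses, num_resources,
                      top_fraction=0.10, conversion_factor=1.0, base_level=0.1):
    # Select the best-fitness individuals, count how often they place each subtask on
    # each resource point (the "number of ants" on that point), then convert the ant
    # counts into pheromone through the conversion factor.
    num_tasks = len(population[0])
    ants = [[0] * num_resources for _ in range(num_tasks)]
    elite_count = max(1, int(len(population) * top_fraction))
    ranked = sorted(zip(fitnesses, population), key=lambda pair: pair[0], reverse=True)
    for _, chromosome in ranked[:elite_count]:
        for task, resource in enumerate(chromosome):
            ants[task][resource] += 1
    return [[base_level + conversion_factor * ants[t][r] for r in range(num_resources)]
            for t in range(num_tasks)]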
By combining the traditional genetic algorithm with the ant colony algorithm, the real-time simulation computing resource dividing system of this embodiment retains the efficient global search capability of the genetic algorithm while allowing a better solution to be found by the ant colony algorithm: when the genetic algorithm stops, its result is converted into the initial pheromone of the ant colony algorithm, exploiting the genetic algorithm's broad search and global convergence.
The real-time simulation computing resource dividing system of the embodiment can effectively overcome the defect of a single algorithm by combining a genetic algorithm and an ant colony algorithm, and improves the solving efficiency and quality.
In this embodiment, the cloud computing server is further configured to obtain a performance index, and obtain an early warning prompt according to the performance index, where the performance index includes a memory margin, a CPU operation count, and a data operation rate.
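For illustration only, an early-warning check over these performance indexes might look like this; the threshold values and the direction of each comparison are assumptions of the sketch:

WARN_LIMITS = {"memory_margin_mb": 512, "cpu_operations": 1e9, "data_rate_mb_s": 50.0}

def early_warning(metrics):
    # Returns a list of warning messages for indexes that cross their limits.
    warnings = []
    if metrics["memory_margin_mb"] < WARN_LIMITS["memory_margin_mb"]:
        warnings.append("memory margin low")
    if metrics["cpu_operations"] > WARN_LIMITS["cpu_operations"]:
        warnings.append("CPU operation count high")
    if metrics["data_rate_mb_s"] < WARN_LIMITS["data_rate_mb_s"]:
        warnings.append("data operation rate degraded")
    return warnings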
In some more specific embodiments, the user server is communicatively coupled to the edge server via a small base station.
According to the real-time simulation computing resource dividing system, the user server obtains the user service processing data, the edge server processing data and the cloud computing processing data according to the real-time simulation operation data; by dividing the real-time simulation operation data and distributing it to different terminals for processing, the system can handle large volumes of data while reducing the burden on the cloud server. The edge server obtains an edge server data evaluation value from the edge server processing data and uses that value to judge the data further; if the conditions for edge processing are not met, the edge server processing data is forwarded to the cloud computing server, ensuring that the data is processed effectively. The cloud computing server obtains the predicted data computing resources of the user side and derives pre-stored computing resources from them, reserving the relevant computing resources for clients based on an analysis of user habits. The received data is then allocated reasonably by the hybrid algorithm to obtain the cloud computing server computing resources; this reasonable division of computing resources realizes efficient and energy-saving task processing and meets the computing requirements of real-time simulation.
Corresponding to the real-time simulation computing resource dividing system, the embodiment of the invention also provides a real-time simulation computing resource dividing method. Fig. 2 is a flow chart of a method for partitioning real-time simulation computing resources according to an embodiment of the present invention, where the method for partitioning real-time simulation computing resources includes:
The user server acquires real-time simulation operation data of a plurality of user terminals, obtains user service processing data, edge server processing data and cloud computing processing data according to the real-time simulation operation data, adds the user service processing data to a server processing queue, and sends the edge server processing data and the cloud computing processing data to an edge server and a cloud computing server respectively;
The edge server obtains an edge server data evaluation value according to the edge server processing data, and obtains edge server computing resources through the edge server data evaluation value or sends the edge server processing data to the cloud computing server;
the cloud computing server obtains the predicted data computing resources of the user side, obtains pre-stored computing resources through the predicted data computing resources, and obtains cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm.
The method for dividing the real-time simulation computing resources has the same advantages as the system for dividing the real-time simulation computing resources compared with the prior art, and is not described in detail herein.
It should be noted that in the present invention, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features of the invention.

Claims (8)

1. The real-time simulation computing resource dividing system is characterized by comprising a user server, an edge server and a cloud computing server, wherein the user server comprises a plurality of user terminals;
The user server is used for acquiring real-time simulation operation data of a plurality of user terminals, obtaining user service processing data according to the real-time simulation operation data when the real-time simulation operation data is non-offloadable data, taking the real-time simulation operation data as cloud computing processing data when the real-time simulation operation data is offloadable data and is greater than or equal to a preset threshold value, taking the real-time simulation operation data as edge server processing data when the real-time simulation operation data is offloadable data and is smaller than the preset threshold value, adding the user service processing data to a server processing queue, and respectively sending the edge server processing data and the cloud computing processing data to the edge server and the cloud computing server;
The edge server is used for obtaining an edge server data evaluation value according to the edge server processing data, obtaining edge server computing resources through the edge server data evaluation value, or sending the edge server processing data to the cloud computing server;
The cloud computing server is used for obtaining the predicted data computing resources of the user side, obtaining pre-stored computing resources through the predicted data computing resources, and obtaining cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm.
2. The system for partitioning real-time simulation computing resources according to claim 1, wherein the obtaining an edge server data evaluation value from the edge server processing data, obtaining an edge server computing resource through the edge server data evaluation value, or sending the edge server processing data to the cloud computing server comprises:
When the edge server processing data is greater than or equal to a load threshold, sending the edge server processing data to the cloud computing server;
When the edge server processing data is smaller than the load threshold, inputting the edge server processing data into a data evaluation model to obtain the edge server data evaluation value;
Obtaining the edge server computing resources through the edge server data evaluation value.
3. The system for partitioning real-time simulation computing resources according to claim 1, wherein said obtaining the predicted data computing resources of the user side comprises:
acquiring current time node data;
and inputting the current time node data into a prediction calculation resource model to obtain the prediction data calculation resource.
4. The partitioning system for real-time simulation computing resources of claim 3, wherein the method for constructing the prediction computing resource model comprises:
Acquiring historical cloud computing processing data of the user side, and taking the historical cloud computing processing data as a data set;
preprocessing the data set to obtain a training set;
And training a preset neural network model through the training set to obtain the prediction calculation resource model.
5. The system for partitioning real-time simulation computing resources according to claim 1, wherein the cloud computing server comprises a plurality of data processing resource points, and the obtaining the cloud computing server computing resources from the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm comprises:
Taking the edge server processing data and/or the cloud computing processing data as subtasks;
screening all the data processing resource points according to the pre-stored computing resources to obtain a cloud computing resource collection;
obtaining a task dependency graph according to the subtasks and the cloud computing resource collection, wherein the task dependency graph is used for representing interaction relations between the subtasks and the data processing resource points;
And inputting the task dependency graph into a preset mixed algorithm model to obtain cloud computing server computing resources, wherein the preset mixed algorithm model is used for representing a model constructed based on a genetic algorithm and an ant colony algorithm.
6. The system for partitioning real-time simulation computing resources according to claim 5, wherein said inputting the task dependency graph into a preset hybrid algorithm model to obtain cloud computing server computing resources comprises:
Obtaining an original computing resource allocation strategy through the task dependency graph based on the genetic algorithm;
Converting the original computing resource allocation strategy into an original pheromone;
Based on the ant colony algorithm, obtaining a target computing resource allocation strategy through the original pheromone;
and obtaining the computing resources of the cloud computing server through the target computing resource allocation strategy.
7. The system of claim 1, wherein the cloud computing server is further configured to obtain a performance index, and obtain an early warning prompt according to the performance index, where the performance index includes a memory margin, a CPU operation count, and a data operation rate.
8. A method for partitioning real-time simulation computing resources, based on the partitioning system of real-time simulation computing resources of any one of claims 1 to 7, comprising:
The method comprises the steps that a user server obtains real-time simulation operation data of a plurality of user sides, user service processing data are obtained according to the real-time simulation operation data when the real-time simulation operation data are non-offloadable data, the real-time simulation operation data are used as cloud computing processing data when the real-time simulation operation data are offloadable data and are greater than or equal to a preset threshold, the real-time simulation operation data are used as edge server processing data when the real-time simulation operation data are offloadable data and are smaller than the preset threshold, the user service processing data are added to a server processing queue, and the edge server processing data and the cloud computing processing data are respectively sent to an edge server and the cloud computing server;
The edge server obtains an edge server data evaluation value according to the edge server processing data, and obtains edge server computing resources through the edge server data evaluation value or sends the edge server processing data to the cloud computing server;
the cloud computing server obtains the predicted data computing resources of the user side, obtains pre-stored computing resources through the predicted data computing resources, and obtains cloud computing server computing resources according to the edge server processing data and/or the cloud computing processing data based on the pre-stored computing resources and the hybrid algorithm.
CN202311675375.3A 2023-12-08 2023-12-08 Real-time simulation computing resource dividing system and method Active CN117370035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311675375.3A CN117370035B (en) 2023-12-08 2023-12-08 Real-time simulation computing resource dividing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311675375.3A CN117370035B (en) 2023-12-08 2023-12-08 Real-time simulation computing resource dividing system and method

Publications (2)

Publication Number Publication Date
CN117370035A (en) 2024-01-09
CN117370035B true CN117370035B (en) 2024-05-07

Family

ID=89389597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311675375.3A Active CN117370035B (en) 2023-12-08 2023-12-08 Real-time simulation computing resource dividing system and method

Country Status (1)

Country Link
CN (1) CN117370035B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196127A1 (en) * 2018-04-11 2019-10-17 深圳大学 Cloud computing task allocation method and apparatus, device, and storage medium
CN111970323A (en) * 2020-07-10 2020-11-20 北京大学 Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network
CN113037877A (en) * 2021-05-26 2021-06-25 深圳大学 Optimization method for time-space data and resource scheduling under cloud edge architecture
CN114785777A (en) * 2022-03-04 2022-07-22 杭州未名信科科技有限公司 Optimal decoupling method for end-edge-cloud computing of transmission resources
CN116016519A (en) * 2022-12-30 2023-04-25 南京邮电大学 QoE-oriented edge computing resource allocation method
CN116546053A (en) * 2023-05-26 2023-08-04 河北百亚信息科技有限公司 Edge computing service placement system in resource-constrained Internet of things scene
CN116708443A (en) * 2023-07-24 2023-09-05 中国电信股份有限公司 Multi-level calculation network task scheduling method and device
CN117009053A (en) * 2023-07-13 2023-11-07 鹏城实验室 Task processing method of edge computing system and related equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196127A1 (en) * 2018-04-11 2019-10-17 深圳大学 Cloud computing task allocation method and apparatus, device, and storage medium
CN111970323A (en) * 2020-07-10 2020-11-20 北京大学 Time delay optimization method and device for cloud-edge multi-layer cooperation in edge computing network
CN113037877A (en) * 2021-05-26 2021-06-25 深圳大学 Optimization method for time-space data and resource scheduling under cloud edge architecture
CN114785777A (en) * 2022-03-04 2022-07-22 杭州未名信科科技有限公司 Optimal decoupling method for end-edge-cloud computing of transmission resources
CN116016519A (en) * 2022-12-30 2023-04-25 南京邮电大学 QoE-oriented edge computing resource allocation method
CN116546053A (en) * 2023-05-26 2023-08-04 河北百亚信息科技有限公司 Edge computing service placement system in resource-constrained Internet of things scene
CN117009053A (en) * 2023-07-13 2023-11-07 鹏城实验室 Task processing method of edge computing system and related equipment
CN116708443A (en) * 2023-07-24 2023-09-05 中国电信股份有限公司 Multi-level calculation network task scheduling method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Research on MEC-based task offloading strategy in C-V2X Internet of Vehicles; 李智; China Master's Theses Full-text Database; 2022-04-15; full text *
Optimizing Offloading Strategies for Mobile Edge Cloud Systems; Zhiyan Chen et al.; 2022 IEEE 7th International Conference on Smart Cloud (SmartCloud); 2022-11-14; full text *
Research on improved ant colony algorithm based on cloud computing resource allocation and scheduling optimization; 王玲; Microcomputer Applications; 2020-05-20 (05); full text *
Research on efficient edge offloading based on cloud-edge collaborative computing power scheduling; 王姗姗 et al.; Radio Communications Technology; 2023-01-05; full text *
Research on task allocation algorithm based on edge computing; 张浩 et al.; Modern Information Technology; 2021-02-25; Vol. 5 (4); full text *

Also Published As

Publication number Publication date
CN117370035A (en) 2024-01-09

Similar Documents

Publication Publication Date Title
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
CN105704255B (en) A kind of server load balancing method based on genetic algorithm
WO2021088207A1 (en) Mixed deployment-based job scheduling method and apparatus for cloud computing cluster, server and storage device
CN113037877B (en) Optimization method for time-space data and resource scheduling under cloud edge architecture
Lagwal et al. Load balancing in cloud computing using genetic algorithm
CN114595049A (en) Cloud-edge cooperative task scheduling method and device
CN111049903A (en) Edge network load distribution algorithm based on application perception prediction
CN111176784A (en) Virtual machine integration method based on extreme learning machine and ant colony system
CN116932199A (en) Cloud rendering method, system, device, equipment and computer storage medium
CN116185523A (en) Task unloading and deployment method
CN111199316A (en) Cloud and mist collaborative computing power grid scheduling method based on execution time evaluation
Li et al. Optimal service selection and placement based on popularity and server load in multi-access edge computing
Huang et al. Computation offloading for multimedia workflows with deadline constraints in cloudlet-based mobile cloud
CN114172819A (en) Demand resource prediction method, system, electronic device and storage medium for NFV network element
CN117370035B (en) Real-time simulation computing resource dividing system and method
Yang et al. PerLLM: Personalized Inference Scheduling with Edge-Cloud Collaboration for Diverse LLM Services
CN117436627A (en) Task allocation method, device, terminal equipment and medium
CN117014389A (en) Computing network resource allocation method and system, electronic equipment and storage medium
CN112866358B (en) Method, system and device for rescheduling service of Internet of things
Cao et al. Online cost-rejection rate scheduling for resource requests in hybrid clouds
CN116149855A (en) Method and system for optimizing performance resource cost under micro-service architecture
CN112769942B (en) QoS-based micro-service dynamic arranging method
CN114741198A (en) Video stream processing method and device, electronic equipment and computer readable medium
CN114281544A (en) Electric power task execution method and device based on edge calculation
Jin et al. Common structures in resource management as driver for Reinforcement Learning: a survey and research tracks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant