CN116032757B - Network resource optimization method and device for edge cloud running scene - Google Patents


Info

Publication number
CN116032757B
CN116032757B (application CN202211624421.2A)
Authority
CN
China
Prior art keywords
edge cloud
flow
characteristic data
running
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211624421.2A
Other languages
Chinese (zh)
Other versions
CN116032757A (en)
Inventor
张青青
李星星
杨雷生
童浩
郑淳键
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pioneer Cloud Computing Shanghai Co ltd
Original Assignee
Pioneer Cloud Computing Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Cloud Computing Shanghai Co ltd filed Critical Pioneer Cloud Computing Shanghai Co ltd
Priority to CN202211624421.2A
Publication of CN116032757A
Application granted
Publication of CN116032757B
Legal status: Active

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a network resource optimization method for an edge cloud mixed-running scenario, comprising the following steps: acquiring characteristic data of the edge cloud and predicting on that data with a Transformer algorithm to obtain the current predicted traffic, where the characteristic data comprises at least task id, task attributes, time interval, and device configuration; acquiring the characteristic data of the device's actual mixed-running process and of a preset virtual mixed-running process, and feeding them into a multi-objective model; having the multi-objective model apply preset processing to the characteristic data to obtain prediction parameters of the edge cloud; and, based on a reinforcement learning model, computing the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, then determining the action corresponding to that optimal gross profit, thereby optimizing the edge cloud's network resources. The method can raise each cloud computing vendor's revenue, lower the failure rate of the mixed-running scenario, and meet the demand for dynamic resource scaling in that scenario.

Description

Network resource optimization method and device for an edge cloud mixed-running scenario
Technical Field
The invention belongs to the technical field of network resource optimization, and in particular relates to a network resource optimization method, device, equipment and storage medium for an edge cloud mixed-running scenario.
Background
With the advance of the Internet era and the spread of 5G communication networks, Internet data volume has grown exponentially. Against this background, the traditional centralized architecture of cloud computing can no longer meet end users' demands for timeliness, capacity, and computing power. Characteristics of the edge cloud such as ultra-low latency, massive data handling, and edge intelligence have led more enterprises to adopt edge cloud solutions, and edge cloud computing has become an important layer built between the central cloud and the terminal.
In the edge cloud scenario, 95th-percentile ("95") billing is used between the provider side and the service side of cloud computing, and there are service scenarios in which multiple service nodes share a switch's or node's bandwidth. In this setting, how to maximize long-term gross profit between the provider side and the service side determines how much each edge cloud vendor earns, and is a core element of competition among cloud computing vendors.
Existing edge cloud mixed-running schemes often measure 95-billing differences by linear weighting, and suffer from the following defects:
1. They ignore the differences between each service's 95 billing time nodes, i.e. the peak-staggering problem in the mixed-running scenario, so the machine's 95 flow easily spikes and billing cost rises.
2. They consider only short-term 95-flow gains, yet traffic in a mixed-running scenario is highly unstable and fluctuates with time and with business, so short-term gains frequently turn into long-term losses.
3. They neglect the system stability of the primary task: in a mixed-running scenario, auxiliary tasks reuse the primary task's bandwidth resources through 95-billing logic, which easily degrades the primary task's system stability.
Conventional cloud computing mixed-running techniques are mainly based on dynamic programming, machine learning, or utilization prediction. However, strong heterogeneity and complexity typically exist among services and among cloud computing devices, so it is difficult to realize optimal mixed-running logic with statistical or conventional machine learning methods.
Disclosure of Invention
To solve the above problems, the invention provides a network resource optimization method, device, equipment and storage medium for an edge cloud mixed-running scenario, which can raise each cloud computing vendor's revenue, reduce the failure rate of the mixed-running scenario, and meet the demand for dynamic resource scaling in that scenario.
To achieve the above purpose, the technical scheme of the invention is as follows: a network resource optimization method for an edge cloud mixed-running scenario comprises the following steps: acquiring characteristic data of the edge cloud and predicting on it with a Transformer algorithm to obtain the current predicted traffic, where the characteristic data comprises at least task id, task attributes, time interval, and device configuration; acquiring the characteristic data of the device's actual mixed-running process and of a preset virtual mixed-running process and feeding them into a multi-objective model; having the multi-objective model apply preset processing to the characteristic data to obtain prediction parameters of the edge cloud; and, based on a reinforcement learning model, computing the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, then determining the action corresponding to that optimal gross profit, thereby optimizing the edge cloud's network resources.
In one embodiment of the present invention, acquiring the characteristic data of the edge cloud and predicting with a Transformer algorithm to obtain the current predicted traffic further includes: acquiring the edge cloud's traffic data over a first preset time period and forming serialized data; predicting the traffic of a second preset time period from the traffic of the first period and comparing it with the real traffic of the second period; and predicting the current traffic based on the first period's traffic, the second period's real traffic, and the comparison between the second period's predicted and real traffic.
In an embodiment of the present invention, the multi-objective model's preset processing of the characteristic data to obtain the edge cloud's prediction parameters further includes: acquiring characteristic data comprising task id, task attributes, time interval, and device configuration; and outputting x_c and z according to the gross-profit formula: gross profit = (x_a + x_b) × revenue price × business discount − x_c × cost price × cost discount = x_c × (z × revenue price × business discount − cost price × cost discount). The multi-objective model is an MMOE (Multi-gate Mixture-of-Experts) model: the characteristic data is first embedded by task id, the outputs of the MMOE's several expert networks are then concatenated, and finally the MMOE's gate layers are multiplied with the expert network outputs, yielding the prediction parameters x_c and z.
In one embodiment of the present invention, computing the optimal gross profit of the edge cloud from the prediction parameters and the predicted traffic with the reinforcement learning model according to a preset algorithm further includes: obtaining the predicted traffic and the prediction parameters output by the multi-objective model, and computing the optimal gross profit by the formula: gross profit = (x_a + x_b) × revenue price × business discount − x_c × cost price × cost discount = x_c × (z × revenue price × business discount − cost price × cost discount).
In one embodiment of the present invention, before computing the optimal gross profit of the edge cloud with the reinforcement learning model, the method further includes the following definitions: an action is defined as a combination of mixed-running service ids; the reward is defined as y = revenue − cost − loss = gross profit − complaint count × complaint cost − system downtime × downtime cost; the pre-cut-in state is defined as each mixed-running service's 95 predicted flow, the edge cloud's 95 predicted flow, and the characteristic data; the post-cut-in state is defined as the cut-in mixed-running service's 95 predicted flow, the edge cloud's 95 predicted flow, and the characteristic data.
In one embodiment of the present invention, after the above definitions, the method further includes: defining a DQN network and learning the long-term gross profit of the mixed-running scenario through a train_net neural network and a target_net neural network, respectively; the inputs of train_net and target_net are the pre-cut-in state data, and the output is the action nodes.
In one embodiment of the present invention, learning the long-term gross profit of the mixed-running scenario further includes: randomly sampling 200 records, denoted {action1 = [action data], reward1 = [reward array], state_before1 = [pre-cut-in state array], state_after1 = [post-cut-in state array]}; feeding state_after1 into the target_net to obtain reward2 = [reward array of all actions]; drawing a random number between 0 and 1 and, if it exceeds 0.1 (i.e. with 90% probability), setting reward1 = reward1 + max(reward2) × factor (factor = 0.9), otherwise leaving reward1 unchanged; feeding {reward1, action1, state_before1} into train_net for training, where train_net's input is state_before1 and its output is reward3 = [reward array of all actions], with reward3_action = reward3 × action1; defining the loss function in train_net training as the MSE between reward3_action and reward1, and the optimizer as Adam; copying all parameters of train_net into target_net every 50 iterations; stopping iteration once the optimization condition is met; and exporting the train_net and target_net networks.
Based on the same conception, the invention also provides a network resource optimization device for an edge cloud mixed-running scenario, comprising: a traffic prediction module for acquiring the characteristic data of the edge cloud and predicting with a Transformer algorithm to obtain the current predicted traffic, where the characteristic data comprises at least task id, task attributes, time interval, and device configuration; a characteristic acquisition module for acquiring the characteristic data of the device's actual mixed-running process and of the preset virtual mixed-running process and feeding them into a multi-objective model, the multi-objective model applying preset processing to the characteristic data to obtain the edge cloud's prediction parameters; and an optimization module for computing, based on the reinforcement learning model, the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, and determining the action corresponding to that optimal gross profit to optimize the edge cloud's network resources.
Based on the same conception, the present invention also provides a computer device comprising: a memory for storing a processing program; and a processor which, when executing the processing program, implements any of the above network resource optimization methods for an edge cloud mixed-running scenario.
Based on the same conception, the invention also provides a readable storage medium storing a processing program which, when executed by a processor, implements the above network resource optimization method for an edge cloud mixed-running scenario.
With the above technical scheme, the invention has the following advantages over the prior art:
1. The invention predicts the 95 peaks of the primary service and the auxiliary tasks with Transformer technology, so as to better mine multi-task mixed running on the same switch or node.
2. The invention schedules and allocates resources reasonably, reducing the probability of system failure and improving the stability of the primary task in the mixed-running scenario.
3. Through reinforcement learning, multi-objective modeling and related techniques, the invention makes full and reasonable use of 95 billing rules and device resources, maximizing the cloud computing vendor's gross profit. It addresses the large fluctuation of 95 flow over time and the scarcity of training samples: sufficient training samples are assembled from virtual and actual mixed running, reducing to some extent the impact of temporal variation on the final gross-profit assessment. It also overcomes the inability of conventional techniques to mine the optimal revenue combination of services in the mixed-running scenario: each potential service combination is explored through virtual mixed running, and a final decision on each combination is made by reinforcement learning's trial-and-error method.
Drawings
The invention is described in further detail below with reference to the attached drawing figures, wherein:
FIG. 1 is a flow chart of the network resource optimization method for an edge cloud mixed-running scenario;
FIG. 2 is a flow chart of an embodiment of the edge cloud mixed-running scenario of the invention;
FIG. 3 shows the MAE index in a mixed-running scenario based on the Transformer technique.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific examples. Advantages and features of the invention will become more apparent from the following description and from the claims. Note that the drawings are in a highly simplified form and use imprecise proportions, serving only to conveniently and clearly describe the embodiments of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear) in the embodiments of the present invention merely explain the relative positional relationship, movement, etc. between components in a particular posture (as shown in the drawings); if that posture changes, the directional indicator changes accordingly.
Example 1
Referring to FIG. 1, a network resource optimization method for an edge cloud mixed-running scenario is shown, comprising:
S100: acquiring characteristic data of the edge cloud and predicting with a Transformer algorithm to obtain the current predicted traffic; the characteristic data comprises at least task id, task attributes, time interval, and device configuration;
S200: acquiring the characteristic data of the device's actual mixed-running process and of the preset virtual mixed-running process, and feeding them into a multi-objective model;
S300: the multi-objective model applies preset processing to the characteristic data to obtain the edge cloud's prediction parameters;
S400: based on a reinforcement learning model, computing the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, and determining the action corresponding to that optimal gross profit to optimize the edge cloud's network resources.
In the invention, the 95 peaks of the primary service and the auxiliary tasks are predicted with Transformer technology, so as to better mine multi-task mixed running on the same switch or node.
In a preferred aspect of this embodiment, acquiring the characteristic data of the edge cloud and predicting with a Transformer algorithm to obtain the current predicted traffic further includes: acquiring the edge cloud's traffic data over a first preset time period and forming serialized data; predicting the traffic of a second preset time period from the traffic of the first period and comparing it with the real traffic of the second period; and predicting the current traffic based on the first period's traffic, the second period's real traffic, and the comparison between the second period's predicted and real traffic.
For example, each mixed-running task on each cloud node and the device's bandwidth traffic at 5-minute granularity are collected.
Referring to FIG. 2, three traffic curves exist for services a and b running on device x, and the three time points c, a and b are the daily 95-flow time nodes of the respective curves. Among these, the flow at point c is the 95 flow of device x, i.e. the cost-side flow, while the flows at points a and b are the revenue flows of service a and service b respectively.
Taking services a and b running on device x as an example, the last 30 days of 5-minute traffic data are acquired and serialized; for example, the first 24 days of device x's data form the sequence x1 = [0.1, 0.9, …].
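The serialization step above can be sketched as follows. This is a hypothetical illustration: the exact window sizes (24 days of history, a 5-day comparison window, and a 1-day target) are assumptions inferred from the text, not values stated by the patent.

```python
def make_sequences(samples, history_days=24, horizon_days=5, per_day=288):
    """Split a flat list of 5-minute traffic samples (288 per day, 30 days
    total) into the serialized inputs and target described in the text.

    Returns (x1, x2, y): the first `history_days` days, the following
    `horizon_days` days, and the last day's traffic as the prediction target.
    """
    h = history_days * per_day
    v = horizon_days * per_day
    x1 = samples[:h]                      # e.g. days 1-24
    x2 = samples[h:h + v]                 # e.g. days 25-29
    y = samples[h + v:h + v + per_day]    # day 30, the target to predict
    return x1, x2, y
```

A Transformer model would then be trained on (x1, x2) → y pairs collected per device and per service.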
Based on the Transformer algorithm, y is predicted from the inputs x1 and x2, so that a time-series model is learned.
FIG. 3 shows the MAE index of the mixed-running scenario based on the Transformer technique (MAE = 0.29; the lower the MAE, the better the algorithm), far better than conventional time-series prediction such as LSTM (MAE = 0.48).
Through the Transformer algorithm, the traffic of device x, service a and service b at 5-minute granularity over the coming day can each be predicted, and the respective 95 time nodes and 95 flows can then be obtained through the 95-flow calculation logic.
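The 95-flow calculation logic referenced above is not spelled out in the text; the sketch below assumes the common 95th-percentile billing convention of discarding the top 5% of 5-minute samples and billing the highest remaining sample.

```python
def p95_flow(samples):
    """95th-percentile ("95 billing") flow: sort the period's 5-minute
    traffic samples, discard the top 5%, and return the highest remaining
    sample as the billed flow."""
    ordered = sorted(samples)
    idx = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
    return ordered[max(idx, 0)]
```

For a month of 5-minute samples this discards roughly the 432 highest readings, which is why staggering each service's peak (the peak-staggering problem above) directly lowers the billed flow.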
In actual gross-profit calculation, the flows used for revenue, i.e. the flows x_a and x_b of services a and b at time points a and b, are often larger in sum than device x's flow x_c at time point c, i.e. x_a + x_b ≥ x_c, and we can therefore define z = (x_a + x_b) / x_c.
In the mixed-running scenario, profit is made on the price difference between the device side and the service side together with the parameter z = (x_a + x_b) / x_c; the larger z is, the higher the overall profit margin.
The gross-profit formula is: gross profit = (x_a + x_b) × revenue price × business discount − x_c × cost price × cost discount = x_c × (z × revenue price × business discount − cost price × cost discount).
In this formula, since quotations and discount factors are relatively fixed, the gross profit is directly governed by the two variables x_c and z, yet both vary markedly with node configuration, service scheduling, time period and other factors.
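The two equivalent forms of the gross-profit formula can be written out directly; the numeric prices and discount factors used below are illustrative placeholders, not values from the patent.

```python
def gross_profit(xa, xb, xc, revenue_price, cost_price,
                 business_discount=1.0, cost_discount=1.0):
    """Gross profit = (x_a + x_b) * revenue price * business discount
                      - x_c * cost price * cost discount."""
    revenue = (xa + xb) * revenue_price * business_discount
    cost = xc * cost_price * cost_discount
    return revenue - cost

def gross_profit_z(z, xc, revenue_price, cost_price,
                   business_discount=1.0, cost_discount=1.0):
    """Equivalent factored form via z = (x_a + x_b) / x_c."""
    return xc * (z * revenue_price * business_discount
                 - cost_price * cost_discount)
```

Both forms give the same result whenever z = (x_a + x_b) / x_c, which is why the model only needs to predict x_c and z rather than all three flows.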
The data characteristics of each mixed-running task are collected, chiefly characteristic data such as task id, task attributes, time interval, and device configuration related to the mixed-running task.
However, mixed running in real scenarios is complex and the combinations among mixed-running tasks are numerous, so the actual mixed-running gross profit is hard to estimate directly from existing mixed-running data. Against this background, a mixed-running simulator based on a multi-objective deep model, i.e. virtual mixed running, is proposed.
A multi-objective neural network model is trained whose inputs are characteristic data such as task id, task attributes, time interval and device configuration related to the mixed-running task, and whose optimization targets are the x_c and z of the formula: gross profit = (x_a + x_b) × revenue price × business discount − x_c × cost price × cost discount = x_c × (z × revenue price × business discount − cost price × cost discount).
The model adopts the MMOE multi-objective architecture: the characteristic data first undergoes id-based embedding, the outputs of several MMOE-generated expert networks are then concatenated, and finally the MMOE gate layers are multiplied with the expert network outputs, producing the predicted values of x_c and z. Here embedding represents an object, be it a word, a commodity, or a movie, by a low-dimensional vector.
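A minimal numpy sketch of an MMOE-style forward pass follows, assuming simple linear experts and gates; layer sizes, the tanh activation, and all weight shapes are assumptions for illustration, not the patent's actual network.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mmoe_forward(x, expert_ws, gate_ws, tower_ws):
    """Minimal MMOE forward pass: each expert transforms the shared input,
    each task's gate produces a softmax mixture over the expert outputs,
    and a per-task tower maps the mixture to one scalar prediction
    (here: one task for x_c, one for z)."""
    expert_out = [np.tanh(W @ x) for W in expert_ws]  # one vector per expert
    preds = []
    for gW, tW in zip(gate_ws, tower_ws):             # one gate + tower per task
        weights = softmax(gW @ x)                     # mixture weights over experts
        mixed = sum(w * e for w, e in zip(weights, expert_out))
        preds.append(float(tW @ mixed))
    return preds  # [predicted x_c, predicted z]
```

The per-task gates are what distinguish MMOE from a single shared-bottom model: each of the two targets can weight the shared experts differently.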
Virtual mixed-running output is produced through the multi-objective neural network model: its inputs are characteristic data such as task id, task attributes, time interval and device configuration generated by relatively random sampling or combination; its outputs are x_c and z, from which the optimal gross profit is computed by the formula: gross profit = (x_a + x_b) × revenue price × business discount − x_c × cost price × cost discount = x_c × (z × revenue price × business discount − cost price × cost discount). The optimal action can then be determined from the optimal gross profit, realizing the optimization of edge cloud network resources.
Mixed running in real scenarios is complex and the combinations among mixed-running tasks are numerous; even a conventional neural network, while able to learn short-term revenue fluctuation reasonably well, struggles to learn long-term revenue and the risks that mixed running poses to high service availability.
The whole model is divided into three parts: reinforcement learning input, reinforcement learning, and reinforcement learning output.
The reinforcement learning input chiefly comprises the definitions of actions, rewards, pre-cut-in state and post-cut-in state, as well as sample-sampling logic based on virtual mixed running.
The samples come mainly from real mixed running and virtual mixed running. The significance is that real mixed-running data is relatively scarce and cannot sufficiently support mining potentially better mixed-running logic; by adding a certain proportion of virtual mixed-running data, mixed-running logic with higher potential gross profit can be mined and the final reinforcement learning model improved. Actual testing showed that the sampling logic of 20% real mixed-running samples and 80% virtual mixed-running samples brings roughly a 0.3% improvement in the MAE evaluation relative to other sample ratios.
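The 20%/80% sampling logic above can be sketched as a simple batch sampler; function and parameter names are illustrative assumptions.

```python
import random

def sample_training_batch(real_pool, virtual_pool, n=200, real_ratio=0.2,
                          seed=None):
    """Draw a training batch mixing ~20% real mixed-running samples with
    ~80% virtual ones, the ratio the text reports as best under testing."""
    rng = random.Random(seed)
    n_real = int(n * real_ratio)
    batch = (rng.choices(real_pool, k=n_real)
             + rng.choices(virtual_pool, k=n - n_real))
    rng.shuffle(batch)  # avoid ordering the batch by sample source
    return batch
```

With n = 200 this matches the 200-record batches used in the DQN training steps described later.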
The characteristic data includes task id, task attributes, time interval, device configuration, each task's 95 predicted flow, and the device's 95 predicted flow.
Mixed-running service ids are defined as the respective actions of the edge cloud.
Reward: y = revenue − cost − loss = gross profit − complaint count × complaint cost − system downtime × downtime cost.
Pre-cut-in state: characteristic data such as the 95 predicted flows x_a and x_b of the current mixed-running services [a, b], the device's 95 predicted flow x_c, (x_a + x_b) / x_c, task id, task attributes, time interval, and device configuration.
Post-cut-in state: characteristic data such as the 95 predicted flows x_c and x_d of the cut-in mixed-running services [c, d], the device's 95 predicted flow x_c, (x_c + x_d) / x_c, task id, task attributes, time interval, and device configuration.
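The reward definition above can be written out directly; parameter names (per-complaint cost, per-unit downtime cost) are assumptions for illustration.

```python
def reward(gross_profit, complaints, complaint_cost,
           downtime, downtime_cost):
    """Reward y = revenue - cost - loss, expanded as gross profit minus the
    complaint penalty (count x per-complaint cost) and the downtime penalty
    (downtime duration x per-unit downtime cost)."""
    return gross_profit - complaints * complaint_cost - downtime * downtime_cost
```

The two penalty terms are what push the agent away from combinations that raise gross profit but destabilize the primary task.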
Next, a DQN (Deep Q-Network) is defined, and the long-term gross profit of the mixed-running scenario is learned through the train_net and target_net neural networks, respectively.
The inputs of train_net and target_net are the pre-cut-in state data and the outputs are the action nodes; the specific structure is as follows:
the specific steps of model training are as follows:
200 records are randomly sampled each time, denoted {action1 = [action data], reward1 = [reward array], state_before1 = [pre-cut-in state array], state_after1 = [post-cut-in state array]}.
state_after1 is fed into the target_net, which outputs reward2 = [reward array of all actions].
A random number between 0 and 1 is drawn; if it exceeds 0.1 (i.e. with 90% probability), reward1 = reward1 + max(reward2) × factor (factor = 0.9); otherwise reward1 is left unchanged.
{reward1, action1, state_before1} is fed into train_net for training; train_net's input is state_before1 and its output is reward3 = [reward array of all actions].
reward3_action = reward3 × action1.
During train_net training, the loss function is defined as the MSE between reward3_action and reward1, and the optimizer as Adam.
Every 50 iterations, all parameters of the train_net are copied into the target_net.
Iteration stops once the optimization condition is met.
The train_net and target_net networks are exported.
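The target-value step at the heart of the loop above can be sketched in isolation; this is an interpretation of the text (a stochastic variant of the usual DQN bootstrap, where the discounted max future reward is added with 90% probability), and the function name is an assumption.

```python
import random

def dqn_target(reward1, reward2, factor=0.9, explore_prob=0.1, rng=None):
    """One target computation step as described above: draw a number in
    [0, 1); if it exceeds `explore_prob` (90% of the time with the default
    0.1), bootstrap the observed reward with the target network's best
    future reward scaled by `factor`; otherwise keep it unchanged."""
    rng = rng or random.Random()
    if rng.random() > explore_prob:
        return reward1 + max(reward2) * factor
    return reward1
```

train_net is then regressed (MSE, Adam) toward these targets, and its weights are copied into target_net every 50 iterations so the bootstrap targets stay stable.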
Finally, input data is fed into the train_net network to obtain the long-term optimal mixed-running task in the mixed-running scenario.
The above technical scheme not only solves the revenue optimization problem based on 95-flow billing in the mixed-running scenario, but also the long-term revenue mining problems that conventional techniques cannot solve.
In this embodiment, the action is a task in the edge cloud.
The invention schedules and allocates resources reasonably, reducing the probability of system failure and improving the stability of the primary task in the mixed-running scenario.
Through reinforcement learning, multi-objective modeling and related techniques, the invention makes full and reasonable use of 95 billing rules and device resources, maximizing the cloud computing vendor's gross profit. It addresses the large fluctuation of 95 flow over time and the scarcity of training samples: sufficient training samples are assembled from virtual and actual mixed running, reducing to some extent the impact of temporal variation on the final gross-profit assessment. It also overcomes the inability of conventional techniques to mine the optimal revenue combination of services in the mixed-running scenario: each potential service combination is explored through virtual mixed running, and a final decision on each combination is made by reinforcement learning's trial-and-error method.
Example two
Based on the same conception, the invention also provides a network resource optimization device for an edge cloud mixed-running scenario, comprising: a traffic prediction module for acquiring the characteristic data of the edge cloud and predicting with a Transformer algorithm to obtain the current predicted traffic, where the characteristic data comprises at least task id, task attributes, time interval, and device configuration; a characteristic acquisition module for acquiring the characteristic data of the device's actual mixed-running process and of the preset virtual mixed-running process and feeding them into a multi-objective model, the multi-objective model applying preset processing to the characteristic data to obtain the edge cloud's prediction parameters; and an optimization module for computing, based on the reinforcement learning model, the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, and determining the action corresponding to that optimal gross profit to optimize the edge cloud's network resources.
Through reinforcement learning, multi-objective modeling and related techniques, the invention makes full and reasonable use of 95 billing rules and device resources, maximizing the cloud computing vendor's gross profit. It addresses the large fluctuation of 95 flow over time and the scarcity of training samples: sufficient training samples are assembled from virtual and actual mixed running, reducing to some extent the impact of temporal variation on the final gross-profit assessment. It also overcomes the inability of conventional techniques to mine the optimal revenue combination of services in the mixed-running scenario: each potential service combination is explored through virtual mixed running, and a final decision on each combination is made by reinforcement learning's trial-and-error method.
Example III
Based on the same conception, the present invention also provides a computer device, which may vary greatly in configuration or performance and may include one or more central processing units (CPUs) and memory, plus one or more storage media (e.g., one or more mass storage devices) storing application programs or data. The memory and storage media may be transient or persistent. The program stored on a storage medium may include one or more modules (not shown), each of which may include a series of instruction operations on the computer device. Further, the processor may be arranged to communicate with the storage media and execute on the computer device the series of instruction operations held in them.
The computer device may also include one or more power supplies, one or more wired or wireless network interfaces, one or more input/output interfaces, and/or one or more operating systems such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
Those skilled in the art will appreciate that the computer device architecture described is not limiting; the computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
The computer readable instructions, when executed by the processor, cause the processor to perform the steps of: acquiring characteristic data of the edge cloud and predicting on the characteristic data with a Transformer-based algorithm to obtain the current predicted traffic, wherein the characteristic data comprise at least: task id, task attribute, time interval and device configuration; acquiring the characteristic data of the actual mixed-running process of the device and the characteristic data of a preset virtual mixed-running process, and inputting them into a multi-objective model; performing, by the multi-objective model, preset processing on the characteristic data to obtain prediction parameters of the edge cloud; and calculating, based on a reinforcement learning model, the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, and determining the action corresponding to the optimal gross profit, thereby optimizing the network resources of the edge cloud.
In one embodiment, a readable storage medium is provided; when the computer readable instructions stored thereon are executed by one or more processors, the one or more processors perform the above steps, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A network resource optimization method for an edge cloud mixed-running scenario, characterized by comprising the following steps of:
acquiring characteristic data of the edge cloud and predicting on the characteristic data with a Transformer-based algorithm to obtain the current predicted traffic; wherein the characteristic data comprise at least: task id, task attribute, time interval and device configuration;
acquiring the characteristic data of the actual mixed-running process of the device and the characteristic data of a preset virtual mixed-running process, and inputting them into a multi-objective model;
performing, by the multi-objective model, preset processing on the characteristic data to obtain prediction parameters of the edge cloud; the prediction parameters being profit-loss-ratio parameters;
and calculating, based on a reinforcement learning model, the optimal gross profit of the edge cloud from the prediction parameters and the current predicted traffic according to a preset algorithm, and determining the action corresponding to the optimal gross profit, thereby optimizing the network resources of the edge cloud.
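The three stages of the claim above (traffic prediction, multi-objective parameter prediction, reinforcement-learning action selection) can be wired together as in the following hypothetical sketch; every function is a stand-in stub, and all field names (recent_p95, cost_flow, profit_loss, rev_price, cost_price) are invented for illustration, not taken from the patent:

```python
# Hypothetical end-to-end wiring of the three stages; every function is a
# stand-in stub and all field names are invented for illustration.

def predict_traffic(features):
    # Stand-in for the Transformer traffic predictor: returns a
    # 95th-percentile traffic forecast (Mbps) per mixed-running service.
    return {task["id"]: task["recent_p95"] for task in features}

def predict_params(actual_runs, virtual_runs):
    # Stand-in for the multi-objective (MMoE) model: returns the predicted
    # cost traffic x_c and profit-loss ratio z per device.
    return {d["device"]: (d["cost_flow"], d["profit_loss"])
            for d in actual_runs + virtual_runs}

def best_action(params, traffic, candidate_combos):
    # Stand-in for the reinforcement-learning step: pick the service
    # combination with the highest estimated gross profit.
    def gross(combo):
        x_c, _z = params[combo["device"]]
        revenue_flow = sum(traffic[s] for s in combo["services"])
        return revenue_flow * combo["rev_price"] - x_c * combo["cost_price"]
    return max(candidate_combos, key=gross)

features = [{"id": "a", "recent_p95": 50.0}, {"id": "b", "recent_p95": 30.0}]
actual_runs = [{"device": "x", "cost_flow": 20.0, "profit_loss": 4.0}]
combos = [
    {"device": "x", "services": ["a"], "rev_price": 1.0, "cost_price": 1.0},
    {"device": "x", "services": ["a", "b"], "rev_price": 1.0, "cost_price": 1.0},
]
best = best_action(predict_params(actual_runs, []), predict_traffic(features), combos)
print(best["services"])  # -> ['a', 'b']
```

Running both services on device x yields gross 80 - 20 = 60 versus 50 - 20 = 30 for service a alone, so the stub picks the larger combination.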
2. The network resource optimization method for an edge cloud mixed-running scenario of claim 1, wherein acquiring the characteristic data of the edge cloud and predicting with the Transformer-based algorithm to obtain the current predicted traffic further comprises:
acquiring traffic data of the edge cloud over a first preset time period and forming serialized data;
predicting the traffic data of a second preset time period from the traffic data of the first preset time period, and comparing the prediction with the real traffic data of the second preset time period;
and predicting the current predicted traffic based on the traffic data of the first preset time period, the real traffic data of the second preset time period, and the result of comparing the predicted and real traffic data of the second preset time period.
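The serialization and back-testing steps above can be sketched as follows; a moving average stands in for the Transformer predictor purely for illustration, and the window size and traffic values are invented:

```python
# Sketch of the serialization step: slice a traffic history into
# (input window -> next value) pairs and score a stand-in predictor
# against the real traffic. A moving average replaces the Transformer
# here purely for illustration.

def serialize(history, window):
    """Turn a flat traffic series into (input window, target) pairs."""
    return [(history[i:i + window], history[i + window])
            for i in range(len(history) - window)]

def moving_average_predict(window_values):
    return sum(window_values) / len(window_values)

def evaluate(history, window):
    """Mean absolute error of the stand-in predictor over all pairs."""
    pairs = serialize(history, window)
    errors = [abs(moving_average_predict(w) - target) for w, target in pairs]
    return sum(errors) / len(errors)

history = [10, 12, 11, 13, 12, 14, 13, 15]  # invented traffic samples
print(len(serialize(history, 4)))  # -> 4
```

The comparison result (here the mean absolute error) is what the claim feeds back into the final prediction of the current traffic.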
3. The network resource optimization method for an edge cloud mixed-running scenario of claim 2, wherein performing, by the multi-objective model, the preset processing on the characteristic data to obtain the prediction parameters of the edge cloud further comprises:
acquiring characteristic data comprising task id, task attribute, time interval and device configuration;
outputting x_c and z according to the gross-profit formula: gross profit = (x_a + x_b) x revenue price x revenue deduction - x_c x cost price x cost deduction = x_c x (z x revenue price x revenue deduction - cost price x cost deduction); where x_a denotes the revenue traffic of service a running on device x, x_b denotes the revenue traffic of service b running on device x, x_c denotes the cost traffic of device x itself, and z denotes the profit-loss ratio, z = (x_a + x_b) / x_c;
wherein the multi-objective model is an MMoE (Multi-gate Mixture-of-Experts) multi-objective model: the characteristic data are processed through task-id embedding, the outputs of the several expert networks generated by the MMoE model are concatenated, and finally a product is computed between the Gate layer of the MMoE model and the expert networks, yielding the prediction parameters x_c and z.
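A minimal numpy sketch of the MMoE forward pass described above, with shared expert networks, a softmax Gate layer per task and one small tower head per output (x_c and z); all dimensions and weights are random placeholders, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mmoe_forward(x, experts, gates, towers):
    """Minimal MMoE forward pass: shared experts, one softmax gate and
    one tower per task (here two tasks: x_c and z)."""
    expert_out = np.stack([np.tanh(x @ W) for W in experts])   # (E, H)
    outputs = []
    for Wg, Wt in zip(gates, towers):
        logits = x @ Wg                                        # (E,)
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()                               # softmax gate
        mixed = (weights[:, None] * expert_out).sum(axis=0)    # (H,)
        outputs.append(float(mixed @ Wt))
    return outputs

dim_in, hidden, n_experts, n_tasks = 8, 4, 3, 2                # invented sizes
experts = [rng.normal(size=(dim_in, hidden)) for _ in range(n_experts)]
gates = [rng.normal(size=(dim_in, n_experts)) for _ in range(n_tasks)]
towers = [rng.normal(size=hidden) for _ in range(n_tasks)]

x = rng.normal(size=dim_in)   # stands in for the embedded feature vector
xc_pred, z_pred = mmoe_forward(x, experts, gates, towers)
print(len(mmoe_forward(x, experts, gates, towers)))  # -> 2
```

Each task's gate produces its own mixture over the shared experts, which is what lets one network predict both the cost traffic x_c and the profit-loss ratio z.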
4. The network resource optimization method for an edge cloud mixed-running scenario of claim 1, wherein calculating the optimal gross profit of the edge cloud from the prediction parameters and the predicted traffic according to the preset algorithm based on the reinforcement learning model further comprises:
obtaining the predicted traffic and the prediction parameters output by the multi-objective model, and calculating the optimal gross profit based on the formula: gross profit = (x_a + x_b) x revenue price x revenue deduction - x_c x cost price x cost deduction = x_c x (z x revenue price x revenue deduction - cost price x cost deduction); where x_a denotes the revenue traffic of service a running on device x, x_b denotes the revenue traffic of service b running on device x, x_c denotes the cost traffic of device x itself, and z denotes the profit-loss ratio, z = (x_a + x_b) / x_c.
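The two forms of the gross-profit formula are algebraically identical once z = (x_a + x_b) / x_c is substituted, which the following sketch checks numerically; all prices and deduction factors are made-up illustrative values:

```python
# Numerical check that the two forms of the gross-profit formula agree.

def gross_direct(xa, xb, xc, rev_price, rev_deduct, cost_price, cost_deduct):
    # gross = (x_a + x_b) * revenue price * revenue deduction
    #         - x_c * cost price * cost deduction
    return (xa + xb) * rev_price * rev_deduct - xc * cost_price * cost_deduct

def gross_via_z(xc, z, rev_price, rev_deduct, cost_price, cost_deduct):
    # gross = x_c * (z * revenue price * revenue deduction
    #                - cost price * cost deduction)
    return xc * (z * rev_price * rev_deduct - cost_price * cost_deduct)

xa, xb, xc = 60.0, 40.0, 25.0        # revenue flows of a, b; cost flow of x
z = (xa + xb) / xc                   # profit-loss ratio -> 4.0
args = (1.2, 0.95, 0.8, 1.0)         # rev_price, rev_deduct, cost_price, cost_deduct
print(abs(gross_direct(xa, xb, xc, *args) - gross_via_z(xc, z, *args)) < 1e-9)  # -> True
```

With these invented numbers both forms give a gross profit of 94, so the model is free to predict (x_c, z) and recover the gross profit without predicting x_a and x_b separately.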
5. The network resource optimization method for an edge cloud mixed-running scenario of claim 4, further comprising, prior to calculating the optimal gross profit of the edge cloud based on the reinforcement learning model:
making the following definitions:
defining an action as a mixed-running service-id combination among the various possible combinations;
defining the reward y = revenue - cost - loss = gross profit - complaint cost - system downtime x system downtime cost;
defining the pre-cut-in state as the 95th-percentile predicted traffic of each mixed-running service in the traffic prediction, the 95th-percentile predicted traffic of the edge cloud in the traffic prediction, and the characteristic data;
and defining the post-cut-in state as the 95th-percentile predicted traffic of the cut-in mixed-running service in the traffic prediction, the 95th-percentile predicted traffic of the edge cloud in the traffic prediction, and the characteristic data.
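The definitions above can be encoded as in the following hypothetical sketch; the class and field names are invented for illustration:

```python
# Hypothetical encoding of the action / reward / state definitions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    service_ids: tuple            # the mixed-running service-id combination

@dataclass
class State:
    service_p95: dict             # 95th-percentile forecast per service
    edge_cloud_p95: float         # 95th-percentile forecast of the edge cloud
    features: dict = field(default_factory=dict)  # task/device feature data

def reward(gross_profit, complaints, complaint_cost, downtime_h, downtime_cost):
    """y = revenue - cost - loss
         = gross profit - complaint cost - downtime x downtime cost"""
    return gross_profit - complaints * complaint_cost - downtime_h * downtime_cost

# Invented numbers: 100 gross, 2 complaints at 5 each, 1 h downtime at 10.
print(reward(100.0, 2, 5.0, 1.0, 10.0))  # -> 80.0
```

The pre-cut-in and post-cut-in states share the same `State` shape; only which services contribute to `service_p95` differs.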
6. The network resource optimization method for an edge cloud mixed-running scenario of claim 5, further comprising, after making the definitions:
defining a DQN network, and learning the long-term gross profit in the mixed-running scenario through a train_net neural network and a target_net neural network respectively;
wherein the inputs of train_net and target_net are the pre-cut-in state data and the outputs are the action nodes.
7. The network resource optimization method for an edge cloud mixed-running scenario of claim 6, wherein learning the long-term gross profit in the mixed-running scenario further comprises:
randomly extracting 200 pieces of data, denoted {action1 = [action array], reward1 = [reward array], state_before1 = [pre-cut-in state array], state_after1 = [post-cut-in state array]};
inputting state_after1 into the target_net network and outputting reward2 = [reward array over all actions];
drawing a random number between 0 and 1: if it is greater than 0.1, updating reward1 = reward1 + max(reward2) x factor (factor = 0.9); otherwise keeping reward1 = reward1;
bringing {reward1, action1, state_before1} into train_net for training, wherein the input of train_net is state_before1 and the output is reward3 = [reward array over all actions], with reward3_action = reward3 x action1; during train_net training, the loss function is defined as the MSE between reward3_action and reward1, and the optimizer is defined as Adam;
copying all parameters of train_net into target_net every 50 iterations;
stopping the iteration once the optimization condition is met;
and deriving the train_net network and the target_net network.
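The training procedure of this claim can be sketched with tiny linear networks standing in for train_net and target_net; the replay data are random placeholders, plain gradient descent stands in for the Adam optimizer named above, and the dimensions, learning rate and iteration count are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, FACTOR, LR = 6, 4, 0.9, 0.01   # invented sizes

# train_net and target_net as single linear layers: Q(s) = s @ W.
train_W = rng.normal(scale=0.1, size=(STATE_DIM, N_ACTIONS))
target_W = train_W.copy()

def q_values(W, states):
    return states @ W                                 # (batch, N_ACTIONS)

def replay_batch(n=200):
    """Fake replay buffer: 200 random transitions, as in the claim."""
    before = rng.normal(size=(n, STATE_DIM))
    after = rng.normal(size=(n, STATE_DIM))
    actions = np.eye(N_ACTIONS)[rng.integers(0, N_ACTIONS, n)]  # one-hot
    rewards = rng.normal(size=n)
    return actions, rewards, before, after

for step in range(1, 201):
    action1, reward1, before1, after1 = replay_batch()
    reward2 = q_values(target_W, after1)              # target_net output
    # with probability 0.9 bootstrap with the discounted max target value
    boot = rng.random(len(reward1)) > 0.1
    target = reward1 + np.where(boot, FACTOR * reward2.max(axis=1), 0.0)
    # train_net update: MSE between reward3 * action1 (the taken action's
    # Q-value) and the target, minimised by plain gradient descent.
    reward3 = q_values(train_W, before1)
    taken = (reward3 * action1).sum(axis=1)
    grad = before1.T @ (2 * (taken - target)[:, None] * action1) / len(reward1)
    train_W -= LR * grad
    if step % 50 == 0:                                # copy every 50 iters
        target_W = train_W.copy()

print(train_W.shape)  # -> (6, 4)
```

The frozen target_W between copies is what stabilises the bootstrapped target, mirroring the train_net/target_net split in the claim.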
8. A network resource optimization device for an edge cloud mixed-running scenario, characterized by comprising:
a flow prediction module, configured to acquire characteristic data of the edge cloud and predict on the characteristic data with a Transformer-based algorithm to obtain the current predicted traffic; wherein the characteristic data comprise at least: task id, task attribute, time interval and device configuration;
a characteristic acquisition module, configured to acquire the characteristic data of the actual mixed-running process of the device and the characteristic data of a preset virtual mixed-running process and input them into a multi-objective model;
wherein the multi-objective model performs preset processing on the characteristic data to obtain prediction parameters of the edge cloud, the prediction parameters being profit-loss-ratio parameters; and based on a reinforcement learning model, the optimal gross profit of the edge cloud is calculated from the prediction parameters and the current predicted traffic according to a preset algorithm, and the action corresponding to the optimal gross profit is determined, thereby optimizing the network resources of the edge cloud.
9. A computer device, comprising:
a memory for storing a processing program;
a processor which, when executing the processing program, implements the network resource optimization method for the edge cloud mixed-running scenario according to any one of claims 1 to 7.
10. A readable storage medium, wherein a processing program is stored on the readable storage medium, and when the processing program is executed by a processor, the network resource optimization method for the edge cloud mixed-running scenario according to any one of claims 1 to 7 is implemented.
CN202211624421.2A 2022-12-16 2022-12-16 Network resource optimization method and device for edge cloud running scene Active CN116032757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211624421.2A CN116032757B (en) 2022-12-16 2022-12-16 Network resource optimization method and device for edge cloud running scene


Publications (2)

Publication Number Publication Date
CN116032757A CN116032757A (en) 2023-04-28
CN116032757B true CN116032757B (en) 2024-05-10

Family

ID=86071506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211624421.2A Active CN116032757B (en) 2022-12-16 2022-12-16 Network resource optimization method and device for edge cloud running scene

Country Status (1)

Country Link
CN (1) CN116032757B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103812930A (en) * 2014-01-16 2014-05-21 华为技术有限公司 Method and device for resource scheduling
CN110351348A (en) * 2019-06-27 2019-10-18 广东石油化工学院 A kind of cloud computing resources method for optimizing scheduling based on DQN
CN111835827A (en) * 2020-06-11 2020-10-27 北京邮电大学 Internet of things edge computing task unloading method and system
CN111865647A (en) * 2019-04-30 2020-10-30 英特尔公司 Modular I/O configuration for edge computation using decomposed die kernels
US10938674B1 (en) * 2016-07-01 2021-03-02 EMC IP Holding Company LLC Managing utilization of cloud computing resources
CN114143891A (en) * 2021-11-30 2022-03-04 南京工业大学 FDQL-based multi-dimensional resource collaborative optimization method in mobile edge network
WO2022139879A1 (en) * 2020-12-24 2022-06-30 Intel Corporation Methods, systems, articles of manufacture and apparatus to optimize resources in edge networks
CN115022188A (en) * 2022-05-27 2022-09-06 国网经济技术研究院有限公司 Container placement method and system in power edge cloud computing network
WO2022217503A1 (en) * 2021-04-14 2022-10-20 深圳大学 Multi-access edge computing architecture for cloud-network integration
CN115334075A (en) * 2022-06-28 2022-11-11 北京邮电大学 5G edge calculation method and device for subway scene high-reliability low-delay service

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI620075B (en) * 2016-11-23 2018-04-01 財團法人資訊工業策進會 Server and cloud computing resource optimization method thereof for cloud big data computing architecture
US11132608B2 (en) * 2019-04-04 2021-09-28 Cisco Technology, Inc. Learning-based service migration in mobile edge computing




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: China
Address after: Room 801, No. 2, Boyun Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 201203
Applicant after: Pioneer Cloud Computing (Shanghai) Co.,Ltd.
Address before: Room 801, No. 2, Boyun Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 201203
Applicant before: PPLABS NETWORK TECHNOLOGY (SHANGHAI) Co.,Ltd.
Country or region before: China
GR01 Patent grant