CN113596146B - Resource scheduling method and device based on big data


Info

Publication number
CN113596146B
CN113596146B (application CN202110849911.1A)
Authority
CN
China
Prior art keywords
mec, similar, data, hot spot, mecs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110849911.1A
Other languages
Chinese (zh)
Other versions
CN113596146A (en)
Inventor
Peng Liang (彭亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Saisheng Technology Co., Ltd.
Original Assignee
Beijing Saisheng Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Saisheng Technology Co., Ltd.
Priority to CN202110849911.1A
Publication of CN113596146A
Application granted
Publication of CN113596146B

Classifications

    • H04L 67/1004: Server selection for load balancing (protocols in which an application is distributed across nodes in the network, accessing one among a plurality of replicated servers)
    • H04L 67/1097: Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G06F 9/5027: Allocation of resources (e.g. of the CPU) to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5072: Grid computing (partitioning or combining of resources)
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/08: Learning methods for neural networks

Abstract

The application discloses a resource scheduling method based on big data, comprising the following steps: a cloud server sends a hot spot data group to a plurality of mobile edge nodes (MECs); the MECs send the hot spot data group to a plurality of mobile terminals; the cloud server predicts key performance indicators (KPIs) of the MECs, and if the predicted KPI of a first MEC does not meet quality-of-service (QoS) conditions, the distribution operation of the first MEC is suspended and the hot spot data group stored by the first MEC is split into a plurality of sub hot spot data groups; the first MEC determines, according to a similarity criterion, a similar MEC cluster whose features are similar to its own, and distributes the sub hot spot data groups in turn to different MECs of the similar MEC cluster according to a load balancing strategy, so that the similar MEC cluster distributes the sub hot spot data groups to a plurality of mobile terminals over different routing paths.

Description

Resource scheduling method and device based on big data
Technical Field
The present application relates to the field of information technologies, and in particular, to a method and an apparatus for resource scheduling based on big data.
Background
With the development and popularization of big data, its challenges and demands keep growing. A big data cloud server typically provides functions such as big data acquisition, storage, mining, and analysis, through which it can process big data effectively.
However, the data distribution and resource scheduling capabilities of the cloud server are limited by its storage space and the network environment; for the response to a hot event in particular, unreasonably allocated storage resources and a poor network environment greatly degrade the distribution performance of big data hot content.
Disclosure of Invention
The embodiment of the invention provides a resource scheduling method based on big data, which addresses the prior-art problem that the distribution performance of hot data content during resource scheduling fails to meet the required standard.
The embodiment of the invention provides a resource scheduling method based on big data, which comprises the following steps:
the cloud server sends a hot spot data group to a plurality of mobile edge nodes (MECs);
the MECs send the hot spot data group to a plurality of mobile terminals;
the cloud server predicts key performance indicators (KPIs) of the MECs; if the predicted KPI of a first MEC does not meet quality-of-service (QoS) conditions, the distribution operation of the first MEC is suspended and the hot spot data group stored by the first MEC is split into a plurality of sub hot spot data groups;
the first MEC determines, according to a similarity criterion, a similar MEC cluster with features similar to its own, and distributes the sub hot spot data groups in turn to different MECs of the similar MEC cluster according to a load balancing strategy, so that the similar MEC cluster distributes the sub hot spot data groups to a plurality of mobile terminals over different routing paths.
Optionally, the predicting, by the cloud server, key performance indicators KPIs of the plurality of MECs includes:
acquiring KPI historical data of the MEC, and generating a KPI value array from the KPI historical data;
and inputting the KPI value array into a prediction model, analyzing the KPI value array by the prediction model by using a regression algorithm, and outputting KPI values of future time points.
Optionally, the determining, by the first MEC, a similar MEC cluster having similar features to the first MEC according to a similarity criterion includes:
acquiring PRB load rate and content demand level of adjacent MEC nodes of a first MEC;
threshold ranges are set for the PRB load rate and the content demand level respectively, and the adjacent MEC nodes are filtered against these ranges to obtain the similar MEC cluster.
Optionally, the determining, by the first MEC, a similar MEC cluster having similar features to the first MEC according to a similarity criterion includes:
acquiring the signal-to-noise ratio SNR of adjacent MEC nodes of the first MEC;
setting a threshold range for the SNR, and filtering the adjacent MEC nodes against this range to obtain the similar MEC cluster.
Optionally, before the cloud server sends the hotspot data set to a plurality of mobile edge nodes MEC, the method further comprises:
the cloud server predicts a hot event based on an artificial intelligence algorithm and generates a hot data group corresponding to the hot event.
Optionally, sequentially distributing the plurality of sub-hotspot data sets to different MECs of the similar MEC cluster according to a load balancing policy includes:
acquiring resource load rates of different MECs of the similar MEC cluster group, and sequencing the resource load rates from low to high;
sorting the plurality of sub-hotspot data sets from high to low according to data size;
and sequentially sending the sequenced multiple sub-hot spot data groups to different MECs of the sequenced similar MEC cluster group, wherein the data sizes of the multiple sub-hot spot data groups are in inverse proportion to the corresponding MEC resource load rates.
Optionally, the similar MEC cluster sequentially distributes the plurality of sub-hotspot data to a plurality of mobile terminals according to different routing paths, including:
the similar MEC cluster sets the plurality of sub hot spot data groups to high priority and marks them as deterministic service traffic;
the similar MEC cluster predicts the future non-deterministic service traffic; if the sum of the predicted non-deterministic service traffic and the deterministic service traffic exceeds a preset threshold, the sub hot spot data are sent over a first routing path and the non-hot-spot data over a second routing path, the service traffic corresponding to the non-hot-spot data being the non-deterministic service traffic.
Optionally, the predicting future non-deterministic traffic flow size of the similar MEC cluster includes:
the similar MEC cluster predicts the future non-deterministic service traffic with a long short-term memory (LSTM) model.
Optionally, the method further comprises:
if the request volume of the hot spot data group per unit time falls below a hot spot traffic threshold, the cloud server downgrades the hot event corresponding to the hot spot data group to a non-hot event and lowers the priority of the hot spot data group;
the cloud server receives the terminal request messages transmitted by the MECs and extracts different labels in the terminal request messages;
the cloud server analyzes the plurality of tags, and defines the tags with unit time request quantity exceeding the hot spot flow threshold as hot spot event tags;
and the cloud center server generates a second hot spot data group corresponding to the hot spot event tag and sends the second hot spot data group to the MECs.
The embodiment of the present invention further provides an apparatus comprising a memory and a processor, wherein the memory stores computer-executable instructions, and the processor implements the above method when executing the computer-executable instructions from the memory.
According to the method and the device provided by the embodiment of the invention, in order to avoid untimely responses caused by surging hot spot data traffic during hot spot data distribution, the key performance indicators (KPIs) of a plurality of MECs are predicted in time; if a KPI is found to fall short of the standard, the hot spot data group is split and a plurality of adjacent MECs share the task of hot spot data content distribution, which effectively reduces the load rate of a single MEC, improves network resource utilization, and improves network QoS.
Drawings
In order to more clearly illustrate the technical solution of the embodiment of the present invention, the drawings used in the description of the embodiment will be briefly introduced below.
FIG. 1 is a diagram of a big data resource scheduling system architecture in one embodiment;
FIG. 2 is a flow diagram of a method for big data based resource scheduling in one embodiment;
FIG. 3 is a logic diagram that illustrates the distribution of sub-hotspot data groups in one embodiment;
FIG. 4 is a diagram illustrating the hardware components of the apparatus according to one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Fig. 1 is an architecture diagram of a big data resource scheduling system according to an embodiment of the present invention. As shown in Fig. 1, the cloud is a cluster of extensible cloud servers. The cluster contains a central server, defined as the cloud center server, which monitors the storage state and running state of each cloud server and dynamically releases and extends resources based on those states, so as to keep the service running normally. The cloud center server may be one of the cloud servers, or a dedicated server with control-strategy functions; it can dynamically obtain a hot spot data group and respond to I/O requests for that group. The middle layer is the edge layer, composed of multiple edge nodes: the edge nodes sit close to the user side, have a certain amount of computing and data-processing capability, and can answer user queries and data-acquisition requests within a short time. At the bottom is the terminal, controlled by the user; it generates I/O requests, sends them to the edge nodes and the cloud, and finally obtains the needed data from the cloud or from an edge node.
Fig. 2 is a flowchart of a method for resource scheduling based on big data according to an embodiment of the present invention, where the method provided in the embodiment of the present invention specifically includes:
s101, the cloud server sends the hot spot data group to a plurality of mobile edge nodes MEC;
the hot spot data group is the content to be distributed for a hot spot event; it may take the form of video, text, pictures, and so on, each composed of a different data format, and it becomes the digital carrier of the event's concrete content through steps such as encoding and decoding.
A hot event is responded to at high frequency within a unit of time. A "trending search" event, for example, can draw millions of accesses per unit time, and such a high access frequency must be answered within a short time, which is a challenge for cloud storage and for the whole network.
The cloud server can predict a hot event through an artificial intelligence algorithm and generate the corresponding hot spot data group based on the prediction result. For example, text analysis is performed on the events collected over a period and the topics of hot events are extracted; a deep learning model is established and trained with different topics from the historical record as input samples, yielding a topic-label model that outputs the labels corresponding to a topic. The number of requests carrying a given topic within a period is counted, and if the growth rate of that request count exceeds a first threshold, the topic-label model outputs the labels corresponding to the topic and the associated hot spot data group is retrieved based on those labels.
Hot events are random and sudden, so topics must be associated with different labels, and this association and classification cannot be done manually; model training with a deep learning model is therefore needed to determine the associations between different topics and labels. A label can be an extension or a category of a topic and reflects the attention the topic receives, so the cumulative growth rates of a topic's different labels must be watched: they correctly reflect the growth rate of attention to the topic, so that the correct hot event is output and the corresponding hot spot data group is generated.
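For illustration only, the following Python sketch shows one way the request-growth check described above might look; the two-window comparison, the tag names, and the first-threshold value are assumptions of this example and are not fixed by the embodiment.

```python
from collections import Counter

FIRST_THRESHOLD = 2.0  # assumed growth rate above which a tag marks a hot event

def detect_hot_tags(prev_window_tags, curr_window_tags):
    """Flag tags whose request-count growth rate between two equal-length
    time windows exceeds the first threshold (cf. step S101)."""
    prev_counts = Counter(prev_window_tags)
    curr_counts = Counter(curr_window_tags)
    hot_tags = []
    for tag, curr in curr_counts.items():
        prev = prev_counts.get(tag, 1)  # avoid division by zero for new tags
        growth_rate = (curr - prev) / prev
        if growth_rate > FIRST_THRESHOLD:
            hot_tags.append(tag)
    return hot_tags

# Example: requests for the "finals" tag triple between windows, so it is flagged.
print(detect_hot_tags(["finals"] * 100 + ["weather"] * 50,
                      ["finals"] * 400 + ["weather"] * 55))   # -> ['finals']
```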
S102, the MECs send the hot spot data group to a plurality of mobile terminals;
in the embodiment of the invention, the hot spot data group is copied and distributed for distributed storage; exploiting the short response time of MECs, the hot spot data group is sunk to the edge nodes in advance, so that the response time for the hot event is shortened once user terminals begin retrieving its content.
S103, the cloud server predicts key performance indicators KPIs of the MECs, if the predicted KPI of the first MEC does not meet quality of service (QoS) conditions, the distribution operation of the first MEC is suspended, and a hot spot data group stored by the first MEC is subjected to data splitting into a plurality of sub hot spot data groups;
in embodiments of the present invention, network throughput can be predicted using key performance indicators (KPIs) associated with radio-channel metrics (e.g., PRB, CQI, SNR). Different KPIs are measured in different ways and at different rates, from several times per second to once every few seconds; the measurement frequency of a KPI is referred to as its granularity.
In the embodiment of the invention, the selected KPIs provide data about equipment performance and network performance. If a KPI parameter fails to reach a given marking threshold, the KPI is determined not to meet quality of service (QoS); otherwise, QoS can be met.
In the embodiment of the present invention, the cloud server predicts the key performance indicators KPI of the multiple MECs, which may specifically be:
acquiring KPI historical data of the MEC, and generating a KPI value array from the KPI historical data;
for each MEC, KPI data for its selected historical time period needs to be collected at a different granularity. Where a "high granularity" KPI may be measured several times per second, and a "low granularity" KPI may be measured once every few seconds.
And inputting the KPI value array into a prediction model, analyzing the KPI value array by the prediction model by using a regression algorithm, and outputting KPI values of future time points.
The historical KPI data are aggregated into a KPI value array, which is input into an intelligent machine learning (ML) prediction model.
In an embodiment of the present invention, the following ML algorithms may be considered: random forest (RF), support vector machine (SVM), gradient tree boosting (GB), and neural network (NN). RF is an ensemble learning approach for regression and classification tasks: it grows a set of decision trees (weak learners) and then predicts using the mean of the individual trees. This approach reduces overfitting, because each tree is built on a randomly selected subset of the features; considering a random feature subset at each split of a decision tree further decorrelates the trees and minimizes overfitting. The embodiment of the invention therefore prefers this algorithm for predicting KPIs. The algorithm itself is prior art and is not described further here.
Similar to RF, GB is also an ensemble algorithm; its idea is to build the model iteratively, adding a "weak learner" at each stage to improve the existing model.
An SVM constructs a hyperplane that serves as the decision boundary separating points of different classes. To make the separation easier, the input features can be transformed by a suitable function called a kernel. For SVMs, parameter tuning is the main drawback: grid search can find the optimal parameters automatically, but it is time consuming.
The multi-layer perceptron (MLP), also known as a feed-forward neural network, is a deep learning model. It consists of multiple layers, the input forming the first layer and the output the last. Each layer takes a linear combination of the values from the previous layer, applies an activation function, and passes the result to the next layer; the purpose of the learning algorithm is to find appropriate weights for these linear combinations.
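Because the embodiment prefers RF for KPI prediction, a minimal sketch of random-forest regression over a KPI value array follows, using scikit-learn; the sliding-window length, tree count, and the synthetic PRB-load series are illustrative assumptions, not values fixed by the embodiment.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

WINDOW = 8  # assumed number of past KPI samples used per prediction

def make_supervised(kpi_series, window=WINDOW):
    """Turn a KPI value array into (past window -> next value) training pairs."""
    X = [kpi_series[i:i + window] for i in range(len(kpi_series) - window)]
    y = [kpi_series[i + window] for i in range(len(kpi_series) - window)]
    return np.array(X), np.array(y)

# Synthetic PRB-load-rate history (%) for one MEC, for illustration only.
rng = np.random.default_rng(0)
history = 70 + 5 * np.sin(np.arange(500) / 10) + rng.normal(0, 1, 500)

X, y = make_supervised(history)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict the KPI value at the next time point from the most recent window.
next_kpi = model.predict(history[-WINDOW:].reshape(1, -1))[0]
print(f"predicted next PRB load rate: {next_kpi:.1f}%")
```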
If any KPI (e.g., PRB load rate, CQI, RSRQ) of one of the MECs (defined as the first MEC) fails to satisfy the expected conditions, letting that MEC continue to distribute content would greatly increase the network response time and sharply slow the data distribution process. The distribution operation of the first MEC must therefore be suspended and taken over by the remaining MECs.
In the embodiment of the invention, the hot spot data group stored in the first MEC must be distributed by other MEC nodes instead. Because a hot spot data group carries a large volume of data, having a single substitute MEC forward the entire group would easily reproduce the KPI shortfall seen at the first MEC; the invention therefore proposes a data splitting scheme that cuts the large data group into several small data groups, whose distribution greatly reduces network congestion.
The hot spot data group may be split into the plurality of sub hot spot data groups either equally or unequally. In the unequal case, the resource saturation of the different MECs is taken into account: the principle is that MECs with high resource saturation receive the sub hot spot data groups with smaller data volumes, and MECs with low resource saturation receive those with larger data volumes, as sketched below.
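As a sketch of the unequal-division principle only: one possible rule, assuming each candidate MEC's resource saturation is known as a fraction, sizes each sub hot spot data group in proportion to the MEC's spare capacity. The weighting below is an assumption of this illustration, not the only rule the embodiment admits.

```python
def split_by_spare_capacity(total_size_gb, saturations):
    """Split a hot spot data group so that MECs with low resource saturation
    receive larger sub hot spot data groups (unequal division)."""
    spare = [1.0 - s for s in saturations]  # spare capacity per candidate MEC
    total_spare = sum(spare)
    return [total_size_gb * sp / total_spare for sp in spare]

# Example: a 15 GB group split across MECs at 70%, 80% and 90% saturation.
print(split_by_spare_capacity(15, [0.7, 0.8, 0.9]))   # -> [7.5, 5.0, 2.5]
```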
S104, the first MEC determines a similar MEC cluster group with similar characteristics to the first MEC according to a similarity criterion, and sequentially distributes the plurality of sub hot spot data groups to different MECs of the similar MEC cluster group according to a load balancing strategy, so that the similar MEC cluster group sequentially distributes the plurality of sub hot spot data to a plurality of mobile terminals according to different routing paths.
In this embodiment of the present invention, the determining, by the first MEC, a similar MEC cluster group having similar characteristics to the first MEC according to the similarity criterion may specifically be:
acquiring PRB load rate and content demand level of adjacent MEC nodes of a first MEC;
threshold ranges are set for the PRB load rate and the content demand level (for example, a PRB load rate of 70%-80% and a content demand of 5 GB-10 GB), and the adjacent MEC nodes are filtered against these ranges to obtain the similar MEC cluster: neighbouring MECs whose PRB load rate lies within 70%-80% and whose content demand lies within 5 GB-10 GB are selected, and the MECs that do not meet the conditions are filtered out. The similarity criterion thus selects MEC nodes whose capacity or working efficiency is close to that of the first MEC to forward the hot spot data group in its place.
In addition, in another embodiment, the similar MEC cluster may be determined from the signal-to-noise ratio (SNR): the SNR of the first MEC's adjacent MEC nodes is acquired, a threshold range is set for the SNR, and the adjacent MEC nodes are filtered against this range to obtain the similar MEC cluster.
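A small sketch of the similarity screening under both criteria; the MecNode record and the threshold values mirror the examples above, and everything else is an assumption of this illustration.

```python
from dataclasses import dataclass

@dataclass
class MecNode:
    name: str
    prb_load: float           # PRB load rate, e.g. 0.75 for 75%
    content_demand_gb: float  # content demand level
    snr_db: float             # signal-to-noise ratio

def screen_similar_cluster(neighbors, prb_range=(0.70, 0.80),
                           demand_range=(5.0, 10.0), snr_range=None):
    """Keep only the adjacent MEC nodes whose features fall inside the
    threshold ranges of the similarity criterion (example values)."""
    cluster = []
    for mec in neighbors:
        if not (prb_range[0] <= mec.prb_load <= prb_range[1]):
            continue
        if not (demand_range[0] <= mec.content_demand_gb <= demand_range[1]):
            continue
        if snr_range and not (snr_range[0] <= mec.snr_db <= snr_range[1]):
            continue
        cluster.append(mec)
    return cluster

neighbors = [MecNode("MEC-A", 0.72, 6.0, 18.0),
             MecNode("MEC-B", 0.85, 7.0, 20.0),   # filtered out: PRB load too high
             MecNode("MEC-C", 0.78, 9.5, 15.0)]
print([m.name for m in screen_similar_cluster(neighbors)])  # -> ['MEC-A', 'MEC-C']
```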
Optionally, the distributing the multiple sub-hot-spot data groups to different MECs of the similar MEC cluster group in sequence according to a load balancing policy may specifically be:
acquiring resource load rates of different MECs of the similar MEC cluster group, and sequencing the resource load rates from low to high;
sorting the plurality of sub-hotspot data groups from high to low according to data size;
and sequentially sending the sorted sub hot spot data groups to the different MECs of the sorted similar MEC cluster, where the data sizes of the sub hot spot data groups are in inverse proportion to the corresponding MEC resource load rates. As shown in Fig. 3, in an illustrative example of the embodiment of the present invention, the five nodes MEC-A, MEC-B, MEC-C, MEC-D, and MEC-E have resource load rates of 70%, 74%, 77%, 72%, and 80% respectively, so ordered from low to high load they are MEC-A, MEC-D, MEC-B, MEC-C, MEC-E. There are likewise 5 sub hot spot data groups, denoted D1 to D5, with data sizes of 1 GB, 4 GB, 5 GB, 2 GB, and 3 GB; sorted from large to small they are D3, D2, D5, D4, D1. The allocation rule is then: D3 is assigned to MEC-A, D2 to MEC-D, D5 to MEC-B, D4 to MEC-C, and D1 to MEC-E.
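The allocation rule just illustrated reduces to sorting both lists and pairing them off; the sketch below reproduces the Fig. 3 example with the load rates and data sizes given in the text.

```python
def assign_sub_groups(mec_loads, sub_group_sizes):
    """Pair the largest sub hot spot data groups with the least-loaded MECs:
    MECs sorted by load rate ascending, groups sorted by size descending."""
    mecs_sorted = sorted(mec_loads, key=lambda kv: kv[1])           # low -> high
    groups_sorted = sorted(sub_group_sizes, key=lambda kv: -kv[1])  # high -> low
    return {group: mec for (mec, _), (group, _) in zip(mecs_sorted, groups_sorted)}

# The Fig. 3 example from the description (load rates and sizes from the text).
mec_loads = [("MEC-A", 0.70), ("MEC-B", 0.74), ("MEC-C", 0.77),
             ("MEC-D", 0.72), ("MEC-E", 0.80)]
sub_groups = [("D1", 1), ("D2", 4), ("D3", 5), ("D4", 2), ("D5", 3)]  # sizes in GB

print(assign_sub_groups(mec_loads, sub_groups))
# -> {'D3': 'MEC-A', 'D2': 'MEC-D', 'D5': 'MEC-B', 'D4': 'MEC-C', 'D1': 'MEC-E'}
```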
In the embodiment of the present invention, the similar MEC cluster group sequentially distributes the multiple sub-hotspot data to the multiple mobile terminals according to different routing paths, which may specifically be:
the similar MEC cluster sets the plurality of sub hot spot data to high priority and marks them as deterministic service traffic. Different data transmissions carry different transmission priorities: data streams with a high priority level are transmitted first, and data streams with a low priority level are transmitted once the high-priority streams have been sent successfully. A deterministic service is data distribution that is certain to occur in the future; the sub hot spot data belong to this class, since they are certain to be sent to different mobile terminals. Correspondingly, a non-deterministic service is one that may, but is not certain to, occur. The network traffic of deterministic services is called deterministic service traffic, and the corresponding non-deterministic network traffic is called non-deterministic service traffic.
The similar MEC cluster predicts the future non-deterministic service traffic; if the sum of the predicted non-deterministic service traffic and the deterministic service traffic exceeds a preset threshold, the sub hot spot data are sent over a first routing path and the non-hot-spot data over a second routing path, the service traffic corresponding to the non-hot-spot data being the non-deterministic service traffic. In the embodiment of the present invention, the sub hot spot data are defined as high priority and sent through a dedicated high-priority sending path, the first routing path; the non-hot-spot data are of medium or low priority, belong to the non-deterministic service, may be slightly delayed or suspended, and are therefore sent through the second, medium- or low-priority, routing path.
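A sketch of the routing decision described above; the embodiment does not say what happens when the predicted sum stays below the threshold, so the shared default path in this sketch is an assumption.

```python
def choose_routing_paths(predicted_nondet_gb, deterministic_gb, threshold_gb):
    """When predicted non-deterministic plus deterministic traffic exceeds the
    preset threshold, split hot and non-hot data onto separate paths."""
    if predicted_nondet_gb + deterministic_gb > threshold_gb:
        return {"sub_hotspot_data": "first_routing_path",   # high priority
                "non_hotspot_data": "second_routing_path"}  # medium/low priority
    # Assumed behaviour below the threshold: one shared default path suffices.
    return {"sub_hotspot_data": "default_path",
            "non_hotspot_data": "default_path"}

print(choose_routing_paths(predicted_nondet_gb=6.0,
                           deterministic_gb=5.0,
                           threshold_gb=10.0))
# -> {'sub_hotspot_data': 'first_routing_path', 'non_hotspot_data': 'second_routing_path'}
```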
The similar MEC cluster predicts the future non-deterministic service traffic with a long short-term memory (LSTM) model. LSTM is a special kind of RNN designed mainly to overcome the vanishing-gradient and exploding-gradient problems of training on long sequences, and it performs better on longer sequences than an ordinary RNN.
An LSTM has three main internal stages:
First, the forgetting stage, which selectively forgets the input passed in from the previous node; put simply, it forgets the unimportant and remembers the important.
Second, the selective memory stage, which selectively "remembers" the inputs of the current step: the important parts are recorded heavily and the unimportant parts only lightly.
Third, the output stage, which determines what is emitted as the output of the current state.
Predicting service traffic with an LSTM is prior art and is not described further here.
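A minimal Keras sketch of LSTM traffic prediction, assuming a univariate non-deterministic traffic series; the window length, layer width, epoch count, and synthetic series are illustrative assumptions, not parameters of the embodiment.

```python
import numpy as np
from tensorflow import keras

WINDOW = 12  # assumed number of past traffic samples used per prediction

# Synthetic non-deterministic traffic history in GB (illustration only).
t = np.arange(600)
traffic = 3 + np.sin(t / 20) + 0.1 * np.random.default_rng(1).normal(size=600)

# Build (past window -> next value) pairs shaped (samples, timesteps, features).
X = np.stack([traffic[i:i + WINDOW] for i in range(len(traffic) - WINDOW)])[..., None]
y = traffic[WINDOW:]

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),    # the gates realise the forget/select/output stages
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_traffic = model.predict(traffic[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)
print(f"predicted next non-deterministic traffic: {next_traffic[0, 0]:.2f} GB")
```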
In the embodiment of the invention, a hot event is strongly time-sensitive, and the traffic threshold of the hot spot data group can change over time. Therefore, if the request volume of the hot spot data group per unit time falls below the hot spot traffic threshold, the cloud server downgrades the hot event corresponding to the hot spot data group to a non-hot event and lowers the priority of the hot spot data group. The cloud server may then continue to discover and predict new hot events and generate a new hot spot data group (a second hot spot data group), for example:
the cloud server receives terminal request messages transmitted by a plurality of MECs and extracts different labels in the terminal request messages;
the cloud server analyzes the plurality of tags, and defines the tags with the unit time request quantity exceeding the hot spot flow threshold as hot event tags;
the cloud center server generates a second hot spot data group corresponding to the hot spot event tag, and sends the second hot spot data group to the MECs.
According to the method and the device provided by the embodiment of the invention, in order to avoid untimely responses caused by surging hot spot data traffic during hot spot data distribution, the key performance indicators (KPIs) of a plurality of MECs are predicted in time; when a KPI is found to fall short of the standard, the hot spot data group is split and several adjacent MECs share the task of hot spot data content distribution, which effectively reduces the load rate of any single MEC, raises the utilization rate of network resources, and improves the QoS of the network.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon computer-executable instructions for performing the method in the foregoing embodiments.
The embodiment of the invention also provides a device comprising a memory and a processor, wherein the memory stores computer-executable instructions, and the processor implements the method when executing those instructions from the memory.
The method and the device provided by the embodiment of the invention also evaluate the containers of adjacent nodes and, through QoS prediction and secondary screening-condition filtering, ensure that the containers of the migration target still meet the QoS guarantee after data migration, thereby improving user experience.
FIG. 4 is a diagram illustrating the hardware composition of the apparatus according to one embodiment. It will be appreciated that FIG. 4 shows only a simplified design of the apparatus; in practical applications the apparatus may also include other necessary elements, including but not limited to any number of input/output systems, processors, controllers, and memories, and any apparatus that can implement the big data management method of the embodiments of the present application falls within the protection scope of the present application.
The memory includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), which is used for storing instructions and data.
The input system is for inputting data and/or signals and the output system is for outputting data and/or signals. The output system and the input system may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more Central Processing Units (CPUs), and in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU. The processor may also include one or more special purpose processors, which may include GPUs, FPGAs, etc., for accelerated processing.
The memory is used to store program codes and data of the network device.
The processor is used for calling the program codes and data in the memory and executing the steps in the method embodiment. Specifically, reference may be made to the description of the method embodiment, which is not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the division of the unit is only one logical function division, and other division may be implemented in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. The shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable system. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a read-only memory (ROM), or a Random Access Memory (RAM), or a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape, a magnetic disk, or an optical medium, such as a Digital Versatile Disk (DVD), or a semiconductor medium, such as a Solid State Disk (SSD).
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for resource scheduling based on big data is characterized by comprising the following steps:
the cloud server analyzes the text of the collected events in a period, analyzes the theme of the hot event, trains a theme-tag model, outputs tags corresponding to the theme based on the theme-tag model, counts the number of requests with the theme in a period, outputs the tags corresponding to the theme by using the theme-tag model if the increase rate of the number of requests is higher than a first threshold value, and retrieves the associated hot data group based on the corresponding tags; focusing on the accumulated growth rate of different labels of the subject, thereby outputting a correct hot spot event and generating a corresponding hot spot data set;
the cloud server sends the hot spot data group to a plurality of mobile edge nodes MEC;
the plurality of MECs transmit the hot spot data group to a plurality of mobile terminals;
the cloud server predicts key performance indicators KPIs of the MECs, if the predicted KPI of a first MEC does not meet quality of service QoS conditions, the distribution operation of the first MEC is suspended, and a hot spot data group stored by the first MEC is subjected to data splitting to be split into a plurality of sub hot spot data groups;
the first MEC determines a similar MEC cluster with similar characteristics to the first MEC according to a similarity criterion, and sequentially distributes the sub-hot-point data sets to different MECs of the similar MEC cluster according to a load balancing strategy, so that the similar MEC cluster sequentially distributes the sub-hot-point data sets to a plurality of mobile terminals according to different routing paths.
2. The method of claim 1, wherein the cloud server predicts Key Performance Indicators (KPIs) for the plurality of MECs, comprising:
acquiring KPI historical data of the MEC, and generating a KPI value array from the KPI historical data;
and inputting the KPI value array into a prediction model, analyzing the KPI value array by the prediction model by using a regression algorithm, and outputting KPI values of future time points.
3. The method of claim 1 or 2, wherein the first MEC determining a similar MEC cluster with similar characteristics as the first MEC according to a similarity criterion comprises:
acquiring PRB load rate and content demand level of adjacent MEC nodes of a first MEC;
threshold ranges of PRB load rate and content demand level are set respectively, and the adjacent MEC nodes are screened based on the threshold ranges to screen out the similar MEC cluster groups.
4. The method of claim 1 or 2, wherein the first MEC determining a similar MEC cluster with similar characteristics as the first MEC according to a similarity criterion comprises:
acquiring the signal-to-noise ratio SNR of adjacent MEC nodes of the first MEC;
setting a threshold range of the SNR, and screening the adjacent MEC nodes based on the threshold range to screen out the similar MEC cluster group.
5. The method according to claim 1, wherein before the cloud server sends a hotspot data set to a plurality of mobile edge nodes, MECs, the method further comprises:
the cloud server predicts a hot event based on an artificial intelligence algorithm and generates a hot data group corresponding to the hot event.
6. The method of claim 1, wherein sequentially distributing the plurality of sub-hotspot data sets to different MECs of the similar MEC cluster according to a load balancing policy comprises:
acquiring resource load rates of different MECs of the similar MEC cluster group, and sequencing the resource load rates from low to high;
sorting the plurality of sub-hotspot data groups from high to low according to data size;
and sequentially sending the sequenced multiple sub-hot spot data groups to different MECs of the sequenced similar MEC cluster group, wherein the data sizes of the multiple sub-hot spot data groups are in inverse proportion to the corresponding MEC resource load rates.
7. The method of claim 1, wherein the similar MEC cluster sequentially distributes the plurality of sub-hotspot data sets to a plurality of mobile terminals according to different routing paths, comprising:
the similar MEC cluster sets the plurality of sub-hotspot data groups to be in high priority, and sets the plurality of sub-hotspot data groups to be in deterministic service flow;
and the similar MEC cluster predicts the future non-deterministic service traffic, if the sum of the predicted non-deterministic service traffic and the deterministic service traffic exceeds a preset threshold, the sub-hotspot data group is sent according to a first routing path, and non-hotspot data is sent according to a second routing path, wherein the service traffic corresponding to the non-hotspot data is the non-deterministic service traffic.
8. The method of claim 7, wherein the similar MEC cluster predicts a future non-deterministic traffic size, comprising:
the similar MEC cluster predicts the future non-deterministic traffic flow size through a long short-term memory model (LSTM).
9. The method of claim 7, further comprising:
if the request quantity of the hotspot data group in unit time is lower than a hotspot traffic threshold, the cloud server downgrades the hotspot event corresponding to the hotspot data group to a non-hotspot event, and reduces the priority of the hotspot data group;
the cloud server receives the terminal request messages transmitted by the MECs and extracts different labels in the terminal request messages;
the cloud server analyzes different tags in the terminal request message, and defines tags with unit time request quantity exceeding the hot spot flow threshold as hot event tags;
the cloud server generates a second hotspot data group corresponding to the hotspot event label, and sends the second hotspot data group to the MECs.
10. An apparatus comprising a memory having computer-executable instructions stored thereon and a processor that, when executing the computer-executable instructions on the memory, performs the method of any of claims 1 to 9.
CN202110849911.1A 2021-07-27 2021-07-27 Resource scheduling method and device based on big data Active CN113596146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110849911.1A CN113596146B (en) 2021-07-27 2021-07-27 Resource scheduling method and device based on big data


Publications (2)

Publication Number Publication Date
CN113596146A CN113596146A (en) 2021-11-02
CN113596146B true CN113596146B (en) 2022-10-04

Family

Family ID: 78250320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110849911.1A Active CN113596146B (en) 2021-07-27 2021-07-27 Resource scheduling method and device based on big data

Country Status (1)

Country Link
CN (1) CN113596146B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114374617A (en) * 2021-12-13 2022-04-19 中电信数智科技有限公司 Fault-tolerant prefabricating method for deterministic network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713055B (en) * 2017-02-27 2019-06-14 电子科技大学 A kind of energy-efficient deployment method of virtual CDN
CN108833468B (en) * 2018-04-27 2021-05-11 广州西麦科技股份有限公司 Video processing method, device, equipment and medium based on mobile edge calculation
CN110602180B (en) * 2019-08-26 2021-03-19 中国生态城市研究院有限公司 Big data user behavior analysis method based on edge calculation and electronic equipment
US11146455B2 (en) * 2019-12-20 2021-10-12 Intel Corporation End-to-end quality of service in edge computing environments
WO2021140950A1 (en) * 2020-01-09 2021-07-15 ソニーグループ株式会社 Content distribution system, content distribution method, and program

Also Published As

Publication number Publication date
CN113596146A (en) 2021-11-02


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right (effective date of registration: 2022-01-27)
    Applicant before: Peng Liang (Bantian Shangpinya Garden, Longgang District, Shenzhen, Guangdong, 518129)
    Applicant after: Guizhou Anhe Shengda Enterprise Management Co., Ltd. (Room 17011, Unit 1, Building C, Jianbo International Plaza, No. 188 Huangguoshu Street, Huaxi Street, Xixiu District, Anshun City, Guizhou Province, 561000)
TA01: Transfer of patent application right (effective date of registration: 2022-09-06)
    Applicant before: Guizhou Anhe Shengda Enterprise Management Co., Ltd.
    Applicant after: Beijing Saisheng Technology Co., Ltd. (8th Floor, Building 1, Yard 35, Lugu Road, Shijingshan District, Beijing, 100040)
GR01: Patent grant