CN117395697A - Method for predicting communication load and computer readable medium - Google Patents

Method for predicting communication load and computer readable medium

Info

Publication number
CN117395697A
Authority
CN
China
Prior art keywords: load, parameter, load parameter, model, value distribution
Legal status: Pending
Application number
CN202210737562.9A
Other languages
Chinese (zh)
Inventor
张羽
刘巧艳
李建国
毛凯
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202210737562.9A priority Critical patent/CN117395697A/en
Priority to PCT/CN2023/101336 priority patent/WO2024001867A1/en
Publication of CN117395697A publication Critical patent/CN117395697A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/08 Learning methods
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/50 Business processes related to the communications industry
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/06 Testing, supervising or monitoring using simulated traffic


Abstract

The present disclosure provides a method of communication load prediction, comprising: acquiring information of a plurality of load parameters, where the information of each load parameter comprises the parameter value distribution of that load parameter; determining, according to the regularity of each load parameter, whether it is a strong regular load parameter or a weak regular load parameter; in the case of a strong regular load parameter, determining a historical load translation model for it, where the historical load translation model determines the parameter value distribution of the load parameter in a future predetermined time period from its parameter value distribution in a historical predetermined time period; and, in the case of a weak regular load parameter, training a machine learning model for it, where the machine learning model predicts the parameter value distribution of the load parameter in a future time period after a historical time period from its parameter value distribution in that historical time period.

Description

Method for predicting communication load and computer readable medium
Technical Field
The present disclosure relates to the field of communications and artificial intelligence, and in particular, to a method for predicting a communication load, and a computer readable medium.
Background
In communications (especially wireless communications), a large number of time-related indicators (such as load parameters) change dynamically with users and services, and in many cases algorithm and policy decisions (such as energy-saving policies) need to be made according to the data of these load parameters.
In some related art, the load parameter is either preconfigured or detected in real time. With preconfiguration, different cells, different scenes, and different time periods are configured with the same policy, which is homogeneous and poorly adapted to specific scenarios. With real-time detection, the algorithm and policy can be adjusted only after the load parameters have already changed, which introduces hysteresis and degrades user experience.
Disclosure of Invention
The present disclosure provides a method of communication load prediction and a computer readable medium.
In a first aspect, an embodiment of the present disclosure provides a method of communication load prediction, including:
acquiring information of various load parameters; the information of each load parameter comprises a parameter value distribution of the load parameter;
according to the regularity of the load parameters, determining whether each load parameter is a strong regularity load parameter or a weak regularity load parameter; the regularity of each load parameter represents the similarity of the parameter value distribution of the load parameter in two preset time periods;
Under the condition that the load parameter is a strong regular load parameter, determining a historical load translation model of the strong regular load parameter; the historical load translation model is used for determining the parameter value distribution of the load parameter in a future preset time period according to the parameter value distribution of the load parameter in a historical preset time period;
training to obtain a machine learning model of the weak regular load parameter under the condition that the load parameter is the weak regular load parameter; the machine learning model is used for predicting the parameter value distribution of the load parameter in a future time period after the historical time period according to the parameter value distribution of the load parameter in the historical time period.
In a second aspect, the disclosed embodiments provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method of any of the disclosed embodiments.
In the embodiment of the disclosure, the load parameters are predicted through the model, so that the load parameters can be known in advance, and the judgment of algorithms and strategies (such as energy-saving strategies) can be carried out in advance, thereby avoiding hysteresis; meanwhile, in the embodiment of the disclosure, corresponding prediction models are respectively set for different cells and different parameters, so that accurate prediction can be realized according to the real conditions of the load parameters of each cell, and the homogeneity is avoided; in addition, in the embodiment of the disclosure, different prediction models are set according to the intensity of the regularity of the load parameters, and a simple historical load translation model is directly adopted for the load parameters with strong regularity, so that the calculation amount required by model training, load parameter prediction and the like is greatly reduced, and the implementation is easy.
Drawings
In the drawings of the embodiments of the present disclosure:
FIG. 1 is a flow chart of a method of traffic load prediction provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of traffic load prediction provided by another embodiment of the present disclosure;
FIG. 3 is a process diagram of a method for traffic load prediction according to an embodiment of the present disclosure;
fig. 4 is a logic diagram of regularity judgment in a method for predicting communication load according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a sliding window in a method for traffic load prediction according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of LSTM structure in a method for predicting communication load according to an embodiment of the disclosure;
fig. 7 is a block diagram of a computer readable medium according to an embodiment of the present disclosure.
Detailed Description
In order to better understand the technical solutions of the present disclosure, the following describes in detail a method for predicting communication load and a computer readable medium provided by an embodiment of the present disclosure with reference to the accompanying drawings.
The present disclosure will be described more fully hereinafter with reference to the accompanying drawings, but the embodiments shown may be embodied in different forms and should not be construed as limited to the embodiments set forth below. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The accompanying drawings, which are included to provide a further understanding of embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the detailed embodiment, do not limit the disclosure. The above and other features and advantages will become more readily apparent to those skilled in the art from the description of the detailed embodiments with reference to the accompanying drawings.
The present disclosure may be described with reference to plan and/or cross-sectional views with the aid of idealized schematic diagrams of the present disclosure. Accordingly, the example illustrations may be modified in accordance with manufacturing techniques and/or tolerances.
Embodiments of the disclosure and features of embodiments may be combined with each other without conflict.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The term "and/or" as used in this disclosure includes any and all combinations of one or more of the associated listed items. As used in this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," "including," "having," and/or "made of," when used in this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used in this disclosure have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present disclosure is not limited to the embodiments shown in the drawings, but includes modifications of the configuration formed based on the manufacturing process. Thus, the regions illustrated in the figures have schematic properties and the shapes of the regions illustrated in the figures illustrate the particular shapes of the regions of the elements, but are not intended to be limiting.
In a first aspect, embodiments of the present disclosure provide a method of communication load prediction.
The method of the embodiments of the present disclosure is used for predicting the load of a communication network element; in particular, it determines a prediction model of load parameters within a certain range (such as one network element, one cell, and the like) and sends the prediction model to the corresponding network element, so that the network element predicts with the prediction model and makes algorithm and policy decisions (such as energy-saving policies) according to the predicted load parameters.
In the embodiment of the disclosure, the network element refers to a device or a module in the communication network, which needs to make some decisions according to load parameters, and the device or the module may be specifically in the form of a network manager, a network element, a base station, and the like.
In the embodiments of the present disclosure, the load parameters refer to parameters that affect the load of the network element, including but not limited to: NR (New Radio) carrier RRC (Radio Resource Control) connection number, NR carrier uplink PRB (Physical Resource Block) usage, NR carrier downlink PRB usage, cell group uplink PRB usage, cell group downlink PRB usage, LTE (Long Term Evolution) DSS (Dynamic Spectrum Sharing) cell group uplink PRB usage number, LTE DSS cell group downlink PRB usage number, LTE DSS cell group RRC connection number, LTE cell uplink PRB usage number, LTE cell downlink PRB usage number, LTE cell RRC number, and the like.
Referring to fig. 1, a method of communication load prediction according to an embodiment of the present disclosure includes, but is not limited to, at least the following steps:
s101, acquiring information of various load parameters.
Wherein the information of each load parameter includes a parameter value distribution of the load parameter.
The information of the load parameters may be collected from different communication entities such as the network manager, network elements, base station profiles, and data service modules. Specifically, it comprises the parameter values of the load parameter at each time in a certain historical time period (that is, actual historical parameter values), i.e., the parameter value distribution of the load parameter.
It should be understood that the specific data format (e.g., sampling granularity), data amount, etc. of the information of the load parameters obtained above may be set as desired. For example, the parameter values of a load parameter within one month before the sampling time may be collected, with a granularity (CollectDataStep) of 15 minutes and a sampling duration (CollectDataTime) of 30 days.
It should be understood that the information of a load parameter may include, besides the parameter value distribution, other information such as tag information for the time corresponding to each parameter value, for example the day of the week, whether it is a special holiday, or whether a major event occurred. With this additional information, more factors can be considered when predicting the load parameter, enabling more accurate and comprehensive prediction.
It should be understood that the obtained information of the load parameters may also be subjected to operations such as data cleansing and data complement, which will not be described in detail herein.
S102, determining that each load parameter is a strong regular load parameter or a weak regular load parameter according to the regularity of the load parameter.
Wherein the regularity of each load parameter characterizes the similarity of the parameter value distribution of the load parameter in two predetermined time periods.
For each load parameter, the parameter value distributions in different predetermined time periods are not identical, but they may be similar, and this similarity shows whether the parameter value distribution of the load parameter is regular. If the parameter value distributions of a load parameter in two predetermined time periods are similar, the regularity of the load parameter is strong, and it is called a strong regular load parameter; if the similarity between the two distributions is low, the load parameter varies irregularly, and it is called a weak regular load parameter.
The above two predetermined time periods may be selected in various ways, but for accuracy, the two adjacent predetermined time periods closest to the sampling time may be used; for example, the week before the sampling time and the week before that may be taken as the two predetermined time periods.
S1031, determining a historical load translation model of the strong regular load parameter under the condition that the load parameter is the strong regular load parameter.
The historical load translation model is used for determining the parameter value distribution of the load parameter in a future preset time period according to the parameter value distribution of the load parameter in a historical preset time period.
S1032, training to obtain a machine learning model of the weak regular load parameter under the condition that the load parameter is the weak regular load parameter.
The machine learning model is used for predicting the parameter value distribution of the load parameter in a future time period after the historical time period according to the parameter value distribution of the load parameter in the historical time period.
Judging the type of regularity of each load parameter, and specifically making different subsequent treatments.
The historical load translation model is determined for the strong regular load parameters. Because the regularity of such a load parameter is strong, the historical load translation model can directly determine the parameter value distribution in a future predetermined time period to be predicted (such as the week after the prediction time point) from the parameter value distribution in a historical predetermined time period (such as the week before the prediction time point), thereby realizing prediction.
For the weak regular load parameters, it is determined that a machine learning model is used. The machine learning model is obtained by training with data from the information of the load parameter as training samples, and it predicts future parameter value distribution from some historical parameter value distribution.
The machine learning model may predict the parameter value distribution of the following day according to the parameter value distribution of the preceding week.
It should be understood that there are various specific ways of training the machine learning model, but the overall idea is to have the model predict the parameter value distribution at a later time (also within the historical period) from earlier historical parameter value distribution, compute the current prediction loss of the model against the actual parameter value distribution at that later time, and then adjust the parameters of the model according to the loss until the model achieves good prediction.
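As an illustrative sketch of this training-sample construction (the constants and helper name here are assumptions for illustration, not values fixed by the disclosure), each week-long window of the historical series becomes a model input, paired with the day that immediately follows it as the target; the window then slides forward by one day, in the spirit of the sliding window of fig. 5:

```python
# Illustrative sketch (hypothetical names/values): build (preceding week -> following day)
# training pairs by sliding a window over a historical load series sampled
# every 15 minutes, i.e. 96 points per day.

SAMPLES_PER_DAY = 96   # 24 h at 15-minute granularity (CollectDataStep)
WINDOW_DAYS = 7        # input: the preceding week
HORIZON_DAYS = 1       # target: the following day

def make_training_pairs(series, step=SAMPLES_PER_DAY):
    """Slide a week-long window over the series; each window predicts the next day."""
    win = WINDOW_DAYS * SAMPLES_PER_DAY
    hor = HORIZON_DAYS * SAMPLES_PER_DAY
    pairs, start = [], 0
    while start + win + hor <= len(series):
        x = series[start:start + win]               # one week of history
        y = series[start + win:start + win + hor]   # the day that follows it
        pairs.append((x, y))
        start += step                               # advance by one day
    return pairs

# 30 days of dummy data (CollectDataTime = 30 days)
pairs = make_training_pairs(list(range(30 * SAMPLES_PER_DAY)))
print(len(pairs))   # 23 window/target pairs fit in 30 days
```

The loss is then computed between the model output for each `x` and the actual `y`, and the model parameters are adjusted accordingly.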
It should be understood that the above steps S1031 and S1032 are two processing manners for different cases, and thus the description and the order of the reference numerals thereof do not represent the order of execution thereof.
It should be understood that when there are multiple load parameters, the prediction model of each load parameter is determined according to its own regularity, so that all load parameters may be a historical load translation model, all load parameters may be a machine learning model, or some load parameters may be a historical load translation model, and some load parameters may be machine learning models.
In the embodiment of the disclosure, the load parameters are predicted through the model, so that the load parameters can be known in advance, algorithm and strategy judgment can be performed in advance, and hysteresis is avoided; meanwhile, in the embodiment of the disclosure, corresponding prediction models are respectively set for different cells and different parameters, so that accurate prediction can be realized according to the real conditions of the load parameters of each cell, and the homogeneity is avoided; in addition, in the embodiment of the disclosure, different prediction models are set according to the intensity of the regularity of the load parameters, and a simple historical load translation model is directly adopted for the load parameters with strong regularity, so that the calculation amount required by model training, load parameter prediction and the like is greatly reduced, and the implementation is easy.
In some embodiments, the predetermined period of time is seven days.
As one way of an embodiment of the present disclosure, the above predetermined time period may be seven days (one week). This is because, in general, the parameter value distribution of a load parameter has a certain law of variation not only across the times of one day but also across the days of one week, so a predetermined time period of one week can well capture such variation.
It should be understood that the predetermined time period is not limited thereto, as it may be one day, one month, or the like.
In some embodiments, the historical load translation model is used to take the parameter value distribution of the load parameter over a closest historical predetermined time period as the parameter value distribution of the load parameter over a closest future predetermined time period.
As one way of an embodiment of the present disclosure, the historical load translation model corresponding to a strong regular load parameter may directly use the parameter value distribution of the closest previous predetermined time period as (i.e., translate it into) the predicted parameter value distribution of the closest following predetermined time period.
For example, the parameter value distribution of the previous week of the predicted time point may be used as the parameter value distribution of the next week of the predicted time point.
It should be appreciated that the specific manner of the historical load translation model is not limited thereto. For example, the historical load translation model may also use the average of the parameter values at the two corresponding times in the previous two weeks as the predicted parameter value for the corresponding time in the next week.
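Both variants amount to a few lines of code. The following sketch (with hypothetical function names and an arbitrary period length) illustrates plain translation and the two-week averaging variant:

```python
# Illustrative sketch of the historical load translation model.

def translate_last_period(history, period_len):
    """Reuse the most recent period's distribution as the prediction."""
    return history[-period_len:]

def translate_mean_of_two(history, period_len):
    """Variant: average the two most recent periods point by point."""
    prev1 = history[-period_len:]                   # closest period
    prev2 = history[-2 * period_len:-period_len]    # the period before it
    return [(a + b) / 2 for a, b in zip(prev2, prev1)]

history = [10, 12, 11, 13,  20, 22, 21, 23]         # two periods of length 4
print(translate_last_period(history, 4))            # [20, 22, 21, 23]
print(translate_mean_of_two(history, 4))            # [15.0, 17.0, 16.0, 18.0]
```

No training is needed in either case, which is what makes this model so cheap for strong regular load parameters.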
In some embodiments, the machine learning model includes: and any one of a long-short-term memory neural network model LSTM, a one-dimensional convolutional neural network model and a conversion template transducer model.
As one approach of embodiments of the present disclosure, the machine learning model corresponding to the weak regular load parameters may be a long short-term memory neural network model (LSTM, Long Short-Term Memory).
Alternatively, the machine learning model may be another deep learning model such as a one-dimensional convolutional neural network model or a Transformer model.
In some embodiments, referring to fig. 2, after determining the historical load translation model of the strong regular load parameter (S1031), further comprising:
s1041, transmitting the historical load translation model of the load parameter to the network element corresponding to the load parameter.
After training the machine learning model to obtain the weak regular load parameters (S1032), further comprising:
and S1042, transmitting the machine learning model of the load parameter to the network element corresponding to the load parameter.
As a way of an embodiment of the present disclosure, after the prediction model (a historical load translation model or a machine learning model) is obtained, it may be sent to the corresponding network element; that is, the prediction model for a certain load parameter may be sent to the network element that generates that load parameter, so that the network element predicts the load parameter with the received prediction model and uses the prediction for algorithm and policy planning.
It should be understood that it is also possible not to issue the prediction model to the network element, and instead to perform the prediction directly by the execution body of the method (e.g., a network manager or server).
In some embodiments, referring to fig. 2, obtaining information of a plurality of load parameters (S101) includes:
s1011, acquiring information of various load parameters, and normalizing parameter value distribution of each load parameter.
S1012, sending the normalization parameters of each load parameter to the network element corresponding to the load parameter.
As a way of the embodiments of the present disclosure, the prediction model may process "normalized" data, so the parameter values of the load parameters need to be normalized first. The data generated by the prediction model is therefore also normalized, but the network element needs the actual predicted value of the load parameter; hence the normalization parameters are also sent to the network element corresponding to the load parameter, so that the network element can restore the corresponding data from its normalized form.
There are various specific normalization methods; for example, max-min normalization may be used.
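As a sketch of this variant (assuming max-min normalization; the function names are illustrative), the normalization parameters are simply the minimum and maximum of the historical values, and the network element inverts the mapping with the same pair:

```python
# Illustrative max-min (min-max) normalization and its inverse.

def fit_min_max(values):
    """The normalization parameters sent to the network element."""
    return min(values), max(values)

def normalize(values, lo, hi):
    span = (hi - lo) or 1.0          # guard against a constant series
    return [(v - lo) / span for v in values]

def denormalize(values, lo, hi):
    """Performed on the network element to restore actual predicted values."""
    span = (hi - lo) or 1.0
    return [v * span + lo for v in values]

raw = [20.0, 35.0, 50.0]                      # e.g. PRB usage samples
lo, hi = fit_min_max(raw)
print(normalize(raw, lo, hi))                 # [0.0, 0.5, 1.0]
print(denormalize([0.0, 0.5, 1.0], lo, hi))   # [20.0, 35.0, 50.0]
```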
In some embodiments, referring to fig. 2, after transmitting the historical load translation model of the load parameter to the network element corresponding to the load parameter (S1041) or transmitting the machine learning model of the load parameter to the network element corresponding to the load parameter (S1042), the method further includes:
s105, receiving the prediction effect information of the network element.
The prediction effect information characterizes the accuracy of the parameter value distribution of the load parameters obtained by the network element according to the historical load translation model or the machine learning model.
And S106, returning to the step of acquiring information of various load parameters when the predicted effect information is lower than a predetermined standard.
As a way of this embodiment of the present disclosure, after the prediction model is determined, the prediction effect (i.e., whether the predicted parameter value distribution of the load parameter matches the actual one) may be further analyzed (e.g., periodically). When the prediction effect degrades because the prediction model ages or deteriorates or the situation changes, the information of the load parameters is collected again, the prediction model is obtained anew (i.e., the prediction model is updated), and it is sent to the network element again. Of course, if the prediction effect has not degraded, the current prediction model may continue to be used and its prediction effect analyzed.
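One way such a feedback loop could look (the metric and the 0.8 threshold below are illustrative assumptions; the disclosure leaves the "predetermined standard" open) is for the network element to score each prediction against the actually observed distribution and report when the score falls below the standard:

```python
# Illustrative prediction-effect check (hypothetical metric and threshold).

def prediction_accuracy(predicted, actual):
    """1 minus the mean absolute percentage error, floored at 0."""
    errs = [abs(p - a) / a for p, a in zip(predicted, actual) if a != 0]
    if not errs:
        return 1.0
    return max(0.0, 1.0 - sum(errs) / len(errs))

ACCURACY_THRESHOLD = 0.8   # the assumed "predetermined standard"

def needs_retraining(predicted, actual):
    """True -> re-collect load information and rebuild the prediction model."""
    return prediction_accuracy(predicted, actual) < ACCURACY_THRESHOLD

print(needs_retraining([100, 100], [100, 105]))  # small error  -> False
print(needs_retraining([100, 100], [100, 300]))  # large drift  -> True
```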
In some embodiments, determining whether each load parameter is a strongly regular load parameter or a weakly regular load parameter based on the regularity of the load parameters comprises:
the method comprises the steps of determining load parameters, of which the waveform trend similarity and the numerical similarity respectively accord with corresponding threshold ranges, as strong regular load parameters, and determining other load parameters as weak regular load parameters; the waveform trend similarity of each load parameter represents the similarity of the variation trend of the parameter value distribution of the load parameter in two preset time periods, and the numerical similarity of each load parameter represents the similarity of the parameter value of the load parameter in two preset time periods.
As one way of an embodiment of the present disclosure, the regularity of the load parameter may include both the waveform trend similarity and the numerical similarity.
Waveform trend similarity means that the variation rules of the parameter value distributions in the two predetermined time periods are similar; for example, across two weeks, if the parameter value of a certain load parameter is higher on Monday and lower on Thursday in both weeks, the variation rules of the two distributions are similar and the waveform trend similarity is high.
Numerical similarity means that the "absolute values" of the load parameter within the two predetermined time periods do not differ too much.
In the embodiment of the disclosure, the corresponding threshold ranges (for example, greater than 0.9 and less than 20%) may be set for the waveform trend similarity and the numerical similarity, respectively, and only when the waveform trend similarity and the numerical similarity of one load parameter both conform to the corresponding threshold ranges, they are considered as the strong regular load parameters.
It should be understood that there are various specific ways of determining regularity from the waveform trend similarity and the numerical similarity. For example, the waveform trend similarity may be calculated and compared with its threshold range first, and the numerical similarity calculated and compared with its threshold range only if the former conforms; if the waveform trend similarity does not conform to its threshold range, the load parameter is directly judged to be a weak regular load parameter and the numerical similarity need not be calculated.
In some embodiments, the waveform trend similarity is calculated by the following formula:

self_cov = σ(train_y_front, train_y_behind)

and the numerical similarity is calculated by the following formula:

relative_mean_diff = |mean(train_y_front) − mean(train_y_behind)| / max(mean(train_y_front), mean(train_y_behind))

wherein self_cov represents the waveform trend similarity, σ represents the Pearson correlation coefficient operation, train_y_front represents the parameter value distribution of the load parameter in a first predetermined time period, train_y_behind represents the parameter value distribution of the load parameter in a second predetermined time period, relative_mean_diff represents the numerical similarity, mean represents the averaging operation, and max represents the maximum operation.
As one way of an embodiment of the present disclosure, the waveform trend similarity may be calculated from the Pearson correlation coefficient using the above formula. Compared with similarity measures based on features such as Euclidean distance or cosine value, its normalized calculation eliminates the influence of different numerical scales.
The numerical similarity may be calculated according to the above formula, i.e., the difference between the mean values of the load parameter over the two predetermined time periods, taken as a proportion of the larger of the two means; evidently, a smaller proportion indicates a higher numerical similarity.
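As a concrete illustration, a minimal sketch of the two similarity measures, assuming week-long value vectors and the formulas above (the function names are hypothetical):

```python
import numpy as np

def waveform_trend_similarity(train_y_front, train_y_behind):
    # Pearson correlation between the two weekly parameter value
    # distributions; values near 1 mean the waveform repeats.
    return np.corrcoef(train_y_front, train_y_behind)[0, 1]

def numerical_similarity(train_y_front, train_y_behind):
    # Difference of the two weekly means, as a proportion of the
    # larger mean; smaller values mean closer "absolute" levels.
    m_front, m_behind = np.mean(train_y_front), np.mean(train_y_behind)
    return abs(m_front - m_behind) / max(m_front, m_behind)
```

For example, a load that doubles every value week-over-week keeps a waveform trend similarity of 1.0 but has a numerical similarity of 0.5, and would therefore not be classified as strongly regular.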
It should be appreciated that the specific algorithm for calculating the waveform trend similarity and the numerical similarity is not limited thereto.
A specific method of traffic load prediction according to an embodiment of the present disclosure is described below by way of example with reference to fig. 3, which may include:
stage one: data acquisition and data preprocessing
Data acquisition obtains the information of the required load parameters from modules such as the network manager, the network element, the base station profile, or the data service module. After data acquisition is completed, data preprocessing such as data cleaning and missing-data completion is performed for use in the subsequent steps.
The data acquisition and data preprocessing may specifically include:
A1.1, data acquisition: collect, at granularity CollectDataStep (in minutes), the information of the load parameters over a period of CollectDataTime (in days) from modules such as the network manager, the network element, or the base station profile, i.e., obtain the training data TrainingData (the parameter values of the load parameters).
For example, 30 days of data may be collected at 15 minutes granularity, so that 96 data per day are available.
The loading parameters include, but are not limited to, NR carrier RRC connection number, NR carrier uplink PRB usage, NR carrier downlink PRB usage, cell group uplink PRB usage, cell group downlink PRB usage, LTE DSS cell group uplink PRB usage, LTE DSS cell group downlink PRB usage, LTE DSS cell group RRC connection number, LTE cell uplink PRB usage, LTE cell downlink PRB usage, LTE cell RRC number, and the like.
A1.2, data time axis completion: during data acquisition, data for a given granularity slot may be missing, so the corresponding time axis must be filled in, with the filled entries left empty.
A1.3, data time axis de-duplication: during data acquisition, the same granularity slot may be collected more than once, so duplicates must be removed; the de-duplication rule may be to keep the data that appears first in the data set and delete the subsequent duplicates.
A1.4, data completion: if the missing data is at the head of the data set, the head null entries may be padded with the first non-null data from the head; if the missing data is at the tail, the tail null entries may be padded with the first non-null data from the tail; if the missing data is in the middle of the data set, the first non-null data may be searched for both forwards and backwards and linear interpolation applied (or another method such as mean filling).
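Steps A1.2–A1.4 can be sketched with pandas as follows (a sketch assuming a 15-minute time index; the function name is hypothetical):

```python
import pandas as pd

def preprocess(series: pd.Series, freq: str = "15min") -> pd.Series:
    # A1.3: de-duplicate the time axis, keeping the first occurrence.
    s = series[~series.index.duplicated(keep="first")]
    # A1.2: complete the time axis; newly created slots are empty (NaN).
    s = s.reindex(pd.date_range(s.index.min(), s.index.max(), freq=freq))
    # A1.4: interior gaps -> linear interpolation ...
    s = s.interpolate(method="linear", limit_area="inside")
    # ... head gaps -> first non-null value, tail gaps -> last non-null value.
    return s.bfill().ffill()
```

For example, a series with a duplicated 00:00 sample and a missing 00:15 slot comes out de-duplicated, on a complete 15-minute grid, with the gap linearly interpolated.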
Stage two: regularity threshold determination
Referring to fig. 4, the regularity threshold judgment distinguishes the different regularities exhibited by the time-series load parameter data and establishes a prediction model for each, so that a targeted scheme is applied adaptively to each type of load parameter data, accelerating the computation and improving the prediction precision while reducing the overall computing resources required.
The strength of the regularity of a load parameter is respectively expressed in: (1) the waveform change trend of the parameter value distributions in two adjacent time sequences (with the predetermined time period in units of one "week"), i.e., waveform consistency; (2) the parameter values of the load parameter remaining basically stable in size across the two adjacent weeks, i.e., numerical consistency.
Therefore, the mean of the correlation coefficient matrix of two adjacent weeks and the relative mean difference are used to judge the waveform trend similarity and the numerical similarity respectively, and the two scores are compared with their respective regularity judgment thresholds (threshold ranges) to comprehensively judge the degree of regularity.
A2.1, normalizing the data set: obtain the maximum value MaxData among the parameter values TrainingData of each load parameter and apply max-min normalization (or another normalization method) to the parameter values, i.e., calculate TrainingData/MaxData. The MaxData index (normalization parameter) is then sent to the corresponding network element, so that after the network element obtains the prediction model it can inverse-normalize (restore) the prediction result data it generates.
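A sketch of the A2.1 normalization and the network element's inverse restoration (assuming, as the formula TrainingData/MaxData suggests, that max-min normalization reduces to division by MaxData because the loads are non-negative; function names are hypothetical):

```python
import numpy as np

def normalize(training_data):
    # MaxData is recorded so it can be sent to the network element.
    max_data = float(np.max(training_data))
    return training_data / max_data, max_data

def denormalize(prediction, max_data):
    # Performed on the network element side to restore real units.
    return prediction * max_data
```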
A2.2, calculating the mean of the correlation coefficient matrix: the waveform trend similarity (correlation coefficient) reflects the similarity of the waveforms between the data distribution of the later week (train_y_behind) and that of the earlier week (train_y_front) in the training data, i.e., to what extent the newer data repeats the change pattern of the existing earlier data:
The calculation may specifically be based on the Pearson correlation coefficient, which, compared with similarity measures characterized by Euclidean distance or cosine, eliminates the influence of different numerical scales through its normalized calculation.
Of course, other alternative vector distance or similarity calculation schemes are possible.
A2.3, calculating the relative mean difference: after determining that the mean of the calculated correlation coefficient matrix is large (e.g., greater than 0.9), i.e., after confirming that the waveforms of the two weeks are highly similar, such data may additionally undergo relative mean difference calculation and judgment:
That is, on the premise that the waveform patterns are confirmed to be similar, it is further confirmed that the overall parameter values of the load parameter do not differ excessively between the two weeks (e.g., the relative mean difference is less than 20%).
A2.4, regularity threshold judgment: according to the application scenario, the basis for dividing strong regular and non-strong regular (weak regular) load parameters can be determined by configuring the regularity judgment threshold parameters, a correlation threshold and a mean-difference threshold ThresholdMeanDiff. A load parameter whose correlation coefficient matrix mean is greater than or equal to the correlation threshold is preliminarily judged to be strongly regular; it is then further judged whether its relative mean difference is smaller than ThresholdMeanDiff. If both conditions are satisfied, the historical load translation model is used; if either is not satisfied, the LSTM is used.
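The two-threshold decision can be sketched as follows (the threshold parameter names and defaults follow the 0.9 and 20% examples above; strong regularity selects the historical load translation model, as in steps A3.1/A3.2 and the claims):

```python
def choose_model(self_cov, relative_mean_diff,
                 threshold_corr=0.9, threshold_mean_diff=0.2):
    # Strongly regular only when BOTH checks pass -> historical load
    # translation model; otherwise -> LSTM prediction model.
    if self_cov >= threshold_corr and relative_mean_diff < threshold_mean_diff:
        return "LoadShifting"
    return "LoadForecasting"
```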
Stage three: model training
Model training constructs a prediction model according to the condition of each load parameter (periodicity, volatility, trend, traffic size, etc.) based on time series statistics of the historical data, namely the historical load translation model LoadShifting and the prediction model LoadForecasting. The model training specifically comprises:
A3.1, for the strong regular load parameters, the historical load translation model LoadShifting is used. When predicting the load parameter of the (t+1)-th week, if strong similarity has been confirmed between the load data of the t-th week and the load data of the (t−1)-th week, it can be assumed that the data of the (t+1)-th week continues this similarity, i.e., the data of the (t+1)-th week should be nearly the same as the data of the t-th week; therefore the LoadShifting model is adopted and sent to the network element.
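A minimal sketch of the LoadShifting idea — the most recent week's observed distribution is carried forward unchanged as the next week's prediction (the function name is hypothetical; 96 points per day at 15-minute granularity):

```python
import numpy as np

def load_shifting(history, points_per_week=7 * 96):
    # Prediction for week t+1 = observed distribution of week t.
    return np.asarray(history)[-points_per_week:].copy()
```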
A3.2, for the weak regular load parameters, training is performed to obtain the corresponding prediction model LoadForecasting, i.e., the prediction model learns the periodic fluctuation law, the increasing/decreasing trend, and similar properties of the time series data, and makes predictions by deeply combining these intrinsic characteristics.
A3.2.1, data construction.
A3.2.1.1, data sample construction: sample data is obtained by a sliding window method.
A cell may use the instants at integer multiples of 15 min from absolute time 00:00 (i.e., minutes 0, 15, 30, 45 of each hour) as the load evaluation periods (15-min granularity). The window length is (LoadPeriod × 96 + 96) and the sliding step is 96 (i.e., one day). Thus, referring to fig. 5, 21 training samples can be constructed from 28 days of data with the sliding window.
The format of each sample is required to be constructed according to the input format requirement of the LSTM, and one piece of constructed sample data consists of two parts, namely characteristic data and tag data:
Characteristic data (X): LoadPeriod × 96 consecutive normalized load parameter data points, where LoadPeriod defaults to 7, i.e., one week of consecutive data.
Tag data (Y): the 96 consecutive normalized load parameter data points following the characteristic data X, i.e., one day of consecutive data after the characteristic data.
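The sliding-window sample construction of A3.2.1.1 can be sketched as (the function name is hypothetical):

```python
import numpy as np

def build_samples(data, load_period=7, points_per_day=96):
    # Window = LoadPeriod*96 feature points + 96 label points; slide one day.
    feat_len = load_period * points_per_day
    window = feat_len + points_per_day
    X, Y = [], []
    for start in range(0, len(data) - window + 1, points_per_day):
        X.append(data[start:start + feat_len])          # characteristic data
        Y.append(data[start + feat_len:start + window])  # tag data (next day)
    return np.array(X), np.array(Y)
```

With 28 days of 15-minute data (2688 points) this yields exactly the 21 samples mentioned above.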
A3.2.1.2, training set and test set partitioning: during training, in order to evaluate the accuracy of the model, the data is divided into a training set D_norm_train and a test set D_norm_test. The training set is used to train the model and the test set to verify the training effect; the partition proportion of the training set is alpha and that of the test set is (1 − alpha).
A3.2.1.3, parameter normalization: in general, the network manager and the network element use the same MaxData parameter. That is, when the network element side determines that the maximum value TestMaxData of the test data is less than or equal to the maximum value parameter MaxData set from the network-manager-side training data, the configured MaxData is used as the maximum value parameter. However, when the network element side determines that TestMaxData is greater than MaxData, the reason may be: (1) burst outliers; or (2) the overall load has become larger (i.e., the actually generated data exceeds the maximum of the previously collected data). The two possible situations are therefore first distinguished and handled separately.
(1) TestMaxData > MaxData because of a small number of burst/outlier values:
Identification: when the mean MeanTestData of TestData (the test data) differs little from the mean MeanTrainData of TrainData (the training data), i.e., larger values account for only a small proportion of TestData, a small number of burst values is indicated, for example:
|MeanTestData − MeanTrainData| / MeanTrainData < 20%;
Handling: for a small number of burst values, the MaxData parameter is not modified, and only the abnormal values greater than 1 after normalization are processed, i.e., values greater than 1 after normalization are replaced by 1.
(2) TestMaxData > MaxData because the overall load has become larger:
Identification: when the mean MeanTestData of TestData differs greatly from the mean MeanTrainData of TrainData, i.e., larger values account for a large proportion of TestData, the overall load has become larger, for example:
|MeanTestData − MeanTrainData| / MeanTrainData ≥ 20%;
Handling: since the overall load has become larger, the MaxData value is corrected to the maximum of the new data, i.e., MaxData = TestMaxData.
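The two-case MaxData handling of A3.2.1.3 can be sketched as (the 20% ratio and the clipping rule follow the examples above; the function name is hypothetical):

```python
import numpy as np

def resolve_max_data(test_data, train_data, max_data, ratio=0.2):
    test_max = float(np.max(test_data))
    if test_max <= max_data:
        # Normal case: reuse the MaxData configured from training data.
        return max_data, test_data / max_data
    mean_shift = abs(np.mean(test_data) - np.mean(train_data)) / np.mean(train_data)
    if mean_shift < ratio:
        # Case (1): a few burst outliers -- keep MaxData, clip values > 1 to 1.
        return max_data, np.minimum(test_data / max_data, 1.0)
    # Case (2): the overall load grew -- adopt the new maximum.
    return test_max, test_data / test_max
```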
A3.2.2, model structure selection: referring to fig. 6, the basic structure of the LSTM includes an input layer, an output layer, and a number of hidden layers.
A3.2.2.1, parameter setting.
The following examples can be used for parameter settings:
UnitNum (number of hidden-layer neurons): the number of neurons in the single-layer network, default 16;
LoadPeriod (data period): 7 days of data are selected;
ActFunc (activation function): value range enum('tanh', 'relu', 'elu', 'linear', 'sigmoid'), default 'elu';
DropRatio (dropout ratio): prevents overfitting, value range (0–1), default 0;
BatchSize (batch size): the amount of data per training batch, LoadPeriodBatchSize, default 16;
Epoch (number of training rounds): the number of training epochs, LoadPeriodEpoch, default 100;
Optimizer: determines the convergence mode of the model's gradient descent, value range enum('adam', 'adagrad', 'rmsprop', 'momentum'), default 'adam'.
With the above configuration, the model size is about 50 KB.
The model is initialized with the determined mean and variance parameters and a fixed random seed, which facilitates reproduction across different scenarios.
A3.2.3, judging the model effect.
A3.2.3.1, judgment flow: the validity of the model's prediction result is verified by setting an error threshold LoadPredictMError.
A3.2.3.1.1, model training: calculate the error Error_train of the LoadPredictModel on the training set D_norm_train and judge with the following logic:
If Error_train < LoadPredictMError:
output the model LoadPredictModel;
Else:
do not output the model LoadPredictModel.
A3.2.3.1.2, model verification: calculate the error Error_test of the LoadPredictModel on the test set D_norm_test and judge with the following logic:
If Error_test < LoadPredictMError:
output the model LoadPredictModel to the online inference engine (network element side);
Else:
do not output the model LoadPredictModel to the online inference engine (network element side).
A3.2.3.2, error calculation.
The error is defined as the relative error between the predicted value load_predict and the true value load_true. Both load_predict and load_true are time series containing 15-minute-granularity data for the future period, recorded as:

load_predict = [load_predict_1, ..., load_predict_i, ..., load_predict_N];
load_true = [load_true_1, ..., load_true_i, ..., load_true_N];

Note: if the prediction result load_predict contains negative values, they are replaced by 0.

The sample relative average error (MAE / sample mean) is defined as:

Error = mean(|load_predict_i − load_true_i|) / mean(load_true_i)

The error threshold LoadPredictMError is configured by the network manager; the closer its value is to 1, the more relaxed the standard applied to the model. The threshold may be set to 0.4.
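The error metric and acceptance check of A3.2.3 can be sketched as (assuming the MAE / sample-mean definition above; function names are hypothetical):

```python
import numpy as np

def relative_average_error(load_predict, load_true):
    # Negative predictions are replaced by 0 before scoring.
    load_predict = np.maximum(np.asarray(load_predict, dtype=float), 0.0)
    load_true = np.asarray(load_true, dtype=float)
    # Sample relative average error = MAE / sample mean of the true series.
    return np.mean(np.abs(load_predict - load_true)) / np.mean(load_true)

def accept_model(error, load_predict_m_error=0.4):
    # The model is output only when the error is under the threshold.
    return error < load_predict_m_error
```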
A3.2.4, model derivation.
After training, the model is exported as an HDF5-format file: model.save('xxx.h5')
Model naming rules: the model package issued by the network management side uses the zip package format and is named: network element id + time + .zip. The model package contains a number of model directories, for example as follows:
A3.2.4.1, the model_name1 model name format is: algorithm name_service object id, where the algorithm name and the service object id are agreed between the network management intelligent application and the network element intelligent application.
A3.2.4.1.1, for the historical load translation model, four models are obtained for example, corresponding to the following four model names:
- NR carrier used-RB number model, for NR carrier id = 1: PRBloadshift_carrier_1;
- NR carrier RRC user number model, for NR carrier id = 1: RRCloadshift_carrier_1;
- NR cell group used-RB number model, for NR cell NCGI = {PLMN: 460-01, gNBid: 1, cellid: 1}: PRBloadshift_PLMN_460-01_gNBid_1_cellid_1;
- NR cell group RRC user number model, for NR cell NCGI = {PLMN: 460-01, gNBid: 1, cellid: 1}: RRCloadshift_PLMN_460-01_gNBid_1_cellid_1.
A3.2.4.1.2, for the LSTM, four models are obtained for example, corresponding to the following four model names:
- NR carrier used-RB number model, for NR carrier id = 1: PRBloadforecast_carrier_1;
- NR carrier RRC user number model, for NR carrier id = 1: RRCloadforecast_carrier_1;
- NR cell group used-RB number model, for NR cell NCGI = {PLMN: 460-01, gNBid: 1, cellid: 1}: PRBloadforecast_PLMN_460-01_gNBid_1_cellid_1;
- NR cell group RRC user number model, for NR cell NCGI = {PLMN: 460-01, gNBid: 1, cellid: 1}: RRCloadforecast_PLMN_460-01_gNBid_1_cellid_1.
A3.2.4.2, version1 version number: a numeric 32-bit integer, which can be automatically generated by the network management intelligent application, manually specified, or automatically generated by the LSE.
A3.2.4.3, the trainpara.json training parameter information must contain: the model name, build time, and version number; other relevant parameters are customized according to the algorithm.
For LSTM, the following parameters are also included:
- NR carrier used-RB number normalization denominator;
- NR carrier RRC user number normalization denominator;
- NR cell group used-RB number normalization denominator;
- NR cell group RRC user number normalization denominator.
No additional parameters are required for the historical load translation model.
Stage four: load parameter prediction
The prediction model obtained by training is issued to the network element, which predicts the load condition according to the model so as to guide work such as energy-saving shutdown. Because the load condition in a cell may change considerably over time, the model needs to be updated at a certain period and the updated model re-issued to the network element.
A4.1, starting prediction.
Judge whether the load parameter prediction conditions are satisfied: (1) the load prediction switch is turned on; (2) the RSE model is loaded. If all conditions are met, a load prediction timer (tentatively at 00:00:00) is started. After the timer fires, the load prediction service interface of the middle platform is called to initiate load prediction, and the load prediction components of the middle platform extract the load data of the NR carriers and NR cell groups respectively to make predictions. It should be appreciated that the prediction models of the user load and of the PRB load for the NR carriers and NR cell groups are different and need to be extracted and predicted separately.
A4.2, when predicting the load parameters for date X, the prediction input data must extract the load parameter data of days (X−8) to (X−1). The output data are the load parameter data at all 96 granularity points of day X.
A4.3, after receiving the prediction model issued by the UME, the RSE decompresses the file to obtain the prediction model file model.tflite and the algorithm configuration file trainpara.json, and then proceeds as follows:
The RSE autonomously loads the algorithm model; if the model file carries an empty model, it must still be loaded, which is equivalent to deleting the model the RSE is currently using. The algorithm configuration file is stored; it is described in json format, with a fixed template formulated for each algorithm according to the algorithm description content, and is used for organizing the prediction input data and setting the input parameters of the prediction algorithm call. The RSE is then queried for its prediction model capability: if the RSE has loaded the prediction model, it returns success; otherwise, it replies with failure.
A4.4, the prediction model is loaded successfully.
The RSE broadcasts the model information. After receiving the broadcast, the middle-platform prediction component judges, according to the model type and the corresponding model object primary key ID carried in the broadcast, whether the energy-saving carriers and cells support load prediction and which algorithm model to use for the prediction, and stores the corresponding information. After this is completed, the middle-platform prediction component broadcasts a load prediction capability notification to the application, and the application, upon receiving it, judges from the broadcast content whether load prediction can be initiated.
In a second aspect, referring to fig. 7, the disclosed embodiments provide a computer readable medium having a computer program stored thereon, which when executed by a processor, implements a method of any of the disclosed embodiments.
Wherein the processor is a device having data processing capability, including but not limited to a central processing unit (CPU) or the like; the memory is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and FLASH memory; the I/O interface (read/write interface) is connected between the processor and the memory and enables information interaction between them, and includes but is not limited to a data bus (Bus) or the like.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components.
Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit (CPU), digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those skilled in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, random access memory (RAM, more particularly SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, or other magnetic disk storage; compact disk read-only memory (CD-ROM), digital versatile disk (DVD), or other optical disk storage; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage; or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The present disclosure has disclosed example embodiments, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (10)

1. A method of communication load prediction, comprising:
acquiring information of various load parameters; the information of each load parameter comprises a parameter value distribution of the load parameter;
according to the regularity of the load parameters, determining whether each load parameter is a strong regularity load parameter or a weak regularity load parameter; the regularity of each load parameter represents the similarity of the parameter value distribution of the load parameter in two preset time periods;
Under the condition that the load parameter is a strong regular load parameter, determining a historical load translation model of the strong regular load parameter; the historical load translation model is used for determining the parameter value distribution of the load parameter in a future preset time period according to the parameter value distribution of the load parameter in a historical preset time period;
training to obtain a machine learning model of the weak regular load parameter under the condition that the load parameter is the weak regular load parameter; the machine learning model is used for predicting the parameter value distribution of the load parameter in a future time period after the historical time period according to the parameter value distribution of the load parameter in the historical time period.
2. The method of claim 1, wherein,
the historical load translation model is used for taking the parameter value distribution of the load parameter in a closest historical preset time period as the parameter value distribution of the load parameter in a closest future preset time period.
3. The method of claim 1, wherein the machine learning model comprises:
and any one of a long short-term memory neural network model (LSTM), a one-dimensional convolutional neural network model, and a Transformer model.
4. The method of claim 1, wherein,
after the historical load translation model for determining the strong regular load parameter, the method further comprises the following steps: transmitting the historical load translation model of the load parameter to a network element corresponding to the load parameter;
after the training, the machine learning model for obtaining the weak regular load parameters further comprises: and sending the machine learning model of the load parameter to a network element corresponding to the load parameter.
5. The method of claim 4, wherein the obtaining information for a plurality of load parameters comprises:
acquiring information of various load parameters, and normalizing parameter value distribution of each load parameter;
and sending the normalized parameter of each load parameter to the network element corresponding to the load parameter.
6. The method of claim 4, wherein after the sending the historical load translation model of the load parameter to the network element corresponding to the load parameter or the sending the machine learning model of the load parameter to the network element corresponding to the load parameter, further comprising:
receiving the prediction effect information of the network element; the prediction effect information characterizes the accuracy of the parameter value distribution of the load parameters obtained by the network element according to the historical load translation model or the machine learning model prediction;
And returning the step of acquiring information of various load parameters when the predicted effect information is lower than a preset standard.
7. The method of claim 1, wherein said determining whether each of the load parameters is a strongly regular load parameter or a weakly regular load parameter based on the regularity of the load parameters comprises:
the method comprises the steps of determining load parameters, of which the waveform trend similarity and the numerical similarity respectively accord with corresponding threshold ranges, as strong regular load parameters, and determining other load parameters as weak regular load parameters; the waveform trend similarity of each load parameter represents the similarity of the variation trend of the parameter value distribution of the load parameter in two preset time periods, and the numerical similarity of each load parameter represents the similarity of the parameter value of the load parameter in two preset time periods.
8. The method of claim 7, wherein,
the waveform trend similarity is calculated by the following formula:

self_cov = σ(train_y_front, train_y_behind)

the numerical similarity is calculated by the following formula:

relative_mean_diff = |mean(train_y_front) − mean(train_y_behind)| / max(mean(train_y_front), mean(train_y_behind))

wherein self_cov represents the waveform trend similarity, σ represents the Pearson correlation coefficient operation, train_y_front represents the parameter value distribution of the load parameter in a first predetermined time period, train_y_behind represents the parameter value distribution of the load parameter in a second predetermined time period, relative_mean_diff represents the numerical similarity, mean represents the averaging operation, and max represents the maximum operation.
9. The method of claim 1, wherein,
the predetermined period of time is seven days.
10. A computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method of traffic load prediction according to any of claims 1 to 9.
CN202210737562.9A 2022-06-27 2022-06-27 Method for predicting communication load and computer readable medium Pending CN117395697A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210737562.9A CN117395697A (en) 2022-06-27 2022-06-27 Method for predicting communication load and computer readable medium
PCT/CN2023/101336 WO2024001867A1 (en) 2022-06-27 2023-06-20 Communication load prediction method, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210737562.9A CN117395697A (en) 2022-06-27 2022-06-27 Method for predicting communication load and computer readable medium

Publications (1)

Publication Number Publication Date
CN117395697A true CN117395697A (en) 2024-01-12

Family

ID=89383245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210737562.9A Pending CN117395697A (en) 2022-06-27 2022-06-27 Method for predicting communication load and computer readable medium

Country Status (2)

Country Link
CN (1) CN117395697A (en)
WO (1) WO2024001867A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6477951B1 (en) * 2018-04-05 2019-03-06 トヨタ自動車株式会社 In-vehicle electronic control unit
CN114418158A (en) * 2020-10-10 2022-04-29 中国移动通信集团设计院有限公司 Cell network load index prediction method based on attention mechanism learning network
CN112308345A (en) * 2020-11-30 2021-02-02 中国联合网络通信集团有限公司 Communication network load prediction method, device and server
CN113962874A (en) * 2021-07-13 2022-01-21 武汉大学 Bus load model training method, device, equipment and storage medium
CN113610303B (en) * 2021-08-09 2024-03-19 北京邮电大学 Load prediction method and system
CN113672666A (en) * 2021-08-23 2021-11-19 成都佳华物链云科技有限公司 Machine load prediction method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2024001867A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
Wang et al. Machine learning for 5G and beyond: From model-based to data-driven mobile wireless networks
Gupta et al. Correlated multi-armed bandits with a latent random source
CN112633962B (en) Service recommendation method and device, computer equipment and storage medium
CN113326126A (en) Task processing method, task scheduling device and computer equipment
WO2016033969A1 (en) Method and system for predicting traffic data amount and/or resource data amount
CN112398700B (en) Service degradation method and device, storage medium and computer equipment
Lei et al. Learning-based resource allocation: Efficient content delivery enabled by convolutional neural network
CN114374616B (en) Energy consumption evaluation method, device, equipment and medium
CN115550195A (en) Traffic suppression prediction method, electronic device, and storage medium
CN112636995B (en) Forwarding network resource allocation method and device
CN117395697A (en) Method for predicting communication load and computer readable medium
CN116842447A (en) Post-processing method, device and system for classified data and electronic device
CN112416590A (en) Server system resource adjusting method and device, computer equipment and storage medium
CN113297152B (en) Method and device for updating cache of edge server of power internet of things
CN112486683B (en) Processor control method, control apparatus, and computer-readable storage medium
CN111144652B (en) Tour comfort algorithm and trend prediction based method, system and device
CN115292361A (en) Method and system for screening distributed energy abnormal data
CN114511760A (en) Sample equalization method, device, equipment and storage medium
CN114139627A (en) Data determination method and device, storage medium and electronic equipment
CN114139621A (en) Method, device, equipment and storage medium for determining model classification performance identification
CN113935407A (en) Abnormal behavior recognition model determining method and device
CN115225986A (en) Adaptive OSU bandwidth adjustment method and device
CN112003662A (en) Cooperative spectrum sensing method and device based on dimensionality reduction and clustering in cognitive network
Gong et al. Evolutionary algorithms with user’s preferences for solving hybrid interval multi-objective optimization problems
CN117858169A (en) Cell flow control method, base station and storage medium

Legal Events

Date Code Title Description
PB01 Publication