WO2019153878A1 - A machine-learning-based data processing method and related device - Google Patents

A machine-learning-based data processing method and related device

Info

Publication number
WO2019153878A1
WO2019153878A1 (application PCT/CN2018/121033)
Authority
WO
WIPO (PCT)
Prior art keywords
network element
algorithm model
target
information
installation
Prior art date
Application number
PCT/CN2018/121033
Other languages
English (en)
French (fr)
Inventor
徐以旭 (XU Yixu)
王岩 (WANG Yan)
张进 (ZHANG Jin)
王园园 (WANG Yuanyuan)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP18905264.0A (published as EP3734518A4)
Publication of WO2019153878A1
Priority to US16/985,406 (published as US20200364571A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 20/20 - Ensemble learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/24323 - Tree-organised classifiers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/27 - Regression, e.g. linear or logistic regression
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 - Computing arrangements using knowledge-based models
    • G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 - Computing arrangements based on specific mathematical models
    • G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 - Supervisory, monitoring or testing arrangements
    • H04W 24/04 - Arrangements for maintaining operational condition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/047 - Probabilistic or stochastic networks

Definitions

  • The present application relates to the field of communications, and in particular to a machine-learning-based data processing method and related devices.
  • Machine learning is a multidisciplinary field that studies how computers can simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. With the advent of the era of big data, machine learning, and especially deep learning over large-scale data, is gaining more and more attention and application, including application in wireless communication networks.
  • Machine learning can include steps such as data acquisition, feature engineering, training, and prediction.
  • In the prior art, these steps are combined in one network element, which may be referred to as a network data analytics (NWDA) network element.
  • The trained model is stored in the NWDA network element.
  • In the subsequent prediction process, the user plane function (UPF) network element sends the data to be predicted, or a feature vector, to the NWDA.
  • The NWDA computes the prediction result and sends it to the policy control function (PCF) network element; the PCF generates a policy from the prediction result, for example by setting quality of service (QoS) parameters, and sends the generated policy to the UPF network element.
  • The QoS parameters then take effect at the UPF network element.
  • Some services have very strict processing-delay requirements; for example, radio resource management (RRM) and radio transmission technology (RTT) require processing within seconds, or even within a transmission time interval (TTI), i.e. milliseconds.
  • In the prior art, training and prediction are thus fused in the NWDA network element, as shown in FIG.
  • The NWDA performs prediction after training the model: it receives a feature vector from the UPF network element, inputs the feature vector into the trained model to obtain a prediction result, and sends the result to the PCF; the PCF then generates a policy corresponding to the prediction result and sends it to the relevant user-plane network element for execution.
  • Each information exchange between devices may add delay, so the many interactions in the prior art increase latency and degrade the service experience for services with strict real-time requirements.
  • The embodiments of the present application provide a machine-learning-based data processing method and related devices, which are used to solve the prior-art problem that the service experience is degraded by the increased interaction delay.
  • A first aspect of the present application provides a machine-learning-based data processing method, including: a first network element receives installation information of an algorithm model from a second network element, where the first network element may be a UPF or a base station and the second network element is used to train the algorithm model; after receiving the installation information, the first network element installs the algorithm model based on it; when the algorithm model is installed successfully, the first network element collects data and predicts the collected data with the installed algorithm model.
  • In the embodiment of the present application, the training step of machine learning is performed by the second network element, while the first network element installs the algorithm model and uses it to predict the data it collects. The logical functions of model training and prediction are thus separated within the network architecture: once the first network element has collected data, it can predict on that data locally with the installed model, which reduces the interaction delay and solves the prior-art problem of the service experience being degraded by increased interaction delay.
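As a rough illustration of this split, the following Python sketch shows a second network element that trains a toy model and a first network element that installs it and predicts locally. All class, field, and ID names (`SecondNetworkElement`, `model_id`, `"m-001"`, the linear toy model) are invented for illustration and are not from the patent.

```python
# Hypothetical sketch of the train/predict split: the second network element
# trains, the first network element installs the model and predicts locally.

class SecondNetworkElement:
    """Trains the algorithm model (here a toy linear model) and exports install info."""
    def train(self, samples):
        # Toy "training": fit y = a*x + b from the first and last sample.
        (x0, y0), (x1, y1) = samples[0], samples[-1]
        a = (y1 - y0) / (x1 - x0)
        b = y0 - a * x0
        return {"model_id": "m-001", "algo_type": "linear", "structure": {"a": a, "b": b}}

class FirstNetworkElement:
    """UPF or base station: installs the model and predicts on locally collected data."""
    def __init__(self):
        self.models = {}

    def install(self, install_info):
        self.models[install_info["model_id"]] = install_info
        return True  # on failure, a failure-reason indication would be sent back

    def predict(self, model_id, x):
        s = self.models[model_id]["structure"]
        return s["a"] * x + s["b"]

trainer = SecondNetworkElement()
info = trainer.train([(0.0, 1.0), (2.0, 5.0)])   # learns y = 2x + 1
ue = FirstNetworkElement()
assert ue.install(info)
print(ue.predict("m-001", 3.0))  # → 7.0, computed locally with no round trip
```

The point of the sketch is only the division of labour: training happens once in the second network element, while every prediction is served from the first network element's own copy of the model.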
  • The installation information of the algorithm model includes the following: a unique identifier (ID) of the algorithm model, the algorithm type of the algorithm model, and the structure of the algorithm model.
  • In this implementation manner, the content of the installation information is specified, which makes the installation of the algorithm model more concrete and operable.
  • In a possible implementation manner of the first aspect, the installation information of the algorithm model further includes policy index information, where the policy index information includes prediction results of the algorithm model and the identification information of the policies corresponding to those prediction results.
  • In this implementation manner, the installation information may further carry the policy index information, so that the first network element can look up the identification information of the policy corresponding to a prediction result, which provides the precondition for the first network element to determine a policy based on its prediction results.
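One way to picture the installation information together with the optional policy index is the sketch below; the field names and example values are assumptions made for illustration, not fields defined by the patent.

```python
# Hypothetical layout of the installation information, including the optional
# policy index that maps each prediction result to a policy ID.
from dataclasses import dataclass, field

@dataclass
class InstallInfo:
    model_id: str                      # unique identifier ID of the algorithm model
    algo_type: str                     # e.g. "decision_tree", "svm", "neural_network"
    structure: dict                    # structure/parameters of the model
    policy_index: dict = field(default_factory=dict)  # prediction result -> policy ID

info = InstallInfo(
    model_id="m-001",
    algo_type="decision_tree",
    structure={"max_depth": 3},
    policy_index={"congested": "policy-qos-low", "idle": "policy-qos-high"},
)
# With the policy index installed, the first network element can resolve a
# prediction result to a policy ID locally, without asking another element:
print(info.policy_index["congested"])  # → policy-qos-low
```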
  • In a possible implementation manner of the first aspect, before the first network element collects data, the first network element receives collection information from the second network element, where the collection information includes at least the IDs of the features to be collected. In this implementation manner, the first network element receives the collection information so that it can extract, from the collected data, the values of the features identified by those IDs and use them for prediction.
  • This explains the source of the parameters the first network element needs for prediction, which increases the operability of the embodiment of the present application.
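The collection information can be thought of as a list of feature IDs that the first network element uses to pull feature values out of its raw data; in this sketch the feature IDs and the record contents are made up for illustration.

```python
# Hypothetical sketch: the second network element sends the IDs of the features
# to collect; the first network element extracts those features from raw records.
collection_info = {"feature_ids": ["ul_throughput", "dl_delay_ms"]}

raw_record = {"ul_throughput": 12.5, "dl_delay_ms": 38.0, "cell_id": "c-17"}

def extract_features(record, feature_ids):
    """Build the feature vector in the order given by the collection information."""
    return [record[fid] for fid in feature_ids]

vec = extract_features(raw_record, collection_info["feature_ids"])
print(vec)  # → [12.5, 38.0]: only the requested features, in the requested order
```

Keeping the feature order fixed by the collection information matters, since the installed model expects its inputs in the order it was trained on.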
  • In a possible implementation manner of the first aspect, after the first network element receives the collection information from the second network element, the first network element sends the collection information and the unique identifier ID of a target algorithm model to a third network element, where the target algorithm model is at least one of the algorithm models; the first network element then receives, from the third network element, the target feature vector corresponding to the collection information together with the unique identifier ID of the target algorithm model, and the target algorithm model is used to predict on that feature vector.
  • In this implementation manner, the operation of constructing the target feature vector can be offloaded to the third network element, leaving the first network element to perform only the model-based prediction, which reduces the workload of the first network element.
  • In a possible implementation manner of the first aspect, the method may further include: the first network element sends, to a fourth network element, the unique identifier ID of the target algorithm model, the target prediction result, and the target policy index information corresponding to the target algorithm model, where the target prediction result is used to determine a target policy and is the result obtained by inputting the target feature vector into the target algorithm model; the first network element then receives the identification information of the target policy from the fourth network element.
  • In this implementation manner, the first network element offloads the function of determining the target policy from the target prediction result to the fourth network element, which not only reduces the workload of the first network element but, combined with the fourth implementation manner of the first aspect, distributes different functions across multiple network elements, increasing the flexibility of the network.
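Combining the two implementation manners above, the division of labour could look like the sketch below. All element behaviours, names, and the threshold model are simplified assumptions for illustration only.

```python
# Hypothetical end-to-end flow: the third network element builds the feature
# vector, the first network element predicts with the target model, and the
# fourth network element maps the prediction result to a policy ID.

def third_ne_build_vector(collection_info, record, target_model_id):
    vec = [record[fid] for fid in collection_info["feature_ids"]]
    return vec, target_model_id        # feature vector returned with the model ID

def first_ne_predict(models, target_model_id, vec):
    threshold = models[target_model_id]["structure"]["threshold"]
    return "congested" if vec[0] > threshold else "idle"

def fourth_ne_resolve_policy(policy_index, prediction):
    return policy_index[prediction]    # identification info of the target policy

models = {"m-001": {"structure": {"threshold": 10.0}}}
policy_index = {"congested": "policy-qos-low", "idle": "policy-qos-high"}

vec, target_id = third_ne_build_vector({"feature_ids": ["ul_throughput"]},
                                       {"ul_throughput": 12.5}, "m-001")
prediction = first_ne_predict(models, target_id, vec)
print(fourth_ne_resolve_policy(policy_index, prediction))  # → policy-qos-low
```

Each function stands in for one network element, so only the prediction step itself stays on the first network element.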
  • In a possible implementation manner of the first aspect, the method further includes: the first network element receives a target operation indication and the unique identifier ID of the algorithm model from the second network element, where the target operation indication is used to instruct the first network element to perform a target operation on the algorithm model, and the target operation may include, but is not limited to, any one of: modifying the algorithm model, deleting the algorithm model, activating the algorithm model, or deactivating the algorithm model.
  • In this implementation manner, the algorithm model can be modified, deleted, and so on, which better matches the varied requirements that arise in practical applications and also allows the edge device to be upgraded without interrupting service.
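The target operations on an installed model could be dispatched as in this sketch; the operation names, the `active` flag, and the re-installation behaviour on "modify" are assumptions made for illustration.

```python
# Hypothetical dispatcher for target operations on an installed algorithm model.
def apply_target_operation(models, model_id, operation, new_install_info=None):
    if operation == "modify":
        # Modification re-installs the model from the modified installation
        # information, preserving whether the model was active.
        was_active = models[model_id].get("active", False)
        models[model_id] = dict(new_install_info, active=was_active)
    elif operation == "delete":
        del models[model_id]
    elif operation == "activate":
        models[model_id]["active"] = True
    elif operation == "deactivate":
        models[model_id]["active"] = False
    else:
        raise ValueError(f"unknown target operation: {operation}")
    return models

models = {"m-001": {"algo_type": "svm", "active": False}}
apply_target_operation(models, "m-001", "activate")
print(models["m-001"]["active"])  # → True
```

Separating activate/deactivate from delete is what lets a model be swapped or paused without removing it, matching the "upgrade without service interruption" point above.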
  • In a possible implementation manner of the first aspect, when the target operation is to modify the algorithm model, the first network element receives the installation information of the modified algorithm model from the second network element.
  • In this implementation manner, when the algorithm model needs to be modified, the second network element sends the installation information of the modified algorithm model to the first network element for re-installation, which makes the operational steps of the embodiment more complete.
  • In a possible implementation manner of the first aspect, if the algorithm model fails to install, the first network element sends an indication of the installation-failure reason to the second network element to notify it of why the installation failed.
  • In this implementation manner, the first network element feeds back the reason the algorithm model failed to install, which makes the solution of the embodiment of the present application more complete.
  • A second aspect of the embodiments of the present application provides a machine-learning-based data processing method, including: a second network element obtains a trained algorithm model; after obtaining the algorithm model, the second network element sends the installation information of the algorithm model to a first network element, so that the first network element installs the algorithm model according to the installation information, where the algorithm model is used to predict on the data collected by the first network element, and the first network element is a UPF or a base station.
  • In the embodiment of the present application, the training step of machine learning is performed by the second network element, while the first network element installs the algorithm model and uses it to predict the data it collects. The logical functions of model training and prediction are thus separated within the network architecture: once the first network element has collected data, it can predict on that data with the installed model, which reduces the interaction delay and solves the prior-art problem of the service experience being degraded by increased interaction delay.
  • the installation information of the algorithm model includes the following information: a unique identifier ID of the algorithm model, an algorithm type of the algorithm model, and a structure of the algorithm model.
  • In this implementation manner, the content of the installation information is specified, which makes the installation of the algorithm model more concrete and operable.
  • In a possible implementation manner of the second aspect, the installation information of the algorithm model further includes policy index information, where the policy index information includes output results of the algorithm model and the identification information of the policies corresponding to those output results.
  • In this implementation manner, the installation information may further carry the policy index information, so that the first network element can look up the identification information of the policy corresponding to a prediction result, which provides the precondition for the first network element to determine a policy based on its prediction results.
  • In a third implementation manner of the second aspect, after the second network element sends the installation information of the algorithm model to the first network element, if the first network element fails to install the algorithm model, the second network element receives an indication of the installation-failure reason from the first network element. In this implementation manner, when the algorithm model fails to install, the second network element receives the failure reason fed back by the first network element, which makes the embodiment of the present application more operable.
  • In a possible implementation manner of the second aspect, the method further includes: the second network element sends collection information to the first network element, where the collection information includes at least the IDs of the features to be collected.
  • In this implementation manner, with the collection information sent by the second network element, the first network element extracts from the collected data the values of the features identified by those IDs and uses them for prediction. This explains the source of the parameters the first network element needs for prediction, which increases the operability of the embodiment of the present application.
  • A third aspect of the present application provides a network element, where the network element is a first network element that may be a user-plane network element (UPF) or a base station, and includes: a first transceiver unit, configured to receive installation information of an algorithm model from a second network element, the second network element being used to train the algorithm model; an installation unit, configured to install the algorithm model according to the installation information received by the first transceiver unit; an acquisition unit, configured to collect data; and a prediction unit, configured to predict, after the installation unit installs the algorithm model successfully, the data collected by the acquisition unit according to the algorithm model.
  • In the embodiment of the present application, the training step of machine learning is performed by the second network element, the installation unit installs the algorithm model, and the prediction unit predicts the data collected by the acquisition unit according to that model. The logical functions of model training and prediction are thus separated within the network architecture: the prediction unit can predict on the collected data with the installed model, which reduces the interaction delay and solves the prior-art problem of the service experience being degraded by increased interaction delay.
  • In a possible implementation manner of the third aspect, the installation information of the algorithm model includes the following: a unique identifier ID of the algorithm model, the algorithm type of the algorithm model, the structure parameters of the algorithm model, and an installation indication of the algorithm model, where the installation indication is used to indicate installation of the algorithm model.
  • In this implementation manner, the content of the installation information is specified, which makes the installation of the algorithm model more concrete and operable.
  • In a possible implementation manner of the third aspect, the installation information of the algorithm model may further include policy index information, where the policy index information includes prediction results of the algorithm model and the identification information of the policies corresponding to those prediction results. This enables the first network element to look up the identification information of the policy corresponding to a prediction result, which provides the precondition for the first network element to determine a policy based on its prediction results.
  • In a possible implementation manner of the third aspect, the first transceiver unit is further configured to receive collection information from the second network element, where the collection information includes at least the IDs of the features to be collected.
  • In this implementation manner, the first transceiver unit receives the collection information so that the first network element can extract, from the collected data, the values of the features identified by those IDs and use them for prediction.
  • This explains the source of the parameters the first network element needs for prediction, which increases the operability of the embodiment of the present application.
  • In a possible implementation manner of the third aspect, a second transceiver unit is configured to send the collection information and the unique identifier ID of a target algorithm model to a third network element, where the target algorithm model is at least one of the algorithm models; the second transceiver unit is further configured to receive, from the third network element, the target feature vector corresponding to the collection information and the unique identifier ID of the target algorithm model, where the target algorithm model is used to perform the prediction operation.
  • In this implementation manner, the operation of constructing the target feature vector can be offloaded to the third network element, leaving the first network element to perform only the model-based prediction, which reduces the workload of the first network element.
  • In a possible implementation manner of the third aspect, the first network element further includes a third transceiver unit, configured to send, to a fourth network element, the unique identifier ID of the target algorithm model, the target prediction result, and the target policy index information corresponding to the target algorithm model, where the target prediction result is used to determine the target policy and is the result obtained by inputting the target feature vector into the target algorithm model; the third transceiver unit is further configured to receive the identification information of the target policy from the fourth network element.
  • In this implementation manner, the function of determining the target policy from the target prediction result is offloaded to the fourth network element, which not only reduces the workload of the first network element but, combined with the fourth implementation manner of the first aspect, distributes different functions across multiple network elements, increasing the flexibility of the network.
  • In a possible implementation manner of the third aspect, the first transceiver unit is further configured to receive a target operation indication and the unique identifier ID of the algorithm model from the second network element, where the target operation indication is used to instruct the first network element to perform a target operation on the algorithm model, and the target operation includes modifying the algorithm model, deleting the algorithm model, activating the algorithm model, or deactivating the algorithm model.
  • In this implementation manner, the algorithm model can be modified, deleted, and so on, which better matches the varied requirements that arise in practical applications and also allows the edge device to be upgraded without interrupting service.
  • In a possible implementation manner of the third aspect, when the target operation is to modify the algorithm model, the first transceiver unit is further configured to receive the installation information of the modified algorithm model.
  • In this implementation manner, when the algorithm model needs to be modified, the second network element sends the installation information of the modified algorithm model to the first transceiver unit for re-installation, which makes the operational steps of the embodiment more complete.
  • In a possible implementation manner of the third aspect, the first transceiver unit is further configured to send an installation-failure reason indication to the second network element.
  • In this implementation manner, the first transceiver unit feeds back to the second network element the reason the algorithm model failed to install, which makes the solution of the embodiment of the present application more complete.
  • A fourth aspect of the present application provides a network element, where the network element is a second network element including: a training unit, configured to obtain a trained algorithm model; and a transceiver unit, configured to send the installation information of the trained algorithm model to a first network element, where the installation information is used for installing the algorithm model, the algorithm model is used for data prediction, and the first network element is a user-plane network element (UPF) or a base station.
  • In the embodiment of the present application, the training step of machine learning is performed by the training unit of the second network element, while the first network element installs the algorithm model and predicts the data it collects according to that model. The logical functions of model training and prediction are separated within the network architecture: after the first network element collects data, it can predict on that data with the installed model, which reduces the interaction delay and solves the prior-art problem of the service experience being degraded by increased interaction delay.
  • In a possible implementation manner of the fourth aspect, the installation information of the algorithm model includes the following: a unique identifier ID of the algorithm model, the algorithm type of the algorithm model, the structure parameters of the algorithm model, and an installation indication of the algorithm model, where the installation indication is used to instruct the first network element to install the algorithm model.
  • In this implementation manner, the content of the installation information is specified, which makes the installation of the algorithm model more concrete and operable.
  • In a possible implementation manner of the fourth aspect, the installation information of the algorithm model further includes policy index information, where the policy index information includes output results of the algorithm model and the identification information of the policies corresponding to those output results.
  • In this implementation manner, the installation information may further carry the policy index information, so that the first network element can look up the identification information of the policy corresponding to a prediction result, which provides the precondition for the first network element to determine a policy based on its prediction results.
  • In a second implementation manner of the fourth aspect, the transceiver unit is further configured to receive an installation-failure reason indication from the first network element when the first network element fails to install the algorithm model. In this implementation manner, if the algorithm model fails to install, the transceiver unit receives the failure reason fed back by the first network element, which makes the embodiment of the present application more operable.
  • In a possible implementation manner of the fourth aspect, the transceiver unit is further configured to send collection information to the first network element, where the collection information includes at least the IDs of the features to be collected.
  • In this implementation manner, with the collection information sent by the transceiver unit of the second network element, the first network element extracts from the collected data the values of the features identified by those IDs and uses them for prediction. This explains the source of the parameters the first network element needs for prediction, which increases the operability of the embodiment of the present application.
  • A fifth aspect of the embodiments of the present application provides a communication apparatus that implements the behavior of the first network element or the second network element in the foregoing method designs.
  • The function can be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the functions described above; a module can be software and/or hardware.
  • In a possible design, the communication device includes a storage unit, a processing unit, and a communication unit.
  • The storage unit stores the program code and data required by the communication device; the processing unit calls the program code to control and manage the operation of the communication device; and the communication unit supports communication between the communication device and other devices.
  • In another possible design, the communication device includes a processor, a communication interface, a memory, and a bus, where the communication interface, the processor, and the memory are connected to one another through the bus; the communication interface supports communication between the communication device and other devices; the memory stores the program code and data required by the communication device; and the processor calls the program code to support the first network element or the second network element in performing the corresponding functions of the foregoing method.
  • A sixth aspect of an embodiment of the present application provides an apparatus comprising a processor and a memory, where the memory is used to store instructions.
  • The processor executes the instructions to enable the first network element or the second network element to perform the corresponding functions of the foregoing method, such as transmitting or processing the data and/or information involved in the method.
  • The apparatus may be a chip, or may include a chip together with other discrete devices.
• a seventh aspect of the embodiments of the present application provides a system, where the system includes the first network element of the foregoing first aspect and the second network element of the foregoing second aspect; or the first network element of the foregoing third aspect and the second network element of the foregoing fourth aspect.
  • An eighth aspect of the embodiments of the present application provides a computer readable storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the methods described in the above aspects.
  • a ninth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the methods described in the above aspects.
  • the embodiment of the present application has the following advantages: the first network element receives the installation information of the algorithm model from the second network element, where the first network element is a user plane network element UPF or a base station, and the second network element The element is used to train the algorithm model; the first network element installs the algorithm model according to the installation information of the algorithm model; when the algorithm model is successfully installed, the first network element collects data, and the data is predicted according to the algorithm model.
• the training step in machine learning is performed by the second network element, while the first network element installs the algorithm model and predicts data received by the first network element according to the algorithm model, implementing separation of the training and prediction logical functions within the network architecture. After the data is collected by the first network element, the data can be predicted locally according to the installed algorithm model, which reduces the interaction delay and avoids the prior-art problem that an increased interaction delay affects the business experience.
  • Figure 2A is a schematic diagram of one possible linear regression
  • Figure 2B is a schematic diagram of one possible logistic regression
  • Figure 2C is a schematic diagram of one possible CART classification
• Figure 2D is a schematic diagram of a possible random forest and decision tree
• Figure 2E is a schematic diagram of a possible SVM classification
• Figure 2F is a schematic diagram of a possible Bayesian classification
• Figure 2G is a schematic diagram of a possible neural network model structure
• Figure 2H is a possible system architecture diagram provided by the present application.
  • FIG. 3 is a flowchart of a possible machine learning-based data processing method according to an embodiment of the present application.
  • FIG. 4 is a flowchart of another possible machine learning-based data processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an embodiment of a possible first network element according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of an embodiment of a possible second network element according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic block diagram of a communication apparatus according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a communication apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a system according to an embodiment of the present application.
  • machine learning can include the following steps:
  • Step 1 Data collection: refers to obtaining various types of raw data from the object that generates the data set, and storing it in a database or memory for training or prediction.
  • Step 2 Feature Engineering: Feature engineering (FE) is a process unique to machine learning, and its core part includes feature processing.
  • the feature processing includes preprocessing the data, such as feature selection (FS) and dimensionality reduction. Since there are a large number of redundant, uncorrelated, and noisy features in the original data, the original data should be cleaned, deduplicated, denoised, and the like. Through pre-processing, the original data is simply structured, and the characteristics and correlation analysis of the training data are extracted.
  • Feature selection is an effective means of reducing redundant features and irrelevant features.
  • Step 3 Training model: After the training data is prepared, select the appropriate algorithm, features and labels. The selected features and labels, as well as the prepared training data, are input to the algorithm and then executed by the computer. Common algorithms include logistic regression, decision trees, support vector machines (SVMs), and so on. Each algorithm may also include multiple derived algorithm types. A machine learning model is generated after the training of a single training algorithm.
  • Step 4 Prediction: The sample data that needs to be predicted is input into the trained machine learning model, and the predicted value of the model output can be obtained. It should be noted that, based on different algorithm problems, the predicted value of the output may be a real number or a classification result. This predicted value is what is predicted by machine learning.
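The four steps above can be sketched end to end with a toy one-feature linear model; the raw data, least-squares fit, and sample values below are purely illustrative and not from the patent.

```python
# Illustrative sketch of the four machine-learning steps: data collection,
# feature engineering (cleaning), training, and prediction, using a
# one-feature linear model fitted by ordinary least squares.

# Step 1 - Data collection: raw (feature, label) pairs.
raw = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (None, 5.0)]

# Step 2 - Feature engineering: drop records with missing features,
# a simple form of the preprocessing (cleaning) described above.
data = [(x, y) for x, y in raw if x is not None]

# Step 3 - Training: solve for slope w and intercept b with least squares.
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
w = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
b = my - w * mx

# Step 4 - Prediction: apply the trained model to a new sample.
def predict(x):
    return w * x + b

print(round(predict(5.0), 1))
```

The predicted value here is a real number, matching the regression case described in Step 4; for a classification problem the output would instead be a class label.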
  • the most important process in the process of completing the application of the machine learning algorithm is the feature engineering of the data, the selection of the algorithm, and the beneficial effects brought about by the analysis of the predicted results.
• the structure of the intermediate processes and models in the algorithm training and prediction process can be treated as a black box, while for a model-driven architecture design, the prediction models generated by training with different machine learning algorithms need to undergo materialization, merging, and abstraction. Several common machine learning algorithms are briefly described below.
  • Linear regression is a method of modeling the relationship between a continuous dependent variable y and one or more predictors x.
• See Fig. 2A, which is a schematic diagram of a possible linear regression.
  • the purpose of linear regression is to predict the numerical value of the target value.
  • the purpose of training the regression algorithm model is to solve the regression coefficients. Once these coefficients are available, the target values can be predicted based on the input of the new sampled feature vectors. For example, multiplying the regression coefficient by the input feature vector value, and then summing the products is to find the inner product of the two, and the result of the summation is the predicted value.
• the predictive model can be expressed by the following formula: y = w^T · x + b
  • the training obtains the regression coefficient w T and the constant term b, wherein the constant term b is also generated by training.
• the input feature x must be linearly related to the result, but the actual situation is often not the case, so the input feature needs to be transformed, such as into 1/x, x², lg(x), etc., such that the transformed eigenvalues are linearly related to the results.
  • the information that its model consists of includes the following:
  • Model input feature vector X
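The inner-product prediction described above can be sketched as follows; the coefficient and feature values are hypothetical, standing in for the regression coefficients w^T and constant term b produced by training.

```python
# Minimal sketch of linear-regression prediction: multiply the regression
# coefficients by the input feature vector values, sum the products (the
# inner product), and add the constant term b.

def linear_predict(w, x, b):
    """Inner product of coefficient vector w and feature vector x, plus b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [0.5, 2.0]   # regression coefficients (hypothetical trained values)
b = 1.0          # constant term (hypothetical)
x = [4.0, 3.0]   # new sampled feature vector

y = linear_predict(w, x, b)   # 0.5*4 + 2*3 + 1 = 9.0
```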
• If the dependent variable is not continuous but categorical, the logistic link function can be used to convert the linear regression into a logistic regression.
• For example, the dependent variable may be binary (0/1, True/False, Yes/No).
• Logistic regression is a classification method, and the final output of the model must be a discrete type.
• The linear regression output is sent to a step function, and the step function then outputs the binary or multi-class value.
• See FIG. 2B, which is a schematic diagram of a possible logistic regression.
  • the curve can be used as a boundary line, the positive sample is above the boundary line, and the negative sample is below the boundary line.
• the predictive model can be represented by the following Sigmoid function: σ(z) = 1 / (1 + e^(−z))
  • Model input feature vector X indication
  • Model output classification result Z
  • Nonlinear functions Sigmoid, step function, logarithmic equation, etc.
• the step function separates the output values at an interval threshold.
• for example, the threshold may be 0.5, that is, values greater than 0.5 map to 1 and values less than 0.5 map to 0.
• the output may also be multi-class, in which case there may correspondingly be more than one threshold value.
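A minimal sketch of the Sigmoid-plus-threshold decision step described above (the linear output z and the 0.5 threshold follow the text; everything else is illustrative):

```python
import math

# The linear-regression output z is passed through the Sigmoid function
# and compared against a threshold to yield a binary class.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def classify(z, threshold=0.5):
    # sigmoid(z) > 0.5 maps to class 1, otherwise class 0
    return 1 if sigmoid(z) > threshold else 0

print(classify(2.0))    # sigmoid(2) ~ 0.88 -> class 1
print(classify(-2.0))   # sigmoid(-2) ~ 0.12 -> class 0
```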
• the regression algorithm includes at least one of other regression methods such as least squares, stepwise regression, and ridge regression, which will not be repeated here.
  • the decision tree is trained based on the information entropy or Gini coefficient of the input feature data to determine the classification feature priority and classification judgment method.
• the decision tree needs to be pruned to reduce over-fitting of the model prediction and the complexity of the model.
• There are two main types of decision trees: the classification tree (the output is the sample's class label) and the regression tree (the output is a real number). The classification and regression tree (CART) encompasses both types, as shown in Figure 2C, a schematic diagram of a possible CART classification. It can be seen that CART is a binary tree and each non-leaf node has two child nodes; therefore, for any such subtree, the number of leaf nodes is one more than the number of non-leaf nodes. The information formed by the model based on the CART algorithm may include:
  • Model input feature vector X
  • Model output classification result Z
• Model description: a tree classification structure, such as {ROOT: {Node: {Leaf}}}. For reference, consider determining whether the UE switches to a target cell based on the reference signal received power (RSRP) and the signal-to-noise ratio (SNR) of the target cell during UE mobility.
• {'RSRP>-110': {0: 'no', 1: {'SNR>10': {0: 'no', 1: 'yes'}}}}; that is, first determine whether RSRP is greater than -110dBm: if the decision is 0 (i.e., not greater than -110dBm), do not switch to the target cell ('no'); if the decision is 1 (i.e., greater than -110dBm), further determine whether the SNR is greater than 10dB: if the decision is 0 (i.e., not greater than 10dB), do not switch to the target cell ('no'); if the decision is 1 (i.e., greater than 10dB), switch to the target cell ('yes').
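The nested-dictionary tree above can be evaluated with a small recursive walk; this sketch mirrors the RSRP/SNR handover example (the thresholds come from the text, the traversal code and sample values are illustrative).

```python
# Walk a CART-style nested dictionary: each non-leaf node holds one binary
# question ('feature>threshold') with branches 0 (no) and 1 (yes); leaves
# hold the decision string.

tree = {'RSRP>-110': {0: 'no', 1: {'SNR>10': {0: 'no', 1: 'yes'}}}}

def decide(node, sample):
    if not isinstance(node, dict):            # reached a leaf: 'yes' / 'no'
        return node
    (question, branches), = node.items()      # exactly one question per node
    feature, threshold = question.split('>', 1)
    outcome = 1 if sample[feature] > float(threshold) else 0
    return decide(branches[outcome], sample)

# RSRP of -100 dBm (> -110) and SNR of 12 dB (> 10): switch to target cell.
print(decide(tree, {'RSRP': -100.0, 'SNR': 12.0}))   # 'yes'
```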
• the decision tree may also be extended to a random forest, which is composed of multiple classification and regression trees (CART), as shown in FIG. 2D, a schematic diagram of a possible random forest and decision tree. A random forest is a classifier that uses multiple trees to sample, train on, and predict for the samples, and the trees are not associated with one another.
• the training set each tree uses is sampled with replacement from the total training set, so some samples in the total training set may appear multiple times in the training set of a given tree, or may never appear in the training set of that tree.
• the features each tree uses are randomly selected from all the features. The randomness in both sample and feature selection gives each tree a certain independence, which can effectively solve the over-fitting problem of a single decision tree.
  • the process of prediction is to use the multiple trees in the forest to predict separately. Each tree will produce corresponding classification values, and then the classification results of multiple trees will be jointly determined to obtain the final classification result.
• the mathematical model of the forest of decision trees can be expressed as Z = vote(h₁(X), h₂(X), …, hₙ(X)), where hᵢ denotes the i-th decision tree. The description of the model can be summarized into three parts: the multiple decision trees with their corresponding features and methods, a description of the possible classification results, and the final classification selection method. The information it constitutes can include:
  • Model input feature vector X
  • Model output classification result Z
• Model description: several decision trees, that is, the model includes several of the above decision trees, which are not described here again.
• Voting methods include absolute majority and relative majority, where absolute majority requires the winning result to receive more than half (i.e., a fraction of 0.5) of the votes, or some other configured fraction.
• For example, if a random forest model consists of five trees whose predicted results are 1, 1, 1, 3, 2, then under absolute majority the prediction result is 1; relative majority means the minority obeys the majority, for example, if a random forest model consists of three trees whose predicted results are 1, 2, 2, then the prediction result is 2.
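The two voting rules above can be sketched directly; the vote lists reproduce the examples in the text, and the half-of-all-trees fraction for absolute majority follows the description.

```python
from collections import Counter

# Relative majority: take the most common class among the trees' votes.
# Absolute majority: additionally require the winner to exceed a fraction
# (e.g. one half) of all trees; otherwise no decision is returned.

def relative_majority(votes):
    return Counter(votes).most_common(1)[0][0]

def absolute_majority(votes, fraction=0.5):
    cls, count = Counter(votes).most_common(1)[0]
    return cls if count > fraction * len(votes) else None

# Five trees predicting 1, 1, 1, 3, 2: class 1 holds more than half.
print(absolute_majority([1, 1, 1, 3, 2]))   # 1
# Three trees predicting 1, 2, 2: the minority obeys the majority.
print(relative_majority([1, 2, 2]))         # 2
```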
  • SVM is a supervised learning model and related learning algorithms for analyzing data in classification and regression analysis.
• When samples of different categories cannot be separated by a linear model, it is necessary to find a spatial hyperplane to separate them.
• See FIG. 2E, which is a possible SVM classification diagram.
  • the SVM also requires high local tolerance to the sample.
  • a key operation of the SVM on the sample is to implicitly map the low-dimensional feature data into the high-dimensional feature space, and this mapping can make the linearly indivisible two types of points in the low-dimensional space linearly separable.
  • the method is called kernel trick, and the spatial mapping function used is called a kernel function.
• the kernel function is not only suitable for use in support vector machines. Taking the common radial basis kernel function, also called the Gaussian kernel function, as an example:
• k(x, y) = exp(−‖x − y‖² / (2σ²))
• where x is any point in space, y is the center of the kernel function, and σ is the width parameter of the kernel function.
• In addition to the Gaussian kernel, there are linear kernels, polynomial kernels, Laplacian kernels, sigmoid kernel functions, etc., which are not limited herein.
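The Gaussian (radial basis) kernel given above can be sketched as follows; the points and width parameter are illustrative values.

```python
import math

# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)), where x is any point,
# y is the center of the kernel function, and sigma is the width parameter.

def rbf_kernel(x, y, sigma):
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0], sigma=1.0)   # identical points -> 1.0
k_far = rbf_kernel([0.0, 0.0], [3.0, 4.0], sigma=1.0)    # distant points -> near 0
```

The kernel value decays with distance from the center, which is what makes the implicit mapping to a higher-dimensional space possible without computing that space explicitly (the kernel trick named above).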
• A very obvious benefit of the SVM is that only a small amount of data is needed to predict accurately and find a globally optimal solution.
  • the information constituting the model based on the SVM algorithm may include:
  • Model input feature vector X
  • Kernel function method k, for example, a so-called radial basis function (RBF);
• Kernel function parameters, for example, polynomial parameters, Gaussian kernel bandwidth, etc., which need to match the kernel function method;
• the classification method for the predicted value, for example, the sign method.
  • the Bayesian classifier is a probabilistic model, as shown in Fig. 2F, which is a possible Bayesian classification diagram.
• Class1 and class2 can be understood as two categories, for example, whether a message belongs to a certain type of service, classified as: yes and no.
  • the theoretical basis is Bayesian decision theory, which is the basic method for implementing decision making under the framework of probability.
• According to the Bayesian formula P(A|B) = P(B|A)·P(A) / P(B):
• P(A|B) is the posterior probability;
• P(B|A) is the probability of occurrence of B under the condition that the sample belongs to class A, called the class-conditional probability density of B;
• P(A) is the prior probability of the studied class A;
• P(B) is the probability density of feature vector B.
  • P(X) is the same for all categories Y, and the Bayesian formula above can be approximated according to the Markov model.
• the information of the model composition predicted by the input feature vector includes: the input layer features and feature methods, the P(Y) classification-type prior probability list, and the P(Xi|Y) class-conditional probability list.
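Prediction with the model composition just described can be sketched as follows: a prior list P(Y) and conditional lists P(Xi|Y), with the independence approximation letting the posterior be scored as P(Y)·∏ᵢ P(Xi|Y). All probability values below are illustrative.

```python
# Toy naive-Bayes scoring from a prior table and per-feature conditional
# tables; the class with the largest unnormalized posterior wins
# (P(X) is the same for all classes, so it can be dropped).

prior = {'class1': 0.6, 'class2': 0.4}                      # P(Y)
cond = {                                                    # P(Xi | Y)
    'class1': [{'hi': 0.7, 'lo': 0.3}, {'hi': 0.2, 'lo': 0.8}],
    'class2': [{'hi': 0.1, 'lo': 0.9}, {'hi': 0.6, 'lo': 0.4}],
}

def naive_bayes(features):
    scores = {}
    for y, p in prior.items():
        for i, value in enumerate(features):
            p *= cond[y][i][value]
        scores[y] = p
    return max(scores, key=scores.get)

print(naive_bayes(['hi', 'lo']))   # 'class1': 0.6*0.7*0.8 > 0.4*0.1*0.4
```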
  • 2G is a schematic diagram of a possible neural network model structure, wherein a complete neural network model includes an input layer, an output layer, and one or more hidden layers.
  • the neural network model is a multi-layered two-class perceptron, and a single-layer two-class perceptron model is similar to a regression model.
  • the cells of the input layer are the inputs of the hidden layer cells, and the output of the hidden layer cells is the input of the output layer cells.
  • the connection between the two perceptrons has a weight, and each perceptron of the tth layer is associated with each perceptron of the t-1th layer. Of course, you can also set the weight to 0, thus essentially canceling the connection.
  • the classification evaluation model Logit has various methods, such as Sigmoid, Softplus and ReLU, and the same neural network layer can be activated and classified based on different evaluation model methods.
• the most common neural network training process is to propagate the error backward from the output, gradually reducing the error by adjusting the neuron weights, i.e., the error back propagation (BP) algorithm.
  • the principle of BP can be understood as: using the error after output to estimate the error of the layer before the output layer, and then using this layer of error to estimate the error of the previous layer, thus obtaining the error estimates of all layers.
• the error estimate here can be understood as a kind of partial derivative; that is, the connection weight of each layer is adjusted according to the partial derivative, and the output error is recalculated with the adjusted connection weights, until the error of the output meets the requirements or the number of iterations exceeds the set value.
• Suppose the network structure is an input layer containing i neurons, a hidden layer containing j neurons, and an output layer containing k neurons. The input-layer units x_i act on the output layer through the hidden-layer units, and a non-linear transformation produces the output signal z_k. Each sample used for network training includes an input vector X and a desired output t. The deviation between the network output value Y and the desired output value t is used to adjust the connection weights w_ij between the input-layer and hidden-layer units, the connection weights T_jk between the hidden-layer and output-layer units, and the neural unit thresholds, so that the error decreases along the gradient direction. After repeated learning and training, the network parameters (weights and thresholds) corresponding to the minimum error are determined and the training stops. At this point, for input information of similar samples, the trained neural network can perform the non-linear conversion with the smallest output error.
• composition information of the neural network-based model includes: the number of layers and the number j of neurons in each layer, and the activation function used by each layer, such as the sigmoid function.
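The layered structure described above can be sketched with a tiny forward pass: each layer's outputs become the next layer's inputs, with one connection weight per perceptron pair and a Sigmoid activation. The weight and bias values are illustrative, not trained values.

```python
import math

# Forward pass of a 2-input, 2-hidden-unit, 1-output network.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # weights[j][i]: connection weight from input unit i to this layer's unit j
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = layer(x, w_hidden, b_hidden)   # input layer -> hidden layer
    return layer(hidden, w_out, b_out)      # hidden layer -> output layer

y = forward([1.0, 0.5],
            w_hidden=[[0.4, -0.6], [0.3, 0.8]], b_hidden=[0.0, 0.1],
            w_out=[[1.2, -0.7]], b_out=[0.2])
```

BP training would then compare y against the desired output t and adjust the weights along the gradient, as described above; only the forward direction is shown here.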
• In the prior art, the training and prediction in machine learning are integrated in one network element, such as the NWDA.
• For each prediction, the NWDA receives the data (feature vector) required for the prediction from the UPF; the result is then predicted by the NWDA and the strategy is generated. The interaction delay generated in this process is not suitable for services with high real-time requirements and affects their business experience.
• Therefore, the present application provides a data processing method based on machine learning: the training step in machine learning is performed by the second network element, the algorithm model is installed on the first network element, and the first network element collects the features and performs the prediction locally, which avoids the prior-art problem that the increased interaction delay affects the business experience.
• the machine learning process can be further decomposed into a data service function (DSF), an analysis and modeling function (A&MF), a model execution function (MEF), and an adapted policy function (APF); these functions may also be named in other ways, which is not limited herein.
• these functions can be deployed on the network elements of the network, such as a centralized unit (CU), a distributed unit (DU), and a gNB in a 5G network, or deployed in the LTE eNodeB, UMTS RNC, or NodeB; the above functions can also be deployed independently in a network element entity, which can be called RAN data analysis (RANDA), or another name.
  • the training and prediction in machine learning are respectively performed by different network elements, and can be separately described based on the following two situations:
• Case A: The functions other than A&MF among the above four (i.e., DSF, MEF, and APF) are deployed in a separate network entity.
  • Case B The above four functions (ie, DSF, A&MF, MEF, and APF) are abstractly decomposed and deployed separately on each layer of the network element of the network.
  • FIG. 3 is a possible machine learning-based data processing method provided on the basis of Case A according to an embodiment of the present application, including:
  • the second network element obtains an algorithm model of training completion.
  • the network element that performs the prediction function is referred to as the first network element
  • the network element that performs the training function is referred to as the second network element.
  • the second network element trains the algorithm model based on the requirements of the actual intelligent service, selecting appropriate algorithms, features, and tag data.
• the second network element starts from the goal to be achieved and uses it to find a suitable algorithm. For example, if the goal to be achieved is to predict the value of a target variable, then a supervised learning algorithm can be selected.
• Feature selection is a process of selecting an optimal subset from the set of original features. In this process, the superiority of a given subset of features is measured by a specific evaluation criterion; through feature selection, redundant and irrelevant features in the original feature set are removed, while useful features are retained.
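Feature selection with a specific evaluation criterion can be sketched with a toy filter; the criterion used here (sample variance with a fixed threshold) and all data values are hypothetical illustrations, not the patent's method.

```python
# Drop features whose values barely vary across samples: a near-constant
# feature carries little information and is treated as irrelevant, while
# the remaining useful subset is retained.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def select_features(samples, names, min_variance=0.01):
    # samples: list of feature vectors, one value per feature name
    keep = []
    for i, name in enumerate(names):
        if variance([s[i] for s in samples]) >= min_variance:
            keep.append(name)
    return keep

samples = [[1.0, 5.0], [2.0, 5.0], [3.0, 5.0]]
selected = select_features(samples, ['load', 'constant_flag'])
# 'constant_flag' never changes, so only 'load' is retained
```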
• the second network element may be a RANDA; a CUDA deployed in the CU (which may be understood as the name of the RANDA deployed on the CU); an OSSDA deployed in an operation support system (OSS) (which may be understood as the name of the RANDA deployed on the OSS); a DUDA deployed in the DU (which may be understood as the name of the RANDA deployed on the DU); or a gNBDA deployed in the gNB (which may be understood as the RANDA deployed on the gNB).
  • the second network element may be an NWDA as an independently deployed network element.
• the first network element can be a base station or a UPF.
  • the first network element may be a UPF
  • the first network element may be a base station, and thus is not limited herein.
  • the second network element selects an appropriate algorithm and selects appropriate features and tags according to actual business needs.
  • the selected feature and label, and the prepared training data are input to the algorithm and then executed to obtain the trained algorithm model.
• The neural network algorithm is taken as an example to illustrate the approximate process of model training: since the neural network is used for a supervised learning task, a large amount of training data is needed to train the model. Therefore, the selected tag data is used as the training data for the neural network algorithm model before training.
  • the neural network algorithm model is trained based on the training data, wherein the neural network algorithm model may include a generator and a discriminator.
  • the idea of confrontation training can be used to alternately train the generator and the discriminator, and then the data to be predicted is input to the finally obtained generator to generate a corresponding output result.
  • the generator is a probability generation model whose goal is to generate samples that are consistent with the training data distribution.
  • the discriminator is a classifier whose goal is to accurately determine whether a sample is from training data or from a generator.
  • the generator and the discriminator form a "confrontation", and the generator is continuously optimized so that the discriminator cannot distinguish the difference between the generated sample and the training data sample, and the discriminator is continuously optimized to distinguish the difference.
  • the generator and discriminator are alternately trained to achieve equilibrium: the generator can generate samples that are fully consistent with the training data distribution (so that the discriminator cannot resolve), and the discriminator can sensitively resolve any samples that do not match the training data distribution.
• the discriminator performs model training according to the training data samples and the generated samples, discriminates the attribution rate of each generated sample produced by the generator through the trained discriminator model, and sends the discrimination result to the generator, so that the generator performs model training based on the newly generated samples and the discrimination results given by the discriminator.
• This adversarial training loop is performed repeatedly, improving the generator's ability to produce generated samples and the discriminator's ability to discriminate the attribution rate of generated samples. That is, in the adversarial training, the discriminator and generator are alternately trained until equilibrium is reached. When the capabilities of the generator and the discriminator have been trained to a certain extent, the attribution rate with which the discriminator discriminates the samples produced by the generator tends to be stable, and the training of the generator model and the discriminator model can be stopped.
• When the discriminator discriminates the attribution rate of the samples according to all the training data samples and the generated samples, and the variation of the discrimination results obtained by the discriminator is less than a preset threshold, the training of the neural network algorithm model may be ended.
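The alternation and stopping condition described above can be illustrated with a highly simplified toy loop. All numbers and update rules here are stand-ins, not the patent's method: the "generator" is a single parameter pulled toward the training-data distribution, the "discriminator" is a midpoint boundary, and training stops once the discriminator's output changes by less than a preset threshold.

```python
# Toy alternating-training loop: discriminator and generator are updated in
# turn until the discriminator's output stabilizes (change < threshold).

data_mean = 5.0        # stands in for the training-data distribution
g = 0.0                # generator output (initially far from the data)
prev_boundary = None
threshold = 1e-3

for step in range(1000):
    boundary = (data_mean + g) / 2.0    # "discriminator" update (toy rule)
    g += 0.1 * (data_mean - g)          # "generator" update toward real data
    if prev_boundary is not None and abs(boundary - prev_boundary) < threshold:
        break                           # discriminator output has stabilized
    prev_boundary = boundary
```

At the break point the generator output is close to the data distribution, mirroring the equilibrium state at which the text says training may be stopped.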
  • the second network element sends the installation information of the algorithm model to the first network element.
  • the installation information of the algorithm model is sent to the first network element through a communication interface with the first network element.
  • the installation information of the algorithm model may be carried in the first message to be sent to the first network element.
• the installation information of the algorithm model includes: a unique identifier ID of the algorithm model; an algorithm type indication of the algorithm model (e.g., indicating that the algorithm type of the algorithm model is linear regression or a neural network); structural parameters of the algorithm model (e.g., for linear regression, the structural parameters of the model may include a regression value Z, regression coefficients, a constant term, a step function, etc.); and an installation instruction of the algorithm model (for instructing the first network element to install the algorithm model).
• In a specific application, the installation information of the algorithm model may not include the algorithm type indication of the algorithm model; that is, the first network element may determine the algorithm type of the algorithm model through the structural parameters of the algorithm model, so the algorithm type indication of the algorithm model may be optional, which is not limited herein.
• the installation information of the algorithm model may further include policy index information, where the policy index information includes each prediction result of the algorithm model and the identifier information of the policy corresponding to each prediction result (for example, the identifier information of the policy corresponding to prediction result 1 is ID1, and the policy corresponding to ID1 is to set a QoS parameter value).
  • the second network element may also send the collection information to the first network element by using the first message, so that the first network element subscribes to the feature vector according to the collected information as an input of the algorithm model.
  • the collection information of the feature vector includes at least an identifier ID of the feature to be collected.
  • the feature vector is a set of feature values of the feature to be acquired.
• For example, if the features to be collected are an IP address, an APP ID, and a port number, and the corresponding feature values are 10.10.10.0, WeChat, and 21, then the feature vector is the set of feature values {10.10.10.0, WeChat, 21}.
• the collection information of the feature vector may further include a subscription period of the feature vector, such as collecting the feature vector every 3 minutes; that is, since the running parameters of the first network element may be changing all the time, a feature vector of different data is collected every subscription period and used as the prediction input of the algorithm model.
• the second network element may also send a second message to the first network element, where the second message carries the collection information; and so that the first network element can determine which algorithm model the subscribed feature vector serves, the second message also carries the unique identifier ID of the algorithm model.
  • the installation information of the collection information and the algorithm model may be included in one message and sent to the first network element, or may be decomposed into two messages and sent to the first network element. It should be noted that, if the two messages are respectively sent to the first network element, the timing at which the second network element sends the two messages may be the first message first, the second message, or the same. No restrictions are imposed.
  • the first message may be a model install message or other existing messages, which is not limited herein.
  • the first network element installs an algorithm model according to the installation information of the algorithm model.
• After receiving the first message through the communication interface with the second network element, the first network element obtains the installation information of the algorithm model included in the first message, and then installs the algorithm model according to the installation information. The installation process may be as follows: the first network element determines the algorithm type of the algorithm model, either directly from the algorithm type indication in the first message or, when the first message does not include an algorithm type indication, from the correspondence of the structural parameters of the algorithm model carried in the first message.
• For example, if the structural parameters of the algorithm model include a Sigmoid function, the first network element can determine that the algorithm type of the algorithm model is logistic regression.
• Then the structural parameters of the algorithm model are used as the model constituent parameters corresponding to the determined algorithm type, so as to complete the installation of the algorithm model.
• Take the case where the algorithm type is a linear regression algorithm as an example.
  • the first network element instantiates the structural parameters of the algorithm model as model constituent parameters into the structure of the corresponding algorithm model.
  • a feature set of a linear regression model used to control pilot power includes ⁇ RSRP, CQI, TCP Load ⁇ , the regression coefficients are ⁇ 0.45, 0.4, 0.15 ⁇ , and the constant term b is 60.
  • the first network element can instantiate the model locally after receiving the above model structure parameters.
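  • As an illustrative sketch (not part of the claimed method), the local instantiation of the pilot-power linear regression model above might look as follows; the feature names, regression coefficients, and constant term come from the example in the text, while the installation-information field names and the class structure are assumptions:

```python
# Hypothetical sketch of instantiating the received linear regression model
# locally on the first network element. Feature set, coefficients and the
# constant term follow the example above; field names are assumptions.

install_info = {
    "model_id": "model-001",                  # unique identifier ID (assumed field)
    "algorithm_type": "linear_regression",
    "features": ["RSRP", "CQI", "TCP Load"],
    "coefficients": [0.45, 0.4, 0.15],
    "constant_b": 60,
}

class LinearRegressionModel:
    """Algorithm model instantiated from structural parameters."""

    def __init__(self, coefficients, constant_b):
        self.coefficients = coefficients
        self.constant_b = constant_b

    def predict(self, feature_vector):
        # y = w1*x1 + w2*x2 + w3*x3 + b
        return sum(w * x for w, x in zip(self.coefficients, feature_vector)) + self.constant_b

model = LinearRegressionModel(install_info["coefficients"], install_info["constant_b"])
```

  • With an all-zero feature vector the instantiated model returns the constant term 60; with the vector [1, 1, 1] it returns 0.45 + 0.4 + 0.15 + 60 = 61.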
  • the first network element needs to subscribe to the feature vector according to the collection information. The subscription process
  • may include: determining, according to the collection information, whether the first network element has the capability of providing the feature values of the features to be collected.
  • the first network element determines whether it has this capability, for example, by checking whether the identifier ID of the feature to be collected is included in preset collectible feature information: if it is included, the first network element determines that it has the capability; conversely, if it is not included, the first network element determines that it does not.
  • each feature for which the first network element supports providing a feature value has a unique number; for example, the number 1A corresponds to the RSRP, the number 2A corresponds to the channel quality indicator (CQI), and the number 3A corresponds to the signal to interference plus noise ratio (SINR).
  • these numbers can serve as the preset collectible feature information: if the number corresponding to a feature to be collected is not included, providing the feature value of that feature is not supported.
  • if the first network element determines that it has the capability of providing the feature values of the features to be collected, the first network element subscribes to the feature vector successfully; correspondingly, if it determines that it does not have that capability, the subscription fails. It should be noted that, if the first network element fails to subscribe to the feature vector, it also needs to feed back a subscription failure message to the second network element, and the subscription failure message needs to carry the identification information of the features that cannot be obtained.
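  • The capability check described above can be sketched as follows; the feature numbering follows the example in the text (1A = RSRP, 2A = CQI, 3A = SINR), and the function shape and return convention are illustrative assumptions:

```python
# Minimal sketch of the feature-vector subscription capability check.
# The preset collectible feature numbering follows the example in the
# text; the function name and return shape are assumptions.

PRESET_COLLECTIBLE_FEATURES = {"1A": "RSRP", "2A": "CQI", "3A": "SINR"}

def subscribe_feature_vector(feature_ids_to_collect):
    """Return (subscribed_ok, unobtainable_feature_ids)."""
    unobtainable = [fid for fid in feature_ids_to_collect
                    if fid not in PRESET_COLLECTIBLE_FEATURES]
    if unobtainable:
        # Subscription fails: a subscription failure message carrying the
        # identification information of the unobtainable features would be
        # fed back to the second network element.
        return False, unobtainable
    return True, []
```

  • A request for only known features (e.g. 1A and 2A) succeeds; a request including an unknown number fails and reports which features cannot be obtained.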
  • when the installation information of the algorithm model and the collection information are not in one message, that is, when the first network element further receives a second message from the second network element and obtains the collection information included in the second message, the first network element
  • may subscribe to the feature vector according to the collection information in the second message; the subscription process is as described above and is not repeated here.
  • the first network element sends an installation result indication to the second network element.
  • in response to the first message, the first network element sends a first response message to the second network element through the communication interface with the second network element, where the first response message carries an installation result indication and
  • the unique identifier ID of the algorithm model.
  • when the first network element installs the algorithm model successfully, the installation result indication indicates to the second network element that the algorithm model is installed successfully; when the first network element fails to install the algorithm model, the installation result indication
  • indicates to the second network element that the installation of the algorithm model failed.
  • the first response message also carries an indication of the installation failure reason, which is used to notify the second network element of the reason for the installation failure.
  • the reason for the installation failure may be that the algorithm model is too large, or that parameters in the installation information of the algorithm model are invalid; this is not limited here.
  • the first network element may send a feature vector subscription result indication to the second network element, to indicate whether the feature vector is successfully subscribed.
  • if the installation information of the algorithm model and the collection information are carried in the first message, the corresponding first response message also carries the feature vector subscription result indication.
  • the first response message further carries identifier information of the feature that cannot be obtained.
  • in response to the second message, the first network element sends a second response message to the second network element through the communication interface with the second network element,
  • where the second response message carries the feature vector subscription result indication.
  • the second response message further carries identifier information of the feature that cannot be obtained.
  • the first response message may be a model install response message, or other existing messages, which are not limited herein.
  • the first network element predicts data according to an algorithm model.
  • the prediction function is started, that is, data is predicted according to the installed algorithm model. This includes: the first network element collects data; in actual application, the data collected by the first network element
  • may be, but is not limited to, any of the following: 1. parameters of the running state of the first network element, such as central processing unit (CPU) occupancy, memory usage, and packet sending rate; 2. packet characteristic data of the first network element, such as packet size and packet interval; 3. base station RRM/RRT related parameters, such as RSRP and CQI.
  • the data collected by the first network element is not limited here.
  • the first network element obtains the target feature vector of the data according to the identifier IDs of the features to be collected in the collection information, and inputs the target feature vector into the algorithm model to obtain the target prediction result. It should be noted that
  • the target prediction result can be a numerical value or a classification.
  • the first network element finds the identification information of the target policy corresponding to the target prediction result in the policy index information, indexes the target policy according to that identification information, and executes the target policy on the data collected by the first network element.
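  • A sketch of this prediction-to-policy step, assuming a simple policy index keyed by the prediction result's classification; all policy names, IDs, and the classification threshold are hypothetical:

```python
# Illustrative sketch: map a target prediction result to the identification
# information of a target policy via policy index information. The policy
# classes, policy IDs, and the threshold are hypothetical.

policy_index_info = {
    "low_load": "policy-1",    # identification information of each policy
    "high_load": "policy-2",
}

def classify(prediction_value):
    # The target prediction result may be a value or a classification; here
    # a value is mapped to a classification with an assumed threshold.
    return "high_load" if prediction_value > 70 else "low_load"

def select_target_policy(prediction_value):
    return policy_index_info[classify(prediction_value)]
```

  • The first network element would then execute the policy indexed by the returned identification information on the collected data.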
  • the first network element may perform other operations, such as deleting or modifying, on the algorithm model according to actual requirements.
  • the first network element performing other operations on the installed algorithm model may include: the first network element receiving a third message from the second network element through the communication interface with the second network element, where the third message carries at least the unique identifier ID of the algorithm model and is used to instruct the first network element to perform a target operation on the algorithm model.
  • the target operation may include one of the following: model modification (modify), model deletion (delete), model activation (active), or model deactivation (de-active).
  • depending on the target operation, the information carried in the third message may also differ.
  • the third message may carry only the unique identifier ID of the algorithm model.
  • for a modification operation, the third message further includes the installation information of the modified algorithm model, so that the first network element can modify the algorithm model according to that installation information.
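  • The target operations carried by the third message could be dispatched as in the following sketch; the message field names and the in-memory model store are illustrative assumptions:

```python
# Hypothetical dispatcher for the target operations (modify / delete /
# active / de-active) instructed by the third message. Message field names
# and the model store are illustrative assumptions.

installed_models = {}  # unique identifier ID -> {"install_info": ..., "active": bool}

def handle_third_message(msg):
    model_id = msg["model_id"]        # unique identifier ID of the algorithm model
    operation = msg["operation"]
    if operation == "delete":
        installed_models.pop(model_id, None)
    elif operation == "active":
        installed_models[model_id]["active"] = True
    elif operation == "de-active":
        installed_models[model_id]["active"] = False
    elif operation == "modify":
        # for a modification, the third message further includes the
        # installation information of the modified algorithm model
        installed_models[model_id]["install_info"] = msg["installation_info"]
```

  • Only the delete and modify branches need information beyond the model's unique identifier ID, matching the message contents described above.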
  • in this embodiment, the training step of machine learning is performed by the second network element, while the first network element installs the algorithm model and predicts the data it collects according to the algorithm model, thereby separating
  • the model training and prediction logical functions in the network architecture.
  • after collecting data, the first network element can perform prediction according to the installed algorithm model, which reduces the interaction delay and solves the prior-art problem
  • that increased interaction delay affects the service experience.
  • the process of machine learning is abstracted into four logical function categories, DSF, A&MF, MEF, and APF, and each distributed unit can deploy these four types of logical functions;
  • the network element that performs the A&MF function is called the second network element,
  • the network element that performs the MEF is called the first network element,
  • the network element that performs the DSF is called the third network element,
  • and the network element that performs the APF is called the fourth network element.
  • FIG. 4 shows a possible machine learning-based data processing procedure for this case according to the embodiment of the present application.
  • the second network element obtains a target algorithm model that is completed by training.
  • the second network element sends the installation information of the target algorithm model to the first network element.
  • the first network element installs a target algorithm model according to installation information of the target algorithm model.
  • the steps 401 to 403 are similar to the steps 301 to 303 in FIG. 3, and details are not described herein again.
  • the first network element sends the collection information to the third network element.
  • the second network element may carry the collection information in the first message and send it to the first network element, or the second network element may send a second message to the first network element,
  • where the second message carries the collection information and the unique identifier ID of the target algorithm model.
  • the first network element receives and decodes the first message, extracts the collection information, and carries it in a separate third message (e.g., a feature subscribe message or another existing message); the first network element then sends the third message to the third network element through the communication interface with the third network element, so that the third network element obtains the collection information from the third message.
  • since the algorithm model may include multiple models in practical applications, the third network element may need to provide feature vectors as model input for multiple models; to make it easy for the third network element to know which algorithm model a feature vector subscription is for,
  • at least one model in the algorithm model is referred to as a target algorithm model, and the third message therefore further includes the unique identifier ID of the target algorithm model.
  • the first network element may forward the received second message to the third network element by using a communication interface with the third network element.
  • alternatively, the second network element may directly send the collection information to the third network element:
  • the second network element sends a fourth message to the third network element, where the fourth message carries the collection information and the unique identifier ID of the target algorithm model.
  • the manner in which the third network element receives the collection information for the feature vector is therefore not limited here.
  • the third network element sends a feature vector subscription result indication to the first network element.
  • the third network element determines whether it has the capability of providing the feature values of the features to be collected. In this embodiment, the manner in which the third network element makes this determination
  • is similar to the manner in which the first network element does so in step 303 of FIG. 3, and is not described here again.
  • the third network element sends a feature vector subscription result indication to the first network element, to indicate to the first network element whether the feature vector subscription succeeded. It should be noted that, if the collection information was sent by the first network element to the third network element in the third message, the third network element may send the feature vector subscription result indication to the first
  • network element in a third response message, and the third response message also carries the unique identifier ID of the target algorithm model.
  • the third response message further carries the identification information of the features that cannot be obtained.
  • the third network element sends a fourth response message to the second network element, where the fourth response message carries the feature vector subscription result indication.
  • the fourth response message further carries the identification information of the unobtainable feature.
  • the first network element sends an installation result indication to the second network element.
  • step 406 is similar to step 304 in FIG. 3, and details are not described herein again.
  • the third network element sends the target feature vector to the first network element.
  • the third network element collects the target data from the first network element and obtains the feature values of the features to be collected from the target data, thereby obtaining the target feature vector. After obtaining the target feature vector, the third network element may send it to the first network element in a feature vector feedback message, where the feature vector feedback message may be a feature report message or another existing message, and the feature vector feedback message also carries the unique identifier ID of the target algorithm model.
  • the third network element sends the feature vector feedback message to the first network element once every subscription period.
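  • How the third network element might assemble the feature vector feedback message from collected target data is sketched below; the feature IDs follow the earlier numbering example, and the message fields and sample values are assumptions:

```python
# Sketch of forming the target feature vector from collected target data,
# keyed by the identifier IDs of the features to be collected, and packing
# it into a feature vector feedback (feature report) message. Field names
# and the sample measurement values are illustrative assumptions.

subscribed_feature_ids = ["1A", "2A"]   # e.g. RSRP and CQI per the earlier numbering

def build_feature_vector(target_data, feature_ids):
    # Select the feature values in the subscribed order.
    return [target_data[fid] for fid in feature_ids]

collected = {"1A": -95, "2A": 12}       # hypothetical measurements

feature_report = {
    "model_id": "model-001",            # unique identifier ID of the target algorithm model
    "feature_vector": build_feature_vector(collected, subscribed_feature_ids),
}
```

  • The message carries both the target feature vector and the model's unique identifier ID, so the first network element can index the correct target algorithm model on receipt.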
  • the first network element performs prediction according to a target algorithm model.
  • after receiving the feature vector feedback message, the first network element indexes the target algorithm model to use for prediction according to the identifier ID of the target algorithm model in the feature vector feedback message, and inputs the target feature vector from the feature vector feedback message into the target algorithm model to obtain the corresponding target prediction result.
  • the target prediction result may be a value, such as a value in a continuous interval or a value in a discrete interval.
  • the first network element sends a target prediction result to the fourth network element.
  • the target prediction result is sent to the fourth network element through the communication interface with the fourth network element; the target prediction result may be carried in a fifth message
  • sent by the first network element to the fourth network element, where the fifth message may be a prediction indication message or another existing message, which is not limited here.
  • the fifth message may further carry the unique identifier ID of the target algorithm model and the target policy index information corresponding to the target algorithm model, so that the fourth network element can determine, according to the fifth message, the target policy corresponding to the target prediction result.
  • the fourth network element determines a target policy.
  • after receiving the fifth message through the communication interface with the first network element, the fourth network element decodes it to obtain the unique identifier ID of the target algorithm model, the target prediction result, and the target policy index information carried by the fifth message,
  • and then finds, in the target policy index information, the identification information of the target policy corresponding to the target prediction result; that is, the fourth network element determines the target policy.
  • the fourth network element may further determine whether the target policy is adapted to the corresponding predicted data. For example, in actual application, a base station handover depends not only on the result predicted by the model but also on the actual running state of the network, such as whether congestion occurs; if the target policy is not adapted, it needs to be re-determined.
  • the fourth network element sends a fifth feedback message to the first network element, where the fifth feedback message may be a prediction response message or another existing message; the fifth feedback message is used
  • to feed back to the first network element the target policy corresponding to the target prediction result, and carries the identification information of the target policy, so that the first network element can execute the target policy on the target data.
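  • The fourth network element's policy determination, including the adaptation check against the actual network state, might look like this sketch; the fallback rule and all identifiers are assumptions:

```python
# Hypothetical sketch of the fourth network element determining the target
# policy: look up the policy for the target prediction result, then check
# whether it is adapted to the actual running state of the network (e.g.
# congestion). The fallback rule and all identifiers are assumptions.

def determine_target_policy(prediction_result, policy_index_info, network_congested):
    policy_id = policy_index_info.get(prediction_result)
    # If the network is congested, the indexed policy may not be adapted
    # to the predicted data, so the target policy is re-determined here by
    # falling back to an assumed default policy.
    if network_congested:
        return policy_index_info.get("default")
    return policy_id
```

  • The returned identification information would then be carried back to the first network element in the fifth feedback message.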
  • in addition, the logical functions are separated into four types that can be deployed on different physical devices as needed, which increases the flexibility of the network; functions that are not needed may be left undeployed, saving network resources.
  • The machine learning-based data processing method in the embodiment of the present application is described above.
  • The following describes the network element in the embodiment of the present application. Referring to FIG. 5, in an embodiment of the network element in the embodiment of the present application, the network element may be used
  • to perform the operations of the first network element in the foregoing method embodiment, where the network element includes:
  • the first transceiver unit 501 is configured to receive installation information of an algorithm model from a second network element, where the second network element is used to train the algorithm model;
  • the installation unit 502 is configured to install an algorithm model according to the installation information of the algorithm model received by the transceiver unit;
  • the collecting unit 503 is configured to collect data
  • the prediction unit 504 is configured to predict, according to the algorithm model, the data collected by the collecting unit 503 after the installation unit 502 installs the algorithm model successfully.
  • the first transceiver unit 501 is further configured to receive the collection information from the second network element, where the collection information includes at least an identifier ID of the feature to be collected.
  • the first network element may further include:
  • the second transceiver unit 505 is configured to send the collection information and the unique identifier ID of the target algorithm model to the third network element, where the target algorithm model is at least one model in the algorithm model, and to receive, from the third network element, the target feature vector corresponding to the collection information
  • and the unique identifier ID of the target algorithm model, where the target algorithm model is used to perform the prediction operation.
  • the first network element may further include:
  • the third transceiver unit 506 is configured to send, to the fourth network element, the unique identifier ID of the target algorithm model, the target prediction result, and the target policy index information corresponding to the target algorithm model, where the target prediction result is used for determining the target policy and is the result obtained by inputting the target feature vector into the target algorithm model; and to receive the identification information of the target policy from the fourth network element.
  • the first transceiver unit 501 is further configured to:
  • receive, from the second network element, an instruction to perform a target operation on the algorithm model, where the target operation includes modifying the algorithm model, deleting the algorithm model, activating the algorithm model, or deactivating the algorithm model.
  • the first transceiver unit 501 is further configured to:
  • receive installation information of the modified algorithm model from the second network element.
  • in this embodiment, the training step of machine learning is performed by the second network element; the installation unit installs the algorithm model and the prediction unit predicts the data received by the first network element according to the algorithm model, separating
  • the model training and prediction logical functions in the network. After the data is collected by the collecting unit, the prediction unit can predict the data according to the installed algorithm model, which reduces the interaction delay and solves the prior-art problem that increased interaction delay affects the service experience.
  • in addition, the logical functions can be separated into four types that can be deployed on different physical devices as needed, which increases the flexibility of the network; functions that are not needed may be left undeployed, saving network resources.
  • the network element may perform the operation of the second network element in the foregoing method embodiment, where the network element includes:
  • the training unit 601 is configured to obtain a trained algorithm model;
  • the transceiver unit 602 is configured to send installation information of the algorithm model to the first network element, where the installation information of the algorithm model is used for installation of the algorithm model, and the algorithm model is used for data prediction, where the first network element is a UPF or a base station.
  • the transceiver unit 602 is further configured to: when the first network element fails to install the algorithm model, receive an installation failure reason indication from the first network element.
  • the transceiver unit 602 is further configured to:
  • the collection information is sent to the first network element, and the collection information includes at least an identifier ID of the feature to be collected.
  • in this embodiment, the training step of machine learning is performed by the training unit of the second network element, while the first network element installs the algorithm model and predicts the data it collects according to the algorithm model, thereby
  • separating the model training and prediction logical functions in the network architecture. After the data is collected, it can be predicted according to the installed algorithm model, which reduces the interaction delay and solves the prior-art
  • problem that increased interaction delay affects the service experience; in addition, the logical functions can be separated into four types that can be deployed on different physical devices as needed, which increases the flexibility of the network, and functions that are not needed may be left undeployed, saving network resources.
  • the first network element and the second network element in the embodiment of the present application are described above in detail from the perspective of the modular functional entity; the following describes the first network element and the second network element in the embodiment of the present application in detail from the hardware perspective.
  • Figure 7 shows a possible schematic diagram of a communication device.
  • the communication device 700 includes a processing unit 702 and a communication unit 703.
  • the processing unit 702 is configured to control and manage the operation of the communication device.
  • the communication device 700 can also include a storage unit 701 for storing program codes and data required by the communication device.
  • the communication device can be the first network element described above.
  • processing unit 702 is configured to support the first network element to perform steps 303 and 305 in FIG. 3, steps 403 and 408 in FIG. 4, and/or other processes for the techniques described herein.
  • the communication unit 703 is configured to support communication between the first network element and other devices.
  • for example, the communication unit 703 is configured to support the first network element to perform step 302 and step 304 in FIG. 3, and step 402, step 404 to step 407, and step 409 in FIG. 4.
  • the communication device can be the second network element described above.
  • processing unit 702 is configured to support second network element to perform step 301 in FIG. 3, step 401 in FIG. 4, and/or other processes for the techniques described herein.
  • the communication unit 703 is configured to support communication between the second network element and other devices.
  • for example, the communication unit 703 is configured to support the second network element to perform step 302 and step 304 in FIG. 3, and step 402 and step 406 in FIG. 4.
  • the processing unit 702 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor can also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication unit 703 can be a communication interface, a transceiver, a transceiver circuit, etc., wherein the communication interface is a collective name and can include one or more interfaces, such as a transceiver interface.
  • the storage unit 701 can be a memory.
  • when the processing unit 702 is a processor, the communication unit 703 is a communication interface, and the storage unit 701 is a memory, as shown in FIG. 8, the communication device 810 includes a processor 812, a communication interface 813, and a memory 811. Optionally, the communication device 810 may further include a bus 814.
  • the communication interface 813, the processor 812, and the memory 811 may be connected to each other through a bus 814; the bus 814 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • Bus 814 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 8, but it does not mean that there is only one bus or one type of bus.
  • in one embodiment, the communication device 810 can be used to perform the steps of the first network element; in another embodiment, the communication device 810 can be used to perform the steps of the second network element. Details are not repeated here.
  • Embodiments of the present application also provide an apparatus, which may be a chip, the apparatus including a memory, wherein the memory is used to store instructions.
  • when the instructions stored in the memory are executed by the processor, the processor performs some or all of the steps of the first network element in the machine learning-based data processing method in the embodiments described in FIG. 3 to FIG. 4, such as steps 303 and 305 in FIG. 3, steps 403 and 408 in FIG. 4, and/or other processes for the techniques described herein;
  • or the processor performs some or all of the steps of the second network element in the machine learning-based data processing method in the embodiments described in FIG. 3 to FIG. 4, such as step 301 in FIG. 3, step 401 in FIG. 4, and/or other processes for the techniques described herein.
  • the embodiment of the present application further provides a system; FIG. 9 is a schematic structural diagram of a possible system provided by the present application.
  • the system may include one or more central processing units 922, a memory 932, and one or more
  • storage media 930 (e.g., one or more mass storage devices) storing application programs 942 or data 944.
  • the memory 932 and the storage medium 930 may provide short-term storage or persistent storage.
  • Programs stored on storage medium 930 may include one or more modules (not shown), each of which may include a series of instruction operations in the system.
  • central processor 922 can be configured to communicate with storage medium 930, executing a series of instruction operations in storage medium 930 on system 900.
  • System 900 can also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input and output interfaces 958, and/or one or more operating systems 941, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner.
  • the computer readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and
  • there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the computer readable storage medium includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

一种基于机器学习的数据处理方法及相关设备,用于解决由于交互时延增加所导致业务体验受到影响的问题。本申请实施例方法包括:第一网元从第二网元接收算法模型的安装信息(302),所述第一网元为用户面网元UPF或基站,所述第二网元用于训练所述算法模型;所述第一网元根据所述算法模型的安装信息,安装所述算法模型(303);当所述算法模型安装成功后,所述第一网元采集数据,根据所述算法模型对所述数据进行预测。

Description

一种基于机器学习的数据处理方法以及相关设备
本申请要求于2018年02月06日提交中国专利局、申请号为201810125826.9、发明名称为“一种基于机器学习的数据处理方法以及相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信领域,尤其涉及一种基于机器学习的数据处理方法以及相关设备。
背景技术
机器学习(machine learning,ML)是一门多领域交叉学科,研究的是:计算机怎样模拟或实现人类的学习行为,以获取新的知识或技能,重新组织已有的知识结构使之不断改善自身的性能。随着大数据时代的到来,机器学习尤其是适用于大规模数据的深度学习正得到越来越广泛的关注和应用,其中包括机器学习在无线通信网络中的应用。
机器学习可以包括数据采集、特征工程、训练、以及预测等步骤。在无线通信网络中,现有技术将这些步骤都融合在一个网元中执行,可将该网元称为网络数据分析(network data analytics,NWDA)网元。当NWDA收集到足够数据并训练出模型后,将模型保存于NWDA的网络实体,后续的预测过程是用户面功能(user plane function,UPF)网元将预测所需的数据或者特征向量发送给NWDA,由NWDA预测出结果并发送给策略控制功能(policy control function,PCF)网元,PCF通过该预测结果生成策略,并将该生成策略下发给UPF网元,其中生成策略可以为设置服务质量(quality of service,QoS)参数等,由UPF网元执行生效。
由于网络中存在很多的实时性业务的场景,其对业务处理的时延会要求很高,比如在无线接入网的无线资源管理(radio resource management,RRM)/无线传输技术(radio transmission technology,RTT)的各类算法中需要达到秒级甚至传输时间间隔(transmission time interval,TTI)级(毫秒级)的业务处理。现有技术中,训练和预测融合在NWDA网元执行,如图1所示。例如,NWDA训练好模型后进行预测,包括:NWDA接收来自UPF网元的特征向量,并将该特征向量输入到训练好的模型中,得到预测结果,并将该预测结果发送给PCF,再由PCF生成与预测结果对应的策略并下发到相关的用户面网元来执行策略。而实际应用中,设备之间的每一次信息交互都可能存在时延,故现有技术中较多的交互使得时延也相应增加,针对实时性要求高的业务,影响了其业务体验。
发明内容
本申请实施例提供了一种基于机器学习的数据处理方法以及相关设备,用于解决现有技术中由于交互时延增加所导致业务体验受到影响的问题。
本申请实施例的第一方面提供了一种基于机器学习的数据处理方法,包括:第一网元从第二网元接收算法模型的安装信息,其中,该第一网元可以是UPF或者是基站,另外, 第二网元用来训练算法模型;第一网元接收到算法模型的安装信息后,基于该安装信息安装算法模型;当算法模型成功安装在第一网元后,第一网元采集数据,并通过安装的算法模型对采集到的数据进行预测。本申请实施例中,将机器学习中的训练步骤由第二网元执行,第一网元安装算法模型,并根据该算法模型对第一网元采集到的数据进行预测,实现了将网络架构内模型训练和预测的逻辑功能分离,在第一网元采集到数据后,即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
在一种可能的设计中,在本申请实施例第一方面的第一种实现方式中,算法模型的安装信息包括以下信息:算法模型的唯一标识ID、算法模型的算法类型、算法模型的结构参数和算法模型的安装指示,其中,算法模型的安装指示用于指示第一网元安装该算法模型。本实现方式中,细化了安装信息中包括的内容,使得算法模型的安装更加详细可操作。
在一种可能的设计中,在本申请实施例第一方面的第二种实现方式中,算法模型的安装信息还包括策略索引信息,该策略索引信息包括算法模型的预测结果,和与该预测结果对应的策略的标识信息。本实现方式中,说明了算法模型的安装信息中还可以包括策略索引信息,使得第一网元可以根据该策略索引信息,找出与预测的结果对应的策略的标识信息,为第一网元根据预测结果确定策略提供了实现条件。
在一种可能的设计中,在本申请实施例第一方面的第三种实现方式中,在第一网元采集数据之前,第一网元从第二网元接收采集信息,该采集信息至少包括待采集特征的标识ID。本实现方式中,第一网元还从第二网元接收采集信息,使得第一网元根据待采集特征的标识ID,获取采集到的数据所对应的待采集特征的值,来进行预测。说明了第一网元进行预测所需要的参数的来源,增加了本申请实施例的可操作性。
在一种可能的设计中,在本申请实施例第一方面的第四种实现方式中,第一网元从第二网元接收采集信息之后,第一网元向第三网元发送采集信息和目标算法模型的唯一标识ID,其中,目标算法模型为算法模型中的至少一个模型;第一网元再接收第三网元发送的与采集信息对应的目标特征向量和目标算法模型的唯一标识ID,其中,目标算法模型用来对数据进行预测。本实现方式中,可以将采集目标特征向量的操作转给第三网元执行,而第一网元来执行根据模型进行预测的功能,减轻了第一网元的工作负荷。
在一种可能的设计中,在本申请实施例第一方面的第五种实现方式中,还可包括:第一网元向第四网元发送目标算法模型的唯一标识ID,目标预测结果和与目标算法模型对应的目标策略索引信息,其中,目标预测结果用来确定目标策略,目标预测结果是将目标特征向量输入到目标算法模型所输出的结果;第一网元从第四网元接收目标策略的标识信息。本实现方式中,第一网元将根据目标预测结果确定目标策略的功能转给第四网元执行,不仅减轻了第一网元的工作负荷,同时结合第一方面的第四种实现方式,将不同的功能分别由多个网元来实现,也增加了网络的灵活性。
在一种可能的设计中,在本申请实施例第一方面的第六种实现方式中,算法模型安装成功后,还包括:第一网元从第二网元接收目标操作指示和算法模型的唯一标识ID,目标操作指示用于指示第一网元对算法模型执行目标操作,其中,该目标操作可以包括但不限 于以下操作中的任一种:修改算法模型,删除算法模型,激活算法模型或者去激活算法模型。本实现方式中,追加了再安装好算法模型后,还可以对该算法模型进行修改、删除等操作,更加顺应实际应用中可能会出现的各种需求,也可实现边缘设备免升级业务不中断。
在一种可能的设计中,在本申请实施例第一方面的第七种实现方式中,当目标操作为修改算法模型时,第一网元从第二网元接收修改后的算法模型的安装信息。本实现方式中,当需要修改算法模型时,第二网元还需向第一网元发送修改后的算法模型的安装信息来进行重新安装,使得本实施例在操作步骤上更加完善。
在一种可能的设计中,在本申请实施例第一方面的第八种实现方式中,当算法模型安装失败后,第一网元还要向第二网元发送安装失败原因指示,来告诉第二网元安装失败的原因。本实现方式中,若算法模型安装失败,则第一网元要反馈算法模型安装失败的原因,增加了本申请实施例的解决方案。
本申请实施例的第二方面提供了一种基于机器学习的数据处理方法,包括:第二网元获得训练完成的算法模型。在得到该算法模型后,第二网元向第一网元发送该算法模型的安装信息,使得第一网元根据该算法模型的安装信息来安装算法模型,其中算法模型用于预测第一网元采集到的数据,且第一网元为UPF或者基站。本申请实施例中,将机器学习中的训练步骤由第二网元执行,第一网元安装算法模型,并根据该算法模型对第一网元采集到的数据进行预测,实现了将网络架构内模型训练和预测的逻辑功能分离,在第一网元采集到数据后,即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
在一种可能的设计中,在本申请实施例第二方面的第一种实现方式中,算法模型的安装信息包括以下信息:算法模型的唯一标识ID、算法模型的算法类型、算法模型的结构参数和算法模型的安装指示,其中,算法模型的安装指示用于指示第一网元安装该算法模型。本实现方式中,细化了安装信息中包括的内容,使得算法模型的安装更加详细可操作。
在一种可能的设计中,在本申请实施例第二方面的第二种实现方式中,算法模型的安装信息还包括策略索引信息,其中,策略索引信息包括算法模型的输出结果,和与输出结果对应的策略的标识信息。本实现方式中,说明了算法模型的安装信息中还可以包括策略索引信息,使得第一网元可以根据该策略索引信息,找出与预测的结果对应的策略的标识信息,为第一网元根据预测结果确定策略提供了实现条件。
在一种可能的设计中,在本申请实施例第二方面的第三种实现方式中,第二网元向第一网元发送算法模型的安装信息之后,当第一网元安装算法模型失败时,第二网元从第一网元接收安装失败原因指示。本实现方式中,若算法模型安装失败,则第二网元接收从第一网元反馈的算法模型安装失败的原因,使得本申请实施例更加具有可操作性。
在一种可能的设计中,在本申请实施例第二方面的第四种实现方式中,还包括:第二网元向第一网元发送采集信息,采集信息至少包括待采集特征的标识ID。本实现方式中,通过第二网元发送的采集信息,使得第一网元根据待采集特征的标识ID,获取采集到的数据所对应的待采集特征的值,来进行预测。说明了第一网元进行预测所需要的参数的来源,增加了本申请实施例的可操作性。
本申请实施例的第三方面提供了一种网元,该网元为第一网元,其中该第一网元可以为用户面网元UPF或者基站,包括:第一收发单元,用于从第二网元接收算法模型的安装信息,第二网元用于训练算法模型;安装单元,用于根据收发单元接收的算法模型的安装信息,安装算法模型;采集单元,用于采集数据;预测单元,用于当安装单元安装算法模型成功后,根据算法模型对该采集单元采集的数据进行预测。本申请实施例中,将机器学习中的训练步骤由第二网元执行,安装单元安装算法模型,并由预测单元根据该算法模型对采集单元采集到的数据进行预测,实现了将网络架构内模型训练和预测的逻辑功能分离,在采集单元采集到数据后,预测单元即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
在一种可能的设计中,在本申请实施例第三方面的第一种实现方式中,算法模型的安装信息包括以下信息:算法模型的唯一标识ID、算法模型的算法类型、算法模型的结构参数和算法模型的安装指示,算法模型的安装指示用于指示安装算法模型。本实现方式中,细化了安装信息中包括的内容,使得算法模型的安装更加详细可操作。
在一种可能的设计中,在本申请实施例第三方面的第二种实现方式中,算法模型的安装信息还可以包括策略索引信息,策略索引信息包括算法模型的预测结果,和与预测结果对应的策略的标识信息。本实现方式中,说明了算法模型的安装信息中还可以包括策略索引信息,使得第一网元可以根据该策略索引信息,找出与预测的结果对应的策略的标识信息,为第一网元根据预测结果确定策略提供了实现条件。
在一种可能的设计中,在本申请实施例第三方面的第三种实现方式中,第一收发单元还用于:从第二网元接收采集信息,采集信息至少包括待采集特征的标识ID。本实现方式中,第一收发单元还从第二网元接收采集信息,使得第一网元根据待采集特征的标识ID,获取采集到的数据所对应的待采集特征的值,来进行预测。说明了第一网元进行预测所需要的参数的来源,增加了本申请实施例的可操作性。
在一种可能的设计中,在本申请实施例第三方面的第四种实现方式中,第二收发单元,用于向第三网元发送采集信息和目标算法模型的唯一标识ID,其中,目标算法模型为算法模型中的至少一个模型;第二收发单元还用于,从第三网元接收与采集信息对应的目标特征向量和目标算法模型的唯一标识ID,其中,目标算法模型用于执行预测操作。本实现方式中,可以将采集目标特征向量的操作转给第三网元执行,而第一网元来执行根据模型进行预测的功能,减轻了第一网元的工作负荷。
在一种可能的设计中,在本申请实施例第三方面的第五种实现方式中,第一网元还包括:第三收发单元,用于向第四网元发送目标算法模型的唯一标识ID,目标预测结果和与目标算法模型对应的目标策略索引信息,其中目标预测结果用于目标策略的确定,目标预测结果为将目标特征向量输入至目标算法模型所得到的结果;第三收发单元还用于,从第四网元接收目标策略的标识信息。本实现方式中,将根据目标预测结果确定目标策略的功能转给第四网元执行,不仅减轻了第一网元的工作负荷,同时结合第一方面的第四种实现方式,将不同的功能分别由多个网元来实现,也增加了网络的灵活性。
在一种可能的设计中,在本申请实施例第三方面的第六种实现方式中,第一收发单元 还用于:从第二网元接收目标操作指示和算法模型的唯一标识ID,目标操作指示用于指示第一网元对算法模型执行目标操作,其中,目标操作包括修改算法模型,删除算法模型,激活算法模型或者去激活算法模型。本实现方式中,追加了再安装好算法模型后,还可以对该算法模型进行修改、删除等操作,更加顺应实际应用中可能会出现的各种需求,也可实现边缘设备免升级业务不中断。
在一种可能的设计中,在本申请实施例第三方面的第七种实现方式中,当目标操作为修改算法模型时,第一收发单元还用于:从第二网元接收修改后的算法模型的安装信息。本实现方式中,当需要修改算法模型时,第二网元还需向第一收发单元发送修改后的算法模型的安装信息来进行重新安装,使得本实施例在操作步骤上更加完善。
在一种可能的设计中,在本申请实施例第三方面的第八种实现方式中,当算法模型安装失败后,第一收发单元还用于:向第二网元发送安装失败原因指示。本实现方式中,若算法模型安装失败,则第一收发单元还要向第二网元反馈算法模型安装失败的原因,使得增加了本申请实施例的解决方案。
本申请实施例的第四方面提供了一种网元,该网元为第二网元,包括训练单元,用于获得训练完成的算法模型;收发单元,用于向第一网元发送训练完成的算法模型的安装信息,其中算法模型的安装信息用于算法模型的安装,算法模型用于数据预测,另外,第一网元为用户面网元UPF或基站。本申请实施例中,将机器学习中的训练步骤由第二网元的训练单元执行,第一网元安装算法模型,并根据该算法模型对第一网元采集到的数据进行预测,实现了将网络架构内模型训练和预测的逻辑功能分离,在第一网元采集到数据后,即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
在一种可能的设计中,在本申请实施例第四方面的第一种实现方式中,算法模型的安装信息包括以下信息:算法模型的唯一标识ID、算法模型的算法类型、算法模型的结构参数和算法模型的安装指示,算法模型的安装指示用于指示第一网元安装算法模型。本实现方式中,细化了安装信息中包括的内容,使得算法模型的安装更加详细可操作。
在一种可能的设计中,在本申请实施例第四方面的第二种实现方式中,算法模型的安装信息还包括策略索引信息,其中,策略索引信息包括算法模型的输出结果,和与输出结果对应的策略的标识信息。本实现方式中,说明了算法模型的安装信息中还可以包括策略索引信息,使得第一网元可以根据该策略索引信息,找出与预测的结果对应的策略的标识信息,为第一网元根据预测结果确定策略提供了实现条件。
在一种可能的设计中,在本申请实施例第四方面的第三种实现方式中,收发单元还用于:当第一网元安装算法模型失败时,从第一网元接收安装失败原因指示。本实现方式中,若算法模型安装失败,则收发单元接收从第一网元反馈的算法模型安装失败的原因,使得本申请实施例更加具有可操作性。
在一种可能的设计中,在本申请实施例第四方面的第四种实现方式中,收发单元还用于:向第一网元发送采集信息,采集信息至少包括待采集特征的标识ID。本实现方式中,通过第二网元的收发单元发送的采集信息,使得第一网元根据待采集特征的标识ID,获取 采集到的数据所对应的待采集特征的值,来进行预测。说明了第一网元进行预测所需要的参数的来源,增加了本申请实施例的可操作性。
本申请实施例的第五方面提供了一种通信装置,该通信装置具有实现上述方法设计中第一网元行为或者第二网元行为的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。该模块可以是软件和/或硬件。
在一种可能的实现方式中,该通信装置包括存储单元、处理单元以及通信单元。
其中,存储单元,用于存储该通信装置所需的程序代码和数据;处理单元,用于调用该程序代码,对该通信装置的动作进行控制管理;通信单元,用于支持通信装置与其他设备的通信。
在一个可能的实现方式中,该通信装置的结构中包括处理器、通信接口、存储器和总线,其中,通信接口、处理器以及存储器通过总线相互连接;通信接口用于支持通信装置与其他设备的通信,该存储器用于存储该通信装置所需的程序代码和数据,处理器用于调用该程序代码,支持第一网元或者第二网元执行如上述方法中相应的功能。
本申请实施例的第六方面提供了一种装置,该装置包括存储器,该存储器用于存储指令。当存储器存储的指令被处理器执行时,支持处理器实现上述第一网元或者第二网元执行上述方法中相应的功能,例如发送或处理上述方法中所涉及的数据和/或信息。该装置可以包括芯片,也可以包括芯片和其他分立器件。
本申请实施例的第七方面提供了一种系统,该系统包括前述第一方面的第一网元和第二方面的第二网元;或者,包括前述第三方面的第一网元和第四方面的第二网元。
本申请实施例的第八方面提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
本申请的第九方面提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
从以上技术方案可以看出,本申请实施例具有以下优点:第一网元从第二网元接收算法模型的安装信息,该第一网元为用户面网元UPF或基站,该第二网元用于训练所述算法模型;第一网元根据算法模型的安装信息,安装该算法模型;当算法模型安装成功后,第一网元采集数据,根据算法模型对该数据进行预测。本申请实施例中,将机器学习中的训练步骤由第二网元执行,第一网元安装算法模型,并根据该算法模型对第一网元接收到的数据进行预测,实现了将网络架构内模型和预测的逻辑功能分离,在第一网元采集到数据后,即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
附图说明
图1为现有技术中一种可能的基于机器学习的方法流程图;
图2A为一种可能的线性回归的示意图;
图2B为一种可能的logistics回归的示意图;
图2C为一种可能的CART分类的示意图;
图2D为一种可能的随机森林和决策树的示意图;
图2E为一种可能的SVM分类的示意图;
图2F为一种可能的贝叶斯分类的示意图;
图2G为一种可能的神经网络模型结构的示意图;
图2H为本申请提供的一种可能的系统架构图;
图3为本申请实施例提供的一种可能的基于机器学习的数据处理方法的流程图;
图4为本申请实施例提供的另一可能的基于机器学习的数据处理方法的流程图;
图5为本申请实施例提供的一种可能的第一网元的实施例示意图;
图6为本申请实施例提供的一种可能的第二网元的实施例示意图;
图7为本申请实施例提供的一种通信装置的示意性框图;
图8为本申请实施例提供的一种通信装置的结构示意图;
图9为本申请实施例提供的一种系统的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
随着机器学习的不断完善,使得我们有机会从海量数据集中提取潜在有用的信息和规律。机器学习的主要目的是提取有用的特征,然后根据已有的实例,构造从特征到标签的映射。其中,标签用于区别数据,而特征用于描述数据的特性,可以理解为,特征是对数据做出某个判断的依据,而标签是对数据所做出的结论。实际应用中,机器学习可以包括如下几个步骤:
步骤一、数据采集:是指从产生数据集的对象中获取各种类型的原始数据,存放在数据库或内存中用来进行训练或者预测。
步骤二、特征工程:特征工程(feature engineering,FE)是机器学习所特有的一个过程,其核心部分包括特征处理。其中特征处理包括对数据进行预处理,例如特征选择(feature selection,FS)和降维等。由于原始数据中存在大量的冗余的、不相关的、带噪声的特征,因此要对原始数据进行清洗、去重、去噪等操作。通过预处理,对原始数据做简单的结构化处理,提取训练数据的特征、相关性分析等。特征选择是减少冗余特征、不相关特征的有效手段。
步骤三、训练模型:将训练数据准备好后,选择合适的算法、特征以及标签。将选择的特征和标签,以及准备好的训练数据输入到算法后再由计算机执行训练算法。常见的算法包括逻辑回归、决策树、支持向量机(support vector machine,SVM)等。每一个算法也可能会包括多个衍生的算法类型。单个训练算法训练结束后会产生一个机器学习模型。
步骤四、预测:将需要预测的样本数据输入到训练得到的机器学习模型中,可得到该模型输出的预测值。需要说明的是,基于不同的算法问题,该输出的预测值可能是一个的实数,也可能是一个分类结果。这个预测值就是机器学习所预测出的内容。
需要说明的是,在完成机器学习算法应用的过程中最重要的是数据的特征工程、算法的选择和分析预测出的结果所带来的有益效果。对于算法训练和预测过程中的中间过程和模型的结构可以看做黑盒的方式,而对于模型驱动的架构设计,则需要对通过不同的机器学习算法训练所产生的用于预测的模型,进行具体化、合并以及抽象。下面将分别对几种常见的机器学习算法进行简单描述。
1、回归算法
常见的回归算法包括线性回归,逻辑(logistics)回归等。其中线性回归是对连续因变量y与一个或多个预测变量x之间的关系进行建模的方法。如图2A所示,为一种可能的线性回归的示意图,线性回归的目的是预测数值型的目标值。训练回归算法模型的目的是求解回归系数,一旦有了这些系数再基于新的采样特征向量的输入就可以预测到目标数值。例如,用回归系数乘以输入特征向量值,再将乘积求和也就是求两者的内积,求和得到的结果就是预测值。预测模型可以通过如下公式表示:
z=w₀x₀+w₁x₁+w₂x₂+...+wₙxₙ+b,
改写为向量形式,可以为z=wᵀx+b;
训练得到回归系数wᵀ与常量项b,其中常量项b也是训练产生的,用来对模型做整体的调整,与具体的某一个特征无关,b可以不存在即b=0,基于系数、常量项和新的特征值进行预测。
需要说明的是,以线性回归算法为基础的模型的一个要点是输入特征x必须是线性的,然而实际情况往往并非如此,所以需要对输入特征进行特征工程处理,例如1/x,x²,lg(x)等,使得转换后的特征值与结果线性相关。其模型构成的信息包括以下:
模型输入:特征向量X;
模型输出:回归值Z;
训练得到的回归系数:向量wᵀ;
常量项:b;
阶跃函数:NONE。
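为便于理解上述模型构成,下面给出一个示意性的Python片段(系数、特征取值均为假设,仅作说明,并非本申请实施例的组成部分):预测值即回归系数与特征向量的内积再加常量项b。

```python
# 线性回归预测:回归系数与特征向量求内积,再加常量项b(取值均为假设)
def linear_predict(w, x, b=0.0):
    # z = w^T x + b
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# 示例:w=[0.5, 1.5], x=[2.0, 4.0], b=1.0 -> 0.5*2 + 1.5*4 + 1 = 8.0
z = linear_predict([0.5, 1.5], [2.0, 4.0], b=1.0)
```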
实际应用中,如果因变量不是连续的而是分类的,则可以使用logistics链接函数将线性回归转换为逻辑回归,例如,因变量是二分类(0/1,True/False,Yes/No)时我们可以使用logistics回归。由于Logistics回归是一种分类方法,所以其模型的最终输出必定是离散的分类型,通常是将线性回归后的输出送入一种阶跃函数,再由阶跃函数输出二分类或者多分类值。请参阅图2B,为一种可能的logistics回归的示意图,可将曲线作为边界线,边界线上方的为正例样本,边界线下方的为负例样本。预测模型可以通过如下Sigmoid函数表示:
S(x)=1/(1+e⁻ˣ);
该Sigmoid函数的输入记为z,可由以下公式得出:z=wᵀx+b。
另外,以logistics回归算法为基础的模型,其构成的信息包括:
模型输入:特征向量X指示;
模型输出:分类结果Z;
训练得到的回归系数:向量wᵀ;
常量项:b;
非线性函数:Sigmoid,阶跃函数,对数方程等;
阶跃函数值分离区间阈值,例如该阈值可以为0.5,即大于0.5取1,小于0.5取0。当然,也可能是多分类,则会相应有大于一个的阈值取值。
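上述"线性回归后经Sigmoid再按阈值输出二分类"的过程,可以用如下示意性的Python片段表示(系数与阈值均为假设值,仅作说明):

```python
import math

def sigmoid(z):
    # S(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def logistic_classify(w, x, b=0.0, threshold=0.5):
    # 先计算线性部分 z = w^T x + b,再经Sigmoid并按阈值分离区间输出0/1
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if sigmoid(z) > threshold else 0
```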
另外,回归算法中还包括其他最小二乘回归(least square)、逐步回归(stepwise regression)和岭回归(ridge regression)等回归方法中的至少一项,此处不再一一赘述。
2、决策树
通常决策树的训练依据输入特征数据的信息熵或者基尼系数,进而确定分类特征优先级和分类判断方法,除此之外还需要对决策树进行剪枝优化,来减少模型预测的过拟合和模型的复杂度。决策树主要有两种类型:分类树(输出是样本的类标)和回归树(输出是一个实数),而分类和回归树(classification and regression tree,CART)包含了上述两种决策树,如图2C所示,为一种可能的CART的分类示意图,可以看出,CART是一棵二叉树,且每个非叶子节点都有两个节点,所以对于第一棵子树,其叶子节点数比非叶子节点数多1,基于CART算法的模型其构成的信息可以包括:
模型输入:特征向量X;
模型输出:分类结果Z;
模型描述:树状分类结构,如{ROOT:{Node:{Leaf}}},为便于理解,以UE移动过程中基于目标小区的参考信号接收功率(reference signal receiving power,RSRP)和信号噪声比(signal noise ratio,SNR)决定UE是否切换到目标小区为例,可采用{'RSRP>-110':{0:'no',1:{'SNR>10':{0:'no',1:'yes'}}}},即首先判断RSRP是否大于-110dBm,若判决为0(即不大于-110dBm),则不切换到目标小区('no');若判决为1(即大于-110dBm),则进一步判断SNR是否大于10dB,若判决为0(即小于10dB),则不切换到目标小区('no'),若判决为1(即大于10dB),则切换到目标小区('yes')。
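正文中的切换判决树可以用如下示意性的Python片段做递归判决(字典结构取自上述示例,判决函数本身为便于说明的假设实现):

```python
def cart_predict(tree, sample):
    # 树为 {'条件': {0: 子树或叶子, 1: 子树或叶子}} 的嵌套字典
    if not isinstance(tree, dict):
        return tree                      # 叶子节点即分类结果
    cond = next(iter(tree))              # 例如 'RSRP>-110'
    feat, thresh = cond.split('>')
    branch = 1 if sample[feat] > float(thresh) else 0
    return cart_predict(tree[cond][branch], sample)

# 正文中基于RSRP和SNR决定UE是否切换的判决树
tree = {'RSRP>-110': {0: 'no', 1: {'SNR>10': {0: 'no', 1: 'yes'}}}}
```

例如RSRP为-100dBm、SNR为15dB的样本会依次通过两级判决并得到'yes'。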
另外,决策树中还可以包括随机森林,其是由多棵分类与回归树CART构成,如图2D所示,为一种可能的随机森林和决策树示意图,其是利用多棵树对样本进行训练并预测的一种分类器,且树与树之间又是不关联的。训练时,对于每棵树,它们使用的训练集是从总的训练集中有放回采样出来的,训练集中的有些样本可能多次出现在一棵树的训练集中,也可能从未出现在一棵树的训练集中。在训练每一棵树的节点时,选用的特征是从所有特征中随机选择的,每一棵树在样本和特征选择上的随机性有一定的独立性,可以有效解决单棵决策树存在的过拟合问题。
预测的过程是同时使用森林里的多棵树分别单独预测,每一颗树都会产生相对应的分类值,再由多棵树的分类结果共同进行决策得到最后的分类结果。决策树的数学模型可以表述为:
Figure PCTCN2018121033-appb-000001
对于模型的描述可以总结为三个部分:多棵决策树 以及相应的特征和方法,可能的分类结果描述,最终的分类选择方法。其构成的信息可以包括:
模型输入:特征向量X;
模型输出:分类结果Z;
模型描述:若干棵决策树,即包括若干个上述的决策树的模型,此处不再赘述。
投票方法:包括绝对多数和相对多数,其中绝对多数是某一个预测结果值超过半数(即0.5)或者其他数值的投票结果,比如一个随机森林模型由五棵树构成,分别预测的结果是1,1,1,3,2,那么预测结果就是1;而相对多数即少数服从多数,比如一个随机森林模型由三棵树构成,分别预测的结果是1,2,2,那么预测结果即为2。
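上述绝对多数与相对多数的投票方法可以用如下示意性的Python片段表示(绝对多数未过半数时的返回值None为假设的处理方式,正文未作规定):

```python
from collections import Counter

def forest_vote(predictions, absolute=False):
    # predictions: 森林中各棵树分别单独预测得到的分类值
    label, count = Counter(predictions).most_common(1)[0]
    if absolute and count * 2 <= len(predictions):
        return None                      # 绝对多数:票数须超过半数(假设的处理方式)
    return label                         # 相对多数:少数服从多数
```

正文示例中,五棵树预测[1,1,1,3,2]时结果为1;三棵树预测[1,2,2]时结果为2。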
3、SVM
SVM是在分类与回归分析中分析数据的有监督学习模型与相关的学习算法,当样本无法通过线性模型进行线性划分(linearly separable)时,需要寻找一个空间的超平面将不同类别的样本分开,如图2E所示,为一种可能的SVM分类示意图,另外,SVM还要求对样本的局部扰动容忍性高,通过下面的线性方程可以表述所谓的超平面:wᵀx+b=0,其中,wᵀ为回归系数,b为常量。SVM对样本的一个关键操作是隐含着将低维的特征数据映射到高维的特征空间中,而这个映射可以把低维空间中线性不可分的两类点变成线性可分的,这个过程的方法称为核技巧(kernel trick),所使用的空间映射函数称为核函数(kernel function)。核函数不仅适合在支持向量机中使用,以常用的径向基核函数也称高斯核函数为例:
k(x,y)=exp(−‖x−y‖²/(2σ²))
其中x为空间中任一点,y为核函数中心,σ为核函数的宽度参数。除了高斯核之外还有线性核、多项式核、拉普拉斯核、SIGMOD核函数等,此处不做限定。
其中,在超平面wᵀx+b=-1和wᵀx+b=1的两个超平面上面的点的拉格朗日乘子Alpha值大于0,其他所有样本点的Alpha值均为0,所以只需要找到这些落在两侧超平面上的点,并计算出它们的Alpha值就可以对新的样本进行分类。由此SVM的一个很明显的好处是只需要少量的数据即可以准确的预测,找到全局最优解。构成基于SVM算法的模型的信息可以包括:
模型输入:特征向量X;
训练出来的支持向量:SVs(x);
训练出来的支持向量所对应的拉格朗日系数:αs;
支持向量所对应的标签值:SV Label(y);
核函数方法:k,例如,所谓径向基函数(radial basis function,RBF);
核函数参数:例如,多项式参数,高斯核带宽等,需要和核函数方法匹配。
常量项:b;
预测值的分类方法:例如Sign方法。
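按上述模型构成信息,SVM预测可概括为"支持向量、拉格朗日系数、标签值经核函数加权求和后取符号"。下面给出一个示意性的Python片段(支持向量与系数均为假设值,仅作说明):

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    # 径向基(高斯)核:exp(-||x-y||^2 / (2*sigma^2)),sigma为核函数的宽度参数
    d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def svm_predict(svs, alphas, labels, b, x, sigma=1.0):
    # Sign( sum_i alpha_i * y_i * K(sv_i, x) + b )
    s = sum(a * y * rbf_kernel(sv, x, sigma)
            for sv, a, y in zip(svs, alphas, labels)) + b
    return 1 if s >= 0 else -1
```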
4、贝叶斯分类器
贝叶斯分类器是一种概率模型,如图2F所示,为一种可能的贝叶斯分类示意图,其中,class1和class2可以理解为两个分类,例如一个报文是否属于某类业务,即分类为:是和否。其理论基础是贝叶斯决策论,是概率框架下的实施决策的基本方法,概率推断的基础是贝叶斯定理:
P(A|B)=P(B|A)·P(A)/P(B)
其中,P(A|B)为后验概率;P(B|A)为在模式属于A类的条件下出现B的概率,称为B的类条件概率密度;P(A)为在所研究的识别问题中出现A类的概率,也称先验概率;P(B)是特征向量B的概率密度。
其应用在机器学习算法中需要将算法设计的模型特征类型代入:
P(Y|X)=P(X|Y)·P(Y)/P(X)
对于所有类别Y来讲P(X)都是相同的,根据马尔科夫模型上述贝叶斯公式可近似等于
P(Y)·∏ᵢP(Xᵢ|Y)
由此可以比较容易的得出根据输入特征向量预测出分类的模型构成的信息包括:输入层特征及特征方法、P(Y)分类类型先验概率列表和P(Xi|Y)特征值极大似然估计列表。
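基于上述"先验概率列表"与"特征值极大似然估计列表"的预测过程,可以用如下示意性的Python片段表示(概率取值均为假设,未见过取值的平滑方式也是假设的处理):

```python
def naive_bayes_predict(priors, likelihoods, x):
    # priors: {类别: P(Y)};likelihoods: {类别: [{特征取值: P(Xi|Y)}, ...]}
    # 按正文的近似:选使 P(Y) * ∏ P(Xi|Y) 最大的类别
    best, best_score = None, -1.0
    for y, p_y in priors.items():
        score = p_y
        for i, xi in enumerate(x):
            score *= likelihoods[y][i].get(xi, 1e-9)  # 平滑处理(假设)
        if score > best_score:
            best, best_score = y, score
    return best

# 示例:两类、单特征的先验与类条件概率列表(数值为假设)
priors = {'A': 0.5, 'B': 0.5}
likelihoods = {'A': [{'x1': 0.9}], 'B': [{'x1': 0.1}]}
```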
5、神经网络
如图2G所示,为一种可能的神经网络模型结构示意图,其中,一个完整的神经网络模型包括输入层(input layer)、输出层(output layer)和一个或多个隐藏层(hidden layers),可以认为神经网络模型是多层的二分类感知器,而一个单层的二分类感知器模型又类似于一个回归的模型。输入层的单元是隐藏层单元的输入,隐藏层单元的输出是输出层单元的输入。两个感知器之间的连接有一个权量,第t层的每个感知器与第t-1层的每个感知器相互关联。当然,也可以设置权量为0,从而在实质上取消连接。
其中,分类评价模型Logit有多种方法,例如Sigmoid、Softplus和ReLU等,同一个神经网络层可以是基于不同的评价模型方法进行激活分类。最常见的神经网络训练过程可以是由结果向输入倒推,逐步减少误差并调节神经元权重,即误差逆传播(error Back Propagation,BP)算法。BP的原理可理解为:利用输出后的误差来估计输出层前一层的误差,再用这层误差来估计更前一层误差,如此获取所有各层误差估计。这里的误差估计可以理解为某种偏导数,就是根据这种偏导数来调整各层的连接权值,再用调整后的连接权值重新计算输出误差。直到输出的误差达到符合的要求或者迭代次数溢出设定值。
结合图2G,假设该网络结构是一个含有i个神经元的输入层,含有j个神经元的隐藏层,含有k个神经元的输出层,则输入层网元xᵢ通过隐藏层网元作用于输出层网元,经过非线性变换,产生输出信号zₖ,其中用于网络训练的每个样本包括输入向量X和期望输出量t,网络输出值Y与期望输出值t之间的偏差,通过调整输入层网元与隐藏层网元的连接权量wᵢⱼ和隐藏层网元与输出层网元之间的连接权量Tⱼₖ以及神经单元阈值,使误差沿梯度方向下降,经过反复学习训练,确定与最小误差相对应的网络参数(权值和阈值),训练即告停止。此时经过训练的神经网络即能对类似样本的输入信息,自行处理输出误差最小的经过非线性转换的信息。
故基于神经网络的模型的构成信息包括:
输入层:特征向量X={x₁,x₂,x₃,…,xᵢ};
输出层网元输出算法:如zₖ=f(∑Tⱼₖ·yⱼ−θₖ),其中f为非线性函数,Tⱼₖ表示隐藏层网元与输出层网元之间的连接权量,yⱼ为隐藏层输出,θₖ为输出层的神经单元阈值;
隐藏层网元输出算法:如yⱼ=f(∑wᵢⱼ·xᵢ−θⱼ),其中f为非线性函数,wᵢⱼ表示输入层网元i与隐藏层网元j之间的连接权量,xᵢ为输入层输出,θⱼ为隐藏层的神经单元阈值;
输出和隐藏层的每一个网元所对应的权重列表;
每一层所使用的激活函数:如sigmod函数;
误差计算函数:用于反映神经网络期望输出与计算输出之间误差大小的函数,如:Eₚ=(1/2)·∑(tₚᵢ−oₚᵢ)²,其中,tₚᵢ表示网元i的期望输出值,oₚᵢ表示网元i的计算输出值。
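上述隐藏层和输出层的逐层前向计算,可以用如下示意性的Python片段表示(权值与阈值均为假设值,仅演示单输入、单隐藏网元、单输出的前向传播,不含BP训练过程):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, thetas, f=sigmoid):
    # 每个网元j的输出:f( sum_i w[i][j] * x[i] - theta[j] )
    return [f(sum(weights[i][j] * inputs[i] for i in range(len(inputs))) - thetas[j])
            for j in range(len(thetas))]

hidden = layer_forward([1.0], [[2.0]], [0.0])   # 隐藏层输出
output = layer_forward(hidden, [[1.0]], [0.0])  # 输出层输出
```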
以上是对常见的机器学习算法模型的分析和结构分解,依据这些模型可以进一步的进行抽象合并。在现有的3GPP智能网络架构中,将机器学习中的训练和预测都融合在一个网元如NWDA中执行,在后续的预测过程中,每一次预测NWDA都会接受UPF发送的预测所需的数据或者特征向量,由NWDA预测出结果并生成策略,而这个过程中产生的交互时延不适合实时性要求很高的特性,影响了其业务体验。有鉴于此,本申请提供了一种基于机器学习的数据处理方法,通过将机器学习中的训练步骤由第二网元执行,第一网元安装算法模型,且该第一网元具备获取特征值的能力,故第一网元可以根据该算法模型对预测所需的数据进行预测,实现了将网络架构内模型和预测的逻辑功能分离,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
如图2H所示,为本申请提供的一种可能的系统架构图。在该图中,可以将机器学习的过程在逻辑功能上进一步的分解为数据服务功能(data service function,DSF),分析和建模功能(analysis and modeling function,A&MF),模型执行功能(model execution function,MEF)和策略适配功能(adapted policy function,APF),需要说明的是,这些功能的命名也可以采用其他命名方式,此处不做限定。图2H中,可以按需将这些功能部署在网络的各层网元上,如5G网络中的集中式单元(centralized unit,CU),分布式单元(distributed unit,DU)以及gNBDA内运行,或者部署在LTE eNodeB,UMTS RNC或者NodeB;也可以将上述功能独立部署在一个网元实体,可将该网元实体称为RAN数据分析网元(RAN data analysis,RANDA),或者其他命名。
因此在本申请所提供的基于机器学习的数据处理方法中,将机器学习中的训练和预测分别由不同的网元执行,可基于以下两种情况进行分别描述:
情况A、将上述四种功能中除A&MF以外的功能(即DSF、MEF和APF)部署在一个独立的网络实体中。
情况B、将上述四种功能(即DSF、A&MF、MEF和APF)抽象分解,分开部署在网络的各层网元上。
请参阅图3,为本申请实施例在情况A的基础上所提供的一种可能的基于机器学习的数据处理方法,包括:
301、第二网元获得训练完成的算法模型。
为便于区分,本实施例中将执行预测功能的网元称为第一网元,将执行训练功能的网元称为第二网元。第二网元基于实际智能业务的需求,选择合适的算法、特征以及标签数据,来训练算法模型。第二网元通过要实现的目标,并以此来找到合适的算法。例如,若要实现的目标是预测目标变量的值,则可以选择监督学习算法。在选择监督学习算法后,若目标变量的类型是离散型,如是/否,1/2/3等,则可以进一步选择监督学习算法中的分类器算法;若目标变量类型是连续型的数值,则可以选择监督学习算法中的回归算法。特征选择是一个从原始特征集合中选择最优子集的过程,在该过程中,对一个给定特征子集的优良性通过一个特定的评价标准(evaluation criterion)进行衡量,通过特征选择,将原始特征集合中的冗余(redundant)特征和不相关(irrelevant)特征除去,而保留有用特征。
需要说明的是,在接入网中,第二网元可以是RANDA,或者部署在CU的CUDA(可理解为RANDA部署在CU上的名称)或者部署在运营支撑系统(operation support system,OSS)的OSSDA(可理解为RANDA部署在OSS上的名称),或者部署在DU的DUDA(可理解为RANDA部署在DU上的名称),或者部署在gNB的gNBDA(可理解为RANDA部署在gNB上的名称)。在核心网中,第二网元可以是NWDA,作为一个独立部署的网元。第一网元可以是基站或UPF。例如在核心网中,第一网元可以为UPF;在接入网中,第一网元可以为基站,故此处不做限定。
另外,基于前述提到的各种算法,第二网元选择的合适的算法,并依据实际业务需求选择合适的特征以及标签。将选择的特征和标签,以及准备好的训练数据输入到算法后执行训练,来得到训练完成的算法模型。为便于理解训练的流程,以神经网络算法为例来说明模型训练的大概过程:由于神经网络被用于监督学习的任务中,即有大量的训练数据用来进行模型的训练。因此,在针对神经网络算法模型的训练之前,将选择的标签数据作为训练数据。
获得训练数据后,基于该训练数据对神经网络算法模型进行训练,其中神经网络算法模型可以包括生成器和判别器。实际应用中,可以采用对抗性训练的思想,来交替训练生成器和判别器,进而将待预测数据输入至最终得到的生成器,以生成对应的输出结果。例如,生成器是一个概率生成模型,其目标为生成与训练数据分布一致的样本。判别器则是 一个分类器,其目标为准确判别一个样本是来自训练数据、还是来自生成器。这样一来,生成器和判别器形成“对抗”,生成器不断优化使得判别器无法区分出生成样本和训练数据样本的区别,而判别器不断优化使得能够分辨这种区别。生成器和判别器交替训练,最终达到平衡:生成器能生成完全符合训练数据分布的样本(以至判别器无法分辨),而判别器则能够敏感的分辨任何不符合训练数据分布的样本。
判别器根据训练数据样本和生成样本对生成器进行模型训练,通过训练后的判别器的模型对生成器所生成的每个生成样本的归属率进行判别,并将判别结果发送给生成器,以使得生成器根据判别器所判别的新的生成样本和判别结果进行模型训练。依此进行循环的对抗性训练,从而提高生成器生成该生成样本的能力,提高判别器判别生成样本归属率的能力。即,对抗性训练中,会交替训练判别器和生成器,直至达到平衡。当生成器的能力和判别器的能力训练到一定程度后,判别器判别生成器所生成的样本的归属率将趋向稳定。此时,则可以停止训练生成器和判别器的模型。例如当判别器根据所获取的所有训练数据样本和生成样本进行样本的归属率的判别,且判别器得到的判别结果的变化量小于预设阈值时,则可以结束对神经网络算法模型的训练。
可选的,还可以通过生成器和判别器的迭代次数作为判断条件来确定是否停止训练,其中生成器生成一次生成样本以及判断器判断一次生成器生成的生成样本表示一次迭代。比如,设置1000次迭代指标,若生成器生成过1000次后,则可以停止训练,或者若判别器判断过1000次后,则可以停止训练,进而得到训练好的神经网络算法模型。
需要说明的是,随着人工智能的不断发展和更新,关于算法模型的训练已经是较为成熟的技术,关于其他算法如回归算法或者决策树等所对应的模型的训练,此处不再一一赘述。
302、第二网元向第一网元发送算法模型的安装信息。
第二网元训练得到算法模型后,通过与第一网元之间的通信接口向第一网元发送算法模型的安装信息。该算法模型的安装信息可以携带于第一消息中来发送给第一网元。其中,该算法模型的安装信息包括:算法模型的唯一标识ID、算法模型的算法类型指示(例如指示该算法模型的算法类型为线性回归或者是神经网络)、算法模型的结构参数(例如线性回归模型的结构参数可包括回归值Z、回归系数、常量项和阶跃函数等)、算法模型的安装指示(用于指示第一网元安装该算法模型)。需要说明的是,由于各算法模型的算法类型所对应的结构参数各不一样,实际应用中,算法模型的安装信息中还可以不包括算法模型的算法类型指示,即第一网元可通过算法模型的结构参数确定算法模型的算法类型,故算法模型的算法类型指示可以为可选项,此处不做限定。
可选的,该算法模型的安装信息中还可包括策略索引信息,其中策略索引信息包括该算法模型的各预测结果,和与各预测结果对应的策略的标识信息(例如预测结果1对应的策略的标识信息为ID1,ID1对应的策略是设置QoS参数值)。
需要说明的是,第二网元还可通过该第一消息向第一网元发送采集信息,以使得第一网元根据该采集信息订阅特征向量来作为算法模型的输入。其中特征向量的采集信息至少包括待采集特征的标识ID。特征向量为待采集特征的特征值集合。为便于理解特征、特征 值和特征向量的关系,将进行举例说明,例如对一个报文,待采集特征为IP地址、APP ID和端口号,则对应的特征值可以为10.10.10.0、WeChat和21,特征向量即为特征值的集合{10.10.10.0,WeChat,21}。可选的,该特征向量的采集信息中还可以包括特征向量的订阅周期,如每3分钟采集一次特征向量,即第一网元的运行参数可能一直在发生变化,每隔一个订阅周期就采集不同的数据的特征向量来作为算法模型的输入进行预测。
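正文中"特征、特征值与特征向量"的关系,可以用如下示意性的Python片段表示(取值来自上述报文示例,数据结构为便于说明的假设):

```python
# 待采集特征的标识ID -> 特征值;特征向量即特征值的集合
feature_ids = ['IP地址', 'APP ID', '端口号']
collected = {'IP地址': '10.10.10.0', 'APP ID': 'WeChat', '端口号': 21}
feature_vector = [collected[fid] for fid in feature_ids]
# feature_vector 即正文示例中的 {10.10.10.0, WeChat, 21}
```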
可选的,第二网元除了通过上述第一消息来将该采集信息发送给第一网元,还可以向第一网元发送第二消息,该第二消息携带采集信息,另外,为了使第一网元明确所订阅的特征向量是服务于哪个算法模型的,第二消息还携带有该算法模型的唯一标识ID。综上,采集信息和算法模型的安装信息可以同时包含在一个消息中发送给第一网元,也可以分解为两条消息分别发送给第一网元。需要说明的是,若分解为两条消息分别发送给第一网元,则第二网元发送该两条消息的时序可以为先发第一消息,再发第二消息,或者同时发送,此处不做限定。
需要说明的是,实际应用中,该第一消息可以为模型安装消息(model install message),或者其他已有的消息,此处不做限定。
303、第一网元根据算法模型的安装信息安装算法模型。
第一网元通过与第二网元之间的通信接口接收到第一消息后,获取到第一消息中包含的算法模型的安装信息,进而根据该算法模型的安装信息安装算法模型,安装过程可以包括:第一网元确定算法模型的算法类型,确定的方式可以为通过第一消息中的算法类型指示来直接确定算法模型的算法类型,或者当第一消息中不包含算法类型指示时,通过第一消息中的算法模型的结构参数对应确定算法模型的算法类型。例如,若算法模型的结构参数包括特征向量x、分类结果z、训练得到的回归系数wᵀ、常量项b,以及阶跃函数和阶跃函数值分离区间阈值0.5,则第一网元可以确定该算法模型的算法类型为logistics回归。在确定了算法模型的算法类型后,将算法模型的结构参数作为算法模型的算法类型所对应的模型构成参数,以完成算法模型的安装。为便于理解,以算法类型为线性回归算法为例进行说明,若第一消息中的算法类型指示用于指示该算法模型的结构类型为线性回归算法,且算法模型的结构参数包括特征集(即特征的集合)和对应的回归系数wᵀ、常量项b,则第一网元将该算法模型的结构参数作为模型构成参数实例化到对应的算法模型的结构中。例如一个用来控制导频功率的线性回归模型的特征集包括{RSRP,CQI,TCP Load},回归系数分别是{0.45,0.4,0.15},常量项b是60,没有阶跃函数(因为是线性回归不是逻辑回归),第一网元接收到上述的模型结构参数后即可在本地实例化模型。
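上述导频功率线性回归模型的本地实例化,可以用如下示意性的Python片段表示(回归系数与常量项取自正文示例,输入的特征取值为假设):

```python
# 本地实例化的模型构成参数:特征集对应的回归系数与常量项b(来自正文示例)
coeffs = {'RSRP': 0.45, 'CQI': 0.4, 'TCP Load': 0.15}
b = 60

def predict(features):
    # z = sum(回归系数 * 特征值) + 常量项b,线性回归无阶跃函数
    return sum(coeffs[k] * v for k, v in features.items()) + b

# 假设采集到 RSRP=-100, CQI=10, TCP Load=20,则 z = -45 + 4 + 3 + 60 = 22
z = predict({'RSRP': -100, 'CQI': 10, 'TCP Load': 20})
```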
可选的,当算法模型的安装信息和采集信息同时包含在一个消息中,即当第一消息中还包括采集信息时,第一网元需要根据该采集信息订阅特征向量,订阅特征向量的过程可以包括:第一网元根据该采集信息判断自身是否具备提供待采集特征的特征值的能力。其中,第一网元判断自身是否具备该能力的方式有多种,例如,第一网元确定待采集特征的标识ID是否包含于预置的可采集特征信息中,若包含,则第一网元确定具备该能力;反之,若不包含,则第一网元确定不具备该能力。可以理解为,第一网元支持提供特征值的每一个特征都有一个唯一的编号,比如编号1A对应RSRP,编号2A对应信道质量指示(channel  quality indicator,CQI),编号3A对应信号与干扰加噪声比(signal to interference plus noise ratio,SINR)等,可将这些编号作为预置的可采集特征信息,如果待采集特征对应的编号不在其中,那就是不支持提供该待采集特征的特征值。因此,若第一网元判断自身具备提供待采集特征的特征值的能力,则第一网元成功订阅特征向量;对应的,若第一网元判断其不具备提供待采集特征的特征值的能力,则第一网元订阅特征向量失败。需要说明的是,若第一网元订阅特征向量失败,则第一网元还需向第二网元反馈订阅失败消息,该订阅失败消息需携带不能获取到的特征的标识信息。
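上述"根据预置的可采集特征信息判断是否具备提供特征值的能力"的过程,可以用如下示意性的Python片段表示(编号取自正文示例,返回值形式为假设):

```python
# 预置的可采集特征信息:编号 -> 特征(编号1A/2A/3A来自正文示例)
supported_features = {'1A': 'RSRP', '2A': 'CQI', '3A': 'SINR'}

def check_subscription(feature_ids):
    # 返回 (是否订阅成功, 不能获取到的特征编号列表)
    missing = [fid for fid in feature_ids if fid not in supported_features]
    return len(missing) == 0, missing
```

订阅失败时,missing列表即为需随订阅失败消息反馈的不能获取到的特征的标识信息。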
可选的,当算法模型的安装信息和采集信息不在一个消息内,即第一网元还从第二网元接收第二消息,获取到第二消息中包含的采集信息时,第一网元可根据该第二消息中的采集信息来订阅特征向量,订阅特征向量的过程此处不再赘述。
304、第一网元向第二网元发送安装结果指示。
可选的,响应于第一消息,第一网元通过与第二网元的通信接口向第二网元发送第一响应消息,该第一响应消息携带有安装结果指示,且第一响应消息中包括该算法模型的唯一标识ID。例如,第一网元安装算法模型成功时,该安装结果指示用于向第二网元指示安装该算法模型成功;第一网元安装算法模型失败时,该安装结果指示用于向第二网元指示安装该算法模型失败。此时,可选的,第一响应消息还携带有安装失败原因指示,用于告知第二网元安装失败的原因。其中,安装失败的原因可以为算法模型过大,或者是算法模型的安装信息中的参数不合法等,此处不做限定。
可选的,若第一网元还从第二网元接收采集信息以订阅特征向量,则第一网元可向第二网元发送特征向量订阅结果指示,用于指示该特征向量是否订阅成功。需要说明的是,若算法模型的安装信息和采集信息均携带于第一消息中,则对应的第一响应消息中也携带有特征向量订阅结果指示。可选的,若特征向量订阅结果指示用于指示特征向量订阅失败,该第一响应消息还携带有不能获取到的特征的标识信息。
可选的,若第二网元通过第二消息将采集信息发送给第一网元时,则响应于该第二消息,第一网元通过与第二网元的通信接口向第二网元发送第二响应消息,该第二响应消息中携带有特征向量订阅结果指示。可选的,若特征向量订阅结果指示用于指示特征向量订阅失败,该第二响应消息还携带有不能获取到的特征的标识信息。
需要说明的是,实际应用中,该第一响应消息可以为模型安装响应(model install response)消息,或者其他已有的消息,此处不做限定。
305、第一网元根据算法模型对数据进行预测。
第一网元订阅好特征向量且成功安装算法模型后,开始执行预测功能,即根据安装好的算法模型对数据进行预测,包括:第一网元采集数据,实际应用中,第一网元采集的数据可以为但不限于以下几种中的任一项:1、第一网元运行状态的参数,如中央处理器(central processing unit,CPU)占用率、内存使用率、发包速率等;2、经过第一网元的报文特征数据,如包大小,包间隔等;3、基站RRM/RRT相关的参数,如RSRP,CQI等。第一网元采集的数据此处不做限定。
第一网元采集到数据后,通过采集信息中待采集特征的标识ID获取该数据的目标特征 向量,第一网元将该目标特征向量输入到算法模型中得到目标预测结果,需要说明的是,该目标预测结果可以为一个数值或者一个分类。第一网元得到目标预测结果后,在策略索引信息中找出与目标预测结果对应的目标策略的标识信息,进而第一网元可以根据该目标策略的标识信息索引到目标策略,并对第一网元采集到的数据执行该目标策略。
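上述"在策略索引信息中找出与目标预测结果对应的目标策略的标识信息"的查找过程,可以用如下示意性的Python片段表示(策略标识取值均为假设):

```python
# 策略索引信息:预测结果 -> 目标策略的标识信息(取值为假设)
policy_index = {0: 'policy-1', 1: 'policy-2'}

def select_policy(prediction):
    # 根据目标预测结果索引目标策略的标识信息;无对应项时返回None(假设的处理方式)
    return policy_index.get(prediction)
```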
需要说明的是,实际应用中,在安装完算法模型后,第一网元还可以根据实际需求对该算法模型执行其他操作,例如删除或者修改等。例如,第一网元对安装完成的算法模型执行其他操作可以包括:第一网元通过与第二网元的通信接口从第二网元接收第三消息,该第三消息至少携带有算法模型的唯一标识ID,且该第三消息用于指示第一网元对算法模型执行目标操作。目标操作可以包括以下操作中的一种:修改算法模型(model modification)、删除算法模型(model delete)、激活算法模型(model active)或者去激活算法模型(model de-active)。需要说明的是,针对不同的操作,第三消息中携带的信息也可能不同,例如,当目标操作为删除、激活或者去激活算法模型时,第三消息中可以携带该算法模型的唯一标识ID;当目标操作为修改算法模型时,第三消息中还要包括修改后的算法模型的安装信息,以使得第一网元能依据该修改后的算法模型的安装信息对算法模型进行修改。
本申请实施例中,将机器学习中的训练步骤由第二网元执行,第一网元安装算法模型,并根据该算法模型对第一网元采集到的数据进行预测,实现了将网络架构内模型和预测的逻辑功能分离,在第一网元获取到特征的特征值以得到特征向量后,即可根据已安装的算法模型进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题。
需要说明的是,基于对智能网元功能的分解,将机器学习的过程在逻辑功能类别上抽象出DSF、A&MF、MEF和APF功能,每一个分布式单元都可以部署这四类逻辑功能,并执行其中的一个或多个功能,为便于理解,本实施例中,将执行A&MF功能的网元称为第二网元,执行MEF的网元称为第一网元,执行DSF的网元称为第三网元,执行APF的网元称为第四网元,基于此,请参阅图4,为本申请实施例在情况B的基础上所提供的一种可能的基于机器学习的数据处理方法,包括:
401、第二网元获得训练完成的目标算法模型。
402、第二网元向第一网元发送目标算法模型的安装信息。
403、第一网元根据目标算法模型的安装信息安装目标算法模型。
本申请实施例中,步骤401至步骤403与图3中的步骤301至303类似,此处不再赘述。
404、第一网元向第三网元发送采集信息。
在步骤402中,第二网元可以通过第一消息来携带采集信息并发送给第一网元,或者第二网元向第一网元发送第二消息,该第二消息中携带有该采集信息和目标算法模型的唯一标识ID。
因此,当采集信息携带于第一消息时,第一网元接收并解码该第一消息,将其中的采集信息独立出并携带于单独的第三消息(例如特征订阅消息(feature subscribe message) 或者其他已有消息)中,第一网元将该第三消息通过与第三网元之间的通信接口发送给第三网元,以使得第三网元根据第三消息所包含的采集信息,获取到第一网元采集到的数据的目标特征向量。由于实际应用中,算法模型可能包括多个模型,因此第三网元可能需要为多个模型提供模型输入的特征向量,为便于第三网元明确所订阅的特征向量是服务于哪一个算法模型,本申请中将算法模型中的至少一个模型称为目标算法模型,故该第三消息中还包括目标算法模型的唯一标识ID。
可选的,当采集信息携带于第二消息时,则第一网元可将接收到的第二消息通过与第三网元之间的通信接口转发给第三网元。
可选的,第二网元还可以直接向第三网元发送采集信息,例如,第二网元向第三网元发送第四消息,该第四消息中携带有采集信息和目标算法模型的唯一标识ID,故第三网元接收到特征向量的采集信息的方式此处不做限定。
405、第三网元向第一网元发送特征向量订阅结果指示。
第三网元获得采集信息以订阅特征向量后,判断自身是否具备提供待采集特征的特征值的能力,需要说明的是,本实施例中,第三网元判断自身是否具备提供待采集特征的特征值的能力的方式与图3中的步骤303中所述的第一网元判断自身是否具备提供待采集特征的特征值的能力的方式类似,此处不再赘述。
可选的,第三网元向第一网元发送特征向量订阅结果指示,用于向第一网元指示特征向量订阅是否成功。需要说明的是,若采集信息为第一网元通过第三消息携带来发送给第三网元,则对应的,第三网元可以通过第三响应消息将特征向量订阅结果指示发送给第一网元,且该第三响应消息还携带有目标算法模型的唯一标识ID。可选的,若第三网元判断自身不具备提供待采集特征的特征值的能力,即特征向量订阅结果指示用于指示特征向量订阅失败,则第三响应消息中还携带有不能获取到的特征的标识信息。
可选的,若第三网元获得的采集信息为第二网元直接通过第四消息携带来向第三网元发送,则第三网元向第二网元发送第四响应消息,该第四响应消息携带特征向量订阅结果指示;可选的,当特征向量订阅结果指示用于指示特征向量订阅失败时,第四响应消息中还携带有不能获取到的特征的标识信息。
406、第一网元向第二网元发送安装结果指示。
本申请实施例中,步骤406与图3中的步骤304类似,此处不再赘述。
407、第三网元向第一网元发送目标特征向量。
当特征向量订阅结果指示用于指示特征向量订阅成功时,第三网元从第一网元中采集目标数据,并获得该目标数据的待采集特征的特征值,进而得到目标特征向量。故第三网元获得目标特征向量后,可以通过特征向量反馈消息向第一网元发送目标特征向量,该特征向量反馈消息可以为特征反馈(feature report)消息或者其他已有消息,且该特征向量反馈消息还携带有目标算法模型的唯一标识ID。
可选的,若该特征向量的订阅为周期性订阅时,则第三网元每隔一个订阅周期即向第一网元发送特征向量反馈消息。
408、第一网元根据目标算法模型进行预测。
第一网元接收到特征向量反馈消息后,根据特征向量反馈消息中的目标算法模型的标识ID索引到用于执行预测的目标算法模型,并将特征向量反馈消息中的目标特征向量输入到该目标算法模型中,以得到对应的目标预测结果。需要说明的是,该目标预测结果可以为一个数值,例如为连续区间内的数值或者离散区间内的数值。
409、第一网元向第四网元发送目标预测结果。
第一网元产生目标预测结果后,通过与第四网元之间的通信接口向第四网元发送该目标预测结果,其中,该目标预测结果可携带于第一网元向第四网元发送的第五消息中,其中该第五消息可以为预测指示消息(predication indication message)或者其他已有消息,此处不做限定。
需要说明的是,该第五消息除了携带目标预测结果外,还可携带目标算法模型的唯一标识ID、和与该目标算法模型对应的目标策略索引信息,以使得所述第四网元根据该第五消息确定与目标预测结果对应的目标策略。
410、第四网元确定目标策略;
第四网元通过与第一网元之间的通信接口接收到第五消息后,进行解码获得第五消息携带的目标算法模型的唯一标识ID、目标预测结果、和与该目标算法模型对应的目标策略索引信息,进而在目标策略索引信息中找出与目标策略结果对应的目标策略的标识信息,即第四网元确定得到目标策略。
可选的,第四网元还可以判断该目标策略是否适配于对应预测的数据,例如实际应用中,在切换基站时,不仅需要依据模型预测的结果,还要结合网络实际的运行状态如是否发生拥塞等情况,若不适配,则需要重新确定目标策略。
可选的,第四网元确定目标策略后,向第一网元发送第五反馈消息,该第五反馈消息可以为预测响应(predication response)消息或者其他已有消息,该第五反馈消息用于向第一网元反馈目标预测结果对应的目标策略,该第五反馈消息携带有目标策略的标识信息,以使得第一网元能对目标数据执行该目标策略。
本申请实施例中,将逻辑功能分离为四种,可以按需部署在不同的实体设备上,增加了网络的灵活性,另外不需要的功能也可以不部署,节约了网络资源。
上面对本申请实施例中基于机器学习的数据处理方法进行了描述,下面对本申请实施例中的网元进行描述,请参阅图5,本申请实施例中网元的一个实施例,该网元可以执行上述方法实施例中的第一网元的操作,该网元包括:
第一收发单元501,用于从第二网元接收算法模型的安装信息,该第二网元用于训练该算法模型;
安装单元502,用于根据该收发单元接收的算法模型的安装信息,安装算法模型;
采集单元503,用于采集数据;
预测单元504,用于当安装单元安装算法模型成功后,根据算法模型对采集单元504采集到的数据进行预测。
可选的,在一些可能的实现方式中,
第一收发单元501还用于从第二网元接收采集信息,采集信息至少包括待采集特征的 标识ID。
可选的,在一些可能的实现方式中,该第一网元可进一步包括:
第二收发单元505,用于向第三网元发送采集信息和目标算法模型的唯一标识ID,目标算法模型为算法模型中的至少一个模型;从第三网元接收与采集信息对应的目标特征向量和目标算法模型的唯一标识ID,目标算法模型用于执行预测操作。
可选的,在一些可能的实现方式中,该第一网元可进一步包括:
第三收发单元506,用于向第四网元发送目标算法模型的唯一标识ID,目标预测结果和与目标算法模型对应的目标策略索引信息,目标预测结果用于目标策略的确定,目标预测结果为将目标特征向量输入至目标算法模型所得到的结果;从第四网元接收目标策略的标识信息。
可选的,在一些可能的实现方式中,第一收发单元501还可用于:
从第二网元接收目标操作指示和算法模型的唯一标识ID,目标操作指示用于指示第一网元对算法模型执行目标操作,目标操作包括修改所述算法模型,删除算法模型,激活算法模型或者去激活算法模型。
可选的,第一收发单元501还可用于:
当目标操作为修改算法模型时,从第二网元接收修改后的算法模型的安装信息。
可选的,第一收发单元501还可用于:
当算法模型安装失败后,向第二网元发送安装失败原因指示。
本申请实施例中,将机器学习中的训练步骤由第二网元执行,安装单元安装算法模型,并由预测单元根据该算法模型对第一网元接收到的数据进行预测,实现了将网络架构内模型和预测的逻辑功能分离,在采集单元采集到数据后,预测单元即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验受到影响的问题;另外,还可将逻辑功能分离为四种,可以按需部署在不同的实体设备上,增加了网络的灵活性,另外不需要的功能也可以不部署,节约了网络资源。
请参阅图6,本申请实施例中网元的另一个实施例,该网元可以执行以上方法实施例中的第二网元的操作,该网元包括:
训练单元601,用于获得训练完成的算法模型;
收发单元602,用于向第一网元发送算法模型的安装信息,算法模型的安装信息用于算法模型的安装,算法模型用于数据预测,第一网元为UPF或基站。
可选的,在一些可能的实现方式中,收发单元602还用于,当第一网元安装算法模型失败时,从第一网元接收安装失败原因指示。
可选的,在一些可能的实现方式中,收发单元602还用于:
向第一网元发送采集信息,采集信息至少包括待采集特征的标识ID。
本申请实施例中,将机器学习中的训练步骤由第二网元的训练单元执行,第一网元安装算法模型,并根据该算法模型对第一网元采集到的数据进行预测,实现了将网络架构内模型训练和预测的逻辑功能分离,在第一网元采集到数据后,即可根据已安装的算法模型对数据进行预测,减少了交互时延,解决了现有技术中由于交互时延增加所导致业务体验 受到影响的问题;另外,还可将逻辑功能分离为四种,可以按需部署在不同的实体设备上,增加了网络的灵活性,另外不需要的功能也可以不部署,节约了网络资源。
上面图5至图6从模块化功能实体的角度分别对本申请实施例中的第一网元和第二网元进行详细描述,下面从硬件处理的角度对本申请实施例中的第一网元和第二网元进行详细描述。
请参阅图7。在采用集成的单元的情况下,图7示出了一种通信装置可能的结构示意图。该通信装置700包括:处理单元702和通信单元703。处理单元702用于对该通信装置的动作进行控制管理。该通信装置700还可以包括存储单元701,用于存储该通信装置所需的程序代码和数据。
在一个实施例中,该通信装置可以是上述第一网元。例如,处理单元702用于支持第一网元执行图3中的步骤303和步骤305,图4中的步骤403和步骤408,和/或用于本文所描述的技术的其它过程。通信单元703用于支持第一网元与其他设备的通信,例如,通信单元703用于支持第一网元执行图3中的步骤302和步骤304,图4中的步骤402、步骤404至步骤407,和步骤409。
在另一个实施例中,该通信装置可以是上述第二网元。例如,处理单元702用于支持第二网元执行图3中的步骤301,图4中的步骤401,和/或用于本文所描述的技术的其它过程。通信单元703用于支持第二网元与其他设备的通信,例如,通信单元703用于支持第二网元执行图3中的步骤302和步骤304,图4中的步骤402和步骤406。
其中,处理单元702可以是处理器或控制器,例如可以是中央处理器(central processing unit,CPU),通用处理器,数字信号处理器(digital signal processor,DSP),专用集成电路(application-specific integrated circuit,ASIC),现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信单元703可以是通信接口、收发器、收发电路等,其中,通信接口是统称,可以包括一个或多个接口,例如收发接口。存储单元701可以是存储器。
处理单元702可以为处理器,通信单元703可以为通信接口,存储单元701可以为存储器时,参阅图8所示,该通信装置810包括:处理器812、通信接口813、存储器811。可选的,通信装置810还可以包括总线814。其中,通信接口813、处理器812以及存储器811可以通过总线814相互连接;总线814可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线814可以分为地址总线、数据总线、控制总线等。为便于表示,图8中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
类似的,在一个实施例中,通信装置810可用于指示上述第一网元的步骤。在另一个实施例中,通信装置810可用于指示上述第二网元的步骤。此处不再赘述。
本申请实施例还提供一种装置,该装置可以是芯片,该装置包括存储器,其中存储器 用于存储指令。当存储器存储的指令被处理器执行时,使得处理器执行第一网元在图3至图4所述实施例中的基于机器学习的数据处理方法中的部分或全部步骤,例如图3中的步骤303和步骤305、图4中的步骤403和步骤408,和/或用于本申请所描述的技术的其它过程。或者,使得处理器执行第二网元在图3至图4所述实施例中的基于机器学习的数据处理方法中的部分或全部步骤,例如图3中的步骤301、图4中的步骤401,和/或用于本申请所描述的技术的其它过程。
本申请实施例还提供一种系统,如图9所示,为本申请提供的一种可能的系统的结构示意图,该系统可以包括一个或多个中央处理器922和存储器932,一个或一个以上存储应用程序942或数据944的存储介质930(例如一个或一个以上海量存储设备)。其中,存储器932和存储介质930可以是短暂存储或持久存储。存储在存储介质930的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对系统中的一系列指令操作。更进一步地,中央处理器922可以设置为与存储介质930通信,在系统900上执行存储介质930中的一系列指令操作。系统900还可以包括一个或一个以上电源926,一个或一个以上有线或无线网络接口950,一个或一个以上输入输出接口958,和/或,一个或一个以上操作系统941,例如Windows Server,Mac OS X,Unix,Linux,FreeBSD等等。
上述图3至图4所描述的基于机器学习的数据处理方法的实施例可以基于该图9所示的系统结构来实现。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程设备。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显 示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (30)

  1. A machine learning-based data processing method, comprising:
    receiving, by a first network element, installation information of an algorithm model from a second network element, wherein the first network element is a user plane function (UPF) network element or a base station, and the second network element is configured to train the algorithm model;
    installing, by the first network element, the algorithm model according to the installation information of the algorithm model; and
    after the algorithm model is installed successfully, collecting, by the first network element, data, and performing prediction on the data according to the algorithm model.
  2. The method according to claim 1, wherein the installation information of the algorithm model comprises the following information: a unique identifier (ID) of the algorithm model, an algorithm type of the algorithm model, structural parameters of the algorithm model, and an installation indication of the algorithm model, wherein the installation indication of the algorithm model is used to indicate installation of the algorithm model.
  3. The method according to claim 2, wherein the installation information of the algorithm model further comprises policy index information, and the policy index information comprises a prediction result of the algorithm model and identification information of a policy corresponding to the prediction result.
  4. The method according to claim 2 or 3, wherein before the first network element collects the data, the method further comprises:
    receiving, by the first network element, collection information from the second network element, wherein the collection information comprises at least an identifier (ID) of a feature to be collected.
  5. The method according to claim 4, wherein after the first network element receives the collection information from the second network element, the method further comprises:
    sending, by the first network element, the collection information and a unique ID of a target algorithm model to a third network element, wherein the target algorithm model is at least one of the algorithm models; and
    receiving, by the first network element from the third network element, a target feature vector corresponding to the collection information and the unique ID of the target algorithm model, wherein the target algorithm model is used to perform a prediction operation.
  6. The method according to claim 5, further comprising:
    sending, by the first network element, the unique ID of the target algorithm model, a target prediction result, and target policy index information corresponding to the target algorithm model to a fourth network element, wherein the target prediction result is used for determining a target policy, and the target prediction result is a result obtained by inputting the target feature vector into the target algorithm model; and
    receiving, by the first network element, identification information of the target policy from the fourth network element.
  7. The method according to any one of claims 1 to 6, wherein after the algorithm model is installed successfully, the method further comprises:
    receiving, by the first network element, a target operation indication and the unique ID of the algorithm model from the second network element, wherein the target operation indication is used to instruct the first network element to perform a target operation on the algorithm model, and the target operation comprises modifying the algorithm model, deleting the algorithm model, activating the algorithm model, or deactivating the algorithm model.
  8. The method according to claim 7, wherein when the target operation is modifying the algorithm model, the method further comprises:
    receiving, by the first network element, installation information of the modified algorithm model from the second network element.
  9. The method according to any one of claims 1 to 8, wherein when installation of the algorithm model fails, the method further comprises:
    sending, by the first network element, an installation failure cause indication to the second network element.
  10. A machine learning-based data processing method, comprising:
    obtaining, by a second network element, a trained algorithm model; and
    sending, by the second network element, installation information of the algorithm model to a first network element, wherein the installation information of the algorithm model is used for installation of the algorithm model, the algorithm model is used for data prediction, and the first network element is a user plane function (UPF) network element or a base station.
  11. The method according to claim 10, wherein the installation information of the algorithm model comprises the following information: a unique identifier (ID) of the algorithm model, an algorithm type of the algorithm model, structural parameters of the algorithm model, and an installation indication of the algorithm model, wherein the installation indication of the algorithm model is used to instruct the first network element to install the algorithm model.
  12. The method according to claim 10 or 11, wherein the installation information of the algorithm model further comprises policy index information, and the policy index information comprises a prediction result of the algorithm model and identification information of a policy corresponding to the prediction result.
  13. The method according to any one of claims 10 to 12, wherein after the second network element sends the installation information of the algorithm model to the first network element, the method further comprises:
    when the first network element fails to install the algorithm model, receiving, by the second network element, an installation failure cause indication from the first network element.
  14. The method according to any one of claims 10 to 13, further comprising:
    sending, by the second network element, collection information to the first network element, wherein the collection information comprises at least an identifier (ID) of a feature to be collected.
  15. A network element, wherein the network element is a first network element, and the first network element is a user plane function (UPF) network element or a base station, comprising:
    a first transceiver unit, configured to receive installation information of an algorithm model from a second network element, wherein the second network element is configured to train the algorithm model;
    an installation unit, configured to install the algorithm model according to the installation information of the algorithm model received by the transceiver unit;
    a collection unit, configured to collect data; and
    a prediction unit, configured to: after the installation unit installs the algorithm model successfully, perform prediction, according to the algorithm model, on the data collected by the collection unit.
  16. The network element according to claim 15, wherein the installation information of the algorithm model comprises the following information: a unique identifier (ID) of the algorithm model, an algorithm type of the algorithm model, structural parameters of the algorithm model, and an installation indication of the algorithm model, wherein the installation indication of the algorithm model is used to indicate installation of the algorithm model.
  17. The network element according to claim 16, wherein the installation information of the algorithm model further comprises policy index information, and the policy index information comprises a prediction result of the algorithm model and identification information of a policy corresponding to the prediction result.
  18. The network element according to claim 16 or 17, wherein the first transceiver unit is further configured to:
    receive collection information from the second network element, wherein the collection information comprises at least an identifier (ID) of a feature to be collected.
  19. The network element according to claim 18, further comprising:
    a second transceiver unit, configured to send the collection information and a unique ID of a target algorithm model to a third network element, wherein the target algorithm model is at least one of the algorithm models;
    wherein the second transceiver unit is further configured to receive, from the third network element, a target feature vector corresponding to the collection information and the unique ID of the target algorithm model, wherein the target algorithm model is used to perform a prediction operation.
  20. The network element according to claim 19, further comprising:
    a third transceiver unit, configured to send the unique ID of the target algorithm model, a target prediction result, and target policy index information corresponding to the target algorithm model to a fourth network element, wherein the target prediction result is used for determining a target policy, and the target prediction result is a result obtained by inputting the target feature vector into the target algorithm model;
    wherein the third transceiver unit is further configured to receive identification information of the target policy from the fourth network element.
  21. The network element according to any one of claims 15 to 20, wherein the first transceiver unit is further configured to:
    receive a target operation indication and the unique ID of the algorithm model from the second network element, wherein the target operation indication is used to instruct the first network element to perform a target operation on the algorithm model, and the target operation comprises modifying the algorithm model, deleting the algorithm model, activating the algorithm model, or deactivating the algorithm model.
  22. The network element according to claim 21, wherein when the target operation is modifying the algorithm model, the first transceiver unit is further configured to:
    receive installation information of the modified algorithm model from the second network element.
  23. The network element according to any one of claims 15 to 22, wherein when installation of the algorithm model fails, the first transceiver unit is further configured to:
    send an installation failure cause indication to the second network element.
  24. A network element, wherein the network element is a second network element, comprising:
    a training unit, configured to obtain a trained algorithm model; and
    a transceiver unit, configured to send installation information of the algorithm model to a first network element, wherein the installation information of the algorithm model is used for installation of the algorithm model, the algorithm model is used for data prediction, and the first network element is a user plane function (UPF) network element or a base station.
  25. The network element according to claim 24, wherein the installation information of the algorithm model comprises the following information: a unique identifier (ID) of the algorithm model, an algorithm type of the algorithm model, structural parameters of the algorithm model, and an installation indication of the algorithm model, wherein the installation indication of the algorithm model is used to instruct the first network element to install the algorithm model.
  26. The network element according to claim 25, wherein the installation information of the algorithm model further comprises policy index information, and the policy index information comprises an output result of the algorithm model and identification information of a policy corresponding to the output result.
  27. The network element according to any one of claims 24 to 26, wherein the transceiver unit is further configured to:
    when the first network element fails to install the algorithm model, receive an installation failure cause indication from the first network element.
  28. The network element according to any one of claims 24 to 26, wherein the transceiver unit is further configured to:
    send collection information to the first network element, wherein the collection information comprises at least an identifier (ID) of a feature to be collected.
  29. A computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 14.
  30. A computer program product comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 14.
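Read as a message flow, the method claims describe a second network element that trains a model and pushes its installation information (unique ID, algorithm type, structural parameters, install indication, optional policy index) to a first network element (a UPF or base station), which installs the model, reports a failure cause on error, and maps prediction results to policies. The sketch below models that exchange in Python; every class, field, and value name (`InstallInfo`, `threshold`, `policy-42`, and so on) is an illustrative invention for this sketch, not an identifier taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class InstallInfo:
    # The fields named in claim 2; the attribute names here are illustrative.
    model_id: str          # unique ID of the algorithm model
    algo_type: str         # algorithm type
    params: dict           # structural parameters
    install: bool          # installation indication
    policy_index: dict = field(default_factory=dict)  # prediction result -> policy ID (claim 3)

class FirstNetworkElement:
    """Plays the role of the UPF / base station side of the exchange."""
    def __init__(self):
        self.models = {}

    def receive_install_info(self, info: InstallInfo):
        # Install the model from its installation information (claim 1);
        # on error, return a failure cause indication instead (claim 9).
        if not info.install or "threshold" not in info.params:
            return ("install_failure", "missing structural parameters")
        self.models[info.model_id] = info
        return ("install_success", info.model_id)

    def predict(self, model_id, feature_vector):
        # Collect data and run the installed model on it (claim 1);
        # a toy threshold rule stands in for the real algorithm model here.
        info = self.models[model_id]
        result = "high" if sum(feature_vector) > info.params["threshold"] else "low"
        # Look up the policy ID that corresponds to the prediction result (claims 3 and 6).
        return result, info.policy_index.get(result)

# Second-network-element side (claim 10): train a model, then push its installation info.
info = InstallInfo("m1", "linear", {"threshold": 10},
                   install=True, policy_index={"high": "policy-42"})
ne1 = FirstNetworkElement()
status, _ = ne1.receive_install_info(info)
result, policy = ne1.predict("m1", [6, 7])
```

In this sketch, `sum([6, 7]) = 13` exceeds the installed threshold of 10, so the prediction is `"high"` and the policy index resolves it to `"policy-42"`; an `InstallInfo` that fails validation produces the failure-cause tuple instead of a stored model.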
PCT/CN2018/121033 2018-02-06 2018-12-14 Machine learning-based data processing method and related device WO2019153878A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18905264.0A EP3734518A4 (en) 2018-02-06 2018-12-14 DATA PROCESSING METHODS BASED ON MACHINE LEARNING AND ASSOCIATED DEVICE
US16/985,406 US20200364571A1 (en) 2018-02-06 2020-08-05 Machine learning-based data processing method and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810125826.9A 2018-02-06 2018-02-06 Machine learning-based data processing method and related device
CN201810125826.9 2018-02-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/985,406 Continuation US20200364571A1 (en) 2018-02-06 2020-08-05 Machine learning-based data processing method and related device

Publications (1)

Publication Number Publication Date
WO2019153878A1 true WO2019153878A1 (zh) 2019-08-15

Family

ID=67519709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/121033 WO2019153878A1 (zh) 2018-02-06 2018-12-14 一种基于机器学习的数据处理方法以及相关设备

Country Status (4)

Country Link
US (1) US20200364571A1 (zh)
EP (1) EP3734518A4 (zh)
CN (1) CN110119808A (zh)
WO (1) WO2019153878A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329226A (zh) * 2020-11-02 2021-02-05 Nanchang Intelligent New Energy Vehicle Research Institute Data-driven prediction method for the clutch oil-pressure sensor of a dual-clutch transmission
CN112799385A (zh) * 2019-10-25 2021-05-14 Shenyang Institute of Automation, Chinese Academy of Sciences Agent path planning method based on a guided-domain artificial potential field
CN113034264A (zh) * 2020-09-04 2021-06-25 Shenzhen University Method and apparatus for building a customer churn early-warning model, terminal device, and medium

Families Citing this family (24)

Publication number Priority date Publication date Assignee Title
US11074502B2 (en) * 2018-08-23 2021-07-27 D5Ai Llc Efficiently building deep neural networks
US20200175383A1 (en) * 2018-12-03 2020-06-04 Clover Health Statistically-Representative Sample Data Generation
US20220294606A1 (en) * 2019-08-16 2022-09-15 Telefonaktiebolaget Lm Ericsson (Publ) Methods, apparatus and machine-readable media relating to machine-learning in a communication network
US20220292398A1 (en) * 2019-08-16 2022-09-15 Telefonaktiebolaget Lm Ericsson (Publ) Methods, apparatus and machine-readable media relating to machine-learning in a communication network
CN110569288A (zh) * 2019-09-11 2019-12-13 ZTE Corporation Data analysis method, apparatus, device, and storage medium
CN112788661B (zh) * 2019-11-07 2023-05-05 Huawei Technologies Co., Ltd. Network data processing method, network element, and system
CN112886996A (zh) * 2019-11-29 2021-06-01 Beijing Samsung Telecommunication Technology Research Co., Ltd. Signal receiving method, user equipment, electronic device, and computer storage medium
CN111079660A (zh) * 2019-12-19 2020-04-28 Dianjing Data Technology (Hangzhou) Co., Ltd. Method for counting cinema attendance based on thermal infrared imaging pictures
US11599980B2 (en) 2020-02-05 2023-03-07 Google Llc Image transformation using interpretable transformation parameters
CN113570063B (zh) * 2020-04-28 2024-04-30 Datang Mobile Communications Equipment Co., Ltd. Machine learning model parameter transfer method and apparatus
CN113570062B (zh) * 2020-04-28 2023-10-10 Datang Mobile Communications Equipment Co., Ltd. Machine learning model parameter transfer method and apparatus
CN117320034A (zh) * 2020-04-29 2023-12-29 Huawei Technologies Co., Ltd. Communication method, apparatus, and system
CN111782764B (zh) * 2020-06-02 2022-04-08 Zhejiang University of Technology Visual understanding and diagnosis method for an interactive NL2SQL model
CN114143802A (zh) * 2020-09-04 2022-03-04 Huawei Technologies Co., Ltd. Data transmission method and apparatus
US20220086175A1 (en) * 2020-09-16 2022-03-17 Ribbon Communications Operating Company, Inc. Methods, apparatus and systems for building and/or implementing detection systems using artificial intelligence
TWI766522B (zh) * 2020-12-31 2022-06-01 Hon Hai Precision Industry Co., Ltd. Data processing method and apparatus, electronic device, and storage medium
US11977466B1 (en) * 2021-02-05 2024-05-07 Riverbed Technology Llc Using machine learning to predict infrastructure health
CN117597969A (zh) * 2021-09-18 2024-02-23 Guangdong OPPO Mobile Telecommunications Corp., Ltd. AI data transmission method, apparatus, device, and storage medium
CN114302506B (zh) * 2021-12-24 2023-06-30 China United Network Communications Group Co., Ltd. Artificial intelligence (AI)-based protocol stack unit, data processing method, and apparatus
WO2023169402A1 (zh) * 2022-03-07 2023-09-14 Vivo Mobile Communication Co., Ltd. Model accuracy determination method and apparatus, and network-side device
CN116776985A (zh) * 2022-03-07 2023-09-19 Vivo Mobile Communication Co., Ltd. Model accuracy determination method and apparatus, and network-side device
WO2024055191A1 (en) * 2022-09-14 2024-03-21 Huawei Technologies Co., Ltd. Methods, system, and apparatus for inference using probability information
CN118041764A (zh) * 2022-11-11 2024-05-14 Huawei Technologies Co., Ltd. Machine learning management and control method and apparatus
CN116451582B (zh) * 2023-04-19 2023-10-17 China University of Mining and Technology Fire heat release rate measurement system and method based on a machine learning fusion model

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102298569A (zh) * 2010-06-24 2011-12-28 Microsoft Corporation Parallelization of online learning algorithms
CN106471851A (zh) * 2014-06-20 2017-03-01 OpenTV, Inc. Learning-model-based device localization
CN106663224A (zh) * 2014-06-30 2017-05-10 Amazon Technologies, Inc. Interactive interface for machine learning model evaluation
CN107229976A (zh) * 2017-06-08 2017-10-03 Zhengzhou Yunhai Information Technology Co., Ltd. Spark-based distributed machine learning system
CN107577943A (zh) * 2017-09-08 2018-01-12 Beijing Qihoo Technology Co., Ltd. Machine learning-based sample prediction method, apparatus, and server

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US5467428A (en) * 1991-06-06 1995-11-14 Ulug; Mehmet E. Artificial neural network method and architecture adaptive signal filtering
US6041322A (en) * 1997-04-18 2000-03-21 Industrial Technology Research Institute Method and apparatus for processing data in a neural network
US8769152B2 (en) * 2006-02-14 2014-07-01 Jds Uniphase Corporation Align/notify compression scheme in a network diagnostic component
US20080201705A1 (en) * 2007-02-15 2008-08-21 Sun Microsystems, Inc. Apparatus and method for generating a software dependency map
US9529110B2 (en) * 2008-03-31 2016-12-27 Westerngeco L. L. C. Constructing a reduced order model of an electromagnetic response in a subterranean structure
US10558924B2 (en) * 2014-05-23 2020-02-11 DataRobot, Inc. Systems for second-order predictive data analytics, and related methods and apparatus
US10496927B2 (en) * 2014-05-23 2019-12-03 DataRobot, Inc. Systems for time-series predictive data analytics, and related methods and apparatus
US10366346B2 (en) * 2014-05-23 2019-07-30 DataRobot, Inc. Systems and techniques for determining the predictive value of a feature
US10713594B2 (en) * 2015-03-20 2020-07-14 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing machine learning model training and deployment with a rollback mechanism
US11087236B2 (en) * 2016-07-29 2021-08-10 Splunk Inc. Transmitting machine learning models to edge devices for edge analytics


Non-Patent Citations (1)

Title
See also references of EP3734518A4


Also Published As

Publication number Publication date
CN110119808A (zh) 2019-08-13
EP3734518A1 (en) 2020-11-04
EP3734518A4 (en) 2021-03-10
US20200364571A1 (en) 2020-11-19

Similar Documents

Publication Publication Date Title
WO2019153878A1 (zh) Machine learning-based data processing method and related device
Zhang et al. Towards artificial intelligence enabled 6G: State of the art, challenges, and opportunities
Jiang Cellular traffic prediction with machine learning: A survey
US11315045B2 (en) Entropy-based weighting in random forest models
US20200401945A1 (en) Data Analysis Device and Multi-Model Co-Decision-Making System and Method
Mulvey et al. Cell fault management using machine learning techniques
EP3972339A1 (en) Handover success rate prediction and management using machine learning for 5g networks
US20210042578A1 (en) Feature engineering orchestration method and apparatus
CN105830080A (zh) 使用特定于应用和特定于应用类型的模型进行移动设备行为的高效分类的方法和系统
US20230325258A1 (en) Method and apparatus for autoscaling containers in a cloud-native core network
EP4156631A1 (en) Reinforcement learning (rl) and graph neural network (gnn)-based resource management for wireless access networks
WO2023279674A1 (en) Memory-augmented graph convolutional neural networks
Patil et al. [Retracted] Prediction of IoT Traffic Using the Gated Recurrent Unit Neural Network‐(GRU‐NN‐) Based Predictive Model
Kaur et al. An efficient handover mechanism for 5G networks using hybridization of LSTM and SVM
Zhang et al. [Retracted] Network Traffic Prediction via Deep Graph‐Sequence Spatiotemporal Modeling Based on Mobile Virtual Reality Technology
Chen et al. Agile services provisioning for learning-based applications in fog computing networks
US20230368077A1 (en) Machine learning entity validation performance reporting
Mathur et al. A machine learning approach for offload decision making in mobile cloud computing
Endes et al. 5G network slicing with multi-purpose AI models
CN113258971B (zh) Multi-frequency joint beamforming method and apparatus, base station, and storage medium
Chowdhury Distributed deep learning inference in fog networks
Wang et al. Research on Fault Identification Method of Power System Communication Network Based on Deep Learning
US20230376516A1 (en) Low-resource event understanding
WO2024124975A1 (zh) Network quality assessment method and apparatus, device, storage medium, and program product
CN117389739A (zh) Task-priority-based O-RAN edge server resource deployment method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905264

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018905264

Country of ref document: EP

Effective date: 20200729

NENP Non-entry into the national phase

Ref country code: DE