WO2022017231A1 - Method for training a communication decision model, electronic device, and computer-readable medium - Google Patents

Method for training a communication decision model, electronic device, and computer-readable medium

Info

Publication number
WO2022017231A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
decision
training
communication
parameters
Prior art date
Application number
PCT/CN2021/106219
Other languages
English (en)
French (fr)
Inventor
董嘉
倪华
康红辉
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to US18/016,400 priority Critical patent/US20230274184A1/en
Priority to EP21847428.6A priority patent/EP4187439A4/en
Publication of WO2022017231A1 publication Critical patent/WO2022017231A1/zh

Classifications

    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/23 Clustering techniques
    • G06F18/24323 Tree-organised classifiers
    • G06F18/29 Graphical models, e.g. Bayesian networks
    • G06N3/08 Neural networks; Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06N3/098 Distributed learning, e.g. federated learning
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence

Definitions

  • the present disclosure relates to, but is not limited to, the field of communication technology.
  • When machine learning is used for decision-making, the decision model does not converge easily and requires substantial transmission resources.
  • Differences in sample counts between communication sites tend to produce poor decision results at communication sites with few samples.
  • The present disclosure provides a method for training a communication decision model, where the communication decision model is used for decision-making at a communication site. The method includes: training an initial model according to first decision samples to adjust the model parameters of the initial model and obtain a first training model, where the first decision samples are samples of decisions already made by a first communication site; and acquiring at least one second model parameter modification value of at least one second training model, where each second training model is obtained by training the initial model according to second decision samples of a second communication site, the second decision samples being samples of decisions already made by that second communication site.
  • Each second model parameter modification value reflects at least a partial modification of the model parameters of the second training model relative to the model parameters of the initial model. The model parameters of the first training model are adjusted according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site.
  • The present disclosure also provides an electronic device, including: one or more processors; a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform any of the methods for training a communication decision model described herein; and one or more I/O interfaces connected between the one or more processors and the memory and configured to implement signal interaction between the one or more processors and the memory.
  • The present disclosure also provides a computer-readable medium on which a computer program is stored; when executed by a processor, the program implements any of the methods for training a communication decision model described herein.
  • FIG. 1 is a flowchart of a method for training a communication decision model provided by the present disclosure
  • FIG. 2 is a flowchart of some steps in a method for training a communication decision model provided by the present disclosure
  • FIG. 3 is a flowchart of a method for training a communication decision model provided by the present disclosure
  • FIG. 4 is a flowchart of some steps in a method for training a communication decision model provided by the present disclosure
  • FIG. 5 is a schematic flowchart of a method for training a communication decision model provided by the present disclosure
  • FIG. 6 is a block diagram of the composition of an electronic device provided by the present disclosure.
  • FIG. 7 is a block diagram of the composition of a computer-readable medium provided by the present disclosure.
  • Embodiments of the present disclosure may be described with reference to plan views and/or cross-sectional views with the aid of idealized schematic illustrations of the present disclosure. Accordingly, example illustrations may be modified according to manufacturing techniques and/or tolerances.
  • Embodiments of the present disclosure are not limited to the embodiments shown in the drawings, but include modifications of configurations formed based on manufacturing processes.
  • the regions illustrated in the figures have schematic properties and the shapes of regions illustrated in the figures are illustrative of the specific shapes of regions of elements and are not intended to be limiting.
  • In some related art, communication sites use fixed logic to make decisions; different sites use the same logic but may set different logic parameters. Because conditions differ considerably between sites, the same logic may not suit the specific conditions of each site, and because the logic parameters are set manually, a long period of statistical analysis is needed to find relatively accurate parameters.
  • In some related art, a communication site may use machine learning to make decisions: sample data of decisions already made are collected from multiple communication sites, a machine learning model is trained on the data from the multiple sites, and each communication site then makes decisions based on the resulting model.
  • The resulting machine learning model may be complicated and hard to converge during training; a more complex model is also inevitably larger than a simple one, so transmitting it requires more transmission resources.
  • the present disclosure provides a method for training a communication decision-making model, which is used for decision-making of multiple communication sites in a communication network.
  • Each communication site corresponds to a service node, and the service node is used to process the service of the communication site, such as making decisions.
  • Multiple communication sites correspond to a central node, which is used to process services between different communication sites, such as data exchange between different communication sites.
  • Different service nodes can be different devices or located in different places; for example, a service node may be its corresponding communication site. Different service nodes can also be the same device or located in the same place; for example, the service nodes and the central node may be one server, i.e., a single server carries out the entire method of the embodiments of the present disclosure.
  • Each step of the method in the embodiments of the present disclosure may be performed in different devices or in one device. When different steps are performed in different devices, different data transmission processes may be required, but as long as the substantive process of each step of the method is included, it falls within the protection scope of the present disclosure.
  • the method of the present disclosure may include steps S101 to S103.
  • In step S101, an initial model is trained according to first decision samples to adjust the model parameters of the initial model and obtain a first training model, where the first decision samples are samples of decisions already made by the first communication site.
  • The service node corresponding to the first communication site collects the first decision samples of the first communication site and, after a certain number have been collected, trains the initial model according to them (i.e., adjusts the initial model's parameters) to obtain the first training model.
  • A first decision sample is a sample on which the first communication site has already made a decision; it may include, for example, feature information (or index information), the decision made, and the effect of the decision.
  • The initial model is delivered by the central node to each service node. It may be, for example, the model structure and corresponding model parameters of a machine learning model (such as an artificial intelligence model) pre-trained on big data. The initial models of all communication sites are identical.
  • In step S102, at least one second model parameter modification value of at least one second training model is acquired, where each second training model is obtained by training the initial model according to second decision samples of a second communication site, the second decision samples being samples of decisions already made by that second communication site; each second model parameter modification value reflects at least a partial modification of the model parameters of the second training model relative to the model parameters of the initial model.
  • The central node acquires at least one second model parameter modification value of the second training model at each of at least one second communication site and sends it to the service node corresponding to the first communication site. If the central node and the service node are the same communication site or device, subsequent processing is performed directly and no sending step is required.
  • The second communication sites are defined relative to the first: when a given communication site acts as the first communication site in the method of the embodiments of the present disclosure, the other communication sites are second communication sites.
  • That is, "first" and "second" communication sites are relative to a particular run of the training process rather than designating fixed sites: the first communication site in one run may be a second communication site in the next run, and likewise a second communication site in one run may be the first communication site in the next.
  • As with the first communication site, the service node corresponding to a second communication site collects that site's second decision samples and trains the initial model according to them (i.e., samples of decisions already made at the second communication site), adjusting the initial model's parameters to obtain a second training model (from the second communication site's own perspective it is itself a "first communication site" and this model is its own "first training model").
  • The adjustment made to the initial model's parameters according to the second decision samples is the second model parameter modification value. Each second training model has at least one such value; each value reflects a partial modification of the second training model's parameters relative to the initial model's, and all of them together reflect the full modification.
  • In step S103, the model parameters of the first training model are adjusted according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site.
  • The service node corresponding to the first communication site adjusts the parameters of the first training model according to at least part of the acquired second model parameter modification values, finally obtaining the communication decision model used for decision-making at the first communication site.
  • In this method, each communication site trains the model with its own decision samples, which avoids the convergence difficulty of training a unified model on all sites' decision samples and the poor decision results at low-sample sites caused by large differences in sample counts between sites.
  • The first training model of the first communication site is also modified using the model modification values of other sites' training models, which prevents the first training model from falling into a local optimum when the first decision samples are few.
  • In some implementations, acquiring at least one second model parameter modification value of at least one second training model (S102) includes: S1021, filtering the second model parameter modification values of all second training models according to a predetermined filtering rule, and acquiring the second model parameter modification values of the second training models that pass the filter.
  • After receiving the second model parameter modification values sent by each service node, the central node filters them according to certain rules, for example according to how other service nodes accepted a given service node's second model parameter modification values in previous training runs (i.e., whether they modified their first training models according to that node's values).
  • Specific methods for deciding whether to filter out a second model parameter modification value include clustering, computing the average sample distance, computing the overlap ratio based on the mean square error of sample distances, and the like.
  • Filtering reduces the number of second model parameter modification values; when the central node and the service nodes are not co-located and data must be transmitted, this reduces the waste of transmission resources.
  • In some implementations, the second model parameter modification values of each second training model can be obtained through the following steps S201 and S202.
  • In step S201, the second decision samples of the second training model are clustered.
  • The service node corresponding to the second communication site clusters the second decision samples used at the second communication site to train the initial model, grouping them into several classes with similar features.
  • The clustering method and clustering parameters are not limited and can be chosen flexibly according to the actual situation: for example, clustering the second decision samples uniformly by density according to their feature information, or setting the number of clusters and the initial cluster centers according to the samples' decision effects.
  • In step S202, the sum of the modifications of all second decision samples in each cluster is determined to be one second model parameter modification value, where the modification of each second decision sample is the change in model parameters caused by training the initial model on that sample.
  • When the initial model is trained on the second decision samples, the change in the model parameters before and after a second decision sample is input is that sample's modification. One second model parameter modification value is determined from the modifications of the samples in one cluster: for example, the sum of the modifications in each cluster is one value, or their average is one value.
  • The service node corresponding to the second communication site determines multiple second model parameter modification values from the second decision samples in the clusters and sends them, via the central node, to the service node corresponding to the first communication site.
  • Each cluster corresponds to one second model parameter modification value. On the one hand, this reduces the number of values and hence the computation in later steps. On the other hand, second decision samples in the same cluster have similar features, so their modifications to the initial model's parameters are similar, and their sum or average can represent them; for samples without similar features the modifications may differ greatly, and a sum or average could not stand in for the actual modifications.
  • In some implementations, adjusting the model parameters of the first training model according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site (S103) includes steps S1031 and S1032.
  • In step S1031, the decision effect of the model obtained by adjusting the first training model according to each second model parameter modification value is computed on the first decision samples, to determine the accepted second model parameter modification values.
  • The service node corresponding to the first communication site verifies the acquired second model parameter modification values one by one and obtains their decision effects; a value whose decision effect does not meet the condition can be discarded.
  • That is, the first training model is modified according to one second model parameter modification value to obtain an adjusted first training model; the feature information of the first decision samples is input into the adjusted model to obtain decision results, which are compared with the decisions of the first decision samples to obtain the adjusted first training model's decision effect.
  • The specific comparison method is not limited; different methods can be used according to the actual situation, such as averaging, confidence computation, or statistics-based key-indicator computation.
  • If the decision effect satisfies a certain condition (for example, exceeds a certain threshold), the value is determined to be an accepted second model parameter modification value; all second model parameter modification values are verified in the same way to determine the accepted ones.
  • In step S1032, the model parameters of the first training model are adjusted according to the accepted second model parameter modification values to obtain a communication decision model for the first communication site.
  • The service node corresponding to the first communication site adjusts the first training model's parameters according to the accepted values, such as by modifying the first training model correspondingly, to obtain the communication decision model for the first communication site.
  • Because a second model parameter modification value is the modification of the initial model by second decision samples, which may differ from the first decision samples (the first communication site's samples), not all second model parameter modification values suit the first training model of the first communication site. Screening them against the first decision samples removes the unsuitable values, so that the resulting communication decision model better fits the first communication site.
  • In some implementations, adjusting the model parameters of the first training model according to the accepted second model parameter modification values to obtain a communication decision model for the first communication site (S1032) includes: S10321, if there are multiple accepted second model parameter modification values, adjusting the first training model's parameters according to each accepted value and its corresponding weight, where each accepted value's weight is computed from the number of second decision samples corresponding to that value.
  • If the service node corresponding to the first communication site determines that there are multiple accepted values, it counts the second decision samples in the cluster corresponding to each value, computes the ratio of these counts, and uses the ratio as weights when modifying the first training model according to the values to obtain the communication decision model for the first communication site.
  • For example, if there are three accepted second model parameter modification values and the ratio of their corresponding second decision sample counts is 3:4:3, the first training model is modified using 30% of the first value, 40% of the second, and 30% of the third to obtain the communication decision model for the first communication site.
  • In some implementations, adjusting the model parameters of the first training model according to the accepted second model parameter modification values to obtain a communication decision model for the first communication site (S1032) includes steps S10322 to S10324.
  • In step S10322, a first pre-trained model is obtained by adjusting the first training model according to the accepted second model parameter modification values.
  • The service node corresponding to the first communication site adjusts the first training model's parameters according to the accepted values, such as by modifying the first training model correspondingly, to obtain the first pre-trained model.
  • In step S10323, the decision effect of the first pre-trained model is computed on the first decision samples.
  • The service node corresponding to the first communication site inputs the feature information of the first decision samples into the first pre-trained model to obtain decision results, and compares them with the decisions of the first decision samples to obtain the first pre-trained model's decision effect.
  • The specific comparison method is not limited; different methods can be used according to the actual situation, such as averaging, confidence computation, or statistics-based key-indicator computation.
  • In step S10324, if the decision effect does not satisfy a preset condition, the first training model is determined to be the communication decision model for the first communication site.
  • If the decision effect does not satisfy the preset condition (for example, it is below a preset threshold), the accepted second model parameter modification values are discarded, i.e., the first training model's parameters are not adjusted according to them, and the first training model is used as the communication decision model for the first communication site.
  • Alternatively, the proportion by which the accepted values modify the first training model can be reduced: for example, the first training model is modified with 80% of the accepted values to obtain a modified model, whose decision effect is then verified. If the preset condition is met, the first training model is modified according to that proportion and the accepted values; if not, the proportion is reduced and the verification repeated, until either the modified model's decision effect satisfies the preset condition or a minimum proportion is reached, at which point the accepted values are discarded.
  • Verifying the first training model adjusted according to the accepted second model parameter modification values (i.e., the first pre-trained model) against the first decision samples ensures that the final communication decision model for the first communication site decides well at least on the first decision samples.
  • In some implementations, after the communication decision model for the first communication site is obtained (S1032), the method further includes: S1033, adjusting the way the second decision samples are clustered according to the second model parameter modification values accepted by multiple first communication sites.
  • After determining the accepted second model parameter modification values, the service node corresponding to the first communication site sends their specific acceptance status (such as whether they were accepted and in what proportion) to the central node; the central node adjusts the clustering of the second decision samples according to the acceptance statuses received from multiple service nodes and sends the result to the service nodes.
  • For example, the acceptance statuses of the second model parameter modification values and the corresponding clustering schemes can be used as input to machine learning (such as a deep neural network or decision tree) to adjust the clustering of each service node's second decision samples. The machine learning method is not limited and can be chosen flexibly according to the specific situation.
  • Since the sum or average of the modifications of the second decision samples in each cluster is one second model parameter modification value, the modification values are related to the clustering method and parameters of the second decision samples; hence their acceptance also reflects the quality of the clustering method and parameters. Adjusting the clustering method and parameters according to the accepted values makes it possible to determine a suitable clustering scheme for the second decision samples.
  • In some implementations, after the communication decision model for the first communication site is obtained (S103), the method further includes steps S1041 and S1042.
  • In step S1041, the initial model is adjusted according to the communication decision models of multiple first communication sites.
  • The central node acquires the model parameters of each service node's communication decision model and can adjust the initial model according to these parameters and each service node's second model parameter modification values.
  • For example, when the decision effect at most service nodes is poor, the central node judges, from the differences between the initial model's parameters and those of the multiple communication decision models, how much each part of the initial model's structure contributes to the final decision effect (such as the influence of connection weights in different parts of a neural network), and a manual decision is made on whether to remove structures with little influence on the final decision effect.
  • The second decision samples and corresponding second model parameter modification values can also be analyzed statistically to assist a manual decision to enlarge the initial model's structure, i.e., to make parts of it more complex.
  • The model structure corresponding to densely distributed second decision samples may need to be more complex, so that the model can distinguish such samples more finely and achieve better decision results: for example, by increasing the number of rules or thresholds in the decision-tree region corresponding to the dense samples, or by adding intermediate-layer node connections corresponding to them in a neural network. The structure corresponding to sparsely distributed second decision samples needs no added complexity.
  • In step S1042, the method returns to the step of training the initial model according to the first decision samples.
  • After completing the structural adjustment and forming a new initial model, the central node delivers it to the service nodes.
  • All service nodes may use the same initial model; alternatively, an initial model can be formed separately for each service node's different situation.
  • Removing structures with little influence on the final decision effect reduces the model's parameters and the computing power required for decisions, while increasing the complexity of the structure corresponding to densely distributed second decision samples enhances the sensitivity of that part of the initial model's structure, distinguishing dense decision samples more finely and achieving better decision results.
  • The method for training a communication decision model can train a communication decision model for deciding whether a terminal should hand over to a neighboring cell, and may include the following steps A01 to A05.
  • In step A01, sample data are collected.
  • Sample data are collected in every cell and include the decision made and the effect of the decision. The possible decisions are handing over to the neighboring cell and not handing over. The effect of a decision can be obtained through an evaluation function: for a sample whose decision was to hand over, the inputs to the evaluation function include whether the terminal later handed back to this cell, the interval between handing back and this handover out of the cell, and the feedback after handing out; for a sample whose decision was not to hand over, the inputs include whether the communication was abnormally interrupted, the communication duration before the interruption, the throughput during that period, and the corresponding power/scheduling cost.
  • In step A02, an initial model is trained.
  • A deep neural network model is established in each cell; every model has the same structure, while the parameters may differ.
  • The deep neural network model in each cell learns locally from the collected sample data to obtain the training model corresponding to the cell, and records each sample's modification value to the model parameters of the initial model.
  • In step A03, the samples are clustered.
  • Taking a base station as the unit, the sample data of the cells belonging to the base station and the corresponding model parameter modification values are collected, and the collected sample data are clustered; the clustering method is the k-medoids (k-center-point) algorithm.
  • After the clusters are obtained, the model parameter modification values of all sample data in each cluster are arithmetically averaged to give the modification value corresponding to that cluster.
  • In step A04, a communication decision model is obtained.
  • Each cell in the base station merges the model parameter modification values into its own training model according to the base station's modification values and the numbers of clustered sample data corresponding to them. The specific process may be: merge all modification values corresponding to the base station into the cell's own training model to obtain an adjusted model, then verify the adjusted model's decision effect on the cell's own sample data. If the decision effect is poor, none of the modification values are accepted and the model is restored to the original training model; if it is good, the modification values are merged into the cell's own training model in proportion to the numbers of sample data corresponding to each.
  • If a relatively large proportion of the base station's cells adopt its modification values, all of the base station's modification values are sent to the central node (or to the EMS (element management system) or another background node), and the central node forwards them to other base stations so that the cells there can process them in a similar way.
  • Likewise, each cell in the base station also receives modification values sent by other base stations, processes them by the same method, and adjusts its training model accordingly to obtain the communication decision model corresponding to the cell.
  • In step A05, the initial model is adjusted.
  • The central node collects the models of every cell of every base station, including the average and mean-square-error information of each model parameter, and, referring also to the reported clustering sample information, prunes and augments the deep neural network; the pruned and augmented deep neural network model is sent to every cell of every base station.
  • At the same time, the central node adjusts the clustering of the sample data according to the acceptance status of all base stations' model parameter modification values (such as whether they were accepted and in what proportion), and likewise sends the modified clustering scheme to every cell of every base station.
  • The communication decision model of this embodiment is used for query prediction: knowing the user's current query, or the previous several queries, the model predicts the user's next query or next several queries and runs them in advance, so results can be returned as quickly as possible when the user needs them.
  • This embodiment may include the following steps B01 to B05.
  • In step B01, sample data are collected.
  • The query commands issued by each user's applications are collected. In general, the query command formats an application uses follow paradigms; all of an application's query paradigms can be summarized through pattern matching, and the index value of each query can be extracted by analyzing its range clause. The query paradigm and the index value constitute the sample data.
  • Each query the user issues can be regarded as a learning sample, with the paradigm and index value of the user's next query serving as the label of the predicted result, used to judge the quality of the decision.
  • In step B02, the initial model is trained.
  • The initial model at each user's service node consists of two Bayesian belief networks: one predicts the paradigm of the next query, the other predicts the query's index value. The two networks are trained on the collected sample data to obtain the training model, and each sample's modification values to each network's parameters are recorded.
  • The network that predicts the next query's paradigm makes its decision from the paradigms the user viewed in the previous several queries and their viewing order, and may also record the values of certain fields in the first few entries of the current query result.
  • After the next query's paradigm has been determined, the network that predicts the index value performs its prediction on that basis; besides the paradigms of the previous queries, its inputs include the index values used in those queries and the values of certain fields in the first few entries of the previous query results.
  • In step B03, the samples are clustered.
  • The sample data corresponding to each user are clustered using the k-medoids (k-center-point) algorithm; the index used for clustering can be the query types and query index information of the previous several queries, the type of the user's next query, and so on.
  • After the clusters are obtained, the model parameter modification values of all sample data in each cluster are arithmetically averaged to give the modification value corresponding to that cluster.
  • In step B04, a communication decision model is obtained.
  • The service node corresponding to each user sends all model parameter modification values to the central node; they may of course also be sent to the EMS (element management system) or other background nodes.
  • The central node forwards all of a user's modification values to the service nodes corresponding to other users, and each of those service nodes merges the values into its own training model according to the values and the numbers of clustered sample data corresponding to them.
  • The specific process may be: merge all modification values corresponding to the user into one's own training model to obtain an adjusted model, and verify the adjusted model's decision effect on one's own user's sample data; if the decision effect is good, the modification values are merged into one's own training model in proportion to the numbers of sample data corresponding to each.
  • The service node corresponding to a user also receives modification values sent by other users' service nodes, processes them by the same method, and adjusts its training model accordingly to obtain the communication decision model corresponding to that user.
  • In step B05, the initial model is adjusted.
  • The central node collects the communication decision model corresponding to each user, including the average and mean-square-error information of each model parameter, and, referring to the reported clustering sample information, adjusts the input items and the intermediate judgment nodes; the adjusted model is sent to the user node corresponding to each user.
  • At the same time, the central node adjusts the clustering of the sample data according to the acceptance status of all model parameter modification values corresponding to all users (such as whether accepted and in what proportion), and likewise sends the modified clustering scheme to the user node corresponding to each user.
  • The present disclosure provides an electronic device including: one or more processors; a memory on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to perform any of the above methods for training a communication decision model; and one or more I/O interfaces connected between the processors and the memory to implement information interaction between them.
  • The processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) is connected between the processor and the memory to implement information interaction between them, and includes but is not limited to a data bus (Bus).
  • The present disclosure provides a computer-readable medium on which a computer program is stored; when executed by a processor, the program implements any of the above methods for training a communication decision model.
  • Here too, the processor is a device with data processing capability, including but not limited to a CPU; the memory is a device with data storage capability, including but not limited to RAM (more specifically SDRAM, DDR, etc.), ROM, EEPROM, and FLASH; and the I/O interface (read/write interface) connected between the processor and the memory implements information interaction between them, including but not limited to a data bus (Bus).
  • The division between functional modules/units mentioned above does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components.
  • Some or all physical components may be implemented as software executed by a processor such as a central processing unit (CPU), digital signal processor, or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM (more specifically SDRAM, DDR, etc.), ROM, EEPROM, or FLASH; CD-ROM, digital versatile disks (DVD), or other optical disk storage; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage; or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method for training a communication decision model, including: training an initial model according to first decision samples to adjust the model parameters of the initial model and obtain a first training model, where the first decision samples are samples of decisions already made by a first communication site (S101); acquiring at least one second model parameter modification value of at least one second training model, where each second training model is obtained by training the initial model according to second decision samples of a second communication site, the second decision samples are samples of decisions already made by that second communication site, and each second model parameter modification value reflects at least a partial modification of the model parameters of the second training model relative to the model parameters of the initial model (S102); and adjusting the model parameters of the first training model according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site (S103).

Description

METHOD FOR TRAINING A COMMUNICATION DECISION MODEL, ELECTRONIC DEVICE, AND COMPUTER-READABLE MEDIUM
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese patent application No. 202010705787.7, filed with the Chinese Patent Office on July 21, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to, but is not limited to, the field of communication technology.
BACKGROUND
A communication network contains a large number of communication sites, which often need to make decisions, such as the direction of a service flow or the policy applied to a user.
Decision-making with manually set fixed logic cannot adapt to the differing conditions of different communication sites, and only after a long period of manual statistical analysis do the set parameters become relatively accurate.
With machine-learning-based decision-making, the decision model does not converge easily and requires substantial transmission resources, and differences in sample counts between communication sites tend to produce poor decision results at sites with few samples.
SUMMARY
In a first aspect, the present disclosure provides a method for training a communication decision model, where the communication decision model is used for decision-making at a communication site. The method includes: training an initial model according to first decision samples to adjust the model parameters of the initial model and obtain a first training model, where the first decision samples are samples of decisions already made by a first communication site; acquiring at least one second model parameter modification value of at least one second training model, where each second training model is obtained by training the initial model according to second decision samples of a second communication site, the second decision samples are samples of decisions already made by that second communication site, and each second model parameter modification value reflects at least a partial modification of the model parameters of the second training model relative to the model parameters of the initial model; and adjusting the model parameters of the first training model according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site.
In a second aspect, the present disclosure further provides an electronic device, including: one or more processors; a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform any of the methods for training a communication decision model described herein; and one or more I/O interfaces connected between the one or more processors and the memory and configured to implement signal interaction between the one or more processors and the memory.
In a third aspect, the present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processor, implements any of the methods for training a communication decision model described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a method for training a communication decision model provided by the present disclosure;
FIG. 2 is a flowchart of some steps in a method for training a communication decision model provided by the present disclosure;
FIG. 3 is a flowchart of a method for training a communication decision model provided by the present disclosure;
FIG. 4 is a flowchart of some steps in a method for training a communication decision model provided by the present disclosure;
FIG. 5 is a schematic flowchart of a method for training a communication decision model provided by the present disclosure;
FIG. 6 is a block diagram of an electronic device provided by the present disclosure;
FIG. 7 is a block diagram of a computer-readable medium provided by the present disclosure.
DETAILED DESCRIPTION
To help those skilled in the art better understand the technical solutions of the present disclosure, the method for training a communication decision model, the electronic device, and the computer-readable medium provided by the present disclosure are described in detail below with reference to the accompanying drawings.
Embodiments of the present disclosure are described more fully hereinafter with reference to the drawings, but the embodiments shown may be embodied in different forms and should not be construed as limited to those set forth herein. Rather, these embodiments are provided so that the disclosure will be thorough and complete, and will fully convey its scope to those skilled in the art.
The drawings of the present disclosure serve to provide a further understanding of the embodiments and constitute a part of the specification; together with the embodiments they explain the present disclosure and do not limit it. The above and other features and advantages will become more apparent to those skilled in the art from the description of detailed example embodiments with reference to the drawings.
Embodiments of the present disclosure may be described with reference to plan views and/or sectional views by way of idealized schematic illustrations of the present disclosure. Accordingly, the example illustrations may be modified according to manufacturing techniques and/or tolerances.
In the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
The terminology used in the present disclosure is for describing particular embodiments only and is not intended to limit the disclosure. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms "comprise" and "made of" specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will further be understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Embodiments of the present disclosure are not limited to those shown in the drawings but include modifications of configurations formed on the basis of manufacturing processes. Accordingly, the regions illustrated in the figures are schematic, and the shapes shown illustrate the specific shapes of regions of elements without being limiting.
A communication network contains a large number of communication sites (for example, base stations or specific communication devices) that carry out communication services. In the course of doing so, they may need to make "decisions" (i.e., decide "how" to carry out the service), such as decisions on the direction of a service flow or on the policy applied to a user.
In some related art, communication sites use fixed logic to make decisions; different sites use the same logic but may set different logic parameters. Because conditions differ considerably between sites, the same logic may not suit the specific conditions of each site, and because the logic parameters are set manually, a long period of statistical analysis is needed to find relatively accurate parameters.
In some related art, a communication site may use machine learning to make decisions: sample data of decisions already made are collected from multiple communication sites, a machine learning model is trained on the data from the multiple sites, and each site then makes decisions based on the resulting model.
Since different communication sites may differ considerably, so may their sample data. Machine learning is, in essence, learning from sample data, so when the amounts of sample data differ greatly between sites, the model may learn almost nothing from the features of the scarce sample data, producing poor decision results at the sites they come from.
Moreover, because of the large differences between sites, training a model that decides well for all sites may make the resulting machine learning model complicated and hard to converge during training. A more complex model is also inevitably larger than a simple one, so transmitting it requires more transmission resources.
In a first aspect, referring to FIG. 1, the present disclosure provides a method for training a communication decision model used for decision-making by multiple communication sites in a communication network.
Each communication site corresponds to a service node that handles the site's business, such as making decisions. Multiple communication sites correspond to one central node that handles business between sites, such as data exchange between them.
Different service nodes can be different devices or located in different places (for example, a service node may be its corresponding communication site); they can also be the same device or located in the same place (for example, the service nodes and the central node may be one server, i.e., a single server carries out the entire method of the embodiments of the present disclosure).
In other words, the steps of the method of the embodiments of the present disclosure may be performed in different devices or in one device. When different steps are performed in different devices, different data transmission processes may be required, but as long as the substantive process of each step is included, it falls within the protection scope of the present disclosure.
As shown in FIG. 1, in one embodiment, the method of the present disclosure may include steps S101 to S103.
In step S101, an initial model is trained according to first decision samples to adjust the model parameters of the initial model and obtain a first training model, where the first decision samples are samples of decisions already made by the first communication site.
The service node corresponding to the first communication site collects the first decision samples of the first communication site and, after a certain number have been collected, trains the initial model according to them (i.e., adjusts the initial model's parameters) to obtain the first training model.
A first decision sample is a sample on which the first communication site has already made a decision; it may include, for example, feature information (or index information), the decision made, and the effect of the decision. The initial model is delivered by the central node to each service node; it may be, for example, the model structure and corresponding model parameters of a machine learning model (such as an artificial intelligence model) pre-trained on big data. The initial models of all communication sites are identical.
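As a minimal sketch of step S101, local training and per-sample delta recording might look as follows. The linear scorer, the learning rate, and all names here are illustrative assumptions rather than the patent's specification; the patent only requires that training adjusts the initial model's parameters and that the modification caused by each sample can be recovered later:

```python
import numpy as np

def train_locally(initial_params, samples, lr=0.01):
    """Train the initial model on one site's decision samples.

    Each sample is (features, decision); the model here is a toy
    linear scorer standing in for the patent's machine learning model.
    Returns the first training model's parameters and, for each sample,
    the parameter modification it caused (used later to build the
    "model parameter modification values" of S102/S202).
    """
    params = initial_params.copy()
    per_sample_deltas = []
    for features, decision in samples:
        pred = float(np.dot(params, features))   # toy forward pass
        grad = (pred - decision) * features      # squared-loss gradient
        delta = -lr * grad                       # this sample's modification
        params += delta
        per_sample_deltas.append(delta)
    return params, per_sample_deltas

# Example: 5 samples with 3 features each
rng = np.random.default_rng(0)
samples = [(rng.normal(size=3), float(rng.integers(0, 2))) for _ in range(5)]
first_model, deltas = train_locally(np.zeros(3), samples)
```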
In step S102, at least one second model parameter modification value of at least one second training model is acquired, where each second training model is obtained by training the initial model according to second decision samples of a second communication site, the second decision samples being samples of decisions already made by that second communication site; each second model parameter modification value reflects at least a partial modification of the model parameters of the second training model relative to the model parameters of the initial model.
The central node acquires at least one second model parameter modification value of the second training model at each of at least one second communication site and sends it to the service node corresponding to the first communication site. Of course, if the central node and the service node are the same communication site or device, subsequent processing is performed directly and no sending step is required.
It should be understood that in the embodiments of the present disclosure the second communication sites are defined relative to the first: when a given communication site acts as the first communication site in the method, the other communication sites are all second communication sites.
That is, "first" and "second" communication sites are relative to a particular run of the training process rather than designating fixed sites: the first communication site in one run may be a second communication site in the next run, and likewise a second communication site in one run may be the first communication site in the next.
As with the first communication site, the service node corresponding to a second communication site collects that site's second decision samples and trains the initial model according to them (i.e., samples of decisions already made at the second communication site), adjusting the initial model's parameters to obtain a second training model (from the second communication site's own perspective it is itself a "first communication site" and this model is its own "first training model").
The adjustment made to the initial model's parameters according to the second decision samples, i.e., the modification of the second training model's parameters relative to the initial model's, is the second model parameter modification value. Each second training model has at least one such value; each value reflects a partial modification of the second training model's parameters relative to the initial model's, and all of them together reflect the full modification.
In step S103, the model parameters of the first training model are adjusted according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site.
The service node corresponding to the first communication site adjusts the parameters of the first training model according to at least part of the acquired second model parameter modification values, finally obtaining the communication decision model used for decision-making at the first communication site.
In the method of training a communication decision model of the embodiments of the present disclosure, each communication site trains the model with its own decision samples, which avoids the convergence difficulty of training a unified model on all sites' decision samples and the poor decision results at low-sample sites caused by large differences in sample counts between sites. At the same time, the first training model of the first communication site is modified using the model modification values of other sites' training models, which prevents the first training model from falling into a local optimum when the first decision samples are few.
Referring to FIG. 3, in some implementations, acquiring at least one second model parameter modification value of at least one second training model (S102) includes: S1021, filtering the second model parameter modification values of all second training models according to a predetermined filtering rule, and acquiring the second model parameter modification values of the second training models that pass the filter.
After receiving the second model parameter modification values sent by each service node, the central node filters them according to certain rules, for example according to how other service nodes accepted a given service node's second model parameter modification values in previous training runs (i.e., whether they modified their first training models according to that node's values).
Specific methods for deciding whether to filter out a second model parameter modification value include clustering, computing the average sample distance, computing the overlap ratio based on the mean square error of sample distances, and the like.
Filtering reduces the number of second model parameter modification values; when the central node and the service nodes are not co-located and data must be transmitted, this reduces the waste of transmission resources.
Referring to FIG. 2, in some implementations, the second model parameter modification values of each second training model can be obtained through the following steps S201 and S202.
In step S201, the second decision samples of the second training model are clustered.
The service node corresponding to the second communication site clusters the second decision samples used at the second communication site to train the initial model, grouping them into several classes with similar features.
The clustering method and clustering parameters are not limited and can be chosen flexibly according to the actual situation: for example, clustering the second decision samples uniformly by density according to their feature information, or setting the number of clusters and the initial cluster centers according to the samples' decision effects.
In step S202, the sum of the modifications of all second decision samples in each cluster is determined to be one second model parameter modification value, where the modification of each second decision sample is the change in model parameters caused by training the initial model on that sample.
When the initial model is trained on the second decision samples, the change in the model parameters before and after a second decision sample is input is that sample's modification. One second model parameter modification value is determined from the modifications of the samples in one cluster: for example, the sum of the modifications in each cluster is one value, or their average is one value.
The service node corresponding to the second communication site determines multiple second model parameter modification values from the second decision samples in the clusters and sends them, via the central node, to the service node corresponding to the first communication site.
Each cluster corresponds to one second model parameter modification value. On the one hand, this reduces the number of values and hence the computation in later steps. On the other hand, second decision samples in the same cluster have similar features, so their modifications to the initial model's parameters are similar, and their sum or average can represent them; for samples without similar features the modifications may differ greatly, and a sum or average could not stand in for the actual modifications.
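A sketch of steps S201 and S202 under the same toy assumptions as above; k-means is one possible clustering choice (the text deliberately leaves the method open), and the per-cluster sum is used here, the average being equally valid:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_deltas(samples, per_sample_deltas, n_clusters=2):
    """Cluster second decision samples by feature and aggregate each
    cluster's per-sample parameter modifications into one
    "second model parameter modification value"."""
    features = np.stack([f for f, _ in samples])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    mod_values, counts = [], []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        mod_values.append(sum(per_sample_deltas[i] for i in idx))
        counts.append(len(idx))   # kept for the weighting in S10321
    return mod_values, counts
```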
Referring to FIG. 3, in some implementations, adjusting the model parameters of the first training model according to at least part of the second model parameter modification values to obtain a communication decision model for the first communication site (S103) includes steps S1031 and S1032.
In step S1031, the decision effect of the model obtained by adjusting the first training model according to each second model parameter modification value is computed on the first decision samples, to determine the accepted second model parameter modification values.
The service node corresponding to the first communication site verifies the acquired second model parameter modification values one by one and obtains their decision effects; if a value's decision effect satisfies a certain condition, it is determined to be an accepted second model parameter modification value, otherwise it can be discarded.
That is, the first training model is modified according to one second model parameter modification value to obtain an adjusted first training model; the feature information of the first decision samples is input into the adjusted model to obtain decision results, which are compared with the decisions of the first decision samples to obtain the adjusted first training model's decision effect. The specific comparison method is not limited; different methods can be used according to the actual situation, such as averaging, confidence computation, or statistics-based key-indicator computation.
If the decision effect satisfies a certain condition (for example, exceeds a certain threshold), the value is determined to be an accepted second model parameter modification value; all second model parameter modification values are verified in the same way to determine the accepted ones.
In step S1032, the model parameters of the first training model are adjusted according to the accepted second model parameter modification values to obtain a communication decision model for the first communication site.
The service node corresponding to the first communication site adjusts the first training model's parameters according to the accepted values, such as by modifying the first training model correspondingly, to obtain the communication decision model for the first communication site.
Because a second model parameter modification value is the modification of the initial model by second decision samples, which may differ from the first decision samples (the first communication site's samples), not all second model parameter modification values suit the first training model of the first communication site. Screening them against the first decision samples removes the unsuitable values, so that the resulting communication decision model better fits the first communication site.
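A sketch of the per-value acceptance check of step S1031, continuing the toy model above; the accuracy-style metric and the 0.6 threshold are assumptions, since the patent allows averaging, confidence, or key-indicator comparisons:

```python
import numpy as np

def decision_effect(params, samples):
    """Fraction of first decision samples whose recorded decision the
    model reproduces (a toy stand-in for the patent's evaluation)."""
    correct = sum(
        (np.dot(params, f) > 0.5) == (d > 0.5) for f, d in samples
    )
    return correct / len(samples)

def accept_modifications(first_model, mod_values, counts, first_samples,
                         threshold=0.6):
    """Keep only the second model parameter modification values whose
    one-at-a-time application to the first training model still
    decides well on the first decision samples."""
    accepted, accepted_counts = [], []
    for mv, n in zip(mod_values, counts):
        if decision_effect(first_model + mv, first_samples) >= threshold:
            accepted.append(mv)
            accepted_counts.append(n)
    return accepted, accepted_counts
```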
Referring to FIG. 4, in some implementations, adjusting the model parameters of the first training model according to the accepted second model parameter modification values to obtain a communication decision model for the first communication site (S1032) includes: S10321, if there are multiple accepted second model parameter modification values, adjusting the first training model's parameters according to each accepted value and its corresponding weight, where each accepted value's weight is computed from the number of second decision samples corresponding to that value.
If the service node corresponding to the first communication site determines that there are multiple accepted values, it counts the second decision samples in the cluster corresponding to each value, computes the ratio of these counts, and uses the ratio as weights when modifying the first training model according to the values to obtain the communication decision model for the first communication site.
For example, if there are three accepted second model parameter modification values and the ratio of their corresponding second decision sample counts is 3:4:3, the first training model is modified using 30% of the first value, 40% of the second, and 30% of the third to obtain the communication decision model for the first communication site. The merge is sketched below.
When abnormal samples exist, their number is small, so weighting by the sample-count ratio keeps their modification to the model correspondingly small, preventing them from affecting the model heavily and ultimately degrading its decision effect.
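The weighted merge of S10321 can be written directly from the 3:4:3 example (again under the toy-model assumptions introduced earlier):

```python
import numpy as np

def weighted_merge(first_model, accepted_values, accepted_counts):
    """Apply each accepted modification value scaled by its cluster's
    share of second decision samples (counts 3:4:3 -> 0.3, 0.4, 0.3)."""
    weights = np.asarray(accepted_counts, dtype=float)
    weights /= weights.sum()
    params = first_model.copy()
    for w, mv in zip(weights, accepted_values):
        params += w * mv
    return params
```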
Referring to Fig. 4, in some implementations, adjusting the model parameters of the first training model according to the accepted second model parameter modification values to obtain the communication decision model for the first communication site (S1032) includes steps S10322 to S10324.
In step S10322, the first training model is adjusted according to the accepted second model parameter modification values to acquire a first pre-trained model.
The service node corresponding to the first communication site adjusts the model parameters of the first training model according to the accepted second model parameter modification values, e.g., modifies the first training model accordingly, to obtain the first pre-trained model.
In step S10323, the decision performance of the first pre-trained model is computed according to the first decision samples.
The service node corresponding to the first communication site feeds the feature information of the first decision samples into the first pre-trained model to obtain decision results, and compares the obtained decision results with the decisions of the first decision samples to obtain the decision performance of the first pre-trained model.
The specific comparison method is not limited and may be chosen according to the actual situation, e.g., averaging, confidence computation, or computation of statistics-based key indicators.
In step S10324, if the decision performance does not satisfy a preset condition, the first training model is determined to be the communication decision model for the first communication site.
If the decision performance does not satisfy the preset condition (e.g., is below a preset threshold), the accepted second model parameter modification values are discarded, i.e., the model parameters of the first training model are not adjusted according to them, and the first training model serves as the communication decision model for the first communication site.
Alternatively, the proportion by which the accepted second model parameter modification values modify the first training model may be reduced. For example, the first training model is modified according to 80% of the accepted second model parameter modification values to obtain a modified model, and the decision performance of the modified model is verified; if the preset condition is satisfied, the first training model is modified according to this proportion of the accepted second model parameter modification values; if not, the proportion is reduced and modification and verification continue, until the decision performance of the modified model satisfies the preset condition, or the minimum proportion is reached and the accepted second model parameter modification values are discarded.
Verifying the first training model adjusted according to the accepted second model parameter modification values, i.e., the first pre-trained model, against the first decision samples ensures that the finally obtained communication decision model for the first communication site performs well at least on the first decision samples.
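The verify-and-back-off behaviour of steps S10322 to S10324 might look like the following sketch; the scale schedule (100%, 80%, and so on down to a minimum) is an illustrative assumption consistent with the 80% example above.

```python
def merge_with_backoff(base_params, merged_delta, evaluate, threshold,
                       scales=(1.0, 0.8, 0.6, 0.4, 0.2)):
    """Apply the accepted modification at decreasing proportions until the
    first pre-trained model passes validation; otherwise keep the original
    first training model (S10324)."""
    for scale in scales:
        candidate = base_params + scale * merged_delta  # first pre-trained model
        if evaluate(candidate) >= threshold:
            return candidate
    return base_params  # discard the accepted modification values entirely
```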
Referring to Fig. 3, in some implementations, after obtaining the communication decision model for the first communication site (S1032), the method further includes: S1033, adjusting the clustering manner of the second decision samples according to the second model parameter modification values accepted by a plurality of first communication sites.
After determining the accepted second model parameter modification values, the service node corresponding to the first communication site sends the specific acceptance status of the second model parameter modification values (e.g., whether they are accepted and the accepted proportion) to the central node, and the central node adjusts the clustering manner of the second decision samples according to the acceptance status received from a plurality of service nodes and sends it to the service nodes.
For example, the specific acceptance status of the second model parameter modification values and the corresponding clustering manner may be taken as input to machine learning (e.g., a deep neural network or a decision tree) to adjust the clustering manner of the second decision samples at each service node. The machine learning method is not limited and may be chosen flexibly according to the specific situation.
The sum or average of the modifications of the second decision samples in each cluster is one second model parameter modification value, so the second model parameter modification values are related to the clustering manner, clustering parameters, etc. of the second decision samples; the acceptance status of the second model parameter modification values therefore also reflects how good the clustering manner and clustering parameters are. Adjusting the clustering manner and clustering parameters of the second decision samples according to the acceptance of the second model parameter modification values makes it possible to determine a suitable clustering manner and suitable clustering parameters for the second decision samples.
Referring to Fig. 3, in some implementations, after obtaining the communication decision model for the first communication site (S103), the method further includes steps S1041 and S1042.
In step S1041, the initial model is adjusted according to the communication decision models of a plurality of first communication sites.
The central node acquires the model parameters of the communication decision models of the service nodes, and may adjust the initial model according to these model parameters and the second model parameter modification values of the service nodes.
For example, when the decision performance of most service nodes is poor, the central node judges, from the differences between the model parameters of the initial model and those of the communication decision models, the proportional influence of the parts of the model structure of the initial model on the final decision performance (e.g., the influence of the connection weights of different parts of a neural network on the decision performance), and whether to remove the structures with little influence on the final decision performance is decided manually.
Statistics of the second decision samples and the corresponding second model parameter modification values assist the manual decision to add model structure to the initial model, i.e., to make parts of the model structure of the initial model more complex.
For example, all the second model parameter modification values and the numbers of their corresponding second decision samples may be acquired, and the distribution of the second decision samples may be computed.
The model structure corresponding to densely distributed second decision samples may need to be more complex, so that the model can distinguish densely distributed second decision samples more finely and achieve better decision performance, e.g., by increasing the number of rules or thresholds in the region of a decision tree corresponding to densely distributed second decision samples, or by increasing the number of intermediate-layer node connections in a neural network corresponding to such samples. The model structure corresponding to sparsely distributed second decision samples does not need added complexity.
It should be noted that increasing the complexity of the model structure should take the computing burden of the service nodes into account: the computing power used in the decision process of the initial model with added structure should remain at the service node's original load level or below a preset upper limit on computing power.
In step S1042, the method returns to the step of training the initial model according to the first decision samples.
After completing the adjustment of the model structure and forming a new initial model, the central node delivers the new initial model to the service nodes.
All service nodes use the same initial model; of course, a separate initial model may also be formed for each service node according to its particular situation.
Removing model structures that have little influence on the final decision performance reduces the parameters of the model and lowers the computing power required for decision-making, while adding complexity to the model structure corresponding to densely distributed second decision samples enhances the sensitivity of specific parts of the model structure of the initial model, so that densely distributed decision samples can be distinguished more finely and better decision performance achieved.
First exemplary implementation
Referring to Fig. 5, the method for training a communication decision model according to the implementations of the present disclosure may train a communication decision model for deciding whether a terminal is handed over to a neighboring cell, and may include the following steps A01 to A05.
In step A01, sample data is collected.
Sample data is collected in each cell; the sample data includes the decision made and the effect of the decision. The decision made is either handover to the neighboring cell or no handover to the neighboring cell. The effect of the decision can be obtained through an evaluation function: for a sample whose decision was handover to the neighboring cell, the effect of the decision can be obtained by feeding the evaluation function with whether the terminal later handed back to the original cell, the time interval between handing back and the handover out of the cell, the feedback after the handover out of the cell, and so on; for a sample whose decision was no handover to the neighboring cell, the effect of the decision can be obtained by feeding the evaluation function with whether the communication was abnormally interrupted, the duration of the communication before the abnormal interruption, the throughput during the communication, the corresponding power/scheduling cost, and so on.
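A possible shape for the evaluation function is sketched below; the disclosure fixes only the inputs for each kind of decision, so the weights, the time normalization, and the scoring form here are illustrative assumptions.

```python
def handover_effect(handed_back: bool, seconds_until_back: float,
                    feedback_score: float) -> float:
    """Score a 'handover to neighboring cell' decision: handing back to the
    original cell quickly is penalized, good post-handover feedback rewarded."""
    penalty = 1.0 / (1.0 + seconds_until_back / 60.0) if handed_back else 0.0
    return feedback_score - penalty

def stay_effect(interrupted: bool, duration_s: float,
                throughput_mbps: float, power_cost: float) -> float:
    """Score a 'no handover' decision: long, high-throughput, low-cost sessions
    without abnormal interruption score higher."""
    score = throughput_mbps * min(duration_s / 300.0, 1.0) - power_cost
    return score - (1.0 if interrupted else 0.0)
```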
In step A02, the initial model is trained.
A deep neural network model is established in each cell; every deep neural network model has the same structure, while the parameters may differ.
The deep neural network model in each cell performs local learning on the collected sample data to obtain the training model corresponding to the cell, and the model parameter modification value of each piece of sample data with respect to the model parameters of the initial model is recorded.
In step A03, the samples are clustered.
Taking the base station as the unit, the sample data of the cells of the base station and the corresponding model parameter modification values are collected, and the collected sample data is clustered; the clustering method is the k-medoids algorithm.
After the corresponding clusters are obtained, the arithmetic mean of the model parameter modification values of all the sample data in each cluster is taken as the model parameter modification value corresponding to the cluster.
In step A04, the communication decision model is obtained.
Each cell of the base station merges the model parameter modification values into its own training model according to the base station's model parameter modification values and the numbers of sample data in their corresponding clusters. The specific process may be: merging all the model parameter modification values corresponding to the base station into the cell's own training model to obtain an adjusted model, and verifying the decision performance of the adjusted model against the cell's own sample data; if the decision performance is poor, none of the model parameter modification values are accepted, and the model is restored to the original training model; if the decision performance is good, the model parameter modification values are merged into the cell's own training model according to the ratio of the numbers of sample data corresponding to the respective modification values.
If a fairly large proportion of the cells of the base station have adopted the base station's model parameter modification values, all the model parameter modification values of the base station are sent to the central node; of course, they may also be sent to an EMS (element management system) or another back-end node.
The central node forwards all the model parameter modification values of the base station to the other base stations, so that the cells of the other base stations process the modification values of this base station in a manner similar to that used by its own cells.
Likewise, each cell of this base station also receives the model parameter modification values sent by other base stations, processes the received modification values in the same way, and adjusts its training model accordingly to obtain the communication decision model corresponding to the cell.
In step A05, the initial model is adjusted.
The central node collects the models of the cells of the base stations, including the mean and mean-square deviation of the model parameters, prunes and augments the deep neural network with reference to the reported cluster sample information, and sends the pruned and augmented deep neural network model to the cells of the base stations.
Meanwhile, the central node adjusts the clustering manner of the sample data according to the specific acceptance status of the model parameter modification values of all base stations (e.g., whether they are accepted and the accepted proportion), and likewise sends the modified clustering manner to the cells of the base stations.
Second exemplary implementation
Unlike the first exemplary implementation, the communication decision model of this implementation is used for query prediction: knowing the current query used by the user, or the previous several queries, the user's next query or next several queries are predicted and queried in advance, so that results can be returned as quickly as possible when the user needs them. Referring to Fig. 5, this implementation of the present disclosure may include the following steps B01 to B05.
In step B01, sample data is collected.
The query commands issued to the data source by each user's application are collected. In general, the format of the query commands used by an application follows set patterns; all the query patterns of the application can be summarized through pattern matching, and the index values of the corresponding queries can be extracted by analyzing the range clauses therein. The query patterns and index values are the sample data.
Each query used by the user can be regarded as one learning sample, and the pattern and index value of the user's next query serve as the label of the predicted result for judging how good the decision result is.
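The pattern matching of step B01 could be sketched as follows for SQL-like queries; the regular expression and the masking of the range literal are assumptions of this sketch, since the disclosure only states that patterns are summarized by matching and index values are extracted from the range clauses.

```python
import re

# assumed shape of a range clause: WHERE <column> <op> <literal>
RANGE_RE = re.compile(r"WHERE\s+(\w+)\s*(=|>=|<=|>|<)\s*('[^']*'|\d+)", re.IGNORECASE)

def to_sample(query: str):
    """Split a query into (pattern, index value): the pattern is the query with
    its range literal masked out, the index value is that literal."""
    m = RANGE_RE.search(query)
    if not m:
        return query, None
    pattern = query[:m.start(3)] + "?" + query[m.end(3):]
    return pattern, m.group(3)

# e.g. to_sample("SELECT * FROM orders WHERE id >= 100")
#      -> ("SELECT * FROM orders WHERE id >= ?", "100")
```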
In step B02, the initial model is trained.
The initial model of the service node corresponding to each user consists of two Bayesian belief networks, one for predicting the pattern of the next query and the other for predicting the index value of the query. The two Bayesian belief networks are trained according to the collected sample data to obtain the training model, and the network parameter modification value of each sample with respect to each Bayesian belief network is recorded.
The Bayesian belief network for predicting the pattern of the next query makes its decision according to the patterns and order of the user's previous several queries, and may also record the values of certain fields of the first several entries of the current query result.
After the pattern of the next query has been determined, the Bayesian belief network for predicting the index value of the query predicts the index value according to this pattern; besides the patterns of the previous several queries, its input also includes the index values used in the previous several queries and the values of certain fields of the first several entries of the results of those queries.
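The two-stage prediction can be illustrated with the sketch below, where simple conditional frequency tables stand in for the two Bayesian belief networks (the disclosure specifies the networks' inputs, not an implementation), and only the immediately preceding query is used as context for brevity.

```python
from collections import Counter, defaultdict

class TwoStagePredictor:
    def __init__(self):
        self.pattern_table = defaultdict(Counter)  # prev pattern -> next pattern counts
        self.index_table = defaultdict(Counter)    # (prev, next pattern) -> index counts

    def fit(self, history):
        """history: list of (pattern, index_value) tuples in query order."""
        for (p_prev, _), (p_next, idx) in zip(history, history[1:]):
            self.pattern_table[p_prev][p_next] += 1
            self.index_table[(p_prev, p_next)][idx] += 1

    def predict(self, current_pattern):
        nxt = self.pattern_table[current_pattern]
        if not nxt:
            return None, None
        p_next = nxt.most_common(1)[0][0]                    # stage 1: next pattern
        idx_counts = self.index_table[(current_pattern, p_next)]
        idx = idx_counts.most_common(1)[0][0] if idx_counts else None
        return p_next, idx                                   # stage 2: index value
```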
In step B03, the samples are clustered.
The sample data corresponding to each user is clustered; the clustering method is the k-medoids algorithm, and the index used for clustering the samples may be the query types and query index information of the previous several queries, the type of the user's next query, or the like.
After the corresponding clusters are obtained, the arithmetic mean of the model parameter modification values of all the sample data in each cluster is taken as the model parameter modification value corresponding to the cluster.
In step B04, the communication decision model is obtained.
The service node corresponding to each user sends all of its model parameter modification values to the central node; of course, they may also be sent to an EMS (element management system) or another back-end node.
The central node forwards all the model parameter modification values corresponding to the user to the service nodes corresponding to other users, and the service node corresponding to each other user merges the modification values into its own training model according to the modification values corresponding to the user and the numbers of sample data in their corresponding clusters. The specific process may be: merging all the model parameter modification values corresponding to the user into its own training model to obtain an adjusted model, and verifying the decision performance of the adjusted model against its own user's sample data; if the decision performance is poor, none of the model parameter modification values are accepted, and the model is restored to the original training model; if the decision performance is good, the model parameter modification values are merged into its own training model according to the ratio of the numbers of sample data corresponding to the respective modification values.
Likewise, the service node corresponding to the user also receives the model parameter modification values sent by the service nodes corresponding to other users, processes the received modification values in the same way, and adjusts its training model accordingly to obtain the communication decision model corresponding to the user.
In step B05, the initial model is adjusted.
The central node collects the communication decision models corresponding to the users, including the mean and mean-square deviation of the model parameters, adjusts the input items and the intermediate decision nodes with reference to the reported cluster sample information, and sends the adjusted model to the user nodes corresponding to the users.
Meanwhile, the central node adjusts the clustering manner of the sample data according to the specific acceptance status of all the model parameter modification values corresponding to all users (e.g., whether they are accepted and the accepted proportion), and likewise sends the modified clustering manner to the user nodes corresponding to the users.
In a second aspect, referring to Fig. 6, the present disclosure provides an electronic device, which includes: one or more processors; a memory on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any one of the above methods for training a communication decision model; and one or more I/O interfaces, connected between the processors and the memory, configured to implement information interaction between the processors and the memory.
The processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) is connected between the processor and the memory and is configured to implement information interaction between the memory and the processor, and includes but is not limited to a data bus (Bus).
In a third aspect, referring to Fig. 7, the present disclosure provides a computer-readable medium on which a computer program is stored; when executed by a processor, the program implements any one of the above methods for training a communication decision model.
The processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) is connected between the processor and the memory and can implement information interaction between the memory and the processor, and includes but is not limited to a data bus (Bus).
Those of ordinary skill in the art will appreciate that all or some of the steps, the systems, and the functional modules/units of the apparatus disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof.
In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have a plurality of functions, or one function or step may be performed by several physical components in cooperation.
Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit (CPU), a digital signal processor, or a microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory (FLASH) or other disk storage; compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage; or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The present disclosure has disclosed example implementations, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, it will be apparent to those skilled in the art that, unless otherwise explicitly indicated, features, characteristics, and/or elements described in connection with a particular implementation may be used alone, or in combination with features, characteristics, and/or elements described in connection with other implementations. Accordingly, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Claims (10)

  1. A method for training a communication decision model, wherein the communication decision model is used for decision-making of a communication site, and the method comprises:
    training an initial model according to first decision samples, so as to adjust model parameters of the initial model and obtain a first training model, wherein the first decision samples are samples of decisions already made by a first communication site;
    acquiring at least one second model parameter modification value of at least one second training model, wherein each second training model is obtained by training the initial model according to second decision samples of one second communication site, the second decision samples being samples of decisions already made by that second communication site, and each second model parameter modification value reflects at least a partial modification of the model parameters of that second training model relative to the model parameters of the initial model; and
    adjusting the model parameters of the first training model according to at least some of the second model parameter modification values, so as to obtain a communication decision model for the first communication site.
  2. The method according to claim 1, wherein the second model parameter modification values of each second training model are obtained by:
    clustering the second decision samples of the second training model; and
    determining the sum of the modifications of all second decision samples in each cluster as one second model parameter modification value, wherein the modification of each second decision sample is the modification of the model parameters caused by training the initial model according to that second decision sample.
  3. The method according to claim 2, wherein the adjusting the model parameters of the first training model according to at least some of the second model parameter modification values to obtain the communication decision model for the first communication site comprises:
    computing, according to the first decision samples, the decision performance of the model obtained by adjusting the first training model according to each second model parameter modification value, so as to determine accepted second model parameter modification values; and
    adjusting the model parameters of the first training model according to the accepted second model parameter modification values, so as to obtain the communication decision model for the first communication site.
  4. The method according to claim 3, wherein the adjusting the model parameters of the first training model according to the accepted second model parameter modification values to obtain the communication decision model for the first communication site comprises:
    in response to there being a plurality of accepted second model parameter modification values, adjusting the model parameters of the first training model according to each accepted second model parameter modification value and its corresponding weight, wherein the weight corresponding to each accepted second model parameter modification value is computed from the number of second decision samples corresponding to that second model parameter modification value.
  5. The method according to claim 3, wherein the adjusting the model parameters of the first training model according to the accepted second model parameter modification values to obtain the communication decision model for the first communication site comprises:
    adjusting the first training model according to the accepted second model parameter modification values to acquire a first pre-trained model;
    computing the decision performance of the first pre-trained model according to the first decision samples; and
    in response to the decision performance not satisfying a preset condition, determining the first training model to be the communication decision model for the first communication site.
  6. The method according to claim 3, further comprising, after the obtaining the communication decision model for the first communication site:
    adjusting a clustering manner of the second decision samples according to the second model parameter modification values accepted by a plurality of first communication sites.
  7. The method according to claim 1, wherein the acquiring at least one second model parameter modification value of at least one second training model comprises:
    filtering the second model parameter modification values of all second training models according to a predetermined filtering rule, and acquiring the second model parameter modification values of the second training models that pass the filtering.
  8. The method according to claim 1, further comprising, after the obtaining the communication decision model for the first communication site:
    adjusting the initial model according to the communication decision models of a plurality of first communication sites; and
    returning to the step of training the initial model according to the first decision samples.
  9. An electronic device, comprising:
    one or more processors;
    a memory, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method for training a communication decision model according to any one of claims 1 to 8; and
    one or more I/O interfaces, connected between the one or more processors and the memory, configured to implement information interaction between the one or more processors and the memory.
  10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for training a communication decision model according to any one of claims 1 to 8.
PCT/CN2021/106219 2020-07-21 2021-07-14 Method for training communication decision model, electronic device, and computer-readable medium WO2022017231A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/016,400 US20230274184A1 (en) 2020-07-21 2021-07-14 Method for training communication decision model, electronic device, and computer-readable medium
EP21847428.6A EP4187439A4 (en) 2020-07-21 2021-07-14 METHOD FOR TRAINING A COMMUNICATIONS DECISION MODEL, ELECTRONIC DEVICE AND COMPUTER-READABLE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010705787.7A 2020-07-21 Method for training communication decision model, electronic device, and computer-readable medium
CN202010705787.7 2020-07-21

Publications (1)

Publication Number Publication Date
WO2022017231A1 true WO2022017231A1 (zh) 2022-01-27

Family

ID=79728518

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106219 WO2022017231A1 (zh) Method for training communication decision model, electronic device, and computer-readable medium

Country Status (4)

Country Link
US (1) US20230274184A1 (zh)
EP (1) EP4187439A4 (zh)
CN (1) CN114036994A (zh)
WO (1) WO2022017231A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116233857A (zh) * 2021-12-02 2023-06-06 Huawei Technologies Co., Ltd. Communication method and communication apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080298486A1 * 2007-06-04 2008-12-04 Nec Laboratories America, Inc. Multi-cell interference mitigation via coordinated scheduling and power allocation in downlink OFDMA networks
US20100091729A1 (en) * 2008-09-24 2010-04-15 Nec Laboratories America, Inc. Distributed message-passing based resource allocation in wireless systems
CN107465636A * 2017-08-21 2017-12-12 Tsinghua University Channel estimation method for a millimeter-wave massive-array space-frequency dual-wideband system
CN110797124A * 2019-10-30 2020-02-14 Tencent Technology (Shenzhen) Co., Ltd. Multi-terminal collaborative model training method, and medical risk prediction method and apparatus
CN111431646A * 2020-03-31 2020-07-17 Beijing University of Posts and Telecommunications Dynamic resource allocation method in a millimeter-wave system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4187439A4 *

Also Published As

Publication number Publication date
EP4187439A1 (en) 2023-05-31
US20230274184A1 (en) 2023-08-31
CN114036994A (zh) 2022-02-11
EP4187439A4 (en) 2024-07-17

Similar Documents

Publication Publication Date Title
CN108259367B Service-aware flow policy customization method based on software-defined networking
CN111294812B Method and system for resource capacity expansion planning
WO2020077682A1 Method and apparatus for training a service quality evaluation model
WO2021057382A1 Anomaly detection method and apparatus, terminal, and storage medium
CN111510879B Heterogeneous Internet-of-Vehicles network selection method and system based on multi-constraint utility functions
WO2019024553A1 Fault delimitation method and device
CN111476435B Charging pile load prediction method based on density peaks
CN108063676A Communication network fault early-warning method and apparatus
CN110891283A Small base station monitoring apparatus and method based on an edge computing model
CN111385128B Burst load prediction method and apparatus, storage medium, and electronic apparatus
CN106027317B Trust-aware Web service quality prediction system and method
CN109981234B Adaptive adjustment method, apparatus, device and medium for dual carrier and carrier aggregation
WO2022017231A1 Method for training communication decision model, electronic device, and computer-readable medium
TWI684139B System and method for predicting base station anomalies based on automatic learning
CN111159243A User type identification method, apparatus, device, and storage medium
CN105141446A Network device health assessment method based on objective weight determination
CN111797320A Data processing method, apparatus, device, and storage medium
CN111327480B Multivariate QoS monitoring method for Web services in a mobile edge environment
CN117081909B Abnormal broadband correction method and apparatus, electronic device, and storage medium
EP3391589B1 Autonomic method for managing a computing system
CN112508408B Method for constructing a mapping model of radio resource management indicators under edge computing
CN109167673B Novel cloud service screening method integrating abnormal QoS data detection
KR20110004101A Method and apparatus for analyzing abnormal traffic using hierarchical clustering
CN114880406A Data management method and apparatus
CN113517990B Method and apparatus for predicting network net promoter score (NPS)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21847428

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021847428

Country of ref document: EP

Effective date: 20230221