CN108134979B - Small base station switch control method based on deep neural network - Google Patents

Small base station switch control method based on deep neural network

Info

Publication number
CN108134979B
CN108134979B CN201711261843.7A
Authority
CN
China
Prior art keywords
base station
training
sample
data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711261843.7A
Other languages
Chinese (zh)
Other versions
CN108134979A (en)
Inventor
Pan Zhiwen
Du Pengcheng
You Xiaohu
Liu Nan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
White Box Shanghai Microelectronics Technology Co ltd
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201711261843.7A priority Critical patent/CN108134979B/en
Publication of CN108134979A publication Critical patent/CN108134979A/en
Application granted granted Critical
Publication of CN108134979B publication Critical patent/CN108134979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02Power saving arrangements
    • H04W52/0203Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W52/0206Power saving arrangements in the radio access network or backbone network of wireless communication networks in access points, e.g. base stations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention provides a small base station switch control method based on a deep neural network, which comprises the following steps: collecting user information in a base station; integrating all user data into a path data sample set for model training; constructing a neural network model; inputting data and training a model; collecting user data to be predicted, and predicting the position of the user at the next moment; and calculating the number of future service users of the base station, and controlling the base station to be switched on and off. The method controls the switching on and off of the small base stations in the ultra-dense network by predicting the number of people to be served in the base stations, thereby achieving the purposes of reducing the power consumption of the base stations, reducing the interference among the base stations and optimizing the resource distribution in the ultra-dense network; in the process of establishing the mathematical model, the method combines data mining and machine learning, and improves the accuracy of prediction and the practicability of the system.

Description

Small base station switch control method based on deep neural network
Technical Field
The invention belongs to the technical field of wireless resource management in mobile communication, relates to a base station switch control method, and particularly relates to a small base station switch control method based on a deep neural network.
Background
The ultra-dense heterogeneous network, in which low-power small base stations are densely deployed at the same frequency within the coverage area of a macro base station, is an effective way to improve the spectrum utilization and network capacity of a wireless network. However, on the one hand, the terminals to be served are not distributed uniformly in space: some small base stations in an area run at full load while others carry no load, which wastes processing resources. On the other hand, the terminals to be served are not distributed uniformly in time: the user distribution in a cell exhibits a tidal effect, which also wastes resources.
Disclosure of Invention
In order to solve these problems, the invention provides a small base station switch control method based on a deep neural network, based on the idea that, in an urban scene, crowds move along existing roads, so that their positions at the next time point can be predicted.
In order to achieve the purpose, the invention provides the following technical scheme:
the small base station switch control method based on the deep neural network comprises the following steps:
the method comprises the following steps: collecting user information in a base station
Sampling once every fixed period of time, recording the user number, access time and position of each user accessing the base station, and placing them into a sample set L = {(u_i, t_i, p_i)}, wherein u_i is the number of the access user, t_i is the recording time, and p_i is the geographical position of the user, consisting of a longitude coordinate x_i and a latitude coordinate y_i;
Step two: data integration
The user data collected by the base station in step one are sorted and merged into path data for model training, and the data samples of all users are combined to obtain the sample set finally used for training, L_train = {(p_{1,i}, p_{2,i}, p_{3,i}, p_{4,i}, p_{5,i}, p_{6,i})};
Step three: building neural network model
Selecting a fully-connected neural network as the training model, calculating the training error with the mean squared error, obtaining the training result with the standard forward propagation method during training, and updating the parameters of the neural network with the backpropagation method;
step four: inputting data and training the model
1) Selecting one sample from the set L_train obtained in step two, the sample being denoted d_j = (p_{1,j}, p_{2,j}, p_{3,j}, p_{4,j}, p_{5,j}, p_{6,j}); the first 5 data (p_{1,j}, ..., p_{5,j}) in d_j serve as input, and the last datum p_{6,j} is used for comparison with the prediction result and calculation of the error;
2) inputting the first 5 data (p_{1,j}, ..., p_{5,j}) of the sample into the neural network, and obtaining the result of one training pass by forward propagation, i.e. the predicted position coordinates p̂_{6,j} = (x̂_{6,j}, ŷ_{6,j});
3) the true value of the predicted coordinates in the sample is p_{6,j} = (x_{6,j}, y_{6,j}); calculating the error from the actual and predicted coordinates, e_j = (x_{6,j} − x̂_{6,j})² + (y_{6,j} − ŷ_{6,j})², and updating the parameters of the neural network by backpropagation to complete the training of one sample;
4) substituting and training the whole sample set L_train once is called one round of sample-set training; multiple rounds of sample-set training are carried out, and after each round the training error of that round, E_i = Σ_j e_{i,j}, is calculated; when |E_{i+1} − E_i| < e_c, the training is stopped; at this point the parameters of the model have been updated, and model training is finished;
wherein E_i is the error of the i-th training round, e_{i,j} is the training error of the j-th sample in the i-th round of sample-set training, and e_c is a minimum error constant;
step five: collecting the user data to be predicted, and predicting the position of the user at the next moment
1) Let the time to be predicted be tpredictAcquiring user data according to the step one, and recording the user data as L { (u)i',ti',pi')};
2) Integrating data according to the steps in the step two and recording the data as a set Lpredict={(p1',i,p'2,i,p'3,i,p'4,i,p'5,i)};
3) Will set LpredictThe sample in (1) is input into the model, so that the position prediction result of the user can be obtained, and the prediction result set is recorded as
Figure BDA0001493739870000025
Step six: calculating the number of future service users of the base station, and controlling the switch of the base station
1) Information of base station is recorded as set
Figure BDA0001493739870000026
Wherein the content of the first and second substances,
Figure BDA0001493739870000027
is a longitude coordinate of the location of the base station,
Figure BDA0001493739870000028
as latitude coordinate of base station location, numiNumber of future serving users for the base station;
2) the prediction result set obtained from the step five
Figure BDA0001493739870000029
In the method, samples are selected in sequence
Figure BDA00014937398700000210
Calculating the distance between the base station and each base station in the set C
Figure BDA00014937398700000211
Obtain the number of the base station nearest to the sample
Figure BDA00014937398700000212
Corresponding num in set CiAdding 1;
3) setting the threshold value for controlling the base station switch to numcTraversing all base station data in the set C, when numi≥numcWhen the base station is started, starting a corresponding base station i; when numi≤numcAnd closing the corresponding base station i.
Further, the process of sorting and merging the user data in step two specifically comprises the following steps:
1) Selecting a fixed user number u_i = c from the set L, collecting the data of that user, and recording it as the set L_c = {(u_c, t_i^c, p_i^c)};
2) sorting L_c according to the time t_i^c of each sample;
3) grouping the samples in the set in groups of six in time order, each group being a new sample, i.e. each new sample being d_k^c = ((t_{6k+1}^c, p_{6k+1}^c), ..., (t_{6k+6}^c, p_{6k+6}^c)), where k is the group number;
4) removing the time data t^c from the new samples and retaining only the position coordinate data p^c, i.e. d_k^c = (p_{6k+1}^c, ..., p_{6k+6}^c), obtaining the sample set of user u_i = c for training, recorded as L_c,train = {(p_{1,i}, ..., p_{6,i})};
5) sorting the samples of the other users according to the same steps, and combining the sample sets of all users to obtain the sample set finally used for training, recorded as L_train = {(p_{1,i}, p_{2,i}, p_{3,i}, p_{4,i}, p_{5,i}, p_{6,i})}.
Further, the neural network selected in step three has 5 layers of neurons in total, with 10 input neurons in the first layer and 2 output neurons in the last layer for predicting the unknown coordinates; in step four, each sample input to the neural network consists of 5 position data, each containing a longitude and a latitude coordinate, 10 numbers in total, which correspond one-to-one to the input neurons of the neural network.
Further, in the fifth step, the prediction time and the time of collecting the sample are set within a certain time range.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method controls the switching on and off of the small base stations in the ultra-dense network by predicting the number of people to be served in the base stations, thereby achieving the purposes of reducing the power consumption of the base stations, reducing the interference among the base stations and optimizing the resource distribution in the ultra-dense network; in the process of establishing the mathematical model, the method combines data mining and machine learning, and improves the accuracy of prediction and the practicability of the system.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to specific examples, and it should be understood that the following specific embodiments are only illustrative of the present invention and are not intended to limit the scope of the present invention.
The invention uses a deep neural network model to establish a prediction model of the crowd position and predict the number of people to be served in a future small base station. Specifically, the small base station switch control method based on the deep neural network provided by the invention comprises the following steps:
the first step is as follows: user information in a base station is collected. Sampling once every minute, recording the user number, access time and user position information of the access base station, and putting a sample set L { (u) }i,ti,pi) In which uiNumber for access user, tiTo record the time of day, the time is accurate to minutes, piFor the geographical position of the user, including the longitude coordinate xiAnd latitude coordinate yi
The second step is that: and data integration, namely, sorting and combining the user data collected by the base station in the first step into path data for model training.
1) In the set L, a fixed user number u_i = c is selected, and the data of that user is collected and recorded as the set L_c = {(u_c, t_i^c, p_i^c)}.
2) L_c is sorted according to the time t_i^c of each sample.
3) The samples in the set are grouped in groups of six in time order, each group being a new sample, i.e. each new sample being d_k^c = ((t_{6k+1}^c, p_{6k+1}^c), ..., (t_{6k+6}^c, p_{6k+6}^c)), where k is the group number.
4) The time data t^c is removed from the new samples and only the position coordinate data p^c is retained, i.e. d_k^c = (p_{6k+1}^c, ..., p_{6k+6}^c). At this point, the sample set of user u_i = c for training is obtained, recorded as L_c,train = {(p_{1,i}, ..., p_{6,i})}.
5) The samples of the other users are sorted according to the same steps, and the sample sets of all users are combined to obtain the sample set finally used for training, recorded as L_train = {(p_{1,i}, p_{2,i}, p_{3,i}, p_{4,i}, p_{5,i}, p_{6,i})}.
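The sorting, grouping, and time-stripping of the second step can be sketched as follows; the record layout (user_id, minute, (lon, lat)) is an assumption for illustration.

```python
# Hedged sketch of the step-two data integration for ONE user's records.
def build_training_samples(records):
    """records: list of (user_id, minute, (lon, lat)) tuples for one user."""
    ordered = sorted(records, key=lambda r: r[1])            # sort by time t_i^c
    samples = []
    for k in range(len(ordered) // 6):                       # non-overlapping groups of six
        group = ordered[6 * k: 6 * k + 6]
        samples.append(tuple(pos for (_uid, _t, pos) in group))  # drop times, keep positions
    return samples
```

Applying the same function per user and concatenating the results yields the final L_train; leftover records that do not fill a group of six are discarded in this sketch.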
The third step: and constructing a neural network model.
A fully-connected neural network is selected as the training model. The model has 5 layers of neurons in total: the first layer has 10 input neurons, the last layer has 2 output neurons for predicting the unknown coordinates, and each of the middle three layers has 50 hidden neurons (this value can be adjusted by the operator according to the complexity of the actual network). The training error is calculated with the mean squared error, and the training step size is set to 0.001 (this value can likewise be adjusted by the operator). During training, the standard forward propagation method is used to obtain the training result, and the parameters of the neural network are updated by backpropagation.
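The described architecture (10 inputs, three hidden layers of 50 neurons, 2 outputs) can be sketched in plain Python; the tanh activation and the uniform weight initialisation are assumptions, since the embodiment does not specify them.

```python
# Sketch of the 10-50-50-50-2 fully-connected network (activation and
# initialisation are assumptions; the text fixes only the layer sizes).
import math
import random

def make_mlp(sizes=(10, 50, 50, 50, 2), seed=0):
    rng = random.Random(seed)
    # One (weights, biases) pair per layer transition.
    return [([[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
             [0.0] * n_out)
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(net, x):
    """Forward propagation: x is the 10-number input (5 positions x 2 coords)."""
    for i, (W, b) in enumerate(net):
        z = [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]
        x = z if i == len(net) - 1 else [math.tanh(v) for v in z]  # linear output layer
    return x  # predicted (longitude, latitude)
```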
The fourth step: data is input and the model is trained.
1) One sample is selected from the set L_train obtained in the second step, denoted d_j = (p_{1,j}, p_{2,j}, p_{3,j}, p_{4,j}, p_{5,j}, p_{6,j}); the first 5 data (p_{1,j}, ..., p_{5,j}) in d_j serve as input, and the last datum p_{6,j} is used for comparison with the prediction result to calculate the error.
2) The first 5 data (p_{1,j}, ..., p_{5,j}) of the sample are input into the neural network. Each datum is position data comprising the two numbers of a longitude and a latitude coordinate, 10 numbers in total used as input, corresponding one-to-one to the input neurons of the neural network. The result of one training pass is obtained by forward propagation and recorded as p̂_{6,j} = (x̂_{6,j}, ŷ_{6,j}), i.e. the predicted position coordinates.
3) The true value of the predicted coordinates in the sample is p_{6,j} = (x_{6,j}, y_{6,j}). The error is calculated from the actual and predicted coordinates as e_j = (x_{6,j} − x̂_{6,j})² + (y_{6,j} − ŷ_{6,j})², and the parameters of the neural network are updated by backpropagation, completing the training of one sample.
4) Substituting and training the whole sample set L_train once is called one round of sample-set training. After each round, the training error of that round is calculated as E_i = Σ_j e_{i,j} (E_i is the error of the i-th training round, e_{i,j} the training error of the j-th sample in that round). Multiple rounds of sample-set training are performed, and when |E_{i+1} − E_i| < e_c (e_c is a minimum error constant that can be adjusted by the operator according to the actual network operation), the training is stopped. At this point the parameters of the model have been updated, and model training is finished.
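The round-level stopping rule of the fourth step can be sketched independently of the network internals; `train_one_round` below is an assumed stand-in for one forward/backpropagation pass over the whole sample set that returns the round error E_i.

```python
# Sketch of the step-four convergence test: train whole-set rounds until the
# round error changes by less than e_c. `train_one_round` is a stand-in for
# one forward + backpropagation pass over every sample (an assumption here).
def train_until_converged(train_one_round, e_c=1e-4, max_rounds=1000):
    E = float("inf")
    prev_E = None
    for _ in range(max_rounds):
        E = train_one_round()                      # E_i = sum of per-sample errors e_{i,j}
        if prev_E is not None and abs(E - prev_E) < e_c:
            break                                  # |E_{i+1} - E_i| < e_c: stop training
        prev_E = E
    return E
```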
The fifth step: and collecting the data of the user to be predicted, and predicting the position of the user at the next moment.
1) Let the time to be predicted be t_predict. User data is acquired according to the first step and recorded as L' = {(u'_i, t'_i, p'_i)} with 1 ≤ t_predict − t'_i ≤ 5, meaning that the time at which each sample was taken lies within 5 minutes of the predicted time (this value can be adjusted by the operator according to the actual network operation).
2) The data is integrated according to the sub-steps of the second step and recorded as the set L_predict = {(p'_{1,i}, p'_{2,i}, p'_{3,i}, p'_{4,i}, p'_{5,i})}. The samples at this point are used for prediction only, with input but no output, so each sample contains only 5 position data.
3) The samples of the set L_predict are input into the model to obtain the position prediction result for each user; the prediction result set is recorded as L̂_predict = {(x̂_i, ŷ_i)}.
And a sixth step: and calculating the number of future service users of the base station, and controlling the base station to be switched on and off.
1) The information of the base stations is recorded as a set C = {(x_i^c, y_i^c, num_i)}, wherein x_i^c is the longitude coordinate of the position of base station i, y_i^c is the latitude coordinate, and num_i, the number of future serving users of the base station, is initialised to 0.
2) Samples (x̂_j, ŷ_j) are selected in sequence from the prediction result set L̂_predict obtained in the fifth step; the distance to each base station in the set C is calculated as dis_i = sqrt((x̂_j − x_i^c)² + (ŷ_j − y_i^c)²); the number i* = argmin_i dis_i of the base station nearest to the sample is obtained, and the corresponding num_{i*} in the set C is increased by 1.
3) The threshold value for controlling the base station switches is set to num_c (this value can be determined by the operator according to the actual network operation), and all base station data in the set C are traversed; when num_i ≥ num_c, the corresponding base station i is switched on, and when num_i < num_c, the corresponding base station i is switched off.
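The nearest-station counting and threshold test of the sixth step can be sketched as follows; the Euclidean distance on raw longitude/latitude coordinates follows the formula in the text (a real deployment would more likely use a geodesic distance).

```python
# Sketch of step six: count each predicted user position toward its nearest
# base station, then switch stations on or off against the threshold num_c.
import math

def control_switches(predicted_positions, stations, num_c):
    """predicted_positions: list of (x_hat, y_hat); stations: list of (x_c, y_c)."""
    counts = [0] * len(stations)                   # num_i, initialised to 0
    for (px, py) in predicted_positions:
        dists = [math.hypot(px - sx, py - sy) for (sx, sy) in stations]
        counts[dists.index(min(dists))] += 1       # nearest station gains one user
    return [n >= num_c for n in counts]            # True = keep station i on
```

With two stations at (0, 0) and (10, 0), three predicted users near the first one, and num_c = 2, the sketch keeps the first station on and switches the second off.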
The technical means disclosed in the solution of the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the protection scope of the present invention.

Claims (2)

1. The small base station switch control method based on the deep neural network is characterized by comprising the following steps:
the method comprises the following steps: collecting user information in a base station
Sampling once every fixed period of time, recording the user number, access time and position of each user accessing the base station, and placing them into a sample set L = {(u_i, t_i, p_i)}, wherein u_i is the number of the access user, t_i is the recording time, and p_i is the geographical position of the user, consisting of a longitude coordinate x_i and a latitude coordinate y_i;
Step two: data integration
The user data collected by the base station in step one are sorted and merged into path data for model training, and the data samples of all users are combined to obtain the sample set finally used for training, L_train = {(p_{1,i}, p_{2,i}, p_{3,i}, p_{4,i}, p_{5,i}, p_{6,i})};
The method specifically comprises the following steps:
1) selecting a fixed user number u_i = c from the set L, collecting the data of that user, and recording it as the set L_c = {(u_c, t_i^c, p_i^c)};
2) sorting L_c according to the time t_i^c of each sample;
3) grouping the samples in the set in groups of six in time order, each group being a new sample, i.e. each new sample being d_k^c = ((t_{6k+1}^c, p_{6k+1}^c), ..., (t_{6k+6}^c, p_{6k+6}^c)), where k is the group number;
4) removing the time data t^c from the new samples and retaining only the position coordinate data p^c, i.e. d_k^c = (p_{6k+1}^c, ..., p_{6k+6}^c), obtaining the sample set of user u_i = c for training, recorded as L_c,train = {(p_{1,i}, ..., p_{6,i})};
5) sorting the samples of the other users according to the same steps, and combining the sample sets of all users to obtain the sample set finally used for training, recorded as L_train = {(p_{1,i}, p_{2,i}, p_{3,i}, p_{4,i}, p_{5,i}, p_{6,i})};
Step three: building neural network model
Selecting a fully-connected neural network as the training model, the model having 5 layers of neurons in total, with 10 input neurons in the first layer and 2 output neurons in the last layer for predicting the unknown coordinates; calculating the training error with the mean squared error, obtaining the training result with the standard forward propagation method during training, and updating the parameters of the neural network with the backpropagation method;
step four: inputting data and training models
1) Selecting one sample from the set L_train obtained in step two, the sample being denoted d_j = (p_{1,j}, p_{2,j}, p_{3,j}, p_{4,j}, p_{5,j}, p_{6,j}); the first 5 data (p_{1,j}, ..., p_{5,j}) in d_j serving as input, and the last datum p_{6,j} being used for comparison with the prediction result and calculation of the error;
2) inputting the first 5 data (p_{1,j}, ..., p_{5,j}) of the sample into the neural network, each datum being position data comprising the two numbers of a longitude and a latitude coordinate, 10 numbers in total serving as input and corresponding one-to-one to the input neurons of the neural network, and obtaining the result of one training pass by forward propagation, i.e. the predicted position coordinates p̂_{6,j} = (x̂_{6,j}, ŷ_{6,j});
3) the true value of the predicted coordinates in the sample being p_{6,j} = (x_{6,j}, y_{6,j}); calculating the error from the actual and predicted coordinates, e_j = (x_{6,j} − x̂_{6,j})² + (y_{6,j} − ŷ_{6,j})², and updating the parameters of the neural network by backpropagation to complete the training of one sample;
4) substituting and training the whole sample set L_train once, which constitutes one round of sample-set training; carrying out multiple rounds of sample-set training, calculating the training error of each round E_i = Σ_j e_{i,j} after each round, and stopping the training when |E_{i+1} − E_i| < e_c; at this point the parameters of the model have been updated, and model training is finished;
wherein E_i is the error of the i-th training round, e_{i,j} is the training error of the j-th sample in the i-th round of sample-set training, and e_c is a minimum error constant;
step five: collecting the user data to be predicted, and predicting the position of the user at the next moment
1) letting the time to be predicted be t_predict, acquiring user data according to step one, and recording it as L' = {(u'_i, t'_i, p'_i)};
2) integrating the data according to the sub-steps of step two, and recording it as the set L_predict = {(p'_{1,i}, p'_{2,i}, p'_{3,i}, p'_{4,i}, p'_{5,i})};
3) inputting the samples of the set L_predict into the model to obtain the position prediction result for each user, the prediction result set being recorded as L̂_predict = {(x̂_i, ŷ_i)};
Step six: calculating the number of future service users of the base station, and controlling the switch of the base station
1) recording the information of the base stations as a set C = {(x_i^c, y_i^c, num_i)}, wherein x_i^c is the longitude coordinate of the position of base station i, y_i^c is the latitude coordinate of the position of the base station, and num_i is the number of future serving users of the base station;
2) selecting samples (x̂_j, ŷ_j) in sequence from the prediction result set L̂_predict obtained in step five, calculating the distance to each base station in the set C, dis_i = sqrt((x̂_j − x_i^c)² + (ŷ_j − y_i^c)²), obtaining the number i* = argmin_i dis_i of the base station nearest to the sample, and adding 1 to the corresponding num_{i*} in the set C;
3) setting the threshold value for controlling the base station switches to num_c and traversing all base station data in the set C; when num_i ≥ num_c, switching on the corresponding base station i; when num_i < num_c, switching off the corresponding base station i.
2. The deep neural network-based small cell switch control method according to claim 1, wherein: and in the fifth step, the prediction time and the time for collecting the sample are set within a certain time range.
CN201711261843.7A 2017-12-04 2017-12-04 Small base station switch control method based on deep neural network Active CN108134979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711261843.7A CN108134979B (en) 2017-12-04 2017-12-04 Small base station switch control method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711261843.7A CN108134979B (en) 2017-12-04 2017-12-04 Small base station switch control method based on deep neural network

Publications (2)

Publication Number Publication Date
CN108134979A CN108134979A (en) 2018-06-08
CN108134979B true CN108134979B (en) 2020-04-14

Family

ID=62389867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711261843.7A Active CN108134979B (en) 2017-12-04 2017-12-04 Small base station switch control method based on deep neural network

Country Status (1)

Country Link
CN (1) CN108134979B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108834079B (en) * 2018-09-21 2020-05-01 北京邮电大学 Load balancing optimization method based on mobility prediction in heterogeneous network
CN109447275B (en) * 2018-11-09 2022-03-29 西安邮电大学 Switching method based on machine learning in UDN
CN109729540B (en) * 2019-01-18 2022-05-17 福建福诺移动通信技术有限公司 Base station parameter automatic optimization method based on neural network
CN110072016A (en) * 2019-01-29 2019-07-30 浙江鹏信信息科技股份有限公司 A method of bad Classification of Speech is realized using call behavior time-domain filtering
CN109819522B (en) * 2019-03-15 2021-08-24 电子科技大学 User bandwidth resource allocation method for balancing energy consumption and user service quality
US20230362823A1 (en) * 2020-07-27 2023-11-09 Telefonaktiebolaget Lm Ericsson (Publ) Method Performed by a Radio Network Node for Determining a Changed Bandwidth Interval
CN114339962B (en) * 2020-09-29 2023-07-14 中国移动通信集团设计院有限公司 Base station energy saving method, device and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101128056A (en) * 2007-09-19 2008-02-20 中兴通讯股份有限公司 A method for interference elimination between cooperative cells
CN106683666A (en) * 2016-12-23 2017-05-17 上海语知义信息技术有限公司 Field adaptive method based on deep neural network (DNN)

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101128056A (en) * 2007-09-19 2008-02-20 中兴通讯股份有限公司 A method for interference elimination between cooperative cells
CN101128056B (en) * 2007-09-19 2012-01-18 中兴通讯股份有限公司 A method for interference elimination between cooperative cells
CN106683666A (en) * 2016-12-23 2017-05-17 上海语知义信息技术有限公司 Field adaptive method based on deep neural network (DNN)

Non-Patent Citations (1)

Title
Coverage and Rate Analysis for Non-uniform Millimeter-Wave Heterogeneous Cellular Network; Liu Nan, Pan Zhiwen, You Xiaohu et al.; 2016 8th International Conference on Wireless Communications and Signal Processing (WCSP); 2016-10-15; full text *

Also Published As

Publication number Publication date
CN108134979A (en) 2018-06-08

Similar Documents

Publication Publication Date Title
CN108134979B (en) Small base station switch control method based on deep neural network
WO2021169577A1 (en) Wireless service traffic prediction method based on weighted federated learning
CN108123828B (en) Ultra-dense network resource allocation method based on access user mobility prediction
RU2534740C2 (en) Method and device for determining terminal mobility state
CN103874132A (en) Heterogeneous wireless network access selection method based on users
CN107249200A (en) A kind of switching method of application Fuzzy Forecasting Model
CN110149595B (en) HMM-based heterogeneous network user behavior prediction method
Zhang et al. Unveiling taxi drivers' strategies via cgail: Conditional generative adversarial imitation learning
CN113498137A (en) Method and device for obtaining cell relation model and recommending cell switching guide parameters
CN111246552A (en) Base station dormancy method based on mobile network flow prediction
CN114585006B (en) Edge computing task unloading and resource allocation method based on deep learning
CN113993172B (en) Ultra-dense network switching method based on user movement behavior prediction
CN111343680A (en) Switching time delay reduction method based on reference signal received power prediction
Yang et al. Deep reinforcement learning based wireless network optimization: A comparative study
Wang et al. Reputation-enabled federated learning model aggregation in mobile platforms
Saffar et al. Semi-supervised deep learning-based methods for indoor outdoor detection
Liu et al. Dynamic multichannel sensing in cognitive radio: Hierarchical reinforcement learning
Gao et al. Accurate load prediction algorithms assisted with machine learning for network traffic
Xue et al. Deep learning based channel prediction for massive MIMO systems in high-speed railway scenarios
Dai et al. Multi-objective intelligent handover in satellite-terrestrial integrated networks
Gao et al. Deep learning based location prediction with multiple features in communication network
CN113919483A (en) Method and system for constructing and positioning radio map in wireless communication network
CN111372182A (en) Positioning method, device, equipment and computer readable storage medium
WO2023011371A1 (en) Method and system for configuring a threshold value for a handover parameter of a wireless communication system
CN109548138B (en) Tracking area list management method based on overlapping community detection in small cellular network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210407

Address after: 201306 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai

Patentee after: Shanghai Hanxin Industrial Development Partnership (L.P.)

Address before: 211189 No. 2, Four Pailou, Xuanwu District, Nanjing City, Jiangsu Province

Patentee before: SOUTHEAST University

TR01 Transfer of patent right

Effective date of registration: 20230913

Address after: 201615 room 301-6, building 6, no.1158, Jiuting Central Road, Jiuting Town, Songjiang District, Shanghai

Patentee after: White box (Shanghai) Microelectronics Technology Co.,Ltd.

Address before: 201306 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai

Patentee before: Shanghai Hanxin Industrial Development Partnership (L.P.)