CN110972174B - Wireless network interruption detection method based on sparse self-encoder - Google Patents
- Publication number
- CN110972174B (application CN201911214239.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/04—Arrangements for maintaining operational condition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/10—Scheduling measurement reports ; Arrangements for measurement reports
Abstract
The invention discloses a wireless network interruption detection method based on a sparse self-encoder. The method divides the collected samples into subsets S_1 and S_0 according to their labels, performs oversampling calculations on each, and recombines them into a data set V. The data set V is then processed with a self-encoder: a cost function of the sparse self-encoder is defined and minimized by a back-propagation algorithm to obtain a new training set U, from which an interruption detection model of the wireless network is established. Finally, interruption detection is performed on the KPI information x_i reported in real time by users in the network. The invention achieves high-precision detection of the wireless network with small sample data and also saves a large amount of data-collection time.
Description
Technical Field
The invention belongs to the field of wireless network technology in mobile communications, and in particular relates to a wireless network interruption detection method based on a sparse self-encoder.
Background
Interruption detection is one of the key technologies of wireless networks and is important for improving network operation and maintenance performance. Existing interruption detection techniques require many data samples to achieve good detection performance. However, since wireless network interruptions are low-probability events, it is difficult to collect sufficient samples. How to improve wireless network interruption detection performance with a small number of samples has therefore become an important issue.
Disclosure of Invention
Purpose of the invention: aiming at the insufficient wireless network interruption detection performance in the prior art, the invention discloses a wireless network interruption detection method based on a sparse self-encoder, which can accurately perform interruption detection with a small amount of data.
The technical scheme is as follows: a wireless network interruption detection method based on a sparse self-encoder comprises the following steps:
(1) Collect network key performance indicators and establish a data set S;
(2) Process the data set S with a minority-class oversampling algorithm, including partitioning S into a subset S_1 and a subset S_0 according to the sample labels in S, computing the Euclidean distance between the KPI information x_i of each element in S_1 and x_0, and generating synthetic samples to obtain a data set V, V = S_3 ∪ S_1;
(3) Process the data set V with a sparse self-encoder, including defining the cost function of the sparse self-encoder, minimizing it with a back-propagation algorithm, and training and updating to obtain a set U;
(4) Take U as the training data set and obtain an interruption detection model using logistic regression;
(5) Perform interruption detection on the KPI information x_i reported in real time by users in the wireless network.
Further, the step (1) comprises the following steps:
(11) Acquire the KPI information reported by users within a time T in the wireless network;
(12) Save the KPI information as a data set S, expressed as follows:
S = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_m, y_m)}
where m is the number of elements in S; in the i-th element (x_i, y_i) of S, x_i ∈ R^n denotes the n-dimensional KPI information reported by a user at a certain time, R^n is the n-dimensional vector space, and i = 1, 2, ..., m;
y_i is the label of x_i, indicating the state of the base station, with value 1 or 0: y_i = 1 means the base station is in the normal state, y_i = 0 means the base station is in the interruption state;
(13) Count the numbers of elements labeled 1 and 0 in the data set S, denoted N_1 and N_0 respectively; if N_0 = 0, return to step (11), otherwise go to step (2).
Further, the step (2) comprises the following steps:
(21) Partition S into a subset S_1 and a subset S_0 according to the labels of the samples in S, where the elements of S_1 are the samples labeled 1 and the elements of S_0 are the samples labeled 0;
(22) If N_0 = 1, the subset S_0 has only one element, denoted (x_0, 0); traverse S_1 and, from the KPI information x_i of each element, compute the Euclidean distance between x_i and x_0; denote by x_clo the KPI information in S_1 with the smallest Euclidean distance to x_0; then, on the line through x_clo and x_0, randomly select a point on the extension on the side near x_0, denoted x_add; merge the element (x_add, 0) into S_0, let S_3 = S_0 ∪ {(x_add, 0)}, and go to step (23);
if N_0 ≥ 2, let S_3 = S_0 and go to step (23);
(23) For each piece of KPI information x_i in S_3, select the K pieces of KPI information closest to it, where 1 ≤ K ≤ |S_3| − 1 and |S_3| is the number of elements in S_3; the specific value of K is determined by the operator; go to step (24);
(24) Randomly select, with replacement, L pieces of KPI information from the K pieces of KPI information, denoting one selected piece x_sel; randomly select a point on the line segment between x_sel and the x_i of step (23), denoted x_new; merge (x_new, 0) into S_3, i.e. S_3 = S_3 ∪ {(x_new, 0)}, where 1 ≤ L ≤ K and the specific value of L is determined by the operator;
(25) Repeat steps (23) and (24) to continually update the set S_3 until |S_3| = |S_1|, and let V = S_3 ∪ S_1.
Further, the step (3) comprises the following steps:
(31) For the data set V, define the cost function of the sparse self-encoder:
J(w, b) = (1/N) Σ_{i=1..N} (1/2)‖z_i − v_i‖² + (λ/2) Σ_l Σ_i Σ_j (w_ij^(l))² + β Σ_{j=1..s_2} KL(ρ‖ρ̂_j)
wherein:
KL(ρ‖ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j)), ρ̂_j = (1/N) Σ_{i=1..N} a_j^(2)(v_i)
where w is the weight vector of the self-encoder, with w_ij^(l) denoting the weight between the i-th neuron of layer l and the j-th neuron of layer (l+1); b is the bias vector of the self-encoder, with b_j^(l) the weight between the bias unit of layer l and the j-th neuron of layer (l+1); N denotes the number of elements in V; v_i ∈ R^n is the KPI information of the i-th element in V; z_i ∈ R^n denotes the output of the output-layer neurons for the i-th input; the operator ‖·‖ denotes the 2-norm of a vector. The specific values of λ, s_l, β and ρ are determined by the operator: λ ∈ R is a regularization coefficient used to shrink the weights and thus reduce overfitting; s_l denotes the number of neurons in layer l; β ∈ R is the weight of the penalty term Σ_j KL(ρ‖ρ̂_j) in the cost function; ρ ∈ (0, 1) is the sparsity parameter, representing the desired activation degree of each neuron in the hidden layer; ρ̂_j denotes the average activation of the j-th neuron in the hidden layer over all inputs; a_j^(2)(v_i) denotes the output of the j-th neuron of the hidden layer of the self-encoder when the input is v_i. After J(w, b) is defined, go to step (32);
(32) Find the minimum of the above cost function using a back-propagation algorithm to obtain the trained weight vector and bias vector of the self-encoder, denoted w_opt and b_opt respectively; let U = ∅ and go to step (33);
(33) Input the KPI information v_i of each element in V into the trained self-encoder and, from the w_opt and b_opt obtained in step (32), obtain the output of the hidden layer, denoted u_i; merge the element (u_i, y_i) into the set U, i.e. U = U ∪ {(u_i, y_i)};
(34) Repeat step (33) and continually update U until all elements in V have been traversed.
Further, the step (4) comprises the following steps:
(41) Determine the log-likelihood function of LR from the set U, expressed as follows:
L(h, c) = Σ_{i=1..M} [ y_i (h·u_i + c) − log(1 + exp(h·u_i + c)) ]
where M denotes the number of elements in U; y_i is the label, with value 1 or 0, corresponding to each piece of KPI information in U; h is a weight vector; c is a bias, c ∈ R; u_i is the KPI information of the i-th element in U; the operation "·" denotes the inner product of two vectors. After L(h, c) is obtained, go to step (42);
(42) Find the maximum of the log-likelihood function of LR using the gradient descent method to obtain the weight vector and bias, denoted h_opt and c_opt, and perform step (5).
Further, the step (5) comprises the following steps:
(51) Input x_i into the self-encoder trained in step (3) and, from the obtained w_opt and b_opt, obtain the output u_i of the hidden layer of the self-encoder; go to step (52);
(52) From the h_opt and c_opt obtained in step (4), compute the following two probability values:
P(y = 1 | u_i) = 1 / (1 + exp(−(h_opt·u_i + c_opt))), P(y = 0 | u_i) = 1 − P(y = 1 | u_i)
If P(y = 1 | u_i) < P(y = 0 | u_i), the base station is judged to be interrupted; otherwise the base station is normal.
Beneficial effects: compared with the prior art, the wireless network interruption detection method based on the sparse self-encoder has the following notable advantages:
(1) Only a small number of labeled samples are needed, which saves a large amount of data-collection time as well as the labor cost of labeling a large data set;
(2) Compared with traditional logistic regression, the success rate of interruption detection of the method is significantly improved.
Drawings
Fig. 1 is a schematic diagram of the structure of the self-encoder of the present invention.
Detailed Description
For the purpose of explaining the technical solution disclosed in the present invention in detail, the following description is further made with reference to the accompanying drawings and specific embodiments.
The invention discloses a wireless network interruption detection method based on a sparse self-encoder. For illustration, two types of KPI information are collected: Reference Signal Received Power (RSRP) and Signal to Interference plus Noise Ratio (SINR), i.e. x_i = (RSRP, SINR). An embodiment of the method is given below; all of its steps are performed in a monitoring center that monitors the operation of the network.
The technical scheme of the invention comprises the following steps:
the first step is as follows: collecting Key Performance Indicators (KPIs) of a network, such as Reference Signal Receiving Power (RSRP), and the like, the method includes the following steps:
(11) KPI information reported by users in time T (the value is determined by operators according to the number of users and the network operation condition) in a wireless network is obtained, and the KPI information is transferred into a process (12);
(12) Saving KPI information as a dataset S = { (x) 1 ,y 1 ),(x 2 ,y 2 ),...,(x i ,y i ),...,(x m ,y m ) In the form of. Wherein m is the number of elements in S. The ith (i =1,2, …, m) element (x) in S i ,y i ) In, x i ∈R n And the n-dimensional KPI information reported by a certain user at a certain moment is shown. R n Is an n-dimensional vector space, the same as below. y is i Is x i The label (2) indicates the state of the base station, and takes a value of 1 or 0.y is i =1 denotes that the base station is in a normal (i.e. non-interrupted) state, y i =0 indicates that the base station is in the interruption state. After S is obtained, the process is switched to a process (13);
(13) The tag in statistic S is 1 (i.e. y) i = 1) and 0 (i.e. y) i Number of elements of = 0), each is represented as N 1 And N 0 . If N is present 0 And (5) =0, and then the process is shifted to the process (1), otherwise, the second step is carried out.
The second step: process the data set S with the Synthetic Minority Over-sampling Technique (SMOTE), by the following flows:
(21) Divide S into two subsets according to the sample labels: S_1 and S_0, where the elements of S_1 are the samples labeled 1 (i.e. y_i = 1) and the elements of S_0 are the samples labeled 0 (i.e. y_i = 0). After S_1 and S_0 are obtained, go to flow (22);
(22) If N_0 = 1, S_0 has only one element, denoted (x_0, 0). Traverse S_1 and, from the KPI information x_i of each element, compute the Euclidean distance between x_i and x_0. Denote by x_clo the KPI information in S_1 with the smallest Euclidean distance to x_0. On the line through x_clo and x_0, randomly select a point on the extension on the side near x_0 and denote it x_add. Merge the element (x_add, 0) into S_0, let S_3 = S_0 ∪ {(x_add, 0)}, and go to flow (23). If N_0 ≥ 2, let S_3 = S_0 and go to flow (23);
(23) For each piece of KPI information x_i in S_3, select the K pieces of KPI information closest to it in Euclidean distance (1 ≤ K ≤ |S_3| − 1, where |S_3| is the number of elements in S_3, the same below; the specific value of K is determined by the operator), and go to flow (24);
(24) Randomly select, with replacement, L pieces of information from the K pieces of KPI information (1 ≤ L ≤ K; the specific value of L is determined by the operator). Denote one selected piece of KPI information x_sel, randomly select a point on the line segment between x_sel and the x_i of flow (23), denote it x_new, and merge (x_new, 0) into S_3, i.e. S_3 = S_3 ∪ {(x_new, 0)};
(25) Repeat flows (23) and (24), continually updating S_3, until |S_3| = |S_1|. Let V = S_3 ∪ S_1 and proceed to the third step.
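The second step's flows (21)–(25) can be sketched in Python. This is an illustrative implementation only, not the patent's code: the function name, the use of NumPy, and the exact tie-breaking and loop order are assumptions.

```python
import numpy as np

def smote_oversample(S0, S1, K=2, L=1, rng=None):
    """Minority-class oversampling as sketched in flows (21)-(25).

    S0: (n0, d) array of minority (outage, label 0) KPI vectors.
    S1: (n1, d) array of majority (normal, label 1) KPI vectors.
    Returns S3: original plus synthetic minority KPI vectors,
    grown until len(S3) == len(S1).
    """
    rng = np.random.default_rng(rng)
    S3 = list(S0)
    if len(S3) == 1:
        # Flow (22): a single minority sample x0 -- extrapolate past x0,
        # away from its nearest majority neighbour x_clo.
        x0 = S3[0]
        x_clo = S1[np.argmin(np.linalg.norm(S1 - x0, axis=1))]
        x_add = x0 + rng.random() * (x0 - x_clo)  # on the extension near x0
        S3.append(x_add)
    while len(S3) < len(S1):
        base = np.asarray(S3)
        for xi in list(S3):
            if len(S3) >= len(S1):
                break
            d = np.linalg.norm(base - xi, axis=1)
            k = min(K, len(base) - 1)
            nbrs = base[np.argsort(d)[1:k + 1]]   # K nearest, excluding xi itself
            # Flow (24): L draws with replacement from the K neighbours.
            for x_sel in nbrs[rng.integers(0, k, size=L)]:
                x_new = xi + rng.random() * (x_sel - xi)  # point on the segment
                S3.append(x_new)
                if len(S3) >= len(S1):
                    break
    return np.asarray(S3)
```

With two minority samples and ten majority samples, the loop interpolates new minority points until the two classes are balanced.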
The third step: process the data set V with a sparse self-encoder. The sparse self-encoder is a three-layer forward neural network whose structure is shown in Fig. 1. The flows are as follows:
(31) For the data set V, define the cost function of the sparse self-encoder:
J(w, b) = (1/N) Σ_{i=1..N} (1/2)‖z_i − v_i‖² + (λ/2) Σ_l Σ_i Σ_j (w_ij^(l))² + β Σ_{j=1..s_2} KL(ρ‖ρ̂_j)   (3-1)
where
KL(ρ‖ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j)), ρ̂_j = (1/N) Σ_{i=1..N} a_j^(2)(v_i)   (3-2)
In formula (3-1), w is the weight vector of the self-encoder, with w_ij^(l) denoting the weight between the i-th neuron of layer l and the j-th neuron of layer (l+1); b is the bias vector of the self-encoder, with b_j^(l) the weight between the bias unit of layer l (i.e. the neuron labeled "+1" in Fig. 1) and the j-th neuron of layer (l+1). N denotes the number of elements in V; v_i ∈ R^n is the KPI information of the i-th element in V; z_i ∈ R^n denotes the output of the output-layer neurons for the i-th input; the operator ‖·‖ denotes the 2-norm of a vector. The specific values of λ, s_l, β and ρ are determined by the operator: λ ∈ R is a regularization coefficient used to shrink the weights and thus reduce overfitting; s_l denotes the number of neurons in layer l; β ∈ R is the weight of the penalty term Σ_j KL(ρ‖ρ̂_j) in the cost function; ρ ∈ (0, 1) is the sparsity parameter, representing the desired activation degree of each neuron in the hidden layer. As shown in formula (3-2), ρ̂_j denotes the average activation of the j-th neuron in the hidden layer over all inputs, and a_j^(2)(v_i) denotes the output of the j-th neuron of the hidden layer of the self-encoder when the input is v_i. After J(w, b) is defined, go to flow (32);
(32) Find the minimum of formula (3-1) using the back-propagation algorithm to obtain the trained weight vector and bias vector of the self-encoder, denoted w_opt and b_opt respectively. Let U = ∅ and go to flow (33);
(33) Input the KPI information v_i of each element in V into the trained self-encoder and, from the w_opt and b_opt obtained in flow (32), obtain the output of the hidden layer, denoted u_i. Merge the element (u_i, y_i) into the set U, i.e. U = U ∪ {(u_i, y_i)};
(34) Repeat flow (33), continually updating U, until all elements in V have been traversed, then proceed to the fourth step.
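The cost function of formulas (3-1) and (3-2) — reconstruction error, weight decay, and the KL sparsity penalty — can be sketched as follows. This is a minimal NumPy illustration under the assumption of sigmoid activations; the minimization by back-propagation is not shown, and the function and parameter names are ours, not the patent's.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sparse_ae_cost(W1, b1, W2, b2, V, lam=1e-4, beta=3.0, rho=0.05):
    """Cost J(w, b) of a three-layer sparse autoencoder on KPI matrix V (N, n):
    mean reconstruction error + (lam/2) * weight decay
    + beta * sum_j KL(rho || rho_hat_j).
    Returns (cost, hidden activations A1)."""
    N = V.shape[0]
    A1 = sigmoid(V @ W1 + b1)          # hidden-layer outputs a_j^(2)(v_i)
    Z = sigmoid(A1 @ W2 + b2)          # reconstructions z_i
    recon = 0.5 * np.sum((Z - V) ** 2) / N
    decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    rho_hat = A1.mean(axis=0)          # average activation per hidden neuron
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + decay + beta * kl, A1
```

In practice one would minimize this cost by back-propagation (as in flow (32)) to obtain w_opt and b_opt; here only the forward evaluation of J(w, b) is shown.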
The fourth step: take U as the training data set and obtain an interruption detection model using Logistic Regression (LR), by the following flows:
(41) Determine the log-likelihood function of LR from U:
L(h, c) = Σ_{i=1..M} [ y_i (h·u_i + c) − log(1 + exp(h·u_i + c)) ]   (4-1)
where M denotes the number of elements in U; y_i is the label, with value 1 or 0, corresponding to each piece of KPI information in U; h is a weight vector; c is a bias, c ∈ R; u_i is the KPI information of the i-th element in U; the operation "·" denotes the inner product of two vectors (the same below). After L(h, c) is obtained, go to flow (42);
(42) Find the maximum of formula (4-1) using the gradient descent method to obtain the weight vector and bias, denoted h_opt and c_opt, and proceed to the fifth step.
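Maximizing the log-likelihood of formula (4-1) can be sketched with plain gradient ascent (equivalently, gradient descent on −L). This is an illustrative sketch; the learning rate, iteration count, and function name are assumptions, not values from the patent.

```python
import numpy as np

def fit_lr(U, y, lr=0.1, iters=2000):
    """Maximize L(h, c) = sum_i [ y_i (h.u_i + c) - log(1 + exp(h.u_i + c)) ]
    by gradient ascent on the mean log-likelihood.
    U: (M, d) hidden-layer features; y: (M,) labels in {0, 1}.
    Returns (h_opt, c_opt)."""
    M, d = U.shape
    h, c = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(U @ h + c)))  # P(y = 1 | u_i)
        g = y - p                                # gradient of L w.r.t. (h.u + c)
        h += lr * U.T @ g / M
        c += lr * g.mean()
    return h, c
```

On linearly separable one-dimensional features, the fitted model places its decision boundary between the two classes.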
The fifth step: perform interruption detection on the KPI information x_i reported in real time by users in the network, by the following flows:
(51) Input x_i into the self-encoder trained in the third step and, from w_opt and b_opt, obtain the output u_i of the hidden layer of the self-encoder; go to flow (52);
(52) From the h_opt and c_opt obtained in the fourth step, compute the following two probability values:
P(y = 1 | u_i) = 1 / (1 + exp(−(h_opt·u_i + c_opt))), P(y = 0 | u_i) = 1 − P(y = 1 | u_i)
If P(y = 1 | u_i) < P(y = 0 | u_i), the base station is judged to be interrupted; otherwise the base station is normal.
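The fifth step's decision rule can be sketched end to end: map the reported KPI vector through the trained hidden layer, then compare the two posterior probabilities. The sigmoid hidden layer and the function name are assumptions for illustration; w_opt, b_opt, h_opt, c_opt stand for the trained parameters of the previous steps.

```python
import numpy as np

def detect_outage(x, W1, b1, h_opt, c_opt):
    """Flows (51)-(52): hidden-layer output u, then LR posterior.
    Returns True if the base station is judged to be in outage
    (P(y=1|u) < P(y=0|u)), False if normal."""
    u = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))          # hidden-layer output u_i
    p1 = 1.0 / (1.0 + np.exp(-(u @ h_opt + c_opt)))   # P(y = 1 | u_i)
    return bool(p1 < 1.0 - p1)                        # outage iff P(y=1) < P(y=0)
```

For example, with zero encoder weights the hidden output is u = (0.5, 0.5); the sign of h_opt·u + c_opt then decides between outage and normal.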
Claims (2)
1. A wireless network interruption detection method based on a sparse self-encoder, characterized by comprising the following steps:
(1) Collecting network key performance indicators and establishing a data set S, specifically comprising the following steps:
(11) Acquiring the KPI information reported by users within a time T in the wireless network;
(12) Saving the KPI information as a data set S, expressed as follows:
S = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_m, y_m)}
wherein m is the number of elements in S; in the i-th element (x_i, y_i) of S, x_i ∈ R^n denotes the n-dimensional KPI information reported by a user at a certain time, R^n is the n-dimensional vector space, and i = 1, 2, ..., m;
y_i is the label of x_i, indicating the state of the base station, with value 1 or 0: y_i = 1 denotes that the base station is in a normal state, and y_i = 0 denotes that the base station is in an interruption state;
(13) Counting the numbers of elements labeled 1 and 0 in the data set S, denoted N_1 and N_0 respectively; if N_0 = 0, executing step (11), otherwise executing step (2);
(2) Processing the data set S based on a minority-class oversampling algorithm, including partitioning S into a subset S_1 and a subset S_0 according to the sample labels in S, computing the Euclidean distance between the KPI information x_i of each element in S_1 and x_0, and generating synthetic samples to obtain a data set V, V = S_3 ∪ S_1, specifically comprising the following steps:
(21) Partitioning S into a subset S_1 and a subset S_0 according to the labels of the samples in S, wherein the elements of S_1 are the samples labeled 1 and the elements of S_0 are the samples labeled 0;
(22) If N_0 = 1, the subset S_0 has only one element, denoted (x_0, 0); traversing S_1 and, from the KPI information x_i of each element, computing the Euclidean distance between x_i and x_0; denoting by x_clo the KPI information in S_1 with the smallest Euclidean distance to x_0; then, on the line through x_clo and x_0, randomly selecting a point on the extension on the side near x_0, denoted x_add; merging the element (x_add, 0) into S_0, letting S_3 = S_0 ∪ {(x_add, 0)}, and going to step (23);
if N_0 ≥ 2, letting S_3 = S_0 and going to step (23);
(23) For each piece of KPI information x_i in S_3, selecting the K pieces of KPI information closest to it, wherein 1 ≤ K ≤ |S_3| − 1 and |S_3| is the number of elements in S_3; the specific value of K is determined by the operator; going to step (24);
(24) Randomly selecting, with replacement, L pieces of KPI information from the K pieces of KPI information, denoting one selected piece x_sel; randomly selecting a point on the line segment between x_sel and the x_i of step (23), denoted x_new; merging (x_new, 0) into S_3, i.e. S_3 = S_3 ∪ {(x_new, 0)}, wherein 1 ≤ L ≤ K and the specific value of L is determined by the operator;
(25) Repeating steps (23) and (24) to continually update the set S_3 until |S_3| = |S_1|, and letting V = S_3 ∪ S_1;
(3) Processing the data set V with a sparse self-encoder, including defining the cost function of the sparse self-encoder, finding its minimum by a back-propagation algorithm, and training and updating to obtain a set U, specifically comprising the following steps:
(31) For the data set V, defining the cost function of the sparse self-encoder:
J(w, b) = (1/N) Σ_{i=1..N} (1/2)‖z_i − v_i‖² + (λ/2) Σ_l Σ_i Σ_j (w_ij^(l))² + β Σ_{j=1..s_2} KL(ρ‖ρ̂_j)
wherein KL(ρ‖ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j)) and ρ̂_j = (1/N) Σ_{i=1..N} a_j^(2)(v_i);
w is the weight vector of the self-encoder, with w_ij^(l) denoting the weight between the i-th neuron of layer l and the j-th neuron of layer (l+1); b is the bias vector of the self-encoder, with b_j^(l) the weight between the bias unit of layer l and the j-th neuron of layer (l+1); N denotes the number of elements in V; v_i ∈ R^n is the KPI information of the i-th element in V; z_i ∈ R^n denotes the output of the output-layer neurons for the i-th input; the operator ‖·‖ denotes the 2-norm of a vector; the specific values of λ, s_l, β and ρ are determined by the operator: λ ∈ R is a regularization coefficient used to shrink the weights and thus reduce overfitting; s_l denotes the number of neurons in layer l; β ∈ R is the weight of the penalty term Σ_j KL(ρ‖ρ̂_j) in the cost function; ρ ∈ (0, 1) is the sparsity parameter, representing the desired activation degree of each neuron in the hidden layer; ρ̂_j denotes the average activation of the j-th neuron in the hidden layer over all inputs; a_j^(2)(v_i) denotes the output of the j-th neuron of the hidden layer of the self-encoder when the input is v_i; after J(w, b) is defined, going to step (32);
(32) Finding the minimum of the above cost function using a back-propagation algorithm to obtain the trained weight vector and bias vector of the self-encoder, denoted w_opt and b_opt respectively; letting U = ∅ and going to step (33);
(33) Inputting the KPI information v_i of each element in V into the trained self-encoder and, from the w_opt and b_opt obtained in step (32), obtaining the output of the hidden layer, denoted u_i; merging the element (u_i, y_i) into the set U, i.e. U = U ∪ {(u_i, y_i)};
(34) Repeating step (33) and continually updating U until all elements in V have been traversed;
(4) Taking U as the training data set and obtaining an interruption detection model using logistic regression, specifically comprising the following steps:
(41) Determining the log-likelihood function of LR (logistic regression) from the set U, expressed as follows:
L(h, c) = Σ_{i=1..M} [ y_i (h·u_i + c) − log(1 + exp(h·u_i + c)) ]
wherein M denotes the number of elements in U; y_i is the label, with value 1 or 0, corresponding to each piece of KPI information in U; h is a weight vector; c is a bias, c ∈ R; u_i is the KPI information of the i-th element in U; the operation "·" denotes the inner product of two vectors; after L(h, c) is obtained, going to step (42);
(42) Finding the maximum of the log-likelihood function of LR using the gradient descent method to obtain the weight vector and bias, denoted h_opt and c_opt, and performing step (5);
(5) Performing interruption detection on the KPI information x_i reported in real time by users in the wireless network, specifically comprising the following steps:
(51) Inputting x_i into the self-encoder trained in step (3) and, from the obtained w_opt and b_opt, obtaining the output u_i of the hidden layer of the self-encoder; going to step (52);
(52) From the h_opt and c_opt obtained in step (4), computing the following two probability values:
P(y = 1 | u_i) = 1 / (1 + exp(−(h_opt·u_i + c_opt))), P(y = 0 | u_i) = 1 − P(y = 1 | u_i)
if P(y = 1 | u_i) < P(y = 0 | u_i), the base station is judged to be interrupted; otherwise the base station is normal.
2. The sparse self-encoder based wireless network outage detection method of claim 1, wherein the sparse self-encoder is a three-layer forward neural network comprising an input layer, a hidden layer and an output layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911214239.8A CN110972174B (en) | 2019-12-02 | 2019-12-02 | Wireless network interruption detection method based on sparse self-encoder |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911214239.8A CN110972174B (en) | 2019-12-02 | 2019-12-02 | Wireless network interruption detection method based on sparse self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110972174A CN110972174A (en) | 2020-04-07 |
CN110972174B true CN110972174B (en) | 2022-12-30 |
Family
ID=70032620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911214239.8A Active CN110972174B (en) | 2019-12-02 | 2019-12-02 | Wireless network interruption detection method based on sparse self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110972174B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113259972B (en) * | 2021-06-08 | 2021-09-28 | 网络通信与安全紫金山实验室 | Data warehouse construction method, system, equipment and medium based on wireless communication network |
CN114501525B (en) * | 2022-01-28 | 2024-02-02 | 东南大学 | Wireless network interruption detection method based on condition generation countermeasure network |
CN114615685B (en) * | 2022-03-07 | 2024-02-02 | 东南大学 | Wireless network interruption detection method based on generation of countermeasures network up-sampling |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109816002A (en) * | 2019-01-11 | 2019-05-28 | Guangdong University of Technology | Single sparse autoencoder small-target detection method based on feature self-migration |
CN109902564A (en) * | 2019-01-17 | 2019-06-18 | Hangzhou Dianzi University | Anomaly detection method based on a structural-similarity sparse autoencoder network |
CN110139315A (en) * | 2019-04-26 | 2019-08-16 | Southeast University | Wireless network fault detection method based on self-taught learning |
- 2019-12-02: CN application CN201911214239.8A filed (granted as CN110972174B); legal status: Active
Non-Patent Citations (2)
Title |
---|
A sparse autoencoder-based approach for cell outage detection in wireless networks; Ziang MA et al.; Science China (Information Sciences); 2021-08-31; full text *
Anomalous Communications Detection in IoT Networks Using Sparse Autoencoders; Mustafizur R. Shahid; 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA); 2019-09-28; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110972174A (en) | 2020-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110972174B (en) | Wireless network interruption detection method based on sparse self-encoder | |
CN104469833B (en) | Heterogeneous network operation and management method based on user perception | |
CN112733417B (en) | Abnormal load data detection and correction method and system based on model optimization | |
CN104050242A (en) | Feature selection and classification method and device based on maximum information coefficient | |
CN101271572A (en) | Image segmentation method based on immunity clone selection clustering | |
CN110062410B (en) | Cell interruption detection positioning method based on self-adaptive resonance theory | |
CN109991591B (en) | Positioning method and device based on deep learning, computer equipment and storage medium | |
CN104572985A (en) | Industrial data sample screening method based on complex network community discovery | |
CN112512069A (en) | Network intelligent optimization method and device based on channel beam pattern | |
CN105425583A (en) | Control method for a penicillin production process based on co-training locally weighted partial least squares (LWPLS) | |
CN116340524B (en) | Method for supplementing small sample temporal knowledge graph based on relational adaptive network | |
CN111882119A (en) | Battery SOH prediction optimization method based on SA-BP neural network | |
CN105844334A (en) | Radial basis function neural network-based temperature interpolation algorithm | |
CN111405605B (en) | Wireless network interruption detection method based on self-organizing mapping | |
CN114298413B (en) | Hydroelectric generating set runout trend prediction method | |
CN115640893A (en) | Industrial data prediction method and device of industrial chain and storage medium | |
CN115081551A (en) | RVM line loss model building method and system based on K-Means clustering and optimization | |
CN115169426A (en) | Anomaly detection method and system based on similarity learning fusion model | |
Li et al. | Smoothed deep neural networks for marine sensor data prediction | |
CN114615685B (en) | Wireless network interruption detection method based on generative adversarial network upsampling | |
CN114501525B (en) | Wireless network interruption detection method based on conditional generative adversarial network | |
CN113762485B (en) | Attention mechanism-based multi-dataset joint prediction method | |
Zhao et al. | MAGRO: Inferring Root Causes of Poor Wireless Network Performance Using Knowledge Graph and Heterogeneous Graph Neural Networks | |
CN117033916B (en) | Power theft detection method based on neural network | |
Sawat et al. | Power Theft and Energy Fraud Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||