CN114501525B - Wireless network outage detection method based on conditional generative adversarial network - Google Patents

Info

Publication number
CN114501525B
Authority
CN
China
Prior art keywords
samples
sample
label
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210108134.XA
Other languages
Chinese (zh)
Other versions
CN114501525A (en)
Inventor
潘志文
葛旭
刘楠
尤肖虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Network Communication and Security Zijinshan Laboratory
Original Assignee
Southeast University
Network Communication and Security Zijinshan Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University and Network Communication and Security Zijinshan Laboratory
Priority to CN202210108134.XA
Publication of CN114501525A
Application granted
Publication of CN114501525B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/08 Testing, supervising or monitoring using real traffic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a wireless network outage detection method based on a conditional generative adversarial network, which can accurately detect outages and correctly classify different outage types even when the data set is imbalanced and the data classes overlap. The method specifically comprises the following steps. The first step: collect network key performance indicators (KPIs) and form a data set S. The second step: train the improved conditional generative adversarial network using the data set S; the network consists of a generator G and a discriminator D, both of which are fully connected neural networks. The third step: gather the KPI information reported by users in the wireless network within time T_2 and store it as a data set H. The fourth step: synthesize outage data with the model obtained in the second step and balance the data set H. The fifth step: compute an inter-class overlap index for each sample in the calibrated training set V. The sixth step: train an artificial neural network (ANN) with the training set V and the inter-class overlap index set O to obtain an outage detection model. The seventh step: perform outage detection on the KPI information x (x ∈ R^n) reported in real time by users in the network.

Description

Wireless network outage detection method based on conditional generative adversarial network
Technical Field
The invention belongs to the technical field of outage detection in wireless networks, and particularly relates to a wireless network outage detection method based on a conditional generative adversarial network.
Background
As one of the key technologies of wireless network self-healing, outage detection is of great significance for improving wireless network operation and maintenance efficiency and reducing operation and maintenance costs. However, wireless network outages are low-probability events, so the amount of outage characteristic data that can be collected is significantly smaller than the amount of normal data, resulting in a severely imbalanced data set. Furthermore, when more than one type of outage exists in the network, there tends to be severe inter-class overlap between the different types of outage data. Both factors degrade wireless network outage detection performance. Therefore, how to improve outage detection performance in the presence of data imbalance and inter-class overlap is an important issue.
Disclosure of Invention
Technical problem: to solve the above problems, the invention discloses a wireless network outage detection method based on a conditional generative adversarial network that can accurately detect outages and correctly classify different outage types even when the data set is imbalanced and the data classes overlap.
The technical scheme is as follows: the wireless network outage detection method based on a conditional generative adversarial network according to the invention comprises the following steps.
The first step: collect the network key performance indicators (KPIs) and form a data set S.
The second step: train the improved conditional generative adversarial network CGAN-W using the data set S; CGAN-W consists of a generator G and a discriminator D, both of which are fully connected neural networks.
The third step: gather the KPI information reported by users in the wireless network within time T_2 and store it as a data set H = {(h_w, y_w)}, w = 1, 2, …, N_H, where N_H is the total number of samples in H and h_w ∈ R^n is the n-dimensional KPI information reported by a user; the specific value of n can be determined by the operator according to the number of users and the network operating conditions. y_w is the label of h_w, taking the values 0, 1, 2, 3. After the data set H is acquired, go to the fourth step.
The fourth step: synthesize outage data with the CGAN-W model obtained in the second step and balance the data set H.
The fifth step: compute the inter-class overlap index of each sample in the calibrated training set V.
The sixth step: train an artificial neural network (ANN) with the training set V and the inter-class overlap index set O to obtain an outage detection model.
The seventh step: perform outage detection on the KPI information x (x ∈ R^n) reported in real time by users in the network.
Wherein:
The first step, collecting the network key performance indicators (KPIs) and forming a data set S, is specifically:
Step 1.1: acquire the KPI information reported by users in the wireless network within time T_1.
Step 1.2: store the reported KPI information as a data set S = {(x_i, y_i)}, i = 1, 2, …, N_S, where N_S is the number of elements in S. In the i-th element (x_i, y_i), x_i ∈ R^n is the n-dimensional KPI information reported by a user at a certain moment, specifically comprising the reference signal received power and the signal-to-interference-plus-noise ratio of the serving cell and the neighbour cells; the value of n can be determined by the operator according to the number of users and the network operating conditions. y_i is the label of x_i, indicating the state of the base station serving the user, with values 0, 1, 2, 3: y_i = 0 indicates that the base station is in the normal state with full communication capability; y_i = 1 indicates a light outage state with slightly degraded communication capability; y_i = 2 indicates a medium outage state with severely degraded communication capability, which may cause communication failures; y_i = 3 indicates a severe outage state in which the base station completely loses communication capability, triggering a large number of link-failure and user-handover events. S is thus obtained.
The second step specifically comprises the following steps:
Step 2.1: normalize the data in the data set S according to formula (1) so that the resulting data are distributed between -1 and 1:

x̂_i^(d) = 2 · (x_i^(d) − min^(d)) / (max^(d) − min^(d)) − 1    (1)

where x_i^(d) denotes the value of the i-th data sample x_i in the d-th dimension, d = 1, 2, …, n, with n the feature dimension of sample x_i; min^(d) and max^(d) are the minimum and maximum of the d-th dimension over S; and x̂_i^(d) denotes the normalized sample. After the normalized data set S is obtained, go to step 2.2.
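The normalization of step 2.1 can be sketched in Python; this assumes formula (1) is the standard per-dimension min-max mapping onto [-1, 1], which matches the stated target range (the toy RSRP/SINR values are illustrative only):

```python
import numpy as np

def scale_to_pm1(S):
    """Map each feature dimension of S onto [-1, 1] per formula (1):
    x_hat = 2 * (x - min) / (max - min) - 1, with min/max taken per dimension."""
    S = np.asarray(S, dtype=float)
    lo, hi = S.min(axis=0), S.max(axis=0)
    return 2.0 * (S - lo) / (hi - lo) - 1.0

S = np.array([[-120.0, 3.0], [-80.0, 15.0], [-100.0, 9.0]])  # toy RSRP/SINR rows
S_norm = scale_to_pm1(S)
```

After scaling, every column spans exactly [-1, 1], which matches the tanh output range of the generator used later.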
Step 2.2: divide S into four subsets according to the sample labels: S_0, S_1, S_2, S_3. The elements of subset S_0 are the samples with label 0 (y_i = 0), representing normal data; the elements of subsets S_1, S_2, S_3 are the samples with labels 1, 2 and 3 respectively, representing the outage classes. Count the total number of samples in each subset S_k and denote it N_k, k = 1, 2, 3. After the outage subsets S_1, S_2, S_3 are obtained, go to step 2.3.
Step 2.3: define the loss functions of D and G in CGAN-W as shown in formula (2) and formula (3):

L_D = (1/m) Σ_{j=1}^{m} [ D(x_j, y_j) − D(G(z_j, y_j), y_j) ]    (2)

L_G = −(1/m) Σ_{j=1}^{m} D(G(z_j, y_j), y_j)    (3)

where L_D denotes the discriminator loss and L_G the generator loss; m is the total number of samples used to train CGAN-W; z_j ∈ R^l, j = 1, 2, …, m, is an l-dimensional random noise sample drawn from the standard normal distribution; x_j ∈ R^n is the n-dimensional KPI information reported by a user, and y_j is the sample label. The outage subsets S_1, S_2, S_3 are used to train the CGAN-W model to learn the features of the outage-class data, so the samples satisfy (x_j, y_j) ∈ S_k. G(z_j, y_j) is the output of the generator's output-layer neurons for input (z_j, y_j), i.e. the synthesized sample; D(x_j, y_j) is the output of the discriminator's output-layer neurons for input (x_j, y_j); D(G(z_j, y_j), y_j) is the discriminator output for input (G(z_j, y_j), y_j). After the loss functions are defined, go to step 2.4.
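The discriminator and generator objectives of step 2.3 are Wasserstein-style losses (suggested by the weight clipping and the -W suffix): simple sample means of discriminator scores. A minimal numpy sketch, where the callable `D` is a toy stand-in for the fully connected discriminator (an assumption for illustration, not the patent's network):

```python
import numpy as np

def discriminator_loss(D, real_x, fake_x, labels):
    """Formula (2): mean over the batch of D(x_j, y_j) - D(G(z_j, y_j), y_j).
    The discriminator is trained to MAXIMIZE this quantity."""
    real_scores = np.array([D(x, y) for x, y in zip(real_x, labels)])
    fake_scores = np.array([D(x, y) for x, y in zip(fake_x, labels)])
    return float(np.mean(real_scores - fake_scores))

def generator_loss(D, fake_x, labels):
    """Formula (3): minus the mean discriminator score of the synthetic
    samples; the generator MINIMIZES this."""
    fake_scores = np.array([D(x, y) for x, y in zip(fake_x, labels)])
    return float(-np.mean(fake_scores))

# Toy linear "critic" for illustration only
D_toy = lambda x, y: float(np.sum(x) + y)
```

Maximizing (2) pushes real-sample scores up and synthetic-sample scores down, while minimizing (3) pushes the generator toward samples the discriminator scores highly.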
Step 2.4: set the parameters required for the subsequent model training, specifically: the learning rate α; the clipping coefficient c, which limits the range of the discriminator weights after each update; the batch size m, i.e. the number of samples drawn in each training round; the discriminator training count n_dis, i.e. the number of times D is trained for each training of G; the maximum number of model iterations iteration; the discriminator iteration counter t, t < n_dis; and the model iteration counter iter, iter < iteration. The values of α, c, m, n_dis and iteration are determined by the operator. After the setting is completed, go to step 2.5.
Step 2.5: randomly initialize the generator G and discriminator D weight vectors W_G, W_D and bias vectors b_G, b_D in the CGAN-W model; initialize the model iteration counter iter = 0 and the discriminator iteration counter t = 0. After model initialization is completed, go to step 2.6.
Step 2.6: randomly sample m samples from the outage subset S_k to obtain the real sample set {(x_1, y_1), (x_2, y_2), …, (x_m, y_m)}, where the set of sample KPI information is denoted real = {x_1, x_2, …, x_m} and the set of corresponding sample labels is denoted label = {y_1, y_2, …, y_m}. Sample m noise samples from the l-dimensional random noise z (z ∈ R^l) to form the set noise = {z_1, z_2, …, z_m}. Combine the set noise with the label set label, i.e. attach the label y_j ∈ label to each noise sample z_j to obtain the element (z_j, y_j); applying this operation to all m sampled noise samples {z_1, …, z_m} yields the set {(z_1, y_1), (z_2, y_2), …, (z_m, y_m)}. After sampling is completed, go to step 2.7.
Step 2.7: feed the set {(z_1, y_1), (z_2, y_2), …, (z_m, y_m)} into the generator, which outputs the synthetic sample set {x̃_1, x̃_2, …, x̃_m}, where x̃_j = G(z_j, y_j). Combine the synthetic sample set with the label set label from step 2.6, i.e. attach the label y_j ∈ label to each synthetic sample x̃_j to obtain the element (x̃_j, y_j); applying this operation to all m generated samples yields the synthetic sample set {(x̃_1, y_1), (x̃_2, y_2), …, (x̃_m, y_m)}. After the synthetic sample set is obtained, go to step 2.8.
Step 2.8: feed the real sample set {(x_1, y_1), …, (x_m, y_m)} and the synthetic sample set {(x̃_1, y_1), …, (x̃_m, y_m)} into the discriminator, maximize (2) with a mini-batch stochastic gradient descent algorithm, and update the discriminator parameters W_D, b_D. Set the discriminator iteration counter t = t + 1 and go to step 2.9.
Step 2.9: clip the updated discriminator weight coefficients W_D to values between −c and c, i.e. |W_D| ≤ c, where c is the clipping coefficient, whose specific value can be determined by the operator.
Step 2.10: repeat steps 2.6, 2.7, 2.8 and 2.9 until the discriminator iteration counter satisfies t > n_dis, then go to step 2.11.
Step 2.11: again sample m noise samples from the l-dimensional random noise z (z ∈ R^l) obeying the standard normal distribution to form the set noise′ = {z′_1, z′_2, …, z′_m}; combine noise′ with the label set label = {y_1, …, y_m} obtained in step 2.6, i.e. attach the label y_j ∈ label to each noise sample z′_j to obtain the element (z′_j, y_j); applying this operation to all m sampled noise samples {z′_1, …, z′_m} yields the set {(z′_1, y_1), (z′_2, y_2), …, (z′_m, y_m)}, which is fed into the generator. Minimize (3) with a mini-batch stochastic gradient descent algorithm and update the generator parameters W_G, b_G; set the model iteration counter iter = iter + 1 and go to step 2.12.
Step 2.12: if iter > iteration, end the training, record the generator G and discriminator D weight vectors W_G_opt, W_D_opt and bias vectors b_G_opt, b_D_opt, and go to step 2.13; otherwise go to step 2.5 and start a new training round.
Step 2.13: repeat steps 2.3-2.12 until all three outage subsets S_1, S_2, S_3 have been trained on, yielding a CGAN-W model that has learned the outage characteristics.
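The control flow of steps 2.5-2.12 alternates several discriminator updates (each followed by weight clipping) with one generator update. The skeleton below is a simplified reading of those counters, with placeholder callables rather than the patent's networks; under a literal reading of "until t > n_dis", the inner loop runs n_dis + 1 times:

```python
def train_cgan_w(update_D, update_G, clip, n_dis=5, iteration=20000):
    """Alternating schedule of steps 2.6-2.12: per generator update, the
    discriminator is updated until its counter t exceeds n_dis, clipping
    its weights to [-c, c] after every update."""
    it = 0
    while it <= iteration:          # step 2.12: stop once iter > iteration
        t = 0
        while t <= n_dis:           # steps 2.6-2.10
            update_D()              # maximize formula (2) on one mini-batch
            clip()                  # step 2.9: enforce |W_D| <= c
            t += 1
        update_G()                  # step 2.11: minimize formula (3)
        it += 1
    return it
```

With n_dis = 3 and iteration = 2, for example, this performs 3 generator updates and 12 discriminator updates.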
The fourth step is specifically as follows:
Step 4.1: sample n_gen noise samples from the l-dimensional random noise z obeying the standard normal distribution to form the set {z_1, z_2, …, z_{n_gen}}, where n_gen > 0 is the number of minority-class samples to be synthesized. Because the number of synthetic samples added affects the imbalance ratio of the final training set, and hence the classification performance, the optimal value of n_gen is found by grid search. Sample the label information to obtain the label set label_gen = {y_1, y_2, …, y_{n_gen}}. Only minority-class data are synthesized in order to balance the data set, so the labels take values only in the set {1, 2, 3}, i.e. y_r ∈ {1, 2, 3}, r = 1, 2, …, n_gen. Combine the noise set and the label set, i.e. attach the label y_r ∈ label_gen to each noise sample z_r to obtain the element (z_r, y_r); applying this operation to all samples yields the set {(z_1, y_1), …, (z_{n_gen}, y_{n_gen})}. Go to step 4.2.
Step 4.2: feed the set {(z_1, y_1), …, (z_{n_gen}, y_{n_gen})} into the generator and compute its output from the W_G_opt, b_G_opt obtained in the second step, denoted {x̃_1, x̃_2, …, x̃_{n_gen}}, where x̃_r = G(z_r, y_r). Attach the label y_r ∈ label_gen to each generated sample x̃_r to obtain the element (x̃_r, y_r); applying this operation to all generated samples yields the generated data set U = {(x̃_1, y_1), …, (x̃_{n_gen}, y_{n_gen})}. After the generated data set U is obtained, go to step 4.3.
Step 4.3: merge the generated data set U with the original data set H to obtain the calibrated training set V = U ∪ H.
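The fourth step can be sketched as follows; `G` is a stand-in callable for the trained CGAN-W generator, and the noise dimension default l = 100 follows the embodiment (both assumptions for illustration):

```python
import numpy as np

def balance_dataset(H_X, H_y, G, n_gen, l=100, seed=0):
    """Steps 4.1-4.3: draw n_gen standard-normal noise vectors, pair each with
    a minority-class label from {1, 2, 3}, generate synthetic samples U with G,
    and return the calibrated training set V = U union H."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_gen, l))
    y_gen = rng.integers(1, 4, size=n_gen)        # labels only in {1, 2, 3}
    U = np.array([G(z, y) for z, y in zip(Z, y_gen)])
    V_X = np.vstack([U, H_X])
    V_y = np.concatenate([y_gen, H_y])
    return V_X, V_y
```

In practice n_gen would be swept by grid search, as the text notes, since it fixes the final imbalance ratio of V.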
The fifth step is specifically as follows:
Step 5.1: for each piece of KPI information v_e in V, e = 1, 2, …, N_V, where N_V is the total number of samples in V and v_e ∈ R^n, select the q samples closest to v_e in Euclidean distance to form the sample set Neigh = {v_1, v_2, …, v_q}. Go to step 5.2.
Step 5.2: count the samples in the set Neigh whose label is the same as that of v_e, denoted NN, 0 ≤ NN ≤ q. Go to step 5.3.
Step 5.3: if NN > 0, the inter-class overlap index of sample v_e is o_e = NN/q; if NN = 0, o_e = β, where β is an adjustment coefficient used to adjust the weight of sample v_e, whose specific value can be determined by the operator. Add the computed inter-class overlap index o_e to the set O, i.e. O = O ∪ {o_e}.
Step 5.4: repeat steps 5.1, 5.2 and 5.3 until all samples in V have been traversed.
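The fifth step amounts to a q-nearest-neighbour scan over the training set; a small sketch (brute-force distance computation is an implementation choice here, not the patent's):

```python
import numpy as np

def overlap_indices(X, y, q=5, beta=0.05):
    """Steps 5.1-5.4: for each sample, o_e = NN/q where NN is the number of
    its q nearest neighbours (Euclidean distance) sharing its label, or
    o_e = beta when NN = 0."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    O = np.empty(len(X))
    for e in range(len(X)):
        d = np.linalg.norm(X - X[e], axis=1)
        d[e] = np.inf                        # a sample is not its own neighbour
        neigh = np.argsort(d)[:q]
        NN = int(np.sum(y[neigh] == y[e]))
        O[e] = NN / q if NN > 0 else beta
    return O
```

Samples sitting in a heavily overlapped region (few or no same-label neighbours) thus receive a small index, so they contribute less to the weighted loss of the sixth step.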
The sixth step is specifically as follows:
Step 6.1: determine the ANN loss function from V and O, as shown in formula (4):

L_ANN = −(1/N_V) Σ_{e=1}^{N_V} o_e Σ_{g=1}^{4} y_eg · log( exp(θ_g^T v_e) / Σ_{j=1}^{4} exp(θ_j^T v_e) )    (4)

where N_V is the total number of samples in the training set V; o_e, e = 1, 2, …, N_V, is the inter-class overlap index of sample v_e; y_eg is an indicator that takes the value 1 if the label of sample v_e equals g and 0 otherwise; θ_j denotes the weight matrix and bias vector corresponding to the j-th output-layer neuron and θ_g those corresponding to the g-th output-layer neuron; and the superscript T denotes transposition. After the loss function is defined, go to step 6.2.
Step 6.2: minimize formula (4) with a gradient descent algorithm to obtain the ANN weight vector W_ANN_opt and bias b_ANN_opt.
The seventh step is specifically as follows:
Step 7.1: feed x into the ANN model obtained in the sixth step; according to W_ANN_opt and b_ANN_opt, the output layer outputs pred, pred ∈ R^4.
Step 7.2: compute the final predicted label ŷ = argmax(pred_1, pred_2, pred_3, pred_4), where argmax returns the index of the maximum component of the vector (pred_1, pred_2, pred_3, pred_4), with indices starting from 0. If ŷ ≠ 0, x is judged to be an outage sample and the specific outage type is determined; otherwise it is judged to be a normal sample.
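The decision rule of step 7.2 is a plain argmax over the four-dimensional output:

```python
import numpy as np

def detect_outage(pred):
    """Step 7.2: predicted label = argmax of pred (indices from 0);
    any non-zero label marks an outage sample of that severity."""
    label = int(np.argmax(pred))
    return label, label != 0

label, is_outage = detect_outage([0.05, 0.10, 0.75, 0.10])
```

Here label 2 (medium outage) wins, so the sample is flagged as an outage of that type.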
Beneficial effects: with the proposed outage detection method based on a conditional generative adversarial network, when the data set suffers from imbalance and inter-class overlap, minority-class data can be up-sampled through the CGAN-W model while the inter-class overlap index is computed and the ANN is trained with weighted samples, thereby alleviating the data imbalance and inter-class overlap problems and improving overall outage detection performance. Compared with traditional classification methods and data up-sampling methods, the method improves outage detection performance significantly.
Drawings
FIG. 1 is a schematic diagram of the CGAN-W model.
Fig. 2 is a schematic diagram of an ANN structure.
Detailed Description
The invention relates to a wireless network outage detection method based on a conditional generative adversarial network that collects two types of KPI information: reference signal received power (RSRP) and signal-to-interference-plus-noise ratio (SINR). An embodiment of the method is given below; all of its steps are performed at a monitoring centre that monitors the operation of the network.
The first step: gather the network KPIs and form a data set S. Specifically:
(1) Acquire the KPI information reported by users in the wireless network within 80 s, and go to step (2).
(2) Store the KPI information reported by users as a data set S = {(x_i, y_i)}, i = 1, 2, …, N_S, where N_S is the number of elements in S and x_i ∈ R^n is the n-dimensional KPI information reported by a user at a certain moment. In this embodiment n = 8, and the KPI information comprises the RSRP and SINR of the serving cell and of the three nearest neighbour cells, specifically x_i = {RSRP_sev, SINR_sev, RSRP_nei1, SINR_nei1, RSRP_nei2, SINR_nei2, RSRP_nei3, SINR_nei3}, where the subscript sev denotes the serving cell and the subscripts nei1, nei2, nei3 denote the three nearest neighbour cells. y_i is the label of x_i, representing the state of the base station serving the user, with values 0, 1, 2, 3: y_i = 0 indicates the base station is in the normal state; y_i = 1 a light outage (degraded) state; y_i = 2 a medium outage (crippled) state; y_i = 3 a heavy outage (catatonic) state. After S is obtained, go to the second step.
The second step: train CGAN-W using the data set S. CGAN-W consists of a generator G and a discriminator D, both of which are fully connected neural networks; the specific structure is shown in fig. 1. Specifically:
(1) Normalize the data in S according to formula (1) so that the resulting data are distributed between -1 and 1. After the normalized data set S is obtained, go to step (2).
(2) Divide S into four subsets according to the sample labels: S_0, S_1, S_2, S_3. The elements of subset S_0 are the samples with label 0 (y_i = 0), representing normal data; the elements of subsets S_1, S_2, S_3 are the samples with labels 1, 2 and 3 respectively, representing the outage classes. Count the total number of samples in each subset S_k, denoted N_k, k = 1, 2, 3. After the outage subsets S_1, S_2, S_3 are obtained, go to step (3).
(3) Define the loss functions of D and G in CGAN-W as formula (2) and formula (3) respectively. After the loss functions are defined, go to step (4).
(4) Set the parameters required for the subsequent model training, specifically: learning rate α = 0.0005; clipping coefficient c = 0.01, limiting the range of the discriminator weights after each update; batch size m = 64, the number of samples drawn in each training round; discriminator training count n_dis = 5, the number of times D is trained for each training of G; maximum number of model iterations iteration = 20000; discriminator iteration counter t (t < n_dis); model iteration counter iter (iter < iteration). In addition, in this embodiment G and D are both fully connected neural networks with three hidden layers. The hidden-layer activation function of both G and D is the LeakyReLU function, LeakyReLU(x) = max(0, x) + γ·min(0, x); the output-layer activation function of G is the tanh function, tanh(x) = (e^x − e^(−x))/(e^x + e^(−x)); the output layer of D uses no activation function. After the setting is completed, go to step (5).
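The two activation functions named in step (4) are elementwise; a quick numpy check of their definitions (the slope γ = 0.2 below is an illustrative choice, not a value specified by the patent):

```python
import numpy as np

def leaky_relu(x, gamma=0.2):
    """LeakyReLU(x) = max(0, x) + gamma * min(0, x)."""
    return np.maximum(0.0, x) + gamma * np.minimum(0.0, x)

def tanh(x):
    """Generator output activation: (e^x - e^-x) / (e^x + e^-x); its (-1, 1)
    range matches the [-1, 1] normalization of the training data."""
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
```

Using tanh on the generator output is what makes the [-1, 1] normalization of step 2.1 necessary: real and synthetic samples then live in the same range.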
(5) Randomly initialize the generator G and discriminator D weight vectors W_G, W_D and bias vectors b_G, b_D in the CGAN-W model; initialize the model iteration counter iter = 0 and the discriminator iteration counter t = 0. After model initialization is completed, go to step (6).
(6) Randomly sample m samples from the outage subset S_k, k = 1, 2, 3, to obtain the real sample set {(x_1, y_1), (x_2, y_2), …, (x_m, y_m)}, with the KPI set denoted real = {x_1, …, x_m} and the label set denoted label = {y_1, …, y_m}. Sample m noise samples from the 100-dimensional random noise z (z ∈ R^100) to form the set noise = {z_1, z_2, …, z_m}; attach the label y_j ∈ label, j = 1, 2, …, m, to each noise sample z_j to obtain the element (z_j, y_j), and traverse all m sampled noise samples {z_1, …, z_m} to obtain the set {(z_1, y_1), (z_2, y_2), …, (z_m, y_m)}. After sampling is completed, go to step (7).
(7) Feed the set {(z_1, y_1), …, (z_m, y_m)} into the generator, which outputs the synthetic sample set {x̃_1, x̃_2, …, x̃_m}, where x̃_j = G(z_j, y_j). Attach the label y_j ∈ label obtained in step (6) to each synthetic sample x̃_j and traverse all m generated samples to obtain the synthetic sample set {(x̃_1, y_1), (x̃_2, y_2), …, (x̃_m, y_m)}. After the synthetic sample set is obtained, go to step (8).
(8) Feed the real sample set {(x_1, y_1), …, (x_m, y_m)} and the synthetic sample set {(x̃_1, y_1), …, (x̃_m, y_m)} into the discriminator, maximize (2) with a mini-batch stochastic gradient descent algorithm, and update the discriminator parameters W_D, b_D. Set the discriminator iteration counter t = t + 1 and go to step (9).
(9) Clip the updated discriminator weight coefficients W_D so that |W_D| ≤ 0.01.
(10) Repeat steps (6), (7), (8) and (9) until the discriminator iteration counter satisfies t > 5, then go to step (11).
(11) Again sample m noise samples from the 100-dimensional random noise z (z ∈ R^100) to form the set noise′ = {z′_1, z′_2, …, z′_m}; attach the label y_j ∈ label obtained in step (6), j = 1, 2, …, m, to each noise sample z′_j to obtain the element (z′_j, y_j), and traverse all m sampled noise samples {z′_1, …, z′_m} to obtain the set {(z′_1, y_1), (z′_2, y_2), …, (z′_m, y_m)}, which is fed into the generator. Minimize (3) with a mini-batch stochastic gradient descent algorithm and update the generator parameters W_G, b_G. Set the model iteration counter iter = iter + 1 and go to step (12).
(12) If iter > 20000, end the training, record the generator G and discriminator D weight vectors W_G_opt, W_D_opt and bias vectors b_G_opt, b_D_opt, and go to step (13); otherwise go to step (5) and start a new training round.
(13) Repeat steps (3)-(12) until all three outage subsets S_1, S_2, S_3 have been trained on, yielding a CGAN-W model that has learned the outage characteristics. Go to the third step.
The third step: gather the KPI information reported by users in the wireless network within 8 s and store it as a data set H = {(h_w, y_w)}, w = 1, 2, …, N_H, where N_H is the total number of samples in H and h_w ∈ R^n is the n-dimensional KPI information reported by a user. In this embodiment n = 8, and the KPI information comprises the RSRP and SINR of the serving cell and of the three nearest neighbour cells, specifically h_w = {RSRP_sev, SINR_sev, RSRP_nei1, SINR_nei1, RSRP_nei2, SINR_nei2, RSRP_nei3, SINR_nei3}. y_w is the label of h_w, with values 0, 1, 2, 3. After the data set H is acquired, go to the fourth step.
The fourth step: synthesize outage data with the CGAN-W model obtained in the second step and balance the data set H. Specifically:
(1) Sample n_gen (n_gen > 0) noise samples from the 100-dimensional random noise z obeying the standard normal distribution to form the set {z_1, z_2, …, z_{n_gen}}, where n_gen is the number of minority-class samples to be synthesized. Because the number of synthetic samples added affects the imbalance ratio of the final training set, and hence the classification performance, the optimal n_gen can be found by grid search. Sample the label information to obtain the label set label_gen = {y_1, y_2, …, y_{n_gen}}. Note that the method synthesizes only minority-class data in order to balance the data set, so the labels take values only in {1, 2, 3}, i.e. y_r ∈ {1, 2, 3}, r = 1, 2, …, n_gen. Attach the label y_r ∈ label_gen to each noise sample z_r to obtain the element (z_r, y_r), and traverse all samples to obtain the set {(z_1, y_1), …, (z_{n_gen}, y_{n_gen})}. Go to step (2).
(2) Feed the set {(z_1, y_1), …, (z_{n_gen}, y_{n_gen})} into the generator and compute its output from the W_G_opt, b_G_opt obtained in the second step, denoted {x̃_1, x̃_2, …, x̃_{n_gen}}, where x̃_r = G(z_r, y_r). Attach the label y_r ∈ label_gen to each generated sample x̃_r and traverse all generated samples to obtain the generated data set U = {(x̃_1, y_1), …, (x̃_{n_gen}, y_{n_gen})}. After the generated data set U is obtained, go to step (3).
(3) Merge the generated data set U with the original data set H to obtain the calibrated training set V = U ∪ H. Go to the fifth step.
Fifth step: the inter-class overlap index for each sample in the calibrated training set V is calculated. The method comprises the following steps:
(1) For each piece of KPI information V in V e (e=1,2,…,N V ,N V Representing the total number of samples in V), V e ∈R 8 Selecting 5 KPI information with the nearest Euclidean distance to form a sample set Neigh= { v 1 ,v 2 ,…,v 5 }. And (5) switching to the process (2).
(2) Calculate the AND sample v in the set neighbor e The same number of samples as the label of (1) is denoted NN (0. Ltoreq. NN. Ltoreq. 5). And (3) switching to the process.
(3) If NN > 0, the inter-class overlap index of sample v_e is o_e = NN/5. If NN = 0, the inter-class overlap index of sample v_e is o_e = 0.05. Add the calculated inter-class overlap index o_e to the set O, i.e., O = O ∪ {o_e}.
(4) Repeat processes (1)–(3) until all samples in V have been traversed. Proceed to the sixth step.
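Processes (1)–(4) amount to a 5-nearest-neighbour pass over V; a minimal sketch in plain Python (Euclidean distance, same-label fraction, with the 0.05 floor from process (3)):

```python
import math

def inter_class_overlap(samples, labels, k=5, floor=0.05):
    """For each sample, o_e = (number of its k nearest neighbours sharing
    its label) / k; if that count is 0, o_e falls back to the floor value."""
    O = []
    for e, v_e in enumerate(samples):
        # Distances to every other sample (exclude v_e itself).
        dists = [(math.dist(v_e, v), labels[i])
                 for i, v in enumerate(samples) if i != e]
        dists.sort(key=lambda t: t[0])
        nn = sum(1 for _, y in dists[:k] if y == labels[e])
        O.append(nn / k if nn > 0 else floor)
    return O
```

For the 8-dimensional KPI vectors of this embodiment, `samples` would hold the rows of V and `k=5`.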
Sixth step: and training the ANN by using the training set V and the inter-class overlap index set O to obtain an interruption detection model. The method comprises the following steps:
(1) Determine the ANN loss function from V and O, as shown in formula (4). In this embodiment the ANN is set as a three-layer fully connected neural network: the number of input-layer neurons equals the KPI feature dimension, i.e., 8, and the number of output-layer neurons equals the total number of sample classes, i.e., 4. The specific structure is shown in Fig. 2. The hidden layer uses the LeakyReLU activation function and the output layer uses the softmax function. After the setting is completed, proceed to process (2).
(2) Use a gradient descent algorithm to minimize formula (4), obtaining the ANN weight vector W_ANN_opt and bias b_ANN_opt. Proceed to the seventh step.
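A sketch of evaluating the loss in process (1), assuming (consistently with the symbol descriptions in claim 1) that formula (4) is a softmax cross-entropy in which each sample's term is weighted by its inter-class overlap index o_e, and that the class logits are the output-layer pre-activations:

```python
import math

def weighted_cross_entropy(logits_per_sample, labels, overlap):
    """Mean over samples of o_e * (-log softmax probability of true class)."""
    total = 0.0
    for logits, y, o_e in zip(logits_per_sample, labels, overlap):
        z = max(logits)                               # shift for numerical stability
        log_sum = z + math.log(sum(math.exp(l - z) for l in logits))
        log_prob_true = logits[y] - log_sum           # log softmax of class y
        total += -o_e * log_prob_true
    return total / len(labels)
```

With uniform logits over the 4 classes and o_e = 1, each sample contributes log 4; halving o_e halves the sample's contribution, which is how the overlap index reweights the training signal.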
Seventh step: reporting KPI information x (x epsilon R) in real time according to user in network 8 ) And (5) performing interrupt detection. The method comprises the following steps:
(1) Input x into the ANN model obtained in the sixth step; according to W_ANN_opt and b_ANN_opt, the output layer outputs pred, pred ∈ R^4.
(2) Compute the final predicted label ŷ = argmax(pred_1, pred_2, pred_3, pred_4), where argmax denotes the index of the maximum component of the vector (pred_1, pred_2, pred_3, pred_4), with indices starting from 0. If ŷ ∈ {1, 2, 3}, x is judged to be an interruption sample and the specific interruption type is determined; otherwise x is judged to be a normal sample.
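A sketch of the decision rule in processes (1)–(2), assuming the interruption condition is ŷ ∈ {1, 2, 3} (labels 1–3 denote the three interruption states, label 0 the normal state):

```python
def detect_outage(pred):
    """pred: 4-dimensional output-layer vector; returns (label, is_outage)."""
    y_hat = max(range(len(pred)), key=lambda g: pred[g])  # argmax, indices from 0
    return y_hat, y_hat in {1, 2, 3}
```

The returned label directly identifies the interruption type (1 slight, 2 moderate, 3 severe) when `is_outage` is true.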
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (5)

1. A wireless network interruption detection method based on a conditional generative adversarial network, characterized by comprising the following steps:
the first step: collecting key performance indexes KPIs of the network, and forming a data set S;
the second step: training an improved conditional generative adversarial network CGAN-W using the data set S, wherein the CGAN-W consists of a generator G and a discriminator D, and both G and D are fully connected neural network structures;
the third step: collecting the KPI information reported by users in the wireless network within time T_2 and storing it as a data set H = {(h_1, y_1), (h_2, y_2), …, (h_{N_H}, y_{N_H})}, wherein N_H denotes the total number of samples in H; for each element (h_w, y_w), w = 1, 2, …, N_H, h_w ∈ R^n denotes the n-dimensional KPI information reported by a user, the specific value of n being determined by the operator according to the number of users and the network operating conditions, and y_w is the label of h_w, taking the values 0, 1, 2, 3; after the data set H is obtained, proceeding to the fourth step;
fourth step: synthesizing interrupt data by using the CGAN-W model obtained in the second step, and balancing a data set H;
fifth step: calculating an inter-class overlap index of each sample in the calibrated training set V;
step 5.1, for each piece of KPI information v_e in V, e = 1, 2, …, N_V, where N_V denotes the total number of samples in V and v_e ∈ R^n, selecting the q KPI samples closest to it in Euclidean distance to form the sample set Neigh = {v_1, v_2, …, v_q}; proceeding to step 5.2;
step 5.2, counting the number of samples in the set Neigh whose label is the same as that of sample v_e, denoted NN, 0 ≤ NN ≤ q; proceeding to step 5.3;
step 5.3, if NN > 0, the inter-class overlap index of sample v_e is o_e = NN/q; if NN = 0, the inter-class overlap index of sample v_e is o_e = β, wherein β is an adjustment coefficient used to adjust the weight of sample v_e, the specific value of β being determined by the operator; adding the calculated inter-class overlap index o_e to the set O, i.e., O = O ∪ {o_e};
Step 5.4, repeating step 5.1, step 5.2 and step 5.3 until all samples in V are traversed;
sixth step: training an artificial neural network ANN by using the training set V and the inter-class overlap index set O to obtain an interruption detection model;
step 6.1, determining the ANN loss function from V and O, as shown in formula (4):

L(θ) = −(1/N_V) Σ_{e=1}^{N_V} o_e Σ_{g=1}^{4} y_eg log( exp(θ_g^T v_e) / Σ_{j=1}^{4} exp(θ_j^T v_e) )    (4)

wherein N_V denotes the total number of samples in the training set V; o_e denotes the inter-class overlap index of sample v_e, e = 1, 2, …, N_V; y_eg is an indicator function that takes the value 1 if the label of sample v_e equals g and 0 otherwise; θ_j denotes the weight matrix and bias vector corresponding to the j-th neuron of the output layer, and θ_g those corresponding to the g-th neuron of the output layer; the superscript T denotes transposition; after the loss function is defined, proceeding to step 6.2;
step 6.2, using a gradient descent algorithm to minimize formula (4), obtaining the ANN weight vector W_ANN_opt and bias b_ANN_opt;
Seventh step: interrupt detection is carried out according to KPI information x reported by a user in real time in a network, wherein x is E R n
2. The wireless network interruption detection method based on a conditional generative adversarial network according to claim 1, wherein the first step of collecting the network key performance indicators KPIs and forming the data set S specifically comprises:
step 1.1, collecting the KPI information reported by users in the wireless network within time T_1;
step 1.2, storing the KPI information reported by the users in the form of a data set S = {(x_1, y_1), (x_2, y_2), …, (x_{N_S}, y_{N_S})}; wherein N_S is the number of elements in S; the i-th element is (x_i, y_i), i = 1, 2, …, N_S; x_i ∈ R^n is the n-dimensional KPI information reported by a user at a certain moment, specifically comprising the reference signal received power and the signal-to-interference-plus-noise ratio of the serving cell and neighbor cells; the value of n can be determined by the operator according to the number of users and the network operating conditions; y_i is the label of x_i, indicating the status of the base station serving the user, taking the values 0, 1, 2, 3; wherein y_i = 0 indicates that the base station is in a normal state and has the strongest communication capability; y_i = 1 indicates that the base station is in a slight interruption state and the communication capability is slightly degraded; y_i = 2 indicates that the base station is in a moderate interruption state and the communication capability is severely degraded, possibly causing communication failures; y_i = 3 indicates that the base station is in a severe interruption state and has completely lost communication capability, triggering a large number of link connection failure events and user handover events; S is thereby obtained.
3. The wireless network interruption detection method based on a conditional generative adversarial network according to claim 1, wherein the second step specifically comprises:
step 2.1, normalizing the data in the data set S according to formula (1) so that the resulting data are distributed between −1 and 1:

x̃_i^(d) = 2 (x_i^(d) − min_i x_i^(d)) / (max_i x_i^(d) − min_i x_i^(d)) − 1    (1)

wherein x_i^(d) denotes the value of the i-th data sample x_i on the d-th dimensional feature, d = 1, 2, …, n, n denoting the feature dimension of x_i, and x̃_i^(d) denotes the normalized value; after the normalized data set S is obtained, proceeding to step 2.2;
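A sketch of step 2.1; the original formula (1) did not survive extraction, so per-feature min–max scaling to [−1, 1] is an assumption consistent with the stated output range:

```python
def normalize_minus1_1(data):
    """Per-feature min-max scaling of a list of n-dim samples into [-1, 1].
    Constant features (max == min) are mapped to 0 to avoid division by zero."""
    n = len(data[0])
    lo = [min(x[d] for x in data) for d in range(n)]
    hi = [max(x[d] for x in data) for d in range(n)]
    return [[2.0 * (x[d] - lo[d]) / (hi[d] - lo[d]) - 1.0 if hi[d] > lo[d] else 0.0
             for d in range(n)] for x in data]
```

Scaling to [−1, 1] matches the usual tanh-style output range of GAN generators, which is likely why this interval was chosen.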
step 2.2, dividing S into four subsets according to the sample labels in the data set S: S_0, S_1, S_2, S_3; wherein the elements of subset S_0 are the samples with label 0, i.e., y_i = 0, representing normal data; the elements of subsets S_1, S_2, S_3 are the samples with labels 1, 2, 3 respectively, each representing one interruption type; counting the total number of samples in subset S_k, denoted N_k, k = 1, 2, 3; after the subsets S_1, S_2, S_3 containing interruption data are obtained, proceeding to step 2.3;
step 2.3, defining the D and G loss functions in CGAN-W as shown in formula (2) and formula (3):

L_D = (1/m) Σ_{j=1}^{m} D(x_j, y_j) − (1/m) Σ_{j=1}^{m} D(G(z_j, y_j), y_j)    (2)

L_G = −(1/m) Σ_{j=1}^{m} D(G(z_j, y_j), y_j)    (3)

wherein L_D denotes the discriminator loss and L_G the generator loss; m denotes the total number of samples used to train CGAN-W; z_j denotes an l-dimensional random noise sample drawn from the standard normal distribution, z_j ∈ R^l, j = 1, 2, …, m; x_j denotes the n-dimensional KPI information reported by a user, x_j ∈ R^n; y_j denotes the sample label; the interruption subsets S_1, S_2, S_3 are used to train the CGAN-W model so that it learns the features of the interruption-class data, hence the sample (x_j, y_j) ∈ S_k; G(z_j, y_j) is the generator output-layer neuron output for the input (z_j, y_j), i.e., the synthesized sample; D(x_j, y_j) is the discriminator output-layer neuron output for the input (x_j, y_j); D(G(z_j, y_j), y_j) is the discriminator output-layer neuron output for the input (G(z_j, y_j), y_j); after the loss functions are defined, proceeding to step 2.4;
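The losses in step 2.3 follow the conditional Wasserstein-GAN pattern (the CGAN-W name and the weight clipping in step 2.9 both indicate WGAN training): the discriminator score gap between real and generated samples. A sketch of estimating both losses on a mini-batch, with D and G as hypothetical stand-in callables:

```python
def wgan_losses(D, G, real_batch, noise_batch, labels):
    """Mini-batch estimates of the critic objective L_D (to be maximized)
    and the generator loss L_G (to be minimized)."""
    m = len(labels)
    d_real = sum(D(x, y) for x, y in zip(real_batch, labels)) / m
    d_fake = sum(D(G(z, y), y) for z, y in zip(noise_batch, labels)) / m
    L_D = d_real - d_fake      # critic: maximize score gap (formula (2))
    L_G = -d_fake              # generator: minimize (formula (3))
    return L_D, L_G
```

Note that maximizing L_D while clipping the critic weights is what approximates the Wasserstein distance; the generator then only needs the critic's score on its own samples.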
step 2.4, setting the parameters required for subsequent model training, specifically comprising: the learning rate α; the clipping coefficient c, used to limit the range of the discriminator weights after updating; the batch size m, used to set the number of samples drawn in each training round; the discriminator training number n_dis, used to set how many times D is trained for each training of G; the maximum model iteration number iteration; the discriminator iteration counter t, t < n_dis; the model iteration counter iter, iter < iteration; wherein the values of α, c, m, n_dis and iteration are determined by the operator; after the setting is completed, proceeding to step 2.5;
step 2.5, randomly initializing the generator G and discriminator D weight vectors W_G, W_D and bias vectors b_G, b_D in the CGAN-W model; initializing the model iteration counter iter = 0 and the discriminator iteration counter t = 0; after the model initialization is completed, proceeding to step 2.6;
step 2.6, randomly sampling m samples from the interruption subset S_k to obtain the real sample set {(x_1, y_1), (x_2, y_2), …, (x_m, y_m)}; wherein the set {x_1, x_2, …, x_m} of KPI information of the samples is denoted real = {x_1, x_2, …, x_m} and the set {y_1, y_2, …, y_m} of corresponding sample labels is denoted label = {y_1, y_2, …, y_m}; sampling m noise samples from the l-dimensional random noise z obeying the standard normal distribution, z ∈ R^l, to form the set noise = {z_1, z_2, …, z_m}; combining the set noise with the label set label = {y_1, y_2, …, y_m}, i.e., attaching the label y_j, y_j ∈ label, to each noise sample z_j to obtain the element (z_j, y_j); traversing the m sampled noise samples {z_1, z_2, …, z_m} in this way yields the set {(z_1, y_1), (z_2, y_2), …, (z_m, y_m)}; after the sampling is completed, proceeding to step 2.7;
step 2.7, inputting the set {(z_1, y_1), (z_2, y_2), …, (z_m, y_m)} into the generator, which outputs the synthesized sample set {x̂_1, x̂_2, …, x̂_m}, wherein x̂_j = G(z_j, y_j) ∈ R^n; combining the synthesized sample set with the label set label = {y_1, y_2, …, y_m} from step 2.6, i.e., attaching the label y_j, y_j ∈ label, to each synthesized sample x̂_j to obtain the element (x̂_j, y_j); traversing the m generated samples in this way yields the synthesized sample set {(x̂_1, y_1), (x̂_2, y_2), …, (x̂_m, y_m)}; after the synthesized sample set is obtained, proceeding to step 2.8;
step 2.8, taking the real sample set {(x_1, y_1), …, (x_m, y_m)} and the synthesized sample set {(x̂_1, y_1), …, (x̂_m, y_m)} as the discriminator input, using a mini-batch stochastic gradient algorithm to maximize formula (2) and updating the discriminator parameters W_D, b_D; setting the discriminator iteration counter t = t + 1 and proceeding to step 2.9;
step 2.9, clipping the updated discriminator weight coefficients W_D to values between −c and c, i.e., |W_D| ≤ c, wherein c is the clipping coefficient whose specific value can be determined by the operator;
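Step 2.9 is the elementwise weight clipping of the original WGAN; a minimal sketch:

```python
def clip_weights(W, c):
    """Clip every discriminator weight into [-c, c] (WGAN weight clipping)."""
    return [max(-c, min(c, w)) for w in W]
```

Clipping keeps the critic (approximately) Lipschitz-bounded, which the Wasserstein formulation of formula (2) requires; c is typically a small constant such as 0.01.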
step 2.10, repeating steps 2.6, 2.7, 2.8 and 2.9 until the discriminator iteration counter t > n_dis; proceeding to step 2.11;
step 2.11, sampling m noise samples again from the l-dimensional random noise z obeying the standard normal distribution, z ∈ R^l, to form the set noise′ = {z′_1, z′_2, …, z′_m}; combining the set noise′ with the label set label = {y_1, y_2, …, y_m} obtained in step 2.6, i.e., attaching the label y_j, y_j ∈ label, to each noise sample z′_j to obtain the element (z′_j, y_j); traversing the m sampled noise samples {z′_1, z′_2, …, z′_m} in this way yields the set {(z′_1, y_1), (z′_2, y_2), …, (z′_m, y_m)}, which is input into the generator; minimizing formula (3) using a mini-batch stochastic gradient descent algorithm and updating the generator parameters W_G, b_G; setting the model iteration counter iter = iter + 1 and proceeding to step 2.12;
step 2.12, if iter > iteration, ending the training, recording the generator G and discriminator D weight vectors W_G_opt, W_D_opt and bias vectors b_G_opt, b_D_opt, and proceeding to step 2.13; otherwise, returning to step 2.5 to start a new round of training;
step 2.13, repeating steps 2.3–2.12 until all three interruption subsets S_1, S_2, S_3 have been used for training, obtaining a CGAN-W model that has learned the interruption features.
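The control flow of steps 2.5–2.13 (n_dis discriminator updates with clipping per generator update, repeated for `iteration` rounds, once per interruption subset) can be sketched with stub update functions; the three callables are hypothetical stand-ins for the actual gradient steps, and the exact loop counts may differ by one from the claim's "t > n_dis" / "iter > iteration" boundary tests:

```python
def train_cgan_w(subsets, n_dis, iteration, update_D, clip_D, update_G):
    """Outer loop of the CGAN-W training procedure (steps 2.3-2.13)."""
    for S_k in subsets:                 # one CGAN-W training per subset S_1..S_3
        for it in range(iteration):     # model iterations (checked in step 2.12)
            for t in range(n_dis):      # discriminator iterations (step 2.10)
                update_D(S_k)           # steps 2.6-2.8: sample batch, update D
                clip_D()                # step 2.9: clip W_D into [-c, c]
            update_G(S_k)               # step 2.11: update G

calls = {"D": 0, "clip": 0, "G": 0}
train_cgan_w(subsets=["S1", "S2", "S3"], n_dis=5, iteration=10,
             update_D=lambda S: calls.__setitem__("D", calls["D"] + 1),
             clip_D=lambda: calls.__setitem__("clip", calls["clip"] + 1),
             update_G=lambda S: calls.__setitem__("G", calls["G"] + 1))
```

The nesting makes the n_dis-critic-steps-per-generator-step schedule of WGAN training explicit: here D is updated 150 times and G 30 times over the three subsets.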
4. The wireless network interruption detection method based on a conditional generative adversarial network according to claim 1, wherein the fourth step specifically comprises:
step 4.1, sampling n_gen noise samples from the l-dimensional random noise z obeying the standard normal distribution to form the set Z_gen = {z_1, z_2, …, z_{n_gen}}, wherein n_gen denotes the number of minority-class samples to be synthesized, n_gen > 0; because the number of added synthesized samples affects the imbalance ratio of the resulting training set, and hence the classification performance, the optimal n_gen value is found by grid search; sampling label information to obtain the label set label_gen = {y_1, y_2, …, y_{n_gen}}; only minority-class data are synthesized in order to balance the data set, so the labels take values only in the set {1, 2, 3}, i.e., y_r ∈ {1, 2, 3}, r = 1, 2, …, n_gen; combining the noise set Z_gen and the label set label_gen, i.e., attaching the label y_r, r = 1, 2, …, n_gen, y_r ∈ label_gen, to each noise sample z_r to obtain the element (z_r, y_r); traversing all samples in this way yields the set {(z_1, y_1), (z_2, y_2), …, (z_{n_gen}, y_{n_gen})}; proceeding to step 4.2;
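Step 4.1 leaves n_gen to a grid search; a minimal sketch, where `score_with` is a hypothetical stand-in for training the detector on the calibrated set built with a given n_gen and scoring it on held-out data:

```python
def grid_search_n_gen(candidates, score_with):
    """Return the candidate n_gen with the best validation score."""
    best_n, best_score = None, float("-inf")
    for n_gen in candidates:
        s = score_with(n_gen)          # e.g. train the ANN on V(n_gen), validate
        if s > best_score:
            best_n, best_score = n_gen, s
    return best_n

# Stand-in score: pretend validation accuracy peaks at n_gen = 200.
best = grid_search_n_gen([50, 100, 200, 400],
                         score_with=lambda n: -abs(n - 200))
```

Too few synthesized samples leaves the set imbalanced, too many swamps the real minority data, so a one-dimensional grid over n_gen is the natural tuning procedure.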
step 4.2, inputting the set {(z_1, y_1), (z_2, y_2), …, (z_{n_gen}, y_{n_gen})} into the generator and computing the generator output according to the W_G_opt, b_G_opt obtained in the second step, denoted {x̂_1, x̂_2, …, x̂_{n_gen}}, wherein x̂_r = G(z_r, y_r), r = 1, 2, …, n_gen; combining the generated sample set with the label set label_gen, i.e., attaching the label y_r, y_r ∈ label_gen, to each generated sample x̂_r to obtain the element (x̂_r, y_r); traversing all generated samples in this way yields the generated data set U = {(x̂_1, y_1), (x̂_2, y_2), …, (x̂_{n_gen}, y_{n_gen})}; after the generated data set U is obtained, proceeding to step 4.3;
step 4.3, merging the generated data set U and the original data set H to obtain the calibrated training set V = U ∪ H.
5. The wireless network interruption detection method based on a conditional generative adversarial network according to claim 1, wherein the seventh step specifically comprises:
step 7.1, inputting x into the ANN model obtained in the sixth step; according to W_ANN_opt and b_ANN_opt, the output layer outputs pred, pred ∈ R^4;
step 7.2, computing the final predicted label ŷ = argmax(pred_1, pred_2, pred_3, pred_4), wherein argmax denotes the index of the maximum component of the vector (pred_1, pred_2, pred_3, pred_4), the indices starting from 0; if ŷ ∈ {1, 2, 3}, x is judged to be an interruption sample and the specific interruption type is determined; otherwise x is judged to be a normal sample.
CN202210108134.XA 2022-01-28 2022-01-28 Wireless network interruption detection method based on condition generation countermeasure network Active CN114501525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210108134.XA CN114501525B (en) 2022-01-28 2022-01-28 Wireless network interruption detection method based on condition generation countermeasure network


Publications (2)

Publication Number Publication Date
CN114501525A CN114501525A (en) 2022-05-13
CN114501525B true CN114501525B (en) 2024-02-02

Family

ID=81477405


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110972174A (en) * 2019-12-02 2020-04-07 东南大学 Wireless network interruption detection method based on sparse self-encoder
WO2020193510A1 (en) * 2019-03-26 2020-10-01 Robert Bosch Gmbh Training for artificial neural networks with better utilization of learning data records
CN112039687A (en) * 2020-07-14 2020-12-04 南京邮电大学 Small sample feature-oriented fault diagnosis method based on improved generation countermeasure network
CN113406437A (en) * 2021-06-21 2021-09-17 西南交通大学 Power transmission line fault detection method for generating countermeasure network based on auxiliary classification


Non-Patent Citations (1)

Title
Research on Wireless Network Interruption Detection Technology; Chen Yan; China Master's Theses Full-text Database; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant