TWI684139B - System and method of learning-based prediction for anomalies within a base station - Google Patents


Info

Publication number
TWI684139B
TWI684139B (application TW107136316A)
Authority
TW
Taiwan
Prior art keywords
data
neural network
deep neural
output
filtered
Prior art date
Application number
TW107136316A
Other languages
Chinese (zh)
Other versions
TW202016805A (en)
Inventor
陳昱安
蔡佳霖
彭楚芸
湯凱傑
龍蒂涵
唐之璇
朱康民
Original Assignee
中華電信股份有限公司 (Chunghwa Telecom Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中華電信股份有限公司
Priority to TW107136316A
Application granted
Publication of TWI684139B
Publication of TW202016805A



Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

A system and method of learning-based prediction for anomalies within a base station. The method includes: filtering data to generate filtered data; labeling the filtered data to generate label data; filtering the label data to generate filtered label data; and training a deep neural network on the filtered label data according to a training method to update the deep neural network.

Description

System and method for predicting base station anomalies based on automatic learning

The present invention relates to a system and method for predicting base station anomalies, and in particular to a system and method for predicting base station anomalies based on automatic learning.

Analysis and maintenance diagnosis of mobile network traffic and alarms is an indispensable part of mobile network troubleshooting and service quality optimization. In the traditional approach, base station traffic monitoring, analysis, and maintenance fault diagnosis rely on mobile network maintenance personnel entering the equipment room and diagnosing faults through traffic monitoring and analysis. This approach depends heavily on manual operation and professional experience. As mobile network technology grows and network architectures become increasingly complex, diagnosing faults by traditional manual analysis is not only laborious and time-consuming but also prone to human misjudgment, and may fail to identify the root cause of a problem in time. Mobile networks continuously generate traffic data and alarm information; analyzing and mining these data is of significant value for improving the quality of mobile network maintenance. It not only shifts maintenance from manual judgment to intelligent decision-making, but, by incorporating artificial intelligence, transforms fault detection into proactive prediction, prevention, and advance remediation, thereby improving mobile network service quality.

US patent "Network fault prediction and proactive maintenance system" (application No. 09/337,209) mentions a similar fault prediction concept. That patent creates multiple feature databases containing valid logs, where the valid logs represent network anomaly alarms and are specifically selected from a large pool of logs by network domain experts or administrators. Subsequent network logs are then analyzed against the valid logs and features in the databases to predict failures that will occur in the future. In machine learning terms this method is offline learning: the system parameters of the valid logs or key features cannot be updated dynamically as new logs arrive, and the entire batch of data must be fed into the model to adjust its parameters or structure.

Chinese patent "Mobile network health evaluation method based on neural network and fuzzy comprehensive evaluation" (application No. 201510236105.1) mentions a similar fault prediction concept and belongs to online learning. It establishes health evaluation criteria from historical alarm messages in the alarm system combined with expert opinions. The patent selects the minimum and maximum alarm counts for each alarm type and, based on the alarm counts together with historical data and expert opinion, divides the alarm indicators into five levels: excellent, healthy, good, moderate, and unhealthy. Finally, a back-propagation neural network is trained to predict whether the new mobile network system is healthy. However, this method considers only health evaluation; its evaluation criteria are quantized indicators whose sensitivity and information content depend on the size of the quantization intervals. A better approach should consider the detection and prediction of various faults.

US patent "Alarm prediction in a telecommunication network" (application No. 15/029,834) also mentions a fault prediction concept belonging to online learning. It collects key performance indicators from network elements; indicator sources may include memory usage, power level, call drop rate, and call answer rate, among other network quality metrics. The training method extends the directed and undirected graphs of graph theory: Markov random fields and Bayesian network models. However, those methods all originate from the field of pattern recognition, where objects are highly similar and dense. Mobile network faults, by contrast, are sparse: the numbers of normal and abnormal network samples differ significantly. This causes the artificial intelligence training results to be extremely biased; that is, a model that simply declares the mobile network fault-free can attain high average accuracy while its precision and recall remain low. Moreover, the alarms used by the above methods are limited by the level of detail of the alarm system provided by the base station vendor.

It can thus be seen that the conventional methods above still have many deficiencies; they are not well designed and need improvement.

To remedy the many deficiencies of the conventional methods above, the present invention proposes a system and method for predicting base station anomalies based on automatic learning.

The present invention provides a system for predicting base station anomalies based on automatic learning, which includes a storage medium and a processor. The storage medium stores multiple modules. The processor is coupled to the storage medium and accesses and executes the modules of the storage medium. The modules include: a database, which stores data; a filtering module, which filters the data to generate filtered data; a fault labeling module, which labels the filtered data to generate label data; a second filtering module, which filters the label data to generate filtered label data; and a deep neural network, which is trained according to the filtered label data and a training method to update the deep neural network.

The present invention provides a method for predicting base station anomalies based on automatic learning, which includes: filtering data to generate filtered data; labeling the filtered data to generate label data; filtering the label data to generate filtered label data; and training a deep neural network according to the filtered label data and a training method to update the deep neural network.

Based on the above, the present invention aims to use the advantages of big data to reduce the misjudgment introduced by human intervention in base station maintenance, and proposes a way for a system without labeled data to apply supervised learning techniques.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

FIG. 1 is a schematic diagram of a system 10 for predicting base station anomalies based on automatic learning according to an embodiment of the present invention. The system 10 may include a storage medium 100 and a processor 200. The storage medium 100 stores multiple modules. The processor 200 is coupled to the storage medium 100 and can access and execute the modules of the storage medium 100. The modules of the storage medium 100 may include a database 110, a filtering module 120, a fault labeling module 130, a second filtering module 140, and a deep neural network 150. The database 110 stores data. The filtering module 120 filters the data to generate filtered data. The fault labeling module 130 labels the filtered data to generate label data. The second filtering module 140 filters the label data to generate filtered label data. The deep neural network 150 is trained according to the filtered label data and a training method to update the deep neural network 150.

The storage medium 100 stores the software, data, and program code required for operating the system 10. The storage medium 100 may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), a similar component, or a combination of the above components, but the invention is not limited thereto.

The processor 200 may be, for example, a central processing unit (CPU), a graphics processing unit (GPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), a similar component, or a combination of the above components, but the invention is not limited thereto.

The system 10 may be implemented in at least one of the following: an outdoor high-wattage base station, a low-wattage micro base station managed by a self-organizing network, and so on, but the invention is not limited thereto.

FIG. 2 is a flowchart of a method 20 for predicting base station anomalies based on automatic learning according to an embodiment of the present invention, where the method 20 may be implemented by the system 10 shown in FIG. 1. In step S201, the database 110 receives data from a mobile network and stores the data. The data may include at least one of the following: radio access data, core network signaling, and/or alarm messages. Depending on the fault type, the system 10 decides whether to use core network signaling and/or radio access data as the input of the filtering module 120. Because core network data is more confidential, if it cannot be obtained or would hinder the operation of the system 10, radio access data serves as the primary input, supplemented by core network signaling; that is, radio access data and core network signaling are used as the input of the filtering module 120, and the fault labeling module 130 generates label data corresponding to the radio access data and the core network signaling. On the other hand, the deep neural network 150 learns, from radio access data alone, the behavior of the fault labeling module 130 that has already incorporated core network signaling, balancing the timeliness and convenience of data acquisition against prediction accuracy. The above data (i.e., radio access data, core network signaling, and/or alarm messages) correspond to the mobile network, which provides mobile network services to terminals and uses at least one of the following communication standards: Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), High Speed Packet Access (HSPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and LTE-Advanced (LTE-A).

In some embodiments, the radio access data may include at least one of the following: physical resource block utilization, reference signal received power, received signal strength indicator, uplink/downlink throughput, number of uplink/downlink users, modulation and coding scheme indicators, CPU utilization, time since the device was last restarted, signaling establishment success rate, and memory usage.

In some embodiments, the alarm messages may include at least one of the following: radio-frequency alarms, baseband processing unit alarms, connection alarms, board alarms, software upgrade/configuration alarms, maintenance test alarms, system hangs, and unexpected restarts.

In some embodiments, the core network signaling may include at least one of the following: packet loss rate, transmission delay, bandwidth utilization, channel capacity, mobility management entity connection establishment success rate, and user application category.

In step S202, the filtering module 120 receives data from the database 110 and filters the data to generate filtered data. Regardless of the input source of the filtering module 120, statistical key factor identification can be performed. Statistical key factor identification screens out the key indicator factors, preventing the input parameter dimensionality from becoming so large that it degrades the responsiveness of the system. Specifically, the filtering module 120 generates a key model according to the data and a filtering method. The key model filters out unnecessary indicators in the data to generate the filtered data, and the filtering method includes at least one of statistical key factor identification and principal component analysis.

In some embodiments, the filtering module 120 may decide, according to the number of fault types in the alarm messages, whether to add a new fault type to the existing fault types, where the added fault type may be defined by personnel based on their operational needs. A detailed implementation of step S202 is described with reference to FIG. 3 and Table 1.

FIG. 3 is a flowchart illustrating a detailed implementation of step S202 according to an embodiment of the present invention. Table 1 lists the parameters used by the method 20 according to an embodiment of the present invention.

Table 1
  Data window used: M
  Fault prediction window: N
  Learning rate: η = 1e-6
  Hidden-layer activation function: Rectified Linear Unit (ReLU)
  Output-layer activation function: Sigmoid
  Optimizer: RMSprop or Momentum
  Optimizer decay rate: ρ = 0.9
  Optimizer learning rate: ε = 1e-5
  Batch size: 512
  Loss function (binary and multi-class): log loss (binary_crossentropy) or normalized exponential (softmax)
  Optimization metrics: accuracy, precision, and/or recall
  P-value tolerance: < 1e-6

Referring to FIG. 3 and Table 1: in step S301, the filtering module 120 selects, according to the fault type to be predicted, at least one of radio access data and core network signaling as a first set. In step S302, one example is removed from the first set to generate a second set, where the removed example is the one whose removal causes the least loss of information in the model constructed from the first set. In step S303, the P value of each remaining example (i.e., each example in the second set) is computed; that is, the P value of each example with respect to the model constructed from the second set is computed. In step S304, it is determined whether the largest of the computed P values has reached the tolerance (for example, whether the largest P value is less than 1e-6); if so, the flow proceeds to step S305, and if not, to step S306. In step S305, the filtering module 120 uses the second set as the input of the fault labeling module 130. In step S306, the filtering module 120 deletes the example corresponding to the largest P value, combines the remaining examples (i.e., the second set minus that example) into an updated first set, and returns to step S302.
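As an illustration only (not part of the claimed embodiments), the elimination loop of steps S301 through S306 can be sketched in Python. The helper `pvalue_fn` is a hypothetical stand-in for the model-fitting and P-value computation step, and the tolerance default mirrors the Table 1 value:

```python
def backward_eliminate(features, pvalue_fn, tolerance=1e-6):
    """Backward-elimination sketch of steps S301-S306.

    features:  list of candidate indicator names (the first set)
    pvalue_fn: callable(features) -> {feature: p_value}; assumed to fit
               a model on the given features and return each feature's
               P value (hypothetical helper, not defined by the patent)
    tolerance: the P-value tolerance from Table 1 (< 1e-6)
    """
    current = list(features)              # first set (S301)
    while len(current) > 1:
        pvals = pvalue_fn(current)        # S302/S303: score the set
        worst = max(pvals, key=pvals.get)
        # S304: if even the largest P value is within tolerance,
        # accept the current set as the labeling-module input (S305).
        if pvals[worst] < tolerance:
            break
        current.remove(worst)             # S306: drop and repeat
    return current
```

A toy run with fixed P values shows the least significant indicators being dropped one at a time until every survivor clears the tolerance.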

Returning to FIG. 2, in step S203 the fault labeling module 130 labels the filtered data to generate label data. Specifically, the fault labeling module 130 performs feature selection on the filtered data and normalizes it; that is, the data of each selected feature is converted into standard scores (z-scores) with zero mean and unit variance. This prevents inconsistent units or widely differing magnitudes among the input parameters from producing excessive variance in the output of the fault labeling module 130, which would in turn cause excessive weight variance in the subsequently trained deep neural network 150.
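The z-score normalization described above can be sketched as follows (an illustrative sketch only; the guard for constant features is an added assumption, since a zero standard deviation would otherwise divide by zero):

```python
import numpy as np

def zscore(X):
    # Convert each feature column to zero mean and unit variance,
    # as the fault labeling module does before labeling.
    # X: (samples, features) array of filtered data.
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0   # assumption: leave never-varying features at 0
    return (X - mean) / std
```

After this transform, features measured in percent, milliseconds, or bytes all contribute on a comparable scale.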

In step S204, the second filtering module 140 receives the label data from the fault labeling module 130 and filters it to generate filtered label data. Specifically, the second filtering module 140 generates a second key model according to the label data and a second filtering method, where the second key model filters out unnecessary indicators in the label data to generate the filtered label data, and the second filtering method may include at least one of statistical key factor identification (or P-value hypothesis testing) and principal component analysis (or eigenvector-based principal component analysis). P-value hypothesis testing is visually intuitive, while principal component analysis transforms the input coordinates into a feature coordinate system. This step prevents the input dimensionality of the deep neural network 150 from becoming so large that the data density becomes sparse, a problem known in the literature as the curse of dimensionality.
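The eigenvector-based principal component analysis mentioned above can be sketched as follows (illustrative only; the patent does not prescribe this implementation):

```python
import numpy as np

def pca_transform(X, k):
    # Eigenvector-based PCA: project centered data onto the top-k
    # eigenvectors of the covariance matrix (the "feature coordinate
    # system" referred to above), reducing the input dimensionality.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k components
    return Xc @ top
```

Reducing the label data to its top components keeps the deep neural network's input layer small enough to avoid the curse of dimensionality.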

In step S205, the deep neural network 150 receives the filtered label data from the second filtering module 140 and is trained according to the filtered label data and a training method to update the deep neural network 150, where updating the deep neural network 150 may be, for example, updating the weights it uses. The training method may include at least one of the following (but the invention is not limited thereto): adopting back-propagation training and adding hidden layers to the deep neural network 150 layer by layer; improving the training error rate by adjusting the training learning rate; using at least one of RMSprop and Momentum as the optimizer; adjusting the output threshold of the output layer of the deep neural network 150 according to the ratio of positive to negative examples in the filtered label data; and feeding the output data of the deep neural network 150 back to the deep neural network 150 and adjusting its weights according to the filtered label data (or the filtered data) and the output data.

More specifically, the training method of adjusting the output threshold of the output layer of the deep neural network 150 according to the ratio of positive to negative examples in the filtered label data may include at least one of the following: multiplying the output threshold by the reciprocal of the ratio of positive to negative examples; and, when the gap between the numbers of positive and negative examples reaches a certain size, discarding samples from the larger of the two classes so that the ratio of positive to negative examples approaches one.
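The two imbalance countermeasures just described can be sketched as follows (illustrative only; the `max_ratio` cutoff for "a certain size" and the random undersampling strategy are assumptions not fixed by the patent):

```python
import random

def adjusted_threshold(base_threshold, n_pos, n_neg):
    # Multiply the output threshold by the reciprocal of the
    # positive-to-negative ratio, as described above.
    return base_threshold * (1.0 / (n_pos / n_neg))

def undersample(pos, neg, max_ratio=10, seed=0):
    # When one class outnumbers the other beyond max_ratio (assumed
    # cutoff), randomly drop samples from the larger class so the
    # positive-to-negative ratio approaches one.
    big, small = (pos, neg) if len(pos) > len(neg) else (neg, pos)
    if len(big) <= max_ratio * len(small):
        return pos, neg
    rng = random.Random(seed)
    big = rng.sample(big, len(small))
    return (big, small) if len(pos) > len(neg) else (small, big)
```

Both tricks address the sparsity of faults noted in the prior-art discussion: without them, a model predicting "no fault" everywhere scores high accuracy but poor precision and recall.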

In some embodiments, when the number of output classes the deep neural network 150 is to classify equals two, the log loss function (binary_crossentropy) may be selected as the loss function. On the other hand, if the number of output classes is greater than two, the normalized exponential function (softmax) may be selected as the loss function.
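The two loss choices can be written out explicitly (an illustrative NumPy sketch; the clipping constant and the max-shift for numerical stability are standard implementation details assumed here, not taken from the patent):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-12):
    # Two-class case: mean log loss over sigmoid outputs in (0, 1).
    p = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def softmax_crossentropy(y_true_onehot, logits):
    # Multi-class case: softmax over the logits, then cross-entropy.
    z = logits - logits.max(axis=1, keepdims=True)   # stability shift
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.mean(np.sum(y_true_onehot * np.log(probs), axis=1))
```

With two classes the two formulations coincide, which is why the binary form is preferred there: it needs only a single sigmoid output unit.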

The deep neural network 150 may include an input layer and an output layer. The deep neural network 150 may further adjust the amount of input data of its input layer according to the front window parameter of a sliding window, and adjust the amount of output data of its output layer according to the rear window parameter of the sliding window, as shown in FIG. 4. FIG. 4 is a schematic diagram of the data management scheme in which the deep neural network 150 uses a sliding window according to an embodiment of the present invention. Specifically, the deep neural network 150 may, according to the front window parameter of the sliding window, take the filtered label data of a past first period (for example, the filtered label data output from the second filtering module 140) as the input data of its input layer, and, according to the rear window parameter, estimate the faults occurring in a future second period to generate the output data of its output layer. In this embodiment, the front window parameter indicates that the data of the past M minutes (for example, the filtered label data output from the second filtering module 140) is used as the input of the deep neural network 150, and the rear window parameter indicates predicting whether a fault will occur in the next N minutes; when N (or the rear window parameter) is set to zero, the system 10 switches from predicting system faults to detecting system faults.
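The sliding-window construction of training pairs can be sketched as follows (illustrative only; the per-minute layout and the "any fault in the window" labeling rule are assumptions consistent with, but not mandated by, the description above):

```python
import numpy as np

def sliding_windows(series, labels, M, N):
    """Build (input, target) pairs with the FIG. 4 sliding window.

    series: (T, F) array of per-minute filtered label data
    labels: (T,) array, 1 if a fault occurred in that minute
    M: front window -- the past M minutes form one input sample
    N: rear window -- target is 1 if any fault occurs in the next
       N minutes; N == 0 degenerates to fault *detection* of the
       current minute, as described above
    """
    X, y = [], []
    T = len(series)
    for t in range(M, T - N + 1):
        X.append(series[t - M:t].ravel())
        if N == 0:
            y.append(int(labels[t - 1]))          # detect current state
        else:
            y.append(int(labels[t:t + N].any()))  # predict next N minutes
    return np.array(X), np.array(y)
```

Growing M widens the input layer; growing N makes the prediction target look further ahead, trading earlier warning against accuracy.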

In some embodiments, the training method of the deep neural network 150 may include feeding the output data of the deep neural network 150 back to the deep neural network 150 and adjusting the weights of the deep neural network according to the filtered label data (or the filtered data) and the output data. Specifically, before data enters the input layer of the deep neural network 150, the deep neural network 150 may compare its output data with the filtered label data (or filtered data) of the future second period (for example, the next N minutes shown in FIG. 4), such as the filtered label data output from the second filtering module 140 or the filtered data output from the filtering module 120, where the output data of the deep neural network 150 is used to predict the occurrence of faults in the future second period. If the comparison is inconsistent, it means that the filtered label data the deep neural network 150 received from the second filtering module 140 did not exhibit the occurrence (or non-occurrence) of a fault that the deep neural network 150 predicted. Therefore, the deep neural network 150 may adjust the weights of each of its layers (for example, the input layer, hidden layers, and output layer) according to the filtered label data (or filtered data) and its own output data, improving the prediction precision of the deep neural network 150.

In some embodiments, the training method of the deep neural network 150 may include adopting back-propagation training and adding hidden layers to the deep neural network 150 layer by layer. FIG. 5 is a schematic diagram of the back-propagation deep neural network 150 according to an embodiment of the present invention. The deep neural network 150 may be trained by back-propagation, deepening the network by adding hidden layers one layer at a time. When the training error becomes sufficiently low, the addition of hidden layers stops. At that point, the deep neural network 150 has enough degrees of freedom to work with.

In some embodiments, the training method of the deep neural network 150 may include improving the training error rate by adjusting the training learning rate. Specifically, the test error is further refined by adjusting the training learning rate. This method avoids overfitting, that is, the U-shaped situation in which the training error rate keeps decreasing while the test error rate increases.

In some embodiments, the training method of the deep neural network 150 may include using at least one of RMSprop and Momentum as the optimizer. Specifically, the optimizer is chosen because a deep network is prone to vanishing gradients during back-propagation and can become trapped in a local optimum, so RMSprop or Momentum is adopted as the optimizer. There is no significant difference between the two; both converge quickly and can reach the optimal solution.
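A single RMSprop weight update using the Table 1 settings can be sketched as follows (illustrative only; here Table 1's "optimizer learning rate ε = 1e-5" is interpreted as the usual small denominator constant, which is an assumption about the table's notation):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-6, rho=0.9, eps=1e-5):
    # One RMSprop update: keep an exponential moving average of the
    # squared gradients (decay rate rho = 0.9 per Table 1) and scale
    # the step by its root, damping the vanishing-gradient problem.
    cache = rho * cache + (1 - rho) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

Because the step is normalized by the running gradient magnitude, directions with persistently tiny gradients still make progress, which is the motivation given above for preferring RMSprop (or Momentum) in deep back-propagation.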

In some embodiments, the training method of the deep neural network 150 may include feeding the output data of the deep neural network 150 back to the deep neural network 150, and adjusting the weights of the deep neural network 150 according to the filtered label data and the output data. In this way, the deep neural network 150 can take the output of the second filtering module 140 (or the filtering module 120), that is, the filtered label data or the filtered data, as the reference, and re-input the misclassified examples into the deep neural network 150 through feedback, thereby achieving adaptive learning of the deep neural network 150. FIG. 6 is a flowchart of the adaptive learning of the deep neural network according to an embodiment of the present invention.

As shown in FIG. 6, in step S601 a time t is defined. In step S602, data of a future second period (for example, the next N minutes) are obtained. In step S603, the collected data are output through the obstacle labeling module 130 (or the second filtering module 140). In step S604, the output of the deep neural network 150 at the same time as step S603 is obtained. In step S605, the output data of the obstacle labeling module 130 (or the second filtering module 140), for example the filtered label data, are compared with the output data of the deep neural network 150, that is, the two records are checked for consistency. If the comparison result is consistent, the flow proceeds to step S606 to re-run the adaptive learning flow at the next time point. If the comparison result is inconsistent, the flow proceeds to step S607, and the inconsistent examples are fed back to the deep neural network 150 for parameter correction.
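The comparison-and-feedback loop of steps S601 to S607 reduces to a small amount of control flow. The sketch below assumes the labeling module and the network each emit a single 0/1 verdict for the same period, and uses a callback as a stand-in for the actual parameter correction; both are simplifications for illustration.

```python
def adaptive_learning_step(labeler_output, network_output, feedback):
    """One pass of the FIG. 6 loop: compare the labeling module's verdict
    with the network's prediction for the same period (S605); on a match,
    move on to the next time point (S606); on a mismatch, re-feed the
    example for parameter correction (S607)."""
    if labeler_output == network_output:
        return "next_time_point"                   # S606
    feedback(labeler_output, network_output)       # S607: retraining hook
    return "parameters_corrected"

mismatches = []
status = adaptive_learning_step(1, 0, lambda y, p: mismatches.append((y, p)))
```

In a real deployment the `feedback` hook would re-weight or retrain the deep neural network on the mismatched example, as the surrounding text describes.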

Returning to FIG. 2, hung-cell (呆機) fault prediction is used as a simple example; other obstacles or anomalies follow analogously. In step S201, the database 110 may receive wireless access terminal data, core network signaling, and/or alarm messages from a mobile network. In step S202, the filtering module 120 decides, according to the obstacle type, to take the wireless access terminal data as the input of the obstacle labeling module 130, with the core network signaling serving as an auxiliary verification tool. In this embodiment, the filtering module 120 may, for example through statistical key-factor identification, select one-minute physical resource block (PRB) usage and the number of active users (UE-Active count) as the parameters for judging whether the cell is hung, and take these parameters as the input of the obstacle labeling module 130. In some embodiments, the filtering module 120 may decide, according to the number of hung-cell-related obstacle types in the alarm messages, whether to add a new hung-cell-related obstacle type to the obstacle types.

Next, in step S203, the obstacle labeling module 130 may label the filtered data to generate label data. In this embodiment, one minute of base station output data serves as the input of the obstacle labeling module 130, which then acts as the automatic labeling tool. In step S204, the second filtering module 140 may receive the label data from the obstacle labeling module 130 and filter the label data to generate filtered label data.

In step S205, the deep neural network 150 may receive the filtered label data from the second filtering module 140 and train on them according to the training method to update the deep neural network 150. The deep neural network 150 may further adjust the amount of input data of its input layer according to the front-window parameter of a sliding window, and adjust the amount of output data of its output layer according to the back-window parameter of the sliding window, where the sliding-window parameters are determined as follows: the back-window parameter is set (here, within ten minutes) based on the sensitivity requirement of the system, and the front-window parameter is obtained by cross-validation. The deep neural network 150 is trained by back-propagation, with accuracy, precision, and recall as the optimization targets. In addition, to avoid being trapped in a local optimum, RMSprop or Momentum is chosen as the optimizer; the ReLU function is used as the activation function in the hidden layers, while the Sigmoid function is used in the output layer to keep the output in probability form.
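The activation choices named in this step (ReLU in the hidden layers, Sigmoid at the output for probability-form results) can be illustrated with a minimal forward pass. The weights below are arbitrary illustrative numbers, not values from the patent.

```python
import math

def relu(x):
    """Hidden-layer activation, as named for the hidden layers."""
    return max(0.0, x)

def sigmoid(x):
    """Output-layer activation, keeping the output in probability form."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out):
    """Minimal forward pass: one ReLU hidden layer and a sigmoid output unit."""
    hidden = [relu(sum(w * x for w, x in zip(row, features))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Arbitrary illustrative weights; the result is a probability in (0, 1).
p = forward([0.8, 0.3], [[1.0, -0.5], [-0.2, 0.7]], [0.9, -1.1])
```

The sigmoid at the output is what makes the threshold adjustment discussed later meaningful, since the network emits a fault probability rather than a hard class.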

Features and effects: training a deep neural network is usually limited by the need for labeled data, and the alarm systems provided by equipment vendors are often insufficient to fully reflect the service quality of the mobile network. The method of learning-based prediction for base station anomalies provided by the present invention introduces core network signaling data into the obstacle labeling module to increase labeling accuracy and resolve the difficulty of automatic labeling, while the input of the deep neural network remains wireless access terminal data, learning from the obstacle labeling module that has already learned the core network data. This design mainly reflects the fact that core network data are hard to obtain, yet incorporating them into the obstacle labeling module increases the obstacle recognition rate. In addition, the sliding-window data reading method can effectively utilize past traffic data and freely control the prediction interval to determine the responsiveness of the system. Compared with other conventional techniques, the present invention has the following advantages:

The method of learning-based prediction for base station anomalies proposed by the present invention considers both core network signaling and wireless access terminal data to identify mobile network obstacles or anomalies. The data actually used depend on the obstacle type and form the basis of the obstacle labeling module, which in turn is used to train a deep neural network for obstacle prediction.

In the sliding-window data reading method proposed by the present invention, the front-window parameter determines that the data of a past first period (for example, the past M minutes) are used as the input of the deep neural network, and the back-window parameter determines that the obstacles that will occur at the base station in a future second period (for example, the next N minutes) are to be predicted. This method uses the data effectively, avoids the curse of dimensionality, and allows the prediction interval to be adjusted according to the sensitivity requirements of the system.
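A minimal sketch of this sliding-window reading follows. It assumes the input series is a list of (KPI value, fault flag) pairs sampled once per minute; that record layout is an assumption for illustration, as the patent does not fix a data format.

```python
def sliding_windows(series, front_m, back_n):
    """Turn a per-minute series of (kpi_value, fault_flag) pairs into
    (features, label) examples: the past front_m KPI values form the input,
    and the label is 1 if any fault occurs in the next back_n minutes."""
    examples = []
    for t in range(front_m, len(series) - back_n + 1):
        features = [kpi for kpi, _ in series[t - front_m:t]]
        label = int(any(flag for _, flag in series[t:t + back_n]))
        examples.append((features, label))
    return examples

minutes = [(0.2, 0), (0.3, 0), (0.9, 0), (0.95, 1), (0.4, 0)]
pairs = sliding_windows(minutes, front_m=2, back_n=2)
```

Enlarging `front_m` grows each example's feature dimension (the curse-of-dimensionality concern), while `back_n` sets how far ahead the system predicts, which is the sensitivity knob the text describes.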

Supervised learning methods such as deep learning and machine learning are limited by the need for labeled data. The alarm system of a base station is a good labeling tool, but it is constrained by the number and granularity of the obstacle or anomaly items defined by the system vendor; it may fail to produce alarms in real time when the system is busy, and finer-grained obstacles may need to be defined to obtain good mobile network quality monitoring. The obstacle labeling module completes the automatic labeling mechanism and further provides the labeled examples required for training the deep neural network.

Base station obstacles are often severely imbalanced in proportion; that is, the scarcity of training data for faulty stations causes the deep learning mechanism to misclassify. The symptom is a network output with high accuracy but low precision and recall, so threshold adjustment is additionally considered: the threshold of the output layer of the deep neural network is multiplied by the reciprocal of the ratio of positive to negative examples to correct the output decision. Moreover, when the amount of data is large, the threshold adjustment can be replaced by discarding positive-example data so that the ratio of the number of positive to negative examples approaches one.
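Both corrections described here are essentially one-liners. The first scales the output threshold by the reciprocal of the positive-to-negative ratio exactly as stated; the second drops examples from the larger class until the counts balance. The class labels and counts below are illustrative only.

```python
def corrected_threshold(base_threshold, n_pos, n_neg):
    """Multiply the output-layer threshold by the reciprocal of the
    positive-to-negative ratio, per the correction described above."""
    return base_threshold * (1.0 / (n_pos / n_neg))

def undersample(examples, labels, majority_label):
    """Large-data alternative: discard majority-class examples until the
    class count ratio approaches one (here, exactly one)."""
    minority = [(x, y) for x, y in zip(examples, labels) if y != majority_label]
    majority = [(x, y) for x, y in zip(examples, labels) if y == majority_label]
    return minority + majority[:len(minority)]
```

For instance, with 100 positive and 25 negative examples a base threshold of 0.5 is scaled by 25/100 to 0.125, lowering the bar for the under-represented class.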

The present invention can compare the output of the trained deep neural network with the output of the obstacle labeling module, and either retrain on the wrongly predicted examples or feed them back to the input of the deep neural network with larger weights to correct the parameters of the deep neural network, thereby achieving the advantage of online learning.

Based on the above, the present invention aims to use the advantages of big data to reduce the misjudgments introduced by human intervention in base station maintenance, and proposes a method for applying supervised learning techniques to systems without labeled data, thereby upgrading base station maintenance from detection to proactive prediction. The present invention uses traffic quality monitoring instruments installed in the mobile network equipment room to automatically monitor all aspects of mobile network traffic, and sends the traffic together with the alarm messages of the alarm system to the database for storage. The filtering module extracts the database data and builds key models for various obstacles, which can serve as tools for identifying and detecting various mobile network obstacles in the future. The obstacles may also be maintenance obstacles that are not shown in the alarm system. If the base station alarm system cannot effectively reflect the operating condition of the base station, humanly defined obstacle types are identified against the traffic data and core network data through statistical model identification to obtain key models that filter out unnecessary indicators, so as to speed up real-time operation of the system and avoid system instability caused by collinearity.

The output data of the obstacle labeling module can be used to train a deep neural network to proactively predict whether an obstacle is about to occur in the network. Example features for the neural network include traffic data, equipment age, maintenance cycle, and other related factors. To avoid the curse of dimensionality, statistical key-factor identification (or significant statistical factor identification) or principal component analysis (PCA) can be performed on the input examples for dimensionality reduction.
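As a concrete instance of the PCA option, the following pure-Python sketch projects examples onto their first principal component via mean-centering and power iteration. A production system would retain several components and rely on a numerical library rather than hand-rolled iteration; this is only to make the dimensionality-reduction step tangible.

```python
def first_principal_component(rows, iters=200):
    """Project examples onto their first principal component (1-D PCA):
    mean-center the data, form the covariance matrix, find its top
    eigenvector by power iteration, then project each centered example."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(a[i] * a[j] for a in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d                     # power iteration for the top eigenvector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(a[j] * v[j] for j in range(d)) for a in centered]
```

On perfectly correlated 2-D inputs the projection recovers the single underlying degree of freedom, which is exactly the redundancy PCA is meant to remove before training.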

The data can be consumed with a sliding-window method: the front-window parameter indicates that the data in a past first period are taken as the feature dimensions of one example, and the back-window parameter indicates that obstacles or anomalies occurring in a future second period are to be predicted. The specific window sizes are obtained by cross-validation on a test data set with minimum test error, or chosen according to the sensitivity and responsiveness requirements of the particular system.

In general, the alarm system cannot cover all network obstacles, possibly because the system vendor is unwilling to admit that defects exist, because operators need to define more detailed network maintenance obstacles or anomalies to facilitate maintenance, or because the system is too busy to produce real-time wireless access terminal data. One approach is to use core signaling data of the wireless network, such as control signaling, to derive obstacle judgment criteria for different usage scenarios and application contexts. Concretely, network experts can add the parameters of abnormal core network signaling one by one as inputs of the obstacle labeler. The benefit of one-by-one inclusion is that the change of entropy can be monitored sequentially as parameters are included, until the entropy reaches a previously defined tolerance value. Another benefit of this practice is that the complexity of the obstacle labeler can be controlled to maintain real-time operability of the system. An alternative approach is unsupervised learning, feeding the control signaling corresponding to normal and abnormal networks into a clustering algorithm, which can usually distinguish normal from abnormal states significantly. If instead the mutual information between the two is close to zero, the input parameters of the clustering algorithm must be adjusted appropriately; the clustering algorithm may judge whether the Euclidean or Manhattan distance of the input parameters is significantly larger than the normal value. If the dimensionality of the input parameters is too large, the Euclidean distance gradually becomes inapplicable, and some form of dimensionality reduction or correlation method, such as principal component analysis of the feature vectors, becomes a necessary preprocessing step.
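The one-by-one inclusion with entropy monitoring can be sketched as follows. The mapping from "parameters included so far" to the labeler's output labels is represented by a precomputed list, standing in for runs of the real obstacle labeler, and the stopping direction (stop once the entropy falls to the tolerance) is an assumption, since the text only says the entropy is monitored until it reaches the tolerance value.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of the labeler's output sequence."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def include_until_tolerance(labelings_per_step, tolerance):
    """Include signaling parameters one at a time; after each inclusion,
    check the entropy of the resulting labels and stop once it reaches
    the predefined tolerance. Returns how many parameters were included."""
    included = 0
    for labels in labelings_per_step:
        included += 1
        if entropy(labels) <= tolerance:
            break
    return included
```

Stopping early keeps the labeler small, which is the real-time operability benefit the text attributes to this practice.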

The deep neural network is trained by deepening it layer by layer, which mainly controls the degrees of freedom and avoids vanishing gradients that trap the network in a local optimum. The parameters of the deep neural network are optimized through cross-validation between the training set and the test set, and the output of the trained deep neural network is ultimately expected to be consistent with the output of the obstacle labeling module. If consistency cannot be reached, the output data of the wrongly estimated examples are fed back to the deep neural network for parameter adjustment, achieving online learning and the advantage of dynamic adjustment. In addition, the threshold adjustment issue is also included in the present invention to prevent the imbalance of base station obstacle labels from severely biasing the learning result.

The present invention can reduce operator misjudgments, improve the operating stability of base station equipment, and lower maintenance costs. The present invention mainly includes: (1) establishing a key-factor analysis system for obstacles, performing statistical key-factor identification or principal component analysis on the alarm messages of the alarm system, combined with expert opinions or based on traffic data, to obtain key models; the key models can filter out effective data, and if the alarm messages cannot effectively reflect the detailed quality indicators of the mobile network, the core network signaling data of the mobile network can be taken as input parameters of the filtering module; (2) establishing a deep neural network, determining its input and output architecture according to the alarm messages and traffic data; (3) training a back-propagation deep neural network, fine-tuning the amount of training data, the data feature engineering, the number of hidden layers of the deep learning network, the optimizer, and the training learning rate during training according to the training error rate, training accuracy, or training recall of the network; in addition, the output layer of the deep neural network must also consider the threshold adjustment issue, so as to avoid model bias caused by an extreme disparity between normal and abnormal examples in the base station training data; and (4) feedback and self-learning, comparing the prediction results of the neural network with the output of the obstacle labeling module and feeding back to the deep neural network in real time for parameter correction or architecture adjustment, achieving the purpose and advantages of online learning.

The above detailed description specifically illustrates one feasible embodiment of the present invention; however, this embodiment is not intended to limit the patent scope of the present invention, and any equivalent implementation or modification that does not depart from the technical spirit of the present invention shall be included in the patent scope of the present invention.

10‧‧‧System of learning-based prediction for anomalies within a base station

100‧‧‧Storage medium

110‧‧‧Database

120‧‧‧Filtering module

130‧‧‧Obstacle labeling module

140‧‧‧Second filtering module

150‧‧‧Deep neural network

20‧‧‧Method of learning-based prediction for anomalies within a base station

200‧‧‧Processor

I1, I2, In, Ii, H1, H2, Hn, Hj, y1, y2, ym, yk‧‧‧Neurons

S201, S202, S203, S204, S205, S301, S302, S303, S304, S305, S306, 601, 602, 603, 604, 605, 606, 607‧‧‧Steps

FIG. 1 is a schematic diagram of a system of learning-based prediction for anomalies within a base station according to an embodiment of the present invention. FIG. 2 is a flowchart of a method of learning-based prediction for anomalies within a base station according to an embodiment of the present invention. FIG. 3 is a flowchart of a detailed implementation of step S202 according to an embodiment of the present invention. FIG. 4 is a schematic diagram of a data management method using a sliding window according to an embodiment of the present invention. FIG. 5 is a schematic diagram of a back-propagation deep neural network according to an embodiment of the present invention. FIG. 6 is a flowchart of the adaptive learning of a deep neural network according to an embodiment of the present invention.


Claims (15)

1. A system of learning-based prediction for anomalies within a base station, comprising: a storage medium storing a plurality of modules; and a processor coupled to the storage medium, the processor accessing and executing the modules of the storage medium, the modules comprising: a database storing data; a filtering module filtering the data to generate filtered data; an obstacle labeling module labeling the filtered data to generate label data; a second filtering module filtering the label data to generate filtered label data; and a deep neural network trained according to the filtered label data and a training method to update the deep neural network, wherein if the number of output classes the deep neural network is to classify equals two, a binary cross-entropy function (binary_crossentropy) is selected as the loss function, and if the number of output classes the deep neural network is to classify is greater than two, a normalized exponential function (softmax) is selected as the loss function.

2. The system as described in claim 1, wherein the data comprise at least one of the following: wireless access terminal data, core network signaling, and alarm messages.

3. The system as described in claim 2, wherein the filtering module determines, according to the number of obstacle types in the alarm messages, whether to add a new obstacle type to the obstacle types.
4. The system as described in claim 2, wherein the filtering module generates a key model according to the data and a filtering method, wherein the key model filters out unnecessary indicators in the data to generate the filtered data, and the filtering method comprises at least one of statistical key-factor identification and principal component analysis.

5. The system as described in claim 1, wherein the second filtering module generates a second key model according to the label data and a second filtering method, wherein the second key model filters out unnecessary indicators in the label data to generate the filtered label data, and the second filtering method comprises at least one of statistical key-factor identification and principal component analysis.

6. The system as described in claim 1, wherein the training method comprises at least one of the following: adopting back-propagation training and adding hidden layers layer by layer; improving the training error rate by adjusting the training learning rate; using at least one of RMSprop and Momentum as an optimizer; adjusting the output threshold of the output layer of the deep neural network according to the ratio of positive to negative examples in the filtered label data; and feeding the output data of the deep neural network back to the deep neural network and adjusting the weights of the deep neural network according to the filtered label data and the output data.
7. The system as described in claim 6, wherein the adjusting of the output threshold of the output layer of the deep neural network according to the ratio of positive to negative examples in the filtered label data comprises at least one of the following: multiplying the output threshold by the reciprocal of the ratio of the positive examples to the negative examples; and, when the numbers of the positive examples and the negative examples reach a specific gap, discarding examples of whichever of the positive examples and the negative examples is more numerous so that the ratio of the positive examples to the negative examples approaches one.

8. The system as described in claim 6, wherein the deep neural network further comprises an input layer and an output layer, and the deep neural network further adjusts the amount of input data of the input layer according to a front-window parameter of a sliding window and adjusts the amount of output data of the output layer according to a back-window parameter of the sliding window, comprising: taking the filtered label data in a past first period as the input data of the input layer according to the front-window parameter; and estimating obstacles occurring in a future second period according to the back-window parameter to generate the output data of the output layer.
9. The system as described in claim 8, wherein the feeding of the output data of the deep neural network back to the deep neural network and the adjusting of the weights of the deep neural network according to the filtered label data and the output data comprise: comparing the output data with the filtered label data in the future second period, and, if the comparison result is inconsistent, adjusting the weights of the deep neural network according to the filtered label data and the output data.

10. The system as described in claim 1, wherein the data correspond to a mobile network, and the mobile network provides terminal mobile network services and uses at least one of the following communication standards: Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), High Speed Packet Access (HSPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and LTE-Advanced (LTE-A).

11. The system as described in claim 1, wherein the system is implemented in at least one of the following: an outdoor high-wattage base station, and a low-wattage micro base station managed through a self-organizing network.
12. The system as described in claim 2, wherein the alarm messages comprise at least one of the following: radio-frequency alarms, baseband processing unit alarms, connection alarms, single-board alarms, software upgrade configuration alarms, maintenance test alarms, hung cells, and restarts without warning.

13. The system as described in claim 1, wherein the wireless access terminal data comprise at least one of physical resource block usage, reference signal received power, received signal strength indicator, uplink and downlink throughput, numbers of uplink and downlink users, modulation and coding indicators, central processing unit usage, time since the equipment was last restarted, signaling establishment success rate, and memory usage, and the core network signaling comprises at least one of packet loss rate, transmission delay, bandwidth utilization, channel capacity, mobility management entity connection establishment success rate, and user application category.
The system as described in claim 1, wherein filtering the data to generate the filtered data includes: step one, selecting, according to the type of fault to be predicted, at least one of the radio access side data and the core network signaling as a first set; step two, removing from the first set the one example whose removal causes the least information reduction in the model built from the first set, to produce a second set; step three, computing the P value of each example in the model built from the second set and determining whether the largest P value is less than 1e-6; if so, taking the second set as the input of the fault labeling module; if not, deleting the example corresponding to the largest P value, taking the remaining examples as the updated first set, and re-executing step two. A method of learning-based prediction for anomalies within a base station, including: filtering data to generate filtered data; labeling the filtered data to generate label data; filtering the label data to generate filtered label data; and training a deep neural network according to the filtered label data and a training method to update the deep neural network, wherein if the number of output classes the deep neural network is to classify equals two, a logarithmic loss function (binary_crossentropy) is selected as the loss function, and if the number of output classes is greater than two, a normalized exponential function (softmax) is selected as the loss function.
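The stepwise filtering of claim 1 (drop the example contributing least to the model, then re-check P values against the 1e-6 threshold) can be sketched as a backward elimination over an ordinary least squares fit. For OLS, dropping the feature with the largest P value is equivalent to dropping the one whose removal reduces model fit least, so the two criteria coincide here. This is a minimal illustrative sketch, not the patented implementation: the feature names, synthetic data, and the normal approximation to the t-distribution P value are all assumptions.

```python
import numpy as np
from math import erfc, sqrt

def ols_p_values(X, y):
    """Fit OLS with an intercept; return a two-sided p-value per feature.

    Uses a normal approximation to the t distribution, which is
    adequate for the large sample sizes typical of base-station KPIs.
    """
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])          # prepend intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = n - Xd.shape[1]
    sigma2 = (resid @ resid) / dof                  # residual variance
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
    se = np.sqrt(np.diag(cov))
    t = beta / se
    p = np.array([erfc(abs(ti) / sqrt(2.0)) for ti in t])
    return p[1:]                                    # drop the intercept's p-value

def backward_eliminate(X, y, names, alpha=1e-6):
    """Repeat claim 1's steps two and three: remove the weakest feature
    until the largest remaining p-value is below alpha."""
    X = X.copy()
    names = list(names)
    while X.shape[1] > 1:
        p = ols_p_values(X, y)
        worst = int(np.argmax(p))
        if p[worst] < alpha:                        # all features significant: stop
            break
        X = np.delete(X, worst, axis=1)             # step two: drop weakest feature
        del names[worst]                            # step three loops back to step two
    return names
```

With a strong predictor and two noise features, only the predictor survives the 1e-6 threshold; the surviving set would then feed the fault labeling module.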
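The method claim pairs binary_crossentropy with two-class outputs and softmax with outputs of more than two classes. As a point of clarification, softmax is conventionally the output activation of the multi-class case and is paired with categorical cross-entropy as the actual loss; the numpy sketch below shows both quantities. Function names and the clipping epsilon are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-12):
    """Log loss for the two-class case (claim: output classes == 2)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)        # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

def softmax(z):
    """Normalized exponential function over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_crossentropy(y_true_onehot, logits, eps=1e-12):
    """Multi-class loss conventionally paired with a softmax output
    (claim: output classes > 2)."""
    p = np.clip(softmax(logits), eps, 1.0)
    return float(-np.mean(np.sum(y_true_onehot * np.log(p), axis=-1)))
```

A confident correct prediction drives either loss toward zero, which is what the training step of the claimed method minimizes when updating the deep neural network.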
TW107136316A 2018-10-16 2018-10-16 System and method of learning-based prediction for anomalies within a base station TWI684139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107136316A TWI684139B (en) 2018-10-16 2018-10-16 System and method of learning-based prediction for anomalies within a base station

Publications (2)

Publication Number Publication Date
TWI684139B true TWI684139B (en) 2020-02-01
TW202016805A TW202016805A (en) 2020-05-01

Family

ID=70413426

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107136316A TWI684139B (en) 2018-10-16 2018-10-16 System and method of learning-based prediction for anomalies within a base station

Country Status (1)

Country Link
TW (1) TWI684139B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI747452B (en) * 2020-08-20 2021-11-21 慧景科技股份有限公司 System, method and storage medium for intelligent monitoring of case field anomaly detection using artificial intelligence
TWI752577B (en) * 2020-08-03 2022-01-11 中華電信股份有限公司 Obstacle management system and method thereof
TWI770646B (en) * 2020-10-23 2022-07-11 遠傳電信股份有限公司 Iot devices management system and iot devices management method
TWI793604B (en) * 2021-05-12 2023-02-21 國立陽明交通大學 Method for fault diagnosis in communication network

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023038095A (en) * 2021-09-06 2023-03-16 オムロン株式会社 Device management system, method of estimating failure cause of device, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080185432A1 (en) * 2007-02-01 2008-08-07 Caballero Aldo M Apparatus and methods for monitoring one or more portable data terminals
US7555410B2 (en) * 2002-11-29 2009-06-30 Freescale Semiconductor, Inc. Circuit for use with multifunction handheld device with video functionality
US20120040662A1 (en) * 2010-08-12 2012-02-16 At&T Intellectual Property I, L.P. Redirecting handovers in lte networks
CN104869581A (en) * 2014-02-20 2015-08-26 香港城市大学 Determining faulty nodes via label propagation within wireless sensor network

Also Published As

Publication number Publication date
TW202016805A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
TWI684139B (en) System and method of learning-based prediction for anomalies within a base station
WO2020077672A1 (en) Method and device for training service quality evaluation model
WO2020077682A1 (en) Service quality evaluation model training method and device
Mulvey et al. Cell fault management using machine learning techniques
WO2021109578A1 (en) Method and apparatus for alarm prediction during service operation and maintenance, and electronic device
WO2017215647A1 (en) Root cause analysis in a communication network via probabilistic network structure
WO2021057576A1 (en) Method for constructing cloud network alarm root cause relational tree model, device, and storage medium
CN111047082A (en) Early warning method and device for equipment, storage medium and electronic device
CN113282635B (en) Method and device for positioning fault root cause of micro-service system
CN109872003B (en) Object state prediction method, object state prediction system, computer device, and storage medium
US11294754B2 (en) System and method for contextual event sequence analysis
CN105325023B (en) Method and the network equipment for cell abnormality detection
CN108366386B (en) Method for realizing wireless network fault detection by using neural network
CN112543465B (en) Abnormity detection method, abnormity detection device, terminal and storage medium
CN110335168B (en) Method and system for optimizing power utilization information acquisition terminal fault prediction model based on GRU
CN115809183A (en) Method for discovering and disposing information-creating terminal fault based on knowledge graph
EP4020315A1 (en) Method, apparatus and system for determining label
CN114511112B (en) Intelligent operation and maintenance method and system based on Internet of things and readable storage medium
WO2021103823A1 (en) Model update system, model update method, and related device
CN113542039A (en) Method for positioning 5G network virtualization cross-layer problem through AI algorithm
US20230034061A1 (en) Method for managing proper operation of base station and system applying the method
Tuli et al. Deepft: Fault-tolerant edge computing using a self-supervised deep surrogate model
CN110647086B (en) Intelligent operation and maintenance monitoring system based on operation big data analysis
CN117216713A (en) Fault delimiting method, device, electronic equipment and storage medium
Liu et al. KQis-driven QoE anomaly detection and root cause analysis in cellular networks