WO2023079567A1 - First node, second node, communications system and methods performed thereby for handling an anomalous event - Google Patents

First node, second node, communications system and methods performed thereby for handling an anomalous event

Info

Publication number
WO2023079567A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
data
events
anomalous
sequence
Prior art date
Application number
PCT/IN2021/051050
Other languages
English (en)
Inventor
Kiran Uppuluri PRATYUSH
Sharma Rahul
Bandyopadhyay SUBHADIP
Vuppala SUNIL KUMAR
Yerraguntla SREE KANTH REDDY
BANERJEE Serene
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IN2021/051050
Publication of WO2023079567A1


Classifications

    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 41/064: Management of faults, events, alarms or notifications using root cause analysis, involving time analysis
    • H04L 41/147: Network analysis or design for predicting network behaviour
    • H04W 24/04: Arrangements for maintaining operational condition
    • H04L 41/142: Network analysis or design using statistical or mathematical methods
    • H04L 41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H04L 43/16: Threshold monitoring
    • H04W 24/08: Testing, supervising or monitoring using real traffic

Definitions

  • the present disclosure relates generally to a first node and methods performed thereby for handling an anomalous event.
  • the present disclosure also relates generally to a second node, and methods performed thereby, for handling the anomalous event.
  • the present disclosure also relates generally to a communications system, and methods performed thereby, for handling the anomalous event.
  • the present disclosure further relates generally to computer programs and computer-readable storage mediums, having stored thereon the computer programs to carry out these methods.
  • Computer systems in a communications network may comprise one or more network nodes.
  • a node may comprise one or more processors which, together with computer program code may perform different functions and actions, a memory, a receiving port and a sending port.
  • a node may be, for example, a server. Nodes may perform their functions entirely on the cloud.
  • the communications network may cover a geographical area which may be divided into cell areas, each cell area being served by another type of node, a network node in the Radio Access Network (RAN), radio network node or Transmission Point (TP), for example, an access node such as a Base Station (BS), e.g. a Radio Base Station (RBS), which sometimes may be referred to as e.g., evolved Node B (“eNB”), “eNodeB”, “NodeB”, “B node”, or Base Transceiver Station (BTS), depending on the technology and terminology used.
  • the base stations may be of different classes such as e.g., Wide Area Base Stations, Medium Range Base Stations, Local Area Base Stations and Home Base Stations, based on transmission power and thereby also cell size.
  • a cell is the geographical area where radio coverage is provided by the base station at a base station site.
  • One base station, situated on the base station site, may serve one or several cells. Further, each base station may support one or several communication technologies.
  • the telecommunications network may also comprise network nodes which may serve receiving nodes, such as user equipments, with serving beams.
  • UEs within the communications network may be e.g., wireless devices, stations (STAs), mobile terminals, wireless terminals, terminals, and/or Mobile Stations (MS).
  • UEs may be understood to be enabled to communicate wirelessly in a cellular communications network or wireless communication network, sometimes also referred to as a cellular radio system, cellular system, or cellular network.
  • the communication may be performed e.g., between two UEs, between a wireless device and a regular telephone and/or between a wireless device and a server via a Radio Access Network (RAN) and possibly one or more core networks, comprised within the wireless communications network.
  • UEs may further be referred to as mobile telephones, cellular telephones, laptops, or tablets with wireless capability, just to mention some further examples.
  • the UEs in the present context may be, for example, portable, pocket-storable, hand-held, computer-comprised, or vehicle-mounted mobile devices, enabled to communicate voice and/or data, via the RAN, with another entity, such as another terminal or a server.
  • In 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE), base stations, which may be referred to as eNodeBs or even eNBs, may be directly connected to one or more core networks.
  • the expression Downlink (DL) may be used for the transmission path from the base station to the user equipment.
  • Uplink (UL) may be used for the transmission path in the opposite direction, i.e., from the wireless device to the base station.
  • anomalous events may be detected.
  • One such anomalous event may be Passive InterModulation (PIM).
  • PIM may be understood as a distortion in an uplink channel that may be understood to arise from harmonics of frequencies that may be used in the downlink, and from vibrations of a loose metal object around the antennas.
  • Uplink reception quality may be understood to be quite dependent upon the Signal to Interference plus Noise Ratio (SINR), which may also be quite dependent upon uplink noise.
  • Uplink noise may vary with time and by cell. Time-variant and bursty uplink noise sources may typically vary with network load. These sources may include UEs served by neighboring cells and Passive InterModulation (PIM).
  • PIM may be understood to be a product of downlink transmitters, which may mix to yield uplink interference.
  • dynamic interference sources may be addressed by dynamic link adaptation mechanisms which may configure scheduling grants, Modulation Coding Schemes (MCS) and other radio parameters per Transmission Time Interval (TTI), every millisecond.
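To make the per-TTI adaptation concrete, the sketch below picks a Modulation and Coding Scheme from a measured SINR once per TTI. This is an illustrative stand-in only: the thresholds and MCS indices are invented for the example and are not taken from any 3GPP table.

```python
# Hypothetical SINR-to-MCS lookup run once per TTI (1 ms).
# The (threshold_dB, mcs_index) pairs below are illustrative only.
MCS_TABLE = [(-float("inf"), 0), (0.0, 4), (5.0, 10), (10.0, 16), (20.0, 28)]

def select_mcs(sinr_db):
    """Return the highest MCS index whose SINR threshold is met."""
    mcs = 0
    for threshold, index in MCS_TABLE:
        if sinr_db >= threshold:
            mcs = index
    return mcs
```

In a real scheduler this selection would also account for scheduling grants and HARQ feedback, as the bullet above notes; the table lookup is only the core idea.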
  • sources may include active components in the radio path, including external interferers, distributed antenna systems, and repeaters and/or coverage enhancers.
  • PIM is a major concern for operators, as its effects are pronounced with carrier aggregation, rising downlink power and feeble and sensitive uplink reception, aging of metal components, changes in temperature and weather conditions, changes in environmental conditions, and many passive components having non-linear characteristics being placed close to the antennas.
  • Previous work [1] shows that by comparing the interference pattern of a primary cell and its immediate neighbors, it may be possible to find time series intervals of when a cell may be experiencing PIM. For example, if the noise is from traffic, the neighboring cells may also be experiencing a similar rise in uplink noise. But, for PIM, the noise rise may be understood to be local to that particular cell only, with minimal environmental impact. This is illustrated in Figure 1.
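The comparison in [1] can be sketched as a simple rule over logged interference traces: a rise shared with the neighbors is attributed to traffic, while a rise local to the primary cell is flagged as PIM-like. The function name, the 3 dB rise threshold, and the correlation cutoff below are assumptions for illustration, not values from the cited work.

```python
import numpy as np

def pim_intervals(primary, neighbors, rise_db=3.0, corr_max=0.5):
    """Flag time steps where the primary cell's uplink noise rises
    while its neighbors' noise stays flat (a PIM-like signature).

    primary   : 1-D array of uplink interference for the primary cell (dB)
    neighbors : 2-D array, one row per neighbor cell (dB)
    rise_db   : local rise above the primary's median treated as "high"
    corr_max  : if primary and mean-neighbor traces correlate above this,
                the rise is attributed to traffic, not PIM
    """
    primary = np.asarray(primary, dtype=float)
    neigh_mean = np.asarray(neighbors, dtype=float).mean(axis=0)

    # A rise shared with the neighbors points to traffic load, not PIM.
    shared = np.corrcoef(primary, neigh_mean)[0, 1]
    if shared > corr_max:
        return np.zeros_like(primary, dtype=bool)

    # Otherwise flag samples where only the primary cell is elevated.
    high_primary = primary > np.median(primary) + rise_db
    flat_neigh = neigh_mean < np.median(neigh_mean) + rise_db
    return high_primary & flat_neigh
```

This is only the hand-crafted baseline the disclosure improves upon; the embodiments replace such fixed thresholds with a learned model.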
  • Figure 1 depicts two graphical representations of average Signal to Interference plus Noise Ratio (SINR) in the Physical Uplink Control Channel (PUCCH) along time of two example cells, in panel a) and panel b), respectively, to show how to differentiate if a rise in uplink noise may be from traffic or from PIM.
  • the line with empty squares shows average interference of the primary cell
  • the line with solid circles shows the average SINR of the primary cell
  • the line with solid triangles shows the average interference of the first neighbor.
  • when the rise in uplink noise of the primary cell follows a pattern very similar to that of its neighbors, the noise likely originates from traffic.
  • when the noise rise is local to the primary cell only, the noise may likely originate from PIM.
  • PIM was corrected and hence a significant rise in SINR may be observed after the second day.
  • An operator may have intervened and fixed the cause.
  • For Cell 19, it may be seen that PIM also occurred on the second day, as there is high noise only on the primary cell, but not on the neighbour. This may have been caused by a temporary metal object that gave rise to the artifact for a day. The problem appears to have been temporary, as it disappears after a day.
  • the nominal power, indicated by the p0 change in Figure 1, was changed by the operator, as indicated. However, the discussion on this change is beyond the scope of the document.
  • the learning-based approaches referred to in the Background section [1-4] have to learn thresholds, or adjust network weights, dynamically. As they relate to a learning-based system, and environment conditions, such as the number of user equipments connected, their velocities, etc., may be changing, the statistics of the interference patterns may be understood to also keep changing. Hence, the thresholds and the weights of the models that may be generated may need to be tuned adaptively based on environmental conditions and intermediate outputs. These adaptations may not always be explainable. That is, while empirically the selection may work on many types of data, the mathematical formulations of why certain values work may be understood to be still under research.
  • Traditional offline reinforcement learning may be understood to refer to methods wherein an agent may be trained with historical data. This suffers from the problem that the policy that the trained agent may use may not match the policy by which the logged data may have been collected.
  • the environmental conditions in radio settings keep changing dynamically. These may include, number of user equipments connected, their speeds, their relative positions, atmospheric conditions, antenna power, antenna tilt, etc.
  • statistics of any data collected, such as interference may be understood to be different for different locations and different days and different time of the days. Therefore, it may be difficult to match the statistics of when the training data was collected, and the statistics of the place where the model may be deployed.
  • Off-policy corrections usually suffer from high variances. Prevalent state-of-the-art methods in transfer learning or model retraining may not work perfectly.
  • online data drift may be understood to be very common in radio environments, which makes it even more difficult to match training and test conditions.
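The high variance mentioned above can be made concrete. The standard per-trajectory importance weight used in off-policy correction is a product of per-step policy ratios, so even with a fixed per-step mismatch its variance grows geometrically with the horizon. A minimal sketch for a two-action problem with i.i.d. steps (closed form; the function name is assumed):

```python
def is_weight_variance(target_p, behaviour_p, horizon):
    """Variance of the per-trajectory importance weight
        w = prod_t pi(a_t) / beta(a_t),   a_t ~ beta,
    for i.i.d. steps. Since E[w] = 1 and
        E[w^2] = (sum_a pi(a)^2 / beta(a)) ** horizon,
    the variance is E[w^2] - 1, growing geometrically with horizon.
    """
    second_moment_step = sum(p * p / b for p, b in zip(target_p, behaviour_p))
    return second_moment_step ** horizon - 1.0
```

When the target and behaviour policies match, the variance is zero; with a modest mismatch it already explodes over a few tens of steps, which is the practical obstacle the bullet points describe.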
  • it is an object of embodiments herein to improve the handling of an anomalous event, such as PIM, in a communications system. More particularly, it is an object of embodiments herein to improve the prediction of an anomalous event, such as PIM, in a communications system.
  • the object is achieved by a computer- implemented method, performed by a first node.
  • the method is for handling an anomalous event.
  • the first node operates in a communications system.
  • the first node obtains a first set of data comprising a first sequence of events over a first time period.
  • the first sequence of events has been categorized as normal or anomalous.
  • the first node determines, using machine self-supervised reinforcement learning, a predictive model of the anomalous event based on the obtained first set of data.
  • the first node then provides a first indication based on the determined predictive model to a second node operating in the communications system.
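The three actions above (obtain a labelled event sequence, determine a predictive model, provide an indication) can be sketched end to end. This is a hypothetical illustration only: the bigram frequency model below stands in for the claimed self-supervised reinforcement-learning model, and all class, method, and field names are invented.

```python
from collections import Counter

class FirstNode:
    """Illustrative sketch of the claimed flow: obtain a labelled event
    sequence, fit a trivial next-event predictor, and emit a first
    indication to a second node. A bigram frequency model stands in for
    the patent's self-supervised reinforcement-learning model."""

    def __init__(self):
        self.counts = Counter()

    def obtain_and_train(self, events):
        # events: sequence of labels, each "normal" or "anomalous"
        for prev, nxt in zip(events, events[1:]):
            self.counts[(prev, nxt)] += 1

    def predict_anomalous(self, last_event):
        # Probability that the next event is anomalous given the last one.
        total = sum(v for (prev, _), v in self.counts.items() if prev == last_event)
        if total == 0:
            return 0.0
        return self.counts[(last_event, "anomalous")] / total

    def provide_indication(self, last_event, threshold=0.5):
        # The "first indication" sent to the second node.
        return {"anomalous_event_predicted":
                self.predict_anomalous(last_event) > threshold}
```

The second node would then act on the indication, e.g. by initiating a preventive action, as the next aspect describes.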
  • the object is achieved by a computer-implemented method, performed by the second node.
  • the method is for handling the anomalous event.
  • the second node operates in the communications system.
  • the second node receives the first indication from the first node operating in the communications system.
  • the first indication indicates the future occurrence of the anomalous event in the communications system.
  • the first indication is based on the predictive model determined using machine self-supervised reinforcement learning.
  • the second node then initiates performing an action to prevent the anomalous event.
  • the object is achieved by a computer-implemented method, performed by the communications system.
  • the method is for handling the anomalous event.
  • the communications system comprises the first node and the second node.
  • the method comprises obtaining, by the first node, the first set of data comprising the first sequence of events over the first time period.
  • the first sequence of events has been categorized as normal or anomalous.
  • the method also comprises determining, by the first node, using machine self-supervised reinforcement learning, the predictive model of the anomalous event based on the obtained first set of data.
  • the method then comprises providing, by the first node, the first indication based on the determined predictive model to the second node operating in the communications system.
  • the method also comprises receiving, by the second node, the first indication from the first node operating in the communications system.
  • the first indication indicates the future occurrence of the anomalous event in the communications system.
  • the first indication is based on the predictive model determined using machine self-supervised reinforcement learning.
  • the method additionally comprises initiating, by the second node, performing an action to prevent the anomalous event.
  • the object is achieved by the first node, for handling the anomalous event.
  • the first node is configured to operate in the communications system.
  • the first node is further configured to obtain the first set of data configured to comprise the first sequence of events over the first time period.
  • the first sequence of events is configured to have been categorized as normal or anomalous.
  • the first node is also configured to determine, using machine self-supervised reinforcement learning, the predictive model of the anomalous event based on the first set of data configured to be obtained.
  • the first node is further configured to provide the first indication based on the predictive model configured to be determined to the second node configured to operate in the communications system.
  • the object is achieved by the second node, for handling the anomalous event.
  • the second node is configured to operate in the communications system.
  • the second node is further configured to receive the first indication from the first node configured to operate in the communications system.
  • the first indication is configured to indicate the future occurrence of the anomalous event in the communications system.
  • the first indication is configured to be based on the predictive model configured to be determined using machine self-supervised reinforcement learning.
  • the second node is also configured to initiate performing the action to prevent the anomalous event.
  • the object is achieved by the communications system, for handling the anomalous event.
  • the communications system comprises the first node and the second node.
  • the communications system is further configured to obtain, by the first node, the first set of data configured to comprise the first sequence of events over the first time period.
  • the first sequence of events is configured to have been categorized as normal or anomalous.
  • the communications system is also configured to determine, by the first node, using machine self-supervised reinforcement learning, the predictive model of the anomalous event based on the first set of data configured to be obtained.
  • the communications system is further configured to provide, by the first node, the first indication based on the predictive model configured to be determined to the second node configured to operate in the communications system.
  • the communications system is additionally configured to receive, by the second node, the first indication from the first node configured to operate in the communications system.
  • the first indication is configured to indicate the future occurrence of the anomalous event in the communications system.
  • the first indication is configured to be based on the predictive model configured to be determined using machine self-supervised reinforcement learning.
  • the communications system is also configured to initiate, by the second node, performing the action to prevent the anomalous event.
  • the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first node.
  • the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first node.
  • the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second node.
  • the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second node.
  • the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the communications system.
  • the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the communications system.
  • the first node may then be enabled to train the predictive model of the anomalous event.
  • the first node may be enabled to learn to predict occurrences of the anomalous event before it may happen, based on historical trends.
  • the first node may therefore be enabled to prevent the occurrence of the anomalous event, or to dampen its occurrence. This may be achieved by the first node sending the first indication to the second node, indicating that the anomalous event is predicted, which may in turn enable the second node to take an action to prevent or remedy the occurrence of the event.
  • the first node may learn which factors may precede, lead or correlate with the occurrence of the anomalous event, and enable to derive how to counteract them to prevent or dampen the occurrence of the anomalous event.
  • the performance of the communications system may have fewer occurrences of anomalous events and may therefore be improved.
  • Figure 1 depicts two graphical representations of average SINR in the PUCCH, along time, of two example cells, in panel a) and panel b), respectively.
  • Figure 2 is a schematic diagram illustrating a non-limiting example of a communications system, according to embodiments herein.
  • Figure 3 is a flowchart depicting embodiments of a method in a first node, according to embodiments herein.
  • Figure 4 is a graphical representation of an example of a time sequence of interference, according to embodiments herein.
  • Figure 5 is a graphical representation of an example of a modified time sequence of interference, according to embodiments herein.
  • Figure 6 is a flowchart depicting embodiments of a method in a second node, according to embodiments herein.
  • Figure 7 is a flowchart depicting embodiments of a method in a communications system, according to embodiments herein.
  • Figure 8 is a schematic diagram depicting a non-limiting example of decreasing loss with training, according to embodiments herein.
  • Figure 9 is a schematic diagram depicting a non-limiting example of a method according to embodiments herein.
  • Figure 10 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a first node, according to embodiments herein.
  • Figure 11 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a second node, according to embodiments herein.
  • Figure 12 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a communications system, according to embodiments herein.
  • embodiments herein may be understood to relate to a method, system and framework for self-supervised reinforcement learning for detection and mitigation of an anomalous event, e.g., any local problem experienced by antennas.
  • Particular embodiments herein may relate to a method, system and framework for self-supervised reinforcement learning for PIM mitigation.
  • Embodiments herein may relate to a system that may use self-supervised reinforcement learning to predict occurrences of an anomalous event such as PIM, based on logged data of interference from a primary cell and its neighbors.
  • the supervised learning approach previously described in [3] may be understood to be augmented with, first, a conventional self-supervised layer trained with cross-entropy loss to perform ranking, and second, an RL-based layer where rewards may be defined based on past PIM detections in historic data.
  • the supervised learning approach previously described in [3] may be understood to be extended further to build a self-supervised reinforcement learning system, that may be able to predict PIM patterns, given historic data of interference. The approach may be tested with one month of real network data collected at 15 minutes intervals.
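The two added training signals can be sketched as a combined loss: a self-supervised cross-entropy term that ranks the true next event highest, plus a REINFORCE-style term that weights the same log-probability by a reward derived from historic PIM detections. The softmax head, the exact reward-weighted form, and the balancing factor `alpha` are assumptions for illustration, not taken from [3] or the disclosure.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combined_loss(logits, next_event, rewards, alpha=0.5):
    """Sketch of the two training signals (assumed form):
    - a self-supervised cross-entropy term ranking the true next event
      in the sequence highest, and
    - a reward-weighted (REINFORCE-style) term, with rewards derived
      from historic PIM detections.
    logits     : (batch, n_events) scores for the next event
    next_event : (batch,) indices of the observed next event
    rewards    : (batch,) rewards from past detections
    """
    p = softmax(logits)
    idx = np.arange(len(next_event))
    log_p_true = np.log(p[idx, next_event])
    ce = -log_p_true                # self-supervised ranking loss
    rl = -(rewards * log_p_true)    # reward-weighted policy term
    return float(np.mean(ce + alpha * rl))
```

Lower loss corresponds to the model assigning higher probability to event transitions that actually occurred, with reward-bearing (PIM-preceding) transitions weighted more strongly.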
  • embodiments herein may be understood to provide a framework to use historical data to train a system to learn patterns that may give rise to an anomalous event, based on self-supervised reinforcement learning for sequential recommendation tasks from historical data.
  • Particular embodiments herein may be understood to provide a framework to use historical data to train a system to learn potential interference patterns that may give rise to PIM, based on self-supervised reinforcement learning for sequential recommendation tasks from historical data [5].
  • Figure 2 depicts two non-limiting examples, in panels “a” and “b”, respectively, of a communications system 100, in which embodiments herein may be implemented.
  • the communications system 100 may be a computer network.
  • the communications system 100 may be implemented in a telecommunications system, sometimes also referred to as a telecommunications network, cellular radio system, cellular network or wireless communications system.
  • the telecommunications system may comprise network nodes which may serve receiving nodes, such as wireless devices, with serving beams.
  • the telecommunications system may for example be a network such as 5G system, or a newer system supporting similar functionality.
  • the telecommunications system may also support other technologies, such as a Long-Term Evolution (LTE) network, e.g.
  • LTE Frequency Division Duplex (FDD), LTE Time Division Duplex (TDD), LTE Half-Duplex Frequency Division Duplex (HD-FDD), LTE operating in an unlicensed band, Wideband Code Division Multiple Access (WCDMA), Universal Terrestrial Radio Access (UTRA) TDD, Global System for Mobile communications (GSM) network, GSM/Enhanced Data Rate for GSM Evolution (EDGE) Radio Access Network (GERAN) network, Ultra-Mobile Broadband (UMB), EDGE network, or a network comprising any combination of Radio Access Technologies (RATs).
  • the telecommunications system may for example support a Low Power Wide Area Network (LPWAN).
  • LPWAN technologies may comprise Long Range physical layer protocol (LoRa), Haystack, SigFox, LTE-M, and Narrow-Band IoT (NB-IoT).
  • the communications system 100 may comprise a plurality of nodes, whereof a first node 111, and a second node 112 are depicted in Figure 2. Any of the first node 111 and the second node 112 may be understood, respectively, as a first computer system and a second computer system. In some examples, any of the first node 111 and the second node 112 may be implemented as a standalone server in e.g., a host computer in the cloud 120, as depicted in the non-limiting example depicted in panel b) of Figure 2.
  • any of the first node 111 and the second node 112 may in some examples be a distributed node or distributed server, with some of their respective functions being implemented locally, e.g., by a client manager, and some of its functions implemented in the cloud 120, by e.g., a server manager. Yet in other examples, any of the first node 111 and the second node 112 may also be implemented as processing resources in a server farm.
  • any of the first node 111 and the second node 112 may be independent and separated nodes.
  • the first node 111 and the second node 112 may be co-localized, or be the same node. All the possible combinations are not depicted in Figure 2 to simplify the Figure. It may be understood that the communications system 100 may comprise more nodes than those represented on panel a) of Figure 2.
  • the first node 111 may be understood as a node having a capability to train a predictive model using machine self-supervised reinforcement learning in the communications system 100.
  • a non-limiting example of the first node 111 may be, e.g., in embodiments wherein the communications system 100 may be a 5G network, a Network Data Analytics Function (NWDAF), or, e.g., the central unit (CU) and a distributed unit (DU) of a radio network node.
  • the second node 112 may be a node having a capability to execute a machine learning predictive model.
  • the second node 112 may be, e.g., a Radio Unit (RU), or a CU and a DU of another radio network node.
  • the communications system 100 may comprise one or more radio network nodes, whereof a radio network node 130 is depicted in Figure 2.
  • the radio network node 130 may typically be a base station or Transmission Point (TP), or any other network unit capable to serve a wireless device or a machine type node in the communications system 100.
  • the radio network node 130 may be e.g., a 5G gNB, a 4G eNB, or a radio network node in an alternative 5G radio access technology, e.g., fixed or WiFi.
  • the radio network node 130 may be e.g., a Wide Area Base Station, Medium Range Base Station, Local Area Base Station and Home Base Station, based on transmission power and thereby also coverage size.
  • the radio network node 130 may be a stationary relay node or a mobile relay node.
  • the radio network node 130 may support one or several communication technologies, and its name may depend on the technology and terminology used.
  • the radio network node 130 may be directly connected to one or more networks and/or one or more core networks.
  • the communications system 100 may cover a geographical area, which in some embodiments may be divided into cell areas, wherein each cell area may be served by a radio network node, although, one radio network node may serve one or several cells.
  • the network node 130 serves a first cell 141.
  • the first cell 141 may have one or more neighbor cells 142.
  • Two neighbor cells are depicted in Figure 2, but it may be understood that the one or more neighbor cells 142 may comprise more or fewer cells.
  • the network node 130 may be of different classes, such as, e.g., macro eNodeB, home eNodeB or pico base station, based on transmission power and thereby also cell size.
  • the network node 130 may serve receiving nodes with serving beams.
  • the radio network node may support one or several communication technologies, and its name may depend on the technology and terminology used. Any of the radio network nodes that may be comprised in the communications network 100 may be directly connected to one or more core networks.
  • the communications system 100 may comprise a plurality of devices whereof a device 150 is depicted in Figure 2.
  • the device 150 may be also known as e.g., user equipment (UE), a wireless device, mobile terminal, wireless terminal and/or mobile station, mobile telephone, cellular telephone, or laptop with wireless capability, or a Customer Premises Equipment (CPE), just to mention some further examples.
  • the device 150 in the present context may be, for example, portable, pocket-storable, hand-held, computer-comprised, or a vehicle-mounted mobile device, enabled to communicate voice and/or data, via a RAN, with another entity, such as a server, a laptop, a Personal Digital Assistant (PDA), or a tablet computer, sometimes referred to as a tablet with wireless capability, or simply tablet, a Machine-to-Machine (M2M) device, a device equipped with a wireless interface, such as a printer or a file storage device, modem, Laptop Embedded Equipment (LEE), Laptop Mounted Equipment (LME), USB dongles, CPE or any other radio network unit capable of communicating over a radio link in the communications system 100.
  • the device 150 may be wireless, i.e., it may be enabled to communicate wirelessly in the communications system 100 and, in some particular examples, may be able to support beamforming transmission.
  • the communication may be performed e.g., between two devices, between a device and a radio network node, and/or between a device and a server.
  • the communication may be performed e.g., via a RAN and possibly one or more core networks, comprised, respectively, within the communications system 100.
  • the first node 111 may communicate with the second node 112 over a first link 151, e.g., a radio link or a wired link.
  • the first node 111 may communicate with the radio network node 130 over a second link 152, e.g., a radio link or a wired link.
  • the radio network node 130 may communicate, directly or indirectly, with the device 150 over a third link 153, e.g., a radio link or a wired link.
  • Any of the first link 151, the second link 152 and/or the third link 153 may be a direct link or it may go via one or more computer systems or one or more core networks in the communications system 100, or it may go via an optional intermediate network.
  • the intermediate network may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network, if any, may be a backbone network or the Internet, which is not shown in Figure 2.
  • “first”, “second”, and/or “third” herein may be understood to be an arbitrary way to denote different elements or entities, and may be understood to not confer a cumulative or chronological character to the nouns these adjectives modify.
  • Embodiments of a computer-implemented method, performed by the first node 111 will now be described with reference to the flowchart depicted in Figure 3.
  • the method may be understood to be for handling an anomalous event.
  • the first node 111 operates in the communications system 100.
  • the anomalous event may be passive intermodulation (PIM). While PIM has been used as an illustrative example of an anomalous event in embodiments herein, embodiments herein may be understood to not only be applicable to PIM, but may be equally applicable to other anomalies in a time series.
  • the wireless communications network 100 may support at least one of: New Radio (NR), Long Term Evolution (LTE), LTE for Machines (LTE-M), enhanced Machine Type Communication (eMTC), and Narrow Band Internet of Things (NB-IoT).
  • the method may comprise the actions described below. In some embodiments, all the actions may be performed. In other embodiments, some of the actions may be performed. One or more embodiments may be combined, where applicable. Components from one embodiment may be tacitly assumed to be present in another embodiment and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments. All possible combinations are not described to simplify the description.
  • a non-limiting example of the method performed by the first node 111 is depicted in Figure 2. In Figure 2, optional actions in some embodiments may be represented with dashed lines.
  • the first node 111 obtains a first set of data.
  • the first set of data comprises a first sequence of events over a first time period.
  • the first sequence of events has been categorized as either normal or anomalous.
  • the first sequence of events may be understood as raw, or processed data which may be uncategorized.
  • the first sequence of events may comprise, for example, interference, e.g., sampled at 15 minutes or less.
  • the first set of data may be understood as the first sequence of events that may have been categorized, as either normal or anomalous.
  • the obtaining in this Action 301 may comprise retrieving, collecting, measuring or receiving, e.g., from another node, which may be operating in the communications system 100.
  • the anomalous event may be PIM
  • each cell may be denoted by c_i
  • the interference that the cells may be experiencing at any point in time may be the first sequence of events given as {x_1, x_2, …, x_{t-1}, x_t}.
  • the values of interference may range from -125 decibel-milliwatts (dBm), that is, a unit of level used to indicate that a power ratio is expressed in decibels (dB) with reference to one milliwatt (mW), to -96 dBm.
  • the values may be quantized between 0 to 7000.
  • the first sequence of events may be limited; since all the values between 0 and 7000 may not be Gaussian distributed, the sequence may be pre-processed so that at least one sample of each value is included.
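  • the quantization described above may be sketched as follows; the linear mapping from the -125 to -96 dBm range onto the 0 to 7000 scale is an illustrative assumption, as the exact mapping is not specified herein.

```python
def quantize_dbm(value_dbm, lo=-125.0, hi=-96.0, levels=7001):
    """Map an interference value in dBm onto the quantized 0..7000 scale.

    Values outside [lo, hi] are clipped before scaling (assumed behavior).
    """
    clipped = min(max(value_dbm, lo), hi)
    return round((clipped - lo) / (hi - lo) * (levels - 1))
```

For example, -125 dBm maps to level 0 and -96 dBm to level 7000 under this assumed linear scale.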
  • the original histogram of the interference values in a non-limiting example of an interference sequence is depicted in Figure 4.
  • In Figure 4, the horizontal axis is time in 15-minute intervals, and the vertical axis is interference in dBm.
  • Figure 5 depicts the same first set of data, wherein the interference value that had the maximum number of samples has been modified to add the missing values, to ensure that all values between 0 and 7000 are present.
  • In Figure 5, the horizontal axis is also time in 15-minute intervals, and the vertical axis is interference in dBm. While the first sequence of events in the non-limiting examples of Figure 4 and Figure 5 corresponds to a simulation, in a real-life setting, there may be enough samples, and this modification may be understood to not be necessary. However, for any occasional occurrence of missing values, this strategy may also be used.
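  • the modification depicted in Figure 5, wherein samples of the most frequent interference value are reassigned so that every quantized level is present at least once, may be sketched as follows; the borrowing strategy and the small number of levels are illustrative assumptions.

```python
from collections import Counter

def fill_missing_levels(samples, levels=8):
    """Ensure every quantization level appears at least once by moving
    samples away from the currently most frequent level (cf. Figure 5)."""
    counts = Counter(samples)
    filled = list(samples)
    for level in range(levels):
        if counts[level] == 0:
            donor = counts.most_common(1)[0][0]   # most frequent level
            idx = filled.index(donor)             # take one of its samples
            filled[idx] = level
            counts[donor] -= 1
            counts[level] += 1
    return filled
```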
  • the first set of data may comprise three series of quantized values comprising: a) a first series of quantized values indicating first interference in the first cell 141, b) a second series of quantized values indicating second interference in one or more neighbor cells 142 of the first cell 141, and c) a third series of quantized values indicating signal to noise ratio in the first cell 141.
  • the first set of data may comprise anomalous events, such as e.g., PIM, detected in historical data with machine-learning methods.
  • the first set of data may have been generated by having input the first sequence of events, e.g., the first series of quantized values, the second series of quantized values and the third series of quantized values, to a node having a capability to detect anomalous events such as PIM with AI/ML based methods.
  • This node may be the first node 111 itself, or another node, e.g., comprised in the communications system 100, from which the first set of data may be then received in this Action 301.
  • the first sequence of events may comprise, for example, interference, sampled at 15 minutes or less.
  • the first set of data may have been generated according to existing methods, such as those described in [1-4, 7].
  • This AI/ML-based detection method of the anomalous event, e.g., PIM, may comprise an ensemble of approaches that may be able to detect the anomalous event, e.g., PIM, as it may happen, or post-occurrence. It may be understood that these other methods may be able to detect, but not to predict, the occurrence of the event, as opposed to the method of embodiments herein.
  • fault traces may be understood as human-expert traces. For example, whenever there is an anomaly and a subsequent root cause detection, fault traces may be collected so that from the trace, the sequence of events may be analysed either by a radio engineer or by a machine learning model. For example, if there is a throughput degradation in one of the nodes due to a sleeping cell issue, before the cell goes into sleep mode there may be traces that may be indicative that the cell may be going to be in sleep mode, and this may be captured in the fault traces. Domain experts may manually go through the fault traces and find out the root cause, or state-of-the-art may use machine learning models to automate the same.
  • anomalies in the first sequence of events may be marked to categorize the events in the first sequence of events as normal or anomalous, resulting in the first set of data. That is, in some embodiments, the first set of data may comprise a correlation between the anomalous events detected in the historical data with machine-learning methods, and fault trace parsing based on machine-learning of the same historical data.
  • the first node 111 may then be enabled to train a predictive model of the anomalous event in the next Action 302, thereby enabling that the occurrence of an anomalous event may be predicted before it may happen, thereby enabling that the performance of the communications system 100 may have fewer occurrences of anomalous events and may therefore be improved.
  • the first node 111 determines, using self-supervised reinforcement learning, a predictive model of the anomalous event based on the obtained first set of data.
  • Determining may be understood as calculating, generating, e.g., by training, or deriving. Based on the obtained first set of data may be understood to mean that the predictive model may be trained using the obtained first set of data, as will be explained below.
  • the first node 111 may learn to predict occurrences of the anomalous event based on load, or any other factor.
  • a neural network may learn non-linear mappings based on training with the obtained first set of data.
  • the determining in this Action 302 may be performed by two different layers comprised in the first node 111: an RL layer and a self-supervised learning layer.
  • the RL-layer may act as a regularizer to capture patterns of interference that may lead to PIM, and the self-supervised learning layer may influence the parameter updates.
  • a regularizer in machine learning may be understood to fit a function to a training set and may try to reduce the training set error. At the same time, the regularizer may try to see that there is no overfitting.
  • the task may be understood to be to forecast the anomaly.
  • In Action 302, from historical data, regions of the anomaly may be marked, and in a first step, (i), the supervised learning may try to match interference/load to prediction of the anomalies. In a second step, (ii), this may be performed with Q-learning, so that the Q-values, or quality values, may be given more importance, so that conditions may be chosen so that they may map to the anomalies. Steps (i) and (ii) may be considered in parallel; (ii) may use temporal difference (TD) error based learning. Then, (i) and (ii) may be combined in a soft actor-critic approach, where (i) may become the actor and (ii) may become the critic. A neural network may try to find parameters that may minimize the loss.
  • This may be referred to as gradient descent. That is, parameters may be found where the gradient may be zero, that is, where the loss may be at a minimum.
  • the feedback from the critic may be used to decide when to stop gradient descent.
  • a model may be determined that may forecast an anomaly based on the conditions, such as, interference and load.
  • the regularizer may be understood to be the module that may try to find the neural network model to the training data, and at the same time ensure that there is no overfitting.
  • the parameters may be understood to refer herein to coefficients of neural models for the approach.
  • PIM may be understood to occur when interference may be on only one cell, and not on the neighbors. That is, it may be understood to be a local problem and may be understood to not be an effect of traffic. For traffic, the effect may be seen in all the neighbors. This may be posed as a multi-class classification problem, that may generate a binary sequence [y_1, y_2, …, y_n] indicating presence or absence of PIM.
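  • a minimal rule-based sketch of this formulation, marking PIM when the primary cell's quantized interference is high while the neighbors' is not, may look as follows; the threshold and the neighbor-average rule are illustrative assumptions, not the trained classifier itself.

```python
def mark_pim(primary, neighbors, threshold=4000):
    """Return a binary sequence y_t: 1 when the primary cell's quantized
    interference is high while the neighbor average stays low (a local
    anomaly), 0 otherwise (e.g., traffic, which affects all neighbors)."""
    y = []
    for p, ns in zip(primary, neighbors):
        neighbor_avg = sum(ns) / len(ns)
        y.append(1 if p >= threshold and neighbor_avg < threshold else 0)
    return y
```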
  • This may be understood to pertain to the first step described above to map positions of anomalies to the underlying environmental conditions.
  • y_t values may mark the anomaly
  • the top-k marked anomalies that fit the model may be chosen.
  • the top-k items from y_t+1 may then be chosen as a PIM presence/absence recommendation list for timestamp t+1.
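  • the selection of the top-k marked anomalies may be sketched as follows, with per-position anomaly scores as an assumed input.

```python
def top_k_anomalies(scores, k):
    """Return indices of the k highest anomaly scores, best first."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```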
  • Any generative model may be used to get this mapping of the underlying conditions to the anomalies.
  • a non-limiting example used in illustrative examples herein may be a self-attention transformer model.
  • Reinforcement Learning may then be used for forecasting the anomalous event, e.g., PIM.
  • the next occurrence of the anomalous event, e.g., PIM may be formulated as, e.g., a Markov Decision Process (MDP), where state, action and rewards may be used as described below.
  • the self-supervised reinforcement learning may use, as a state, first quantized values, from the obtained first set of data, of sequential interference that the first cell 141 may be experiencing. Higher interference may lead to call drops, service degradation, etc.
  • the state may be understood as the one or more conditions that may give rise to an anomaly. An anomaly may be understood to mean that interference was higher than a certain threshold.
  • sequential interference may be used in the event that the anomalous event may be PIM or a related event.
  • Other sequential data may be chosen as the state for other anomalous events.
  • Self-supervised reinforcement learning may be understood to refer to a branch of unsupervised learning, wherein it may be understood to not be necessary for a person to manually generate labels.
  • Supervised learning may be understood to need labelled data.
  • Herein, the labelled data may be the presence of the anomaly, and since a neural network may be used to indicate the anomalies, no separate labelling may be necessary.
  • the first node 111, e.g., via a neural network, may generate its own labels.
  • the self-supervised reinforcement learning may use, as actions, second quantized values, from the obtained first set of data, indicating occurrence of the anomalous event.
  • actions may be quantized interference values that may lead to a positive affirmation of presence of the anomalous event, e.g., PIM.
  • the second quantized values may be understood as a subset of the first quantized values that may have given rise to an anomaly.
  • the second quantized values may be understood to correspond to the first quantized values, plus marked anomalies.
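  • the construction of the state (the sequence of quantized interference values) and the actions (the subset of those values marked as anomalous) may be sketched as follows; the binary label sequence is an assumed input produced by the earlier marking step.

```python
def build_state_actions(quantized, labels):
    """State: the full sequence of quantized interference values.
    Actions: the subset of those values whose label marks an anomaly."""
    state = list(quantized)
    actions = [q for q, y in zip(quantized, labels) if y == 1]
    return state, actions
```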
  • the self-supervised reinforcement learning may use, as reward, one or more key performance indicators (KPIs) of a radio access network (RAN), that is, a RAN of the communications system 100.
  • the reward may be dependent on RAN Key Performance Indicator (KPI) values and their thresholds.
  • the reward may be defined as per domain knowledge, that is, based on what the anomalous event may be.
  • Some of the important RAN KPIs which may be considered to identify PIM may comprise, for example, call drop rate (CDR), call set-up success rate (CSSR), handover success rate (HSR), traffic channel (TCH) congestion rate, call completion rate, speech quality index and signal strength. For example, if the signal strength is below a threshold, that is, if the signal strength is not falling within a required dBm range, then the reward may be positive, as interference may lead to poor signal strength.
  • the reward may be either positive or negative.
  • the reward may be positive if interference is observed, and negative if there is no interference at that state.
  • traffic may be a trigger to give more reward.
  • other factors that may be considered may include but may not be limited to, path loss, time of the day, etc.
  • the reward may be constant, e.g., using the traffic as reward.
  • intelligent combinations may be used for rewards. Intelligent combinations may be possible for different radio environments and conditions. For example, based on the time of the day, the weightage given to traffic may be changed.
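  • a reward function along the lines described above may be sketched as follows; the signal-strength threshold and the traffic weighting are illustrative assumptions, and the weighting could be varied, e.g., by time of day.

```python
def reward(interference_observed, signal_strength_dbm, traffic_load,
           signal_threshold_dbm=-100.0, traffic_weight=0.1):
    """Positive reward when interference is observed or the signal strength
    falls below the required dBm threshold; negative otherwise.
    Traffic adds extra reward on top of a positive base reward."""
    base = 1.0 if (interference_observed
                   or signal_strength_dbm < signal_threshold_dbm) else -1.0
    return base + traffic_weight * traffic_load if base > 0 else base
```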
  • the anomaly, e.g., PIM, detection threshold may be learnable based on confidence measures as e.g., described in [3], but it may be adapted based on the convergence rate of the accuracy of the determined predictive model.
  • the RL-agent that may be managed by the first node 111 may then try to maximize the expected cumulative reward.
  • the determining in this Action 302 may further comprise iterating a training of the predictive model using at least one of the following three options.
  • In a first option, cross-entropy loss may be used to rank the sequence of events to a binarized indicator of the occurrence or absence of the anomalous event.
  • Cross-entropy loss may be understood to measure performance of a classification model where the output may be a probability value between 0 and 1. For example, an anomaly may be a 1, and a non-anomaly may be a 0.
  • the first node 111, e.g., via a neural network, may now generate probability values between 0 and 1, to match the training data.
  • a perfect model may have a cross-entropy loss of 0, that is, e.g., that the 1 may be predicted as 1.
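  • the binary cross-entropy loss described above may be computed as follows; the clipping constant is an assumption to keep the logarithm finite.

```python
import math

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Mean binary cross-entropy between 0/1 labels and predicted
    probabilities; a perfect model yields (almost exactly) 0."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)  # avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```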
  • the self-supervised reinforcement learning module may then, according to this Action 302, learn patterns in the data that may potentially give rise to local anomalies such as PIM.
  • In a second option, self-supervised Q-learning, e.g., a self-supervised Q-learning network, may be used to learn which factors may give rise to the anomalous event.
  • factors may be understood as variables, or features, that is combinations of variables that may give rise to an anomalous event, e.g., combinations of KPIs, such as, interference, load, atmospheric conditions, etc.
  • Q-learning may be understood as a branch of reinforcement learning where, based on the future outcomes, quality values or Q-values may be associated with past actions.
  • the agent may learn to pick actions that may lead to an anomalous event.
  • the self-supervised Q-learning loss may be defined as a cross entropy loss.
  • the determining in this Action 302 may comprise iterating a training of the predictive model using a self-supervised Q-learning block to get TD error updates.
  • TD error may be understood as the error in matching the output, that is the anomalous events, to the input conditions at each time step.
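  • a single tabular Q-learning update with TD error may be sketched as follows; the binary action space and the learning-rate and discount values are illustrative assumptions.

```python
def td_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the TD target
    reward + gamma * max_a' Q(s', a'); returns the TD error."""
    best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
    td_error = reward + gamma * best_next - q.get((state, action), 0.0)
    q[(state, action)] = q.get((state, action), 0.0) + alpha * td_error
    return td_error
```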
  • In a third option, self-supervised actor-critic may be used, wherein a supervised neural network model may act as the actor, and the critic may measure a goodness of actions taken by the actor. Feedback from the critic may be used as a stop gradient to stop the training.
  • a self-supervised head may be the “actor” and the Q-learning module may be the “critic”.
  • the determining in Action 302 may comprise iterating a training of the predictive model using the self-supervised actor-critic, wherein a supervised neural network model may act as the actor, and the critic may measure the goodness of actions taken by the actor, and the feedback from the critic may be used as a stop gradient to stop the training.
  • the output may be matching the mapping from the input.
  • the goal may be understood to be to match the output, given the input.
  • the parameters may be calculated so that the loss in estimating the output from the input may be minimized. If the error is plotted, it may have peaks and troughs, and the perfect model may be determined when the error has a trough, or it may be at its minimum. This may be referred to as gradient descent. This may be used to build a model so that the output may match the input.
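  • gradient descent with a vanishing-gradient stop condition may be sketched as follows; the quadratic loss is an assumed stand-in for the neural network loss, used only to show that iteration stops at the minimum where the gradient is zero.

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-6, max_steps=10_000):
    """Descend until the gradient is (near) zero, i.e., a minimum."""
    x = x0
    for _ in range(max_steps):
        g = grad(x)
        if abs(g) < tol:      # stop signal: gradient has vanished
            break
        x -= lr * g
    return x

# assumed example loss L(x) = (x - 3)^2, minimum at x = 3, gradient 2(x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3.0), x0=0.0)
```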
  • a self-supervised Q-learning network and a self-supervised actor critic may be used to determine the predictive model of the anomalous event, wherein the obtained first set of data may e.g., comprise 1 month of interference patterns collected at 15 minutes intervals.
  • the iterating of the training may generate a forecasted sequence of anomalous events over the first time period. That is, by determining the predictive model of the anomalous event based on the obtained first set of data in this Action 302, the first node 111 may be enabled to learn to predict occurrences of the anomalous event and then to forecast “local anomalies” such as PIM, before they may happen, based on historical trends. During the training phase, this may be done over the same first time period. The first node 111 may try to predict, with the determined predictive model, the occurrence of the anomalous events in the first sequence of events, to see how well it matches the categories detected, not predicted, earlier with other methods.
  • the first node 111 may in turn be enabled to prevent the occurrence of the anomalous event, or to dampen its occurrence. This may be achieved by the first node 111 indicating to the second node 112 that the anomalous event is predicted, which may in turn enable the second node 112 to take an action to prevent or remedy the occurrence of the event. For example, during the determining of the predictive model in this Action 302, the first node 111 may learn which factors may precede, lead to, or correlate with the occurrence of the anomalous event, and may be enabled to derive how to counteract them to prevent or dampen the occurrence of the anomalous event.
  • any resultant frequency scheduling that may be derived based on the detected anomalous event may then be used by an 802.11 transceiver to choose the optimal rate and transmission mode to send data packets to a transmission channel. Since frequency combinations may give rise to PIM, provided the first node 111 may be able to forecast a PIM anomalous event before it may happen, it may be possible, e.g., by the second node 112, to then change the frequency setting derived from past history in order to avoid the anomalous event.
  • the iterating of the training may generate the forecasted sequence of anomalous events over the first time period.
  • the method may further comprise, in this Action 303, the first node 111 obtaining a second set of data.
  • the second set of data may comprise a second sequence of events over a second time period. That is, the second sequence of events, may be a new or fresh sequence of events, e.g., of interference values.
  • the second sequence of events may lack a categorization.
  • the second set of data may comprise primary cell and neighbors’ interference as time series, which may be converted into a sequence of quantized interference.
  • the obtaining in this Action 303 may comprise retrieving, collecting, measuring or receiving the second sequence of events, e.g., from the device 150 or another similar device via the radio network node 130 or another similar radio network node, e.g., via the second link 152 or another similar link.
  • the first set of data and the second set of data may comprise three series of quantized values comprising: a) the first series of quantized values indicating first interference in the first cell 141, b) the second series of quantized values indicating second interference in the one or more neighbor cells 142 of the first cell 141, and c) the third series of quantized values indicating signal to noise ratio in the first cell 141. How these time series may be used in embodiments herein will be explained in Action 304.
  • the first node 111 may enable to validate the determined predictive model in the next Action 304, once new or fresh data may have arrived in time, according to this Action 303.
  • the first node 111 may thereby be enabled to determine whether further training of the predictive model may be necessary, or whether the determined predictive model may have attained a desired level of accuracy.
  • the first node 111 may thereby enable a scalable method for detecting anomalous events in other time-series based on detection and forecasting of local anomalies, on various, or any, environmental conditions.
  • the iterating of the training may generate the forecasted sequence of anomalous events over the first time period.
  • the method further may comprise, in this Action 304, that the first node 111 validates the predictive model using the obtained second set of data.
  • the validating in this Action 304 may be performed by tracking trend discrepancies between a time series of the forecasted sequence of anomalous events and the first sequence of events to categorize the second sequence of events as normal or anomalous.
  • the first node 111 may use a band-depth based validation method to evaluate the effectiveness of the determined predictive model, that is, by tracking the difference between the forecasted anomaly time series, and an actual time series when it may happen.
  • the band-depth based validation method will be explained further down below.
  • one possibility may be to validate the output of the determined predictive model with a human-in-the-loop.
  • embodiments herein may comprise, in this Action 304, a validation step, that may use, for example, interference, SINR and forecasted anomalous event, e.g., PIM, intervals to distinguish between the following three possibilities: a) the anomalous event, e.g., PIM, was forecasted and it occurred, b) the anomalous event, e.g., PIM, was forecasted, but it did not occur, and c) the anomalous event, e.g., PIM, was not forecasted, but it occurred. Cases a) and c) may be associated with a performance degradation, which may be a dip in SINR, or in the worst case, packet loss or call drop.
  • the first node 111 may thus perform validation in two phases, as follows.
  • In a first phase: detection of the interference pattern of the first cell 141, e.g., the primary cell, as an ‘outlier’ with respect to the neighboring cells.
  • the first node 111 may detect an interference pattern of the first cell 141 , e.g., the primary cell, as uncorrelated, or an ‘outlier’ with respect to the neighboring cells.
  • In a second phase, the first node 111 may detect opposite movement of the forecasted anomalous event, e.g., PIM, interval and SINR pattern. This may be done by comparing the first series of quantized values indicating first interference in the first cell 141 with the second series of quantized values indicating second interference in the one or more neighbor cells 142 of the first cell 141, and the third series of quantized values indicating signal to noise ratio in the first cell 141.
  • band depth may be understood as a notion in functional data analysis which may represent the centrality of a data set representing a sample from a distribution curve, typically a time series, in comparison with other similar data sets. For example, if an interference pattern of a set of cells studied over time is considered, for a given duration, each data set from a cell may be viewed as a time series curve. For a set of such curves, a notion of central curve may be constructed by measuring its relative distance with other curves. This is discussed in great detail in Ref. [6]. Following the notion of central curve, the curve that may be lying far from the central most curve along the direction of higher magnitude may be identified.
  • This may be done by computing the signed distance of a candidate curve with the central curve. It may be noted that when a primary cell exhibits PIM, it may be detected as an outlier with respect to UL noise. As described in Figure 1 (a) and (b), when PIM occurs, the primary cell interference may be high but not the neighbors’ interference. Statistically, it may appear that the statistical nature of the curve may change when PIM occurs. For this validation step, however, the forecasted time series may be compared with the actual time series when it happens. The first node 111 may try to see where the statistics may change, and this may give an indication of how good the forecasting may be. A change of curve may be understood to mean a change of statistical properties of the time series at that particular time.
  • the first node 111 may compare it with the corresponding SINR pattern in the following way.
  • the SINR and forecasted anomalous event, e.g., PIM, data may be smoothed using a non-parametric approach such as kernel smoothing. This may be done in order to minimize noise. Both the actual data and the forecasts may be noisy. Kernel smoothing may be used to minimize the noise.
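  • the kernel smoothing mentioned above may be sketched with a Gaussian (Nadaraya-Watson) kernel over the time index as follows; the bandwidth value is an illustrative assumption.

```python
import math

def kernel_smooth(series, bandwidth=1.0):
    """Nadaraya-Watson smoothing with a Gaussian kernel over the time
    index, used to reduce noise in the SINR and forecast series."""
    n = len(series)
    smoothed = []
    for t in range(n):
        weights = [math.exp(-0.5 * ((t - i) / bandwidth) ** 2) for i in range(n)]
        total = sum(weights)
        smoothed.append(sum(w * x for w, x in zip(weights, series)) / total)
    return smoothed
```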
  • a sign of change of values, forecasted anomalous event, e.g., PIM, and SINR, in consecutive time points may be computed for both the data series comparing the forecasted time series, with the actual time series to validate the results. For convenience, this may be referred to simply as a change.
  • the changes obtained from the SINR series and forecasted anomalous event, e.g., PIM, series may be added. This new series may be termed as a resultant direction series.
  • resultant direction series values may be zero and may continue to be zero until the anomalous event, e.g., PIM, effect may be present. This may be because the change of values in the forecasted anomalous event, e.g., PIM, and SINR data in consecutive time points may be of opposite sign and hence may start cancelling each other. From the resultant series values the first node 111 may easily identify runs of zeros and hence may identify the start and end of the runs of zeroes. This in turn may enable the first node 111 to identify the duration of a real anomalous event, e.g., PIM, occurrence, and hence validate the forecast determined by the first node 111 in Action 302.
  • the validating in this Action 304 may comprise the following actions.
  • a first action may be detecting outlier subsets of values in the first series with respect to the second series.
  • a second action may be smoothing the detected outlier subsets of values and corresponding subsets of values in the third series using a nonparametric approach.
  • a third action may be determining a respective sign of a change in the detected outlier subsets of values and the corresponding subsets of values in the third series.
  • a fourth action may be adding the changes with the determined respective sign.
  • a fifth action may be identifying subsets of zeros thereby identifying occurrence of the anomalous event.
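  • the sign-addition and runs-of-zeros steps of the actions above may be sketched as follows; the example series values are illustrative assumptions.

```python
def sign(x):
    """Sign of a change: +1, 0, or -1."""
    return (x > 0) - (x < 0)

def zero_runs(pim_series, sinr_series):
    """Add the signs of consecutive changes of the forecasted PIM series
    and the SINR series; opposite movements cancel, so runs of zeros in
    the resultant direction series mark the anomalous-event interval.
    Returns (start, end) index pairs for each run of zeros."""
    changes = [sign(a - b) + sign(c - d)
               for a, b, c, d in zip(pim_series[1:], pim_series,
                                     sinr_series[1:], sinr_series)]
    runs, start = [], None
    for i, v in enumerate(changes):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(changes) - 1))
    return runs
```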
  • the validated samples may be used to re-train the neural networks that may be managed by the first node 111 to further improve the accuracy of the predicted model determined in Action 302, and also to take care of concept drifts that may happen.
  • the validated samples may be fed back, according to Action 305, for meta-learning. Further, validating the results may provide a clue of which parameters may have worked and which may not have. That information may be fed back, and more importance may be given to conditions that predicted the anomalous event accurately.
  • This feedback may be used to, for example, prune the training dataset further, or to learn how to tune the hyperparameters, or to learn the neural network parameters, e.g., with modified weights.
  • the iterating of the training may generate the forecasted sequence of anomalous events over the first time period.
  • the method further may comprise, in this Action 305, the first node 111 using the categorized second set of data to further train the predictive model. That is, there may be a possibility of re-feeding the validated output for further training in a meta-learning framework.
  • the first node 111 may be enabled to further increase the accuracy of the predictive model, and thereby improve the prediction, and resulting prevention or dampening of the occurrence of the anomalous event in the communications system 100. Since this may be understood to be a repetitive process and also the environmental conditions may change dynamically, forecasting and then validating may help the first node 111 to also learn what may work and what may not. The feedback scheme from the previous runs may help improve both accuracy and training time of the subsequent training runs. The performance of the communications system 100 may thereby be enabled to be improved.
  • the first node 111 provides a first indication based on the determined predictive model to the second node 112 operating in the communications system 100.
  • the providing in this Action may be e.g., sending or transmitting, e.g., via the first link 151.
  • the first indication may indicate a probability of occurrence of the anomalous event over another time period.
  • the another time period may be understood to be a different time period than the first time period.
  • the first indication may additionally or alternatively indicate the actual determined predictive model, so it may be used by the second node 112, or another node in the communications system 100.
• the second node 112 may be enabled to take an action to prevent or remedy the occurrence of the event. For example, during the determining of the predictive model in Action 302, the first node 111 may learn which factors may precede, lead to or correlate with the occurrence of the anomalous event, and enable deriving how to counteract them to prevent or dampen the occurrence of the anomalous event. As an example, any resultant optimal frequency schedulings that may be derived based on the detected anomalous event may then be used by an 802.11 transceiver to choose the optimal rate and transmission mode to send data packets to a transmission channel.
• Figure 4 is a graphical representation of a first example of a first set of data that may be obtained according to embodiments herein in Action 301. Particularly, Figure 4 depicts a histogram of interference values in dBm, on the vertical axis, sampled over 15-minute intervals, depicted on the horizontal axis.
• Figure 5 is a graphical representation of the first example of the first set of data that may be obtained according to embodiments herein in Action 301, modified to add the missing values, to ensure that all values between 0 and 7000 are present. Particularly, Figure 5 depicts a histogram of interference values in dBm, on the vertical axis, sampled over 15-minute intervals, depicted on the horizontal axis.
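• filling in the missing quantized values so that all bins between 0 and 7000 are present may be sketched as follows; the sample values are invented for illustration, and only the bin count (~7000) follows the description:

```python
import numpy as np

# Invented quantized interference samples (bin indices in 0..7000).
samples = np.array([0, 3, 3, 6999, 7000, 7000, 7000])

# bincount with minlength guarantees that every bin 0..7000 is present,
# with zero counts for the values missing from the samples.
counts = np.bincount(samples, minlength=7001)
```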
  • Embodiments of a computer-implemented method, performed by the second node 112, will now be described with reference to the flowchart depicted in Figure 6.
  • the method may be understood to be for handling the anomalous event.
  • the second node 112 operates in the communications system 100.
  • the method comprises the following actions. Components from one embodiment may be tacitly assumed to be present in another embodiment and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments. All possible combinations are not described to simplify the description.
  • a non-limiting example of the method performed by the second node 112 is depicted in Figure 6.
  • optional actions in some embodiments may be represented with dashed lines.
  • the detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the first node 111 and will thus not be repeated here to simplify the description.
  • the anomalous event may be PIM
  • the second node 112 receives the first indication from the first node 111 operating in the communications system 100.
  • the first indication indicates a future occurrence of the anomalous event in the communications system 100.
  • the first indication is based on the predictive model determined using machine self-supervised reinforcement learning.
• the receiving of the first indication may be performed e.g., via the first link 151.
  • the first indication may indicate a probability of occurrence of the anomalous event over another time period.
  • the second node 112 initiates performing an action to prevent the anomalous event.
  • Initiating may be understood as starting, triggering, facilitating or enabling.
  • the action may be an adjustment of an angle of an antenna in the radio network node 130.
  • Embodiments of a computer-implemented method, performed by the communications system 100 will now be described with reference to the flowchart depicted in Figure 7. The method may be understood to be for handling the anomalous event.
  • the communications system 100 comprises the first node 111 and the second node 112.
  • the method comprises the actions described below. In some embodiments, all the actions may be performed. In other embodiments, some of the actions may be performed. One or more embodiments may be combined, where applicable. Components from one embodiment may be tacitly assumed to be present in another embodiment and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments. All possible combinations are not described to simplify the description.
  • a non-limiting example of the method performed by the communications system 100 is depicted in Figure 7. In Figure 7, optional actions in some embodiments may be represented with dashed lines. The detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the first node 111 and will thus not be repeated here to simplify the description.
  • the anomalous event may be PIM.
• This Action 701, which corresponds to Action 301, comprises obtaining, by the first node 111, the first set of data.
  • the first set of data comprises a first sequence of events over a first time period.
  • the first sequence of events has been categorized as normal or anomalous; that is, as either normal or anomalous.
  • the first set of data may comprise the three series of quantized values comprising: a) the first series of quantized values indicating the first interference in the first cell 141 , b) the second series of quantized values indicating the second interference in the one or more neighbor cells 142 of the first cell 141, and c) the third series of quantized values indicating signal to noise ratio in the first cell 141.
  • the first set of data may comprise anomalous events, such as e.g., PIM, detected in historical data with machine-learning methods.
  • the first set of data may comprise the correlation between the anomalous events detected in the historical data with machine-learning methods, and fault trace parsing based on machine-learning of the same historical data.
  • This Action 702 which corresponds to Action 302, comprises determining, by the first node 111 , using machine self-supervised reinforcement learning, the predictive model of the anomalous event based on the obtained first set of data.
  • the self-supervised reinforcement learning may use, as the state, the first quantized values, from the obtained first set of data, of the sequential interference the first cell 141 may be experiencing.
  • the self-supervised reinforcement learning may use, as the actions, the second quantized values, from the obtained first set of data, indicating the occurrence of the anomalous event.
  • the self-supervised reinforcement learning may use, as the reward, the one or more key performance indicators (KPIs) of a radio access network (RAN), that is, a RAN of the communications system 100.
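• one way to assemble the (state, action, reward) tuples described above — the state as a window of quantized sequential interference, the action as the quantized anomaly indicator, and the reward as a RAN KPI — may be sketched as follows; the window length, the sample values and the helper name are assumptions for illustration:

```python
def build_transitions(interference, anomaly_flags, kpi, window=4):
    # state  : a window of quantized sequential interference in the first cell
    # action : the quantized indicator of anomalous-event occurrence next step
    # reward : the RAN KPI observed at the next step
    transitions = []
    for t in range(len(interference) - window):
        state = tuple(interference[t:t + window])
        action = anomaly_flags[t + window]
        reward = kpi[t + window]
        transitions.append((state, action, reward))
    return transitions

interference  = [12, 15, 14, 90, 95, 13]        # quantized values (made up)
anomaly_flags = [0, 0, 0, 1, 1, 0]              # binarized PIM/anomaly indicator
kpi           = [0.9, 0.9, 0.8, 0.2, 0.3, 0.9]  # e.g., a throughput-like KPI
trans = build_transitions(interference, anomaly_flags, kpi)
```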
  • the determining in this Action 302 may further comprise iterating the training of the predictive model using at least one of the following three options.
  • cross-entropy loss may be used to rank the sequence of events to the binarized indicator of the occurrence or absence of the anomalous event.
• self-supervised Q-learning, e.g., a self-supervised Q-learning network, may be used to determine which factors to give more weightage to in the predictive model.
  • self-supervised actor-critic may be used, wherein the supervised neural network model may act as the actor, and the critic may measure the goodness of actions taken by the actor. Feedback from the critic may be used as a stop gradient to stop the training.
  • the iterating of the training may generate the forecasted sequence of anomalous events over the first time period.
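• the first training option above, iterating training with cross-entropy loss mapping a sequence of events to the binarized anomaly indicator, may be sketched with a minimal logistic model; the toy sequences, learning rate and epoch count are assumptions, and a neural network would replace the linear map in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each row is a sequence of 4 quantized interference
# values; the label is the binarized anomaly indicator (values made up).
X = np.array([[1, 1, 2, 1], [1, 2, 1, 1], [8, 9, 9, 8], [9, 8, 9, 9]], float)
y = np.array([0.0, 0.0, 1.0, 1.0])
X = X / X.max()  # scale to keep the sigmoid well-conditioned

w, b, lr = rng.normal(scale=0.1, size=4), 0.0, 1.0
losses = []
for epoch in range(300):  # iterate the training, as in Action 302
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted anomaly probability
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    losses.append(loss)
    grad = p - y                              # cross-entropy gradient
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()
```

• as in the experimental data discussed below, the loss decreases progressively across epochs, which may be understood to mean that the model is converging.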
  • the method may further comprise, in this Action 703, which corresponds to Action 303, obtaining, by the first node 111 , the second set of data.
  • the second set of data may comprise the second sequence of events over the second time period.
  • the second sequence of events may lack a categorization.
  • the iterating of the training may generate the forecasted sequence of anomalous events over the first time period.
  • the method further may comprise, in this Action 704, which corresponds to Action 304, validating, by the first node 111 , the predictive model using the obtained second set of data.
• the validating in this Action 704 may be performed by tracking trend discrepancies between the time series of the forecasted sequence of anomalous events and the first sequence of events to categorize the second sequence of events as normal or anomalous.
  • the validating in this Action 704 may comprise the following actions.
  • a first action may be detecting outlier subsets of values in the first series with respect to the second series.
  • a second action may be smoothing the detected outlier subsets of values and corresponding subsets of values in the third series using the non-parametric approach.
  • a third action may be determining the respective sign of the change in the detected outlier subsets of values and the corresponding subsets of values in the third series.
  • a fourth action may be adding the changes with the determined respective sign.
  • a fifth action may be identifying subsets of zeros thereby identifying occurrence of the anomalous event.
  • the iterating of the training may generate the forecasted sequence of anomalous events over the first time period.
  • the method further may comprise, in this Action 705, which corresponds to Action 305, using, by the first node 111 , the categorized second set of data to further train the predictive model.
  • This Action 706, which corresponds to Action 306, comprises providing, by the first node 111 , the first indication based on the determined predictive model to the second node 112 operating in the communications system 100.
• This Action 707, which corresponds to Action 601, comprises receiving, by the second node 112, the first indication from the first node 111 operating in the communications system 100.
  • the first indication indicates a future occurrence of the anomalous event in the communications system 100.
  • the first indication is based on the predictive model determined using machine self-supervised reinforcement learning.
  • the first indication may indicate the probability of occurrence of the anomalous event over the another time period.
  • This Action 708, which corresponds to Action 602, comprises initiating, by the second node 112, performing an action to prevent the anomalous event.
  • Figure 8 is a graphic representation depicting a non-limiting example of experimental data resulting from an implementation of a method performed by the first node 111 according to embodiments herein.
  • the vertical axis depicts training loss, that is, cross-entropy loss.
  • the horizontal axis depicts the number of epochs, that is the number of iterations of the predictive model of the anomalous event.
  • the dotted line depicts the results of performing the determining of Action 302 using a Self-Attention Network (SAS).
• the obtained first set of data comprises interference KPIs collected over the first time period, which is one hour.
  • each sequence has 4 values.
• the data from the cells for a month thus amounts to 25463 sessions.
• the number of quantized interference values used was ~7000, as described above.
  • PIM/anomalies were indicated 22126 times.
• Data from prior work [3] was used as a baseline to indicate/detect PIM intervals from the time series, that is, as a baseline for generating labels for the PIM anomalies.
• the training loss is plotted for 1 month of interference data, and as may be appreciated, the loss is progressively decreasing. This may be understood to mean that the model is converging.
• the accuracy of the predictive model determined in Action 302 on the experimental setup was calculated as the ratio of the number of forecasted PIM intervals to the number of actual PIM intervals.
  • the forecasting accuracy was 59%. Using more months of data, or probably years of data, may be understood as a way to further increase the accuracy.
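• the accuracy computation described above may be sketched as follows; the interval data and the overlap criterion are invented for illustration:

```python
# Made-up (start, end) PIM intervals, as index pairs into the time series.
forecast = [(2, 5), (10, 12), (20, 22)]
actual   = [(2, 5), (9, 12), (15, 17), (20, 23), (30, 31)]

def overlaps(a, b):
    # Two closed intervals overlap if each starts before the other ends.
    return a[0] <= b[1] and b[0] <= a[1]

# Count actual PIM intervals matched by at least one forecasted interval.
hits = sum(any(overlaps(f, a) for f in forecast) for a in actual)
accuracy = hits / len(actual)
```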
• Figure 9 is a system block diagram depicting a non-limiting example of a method performed by a first node according to embodiments herein. Particularly, Figure 9 depicts a non-limiting implementation aspect of the method, for the particular detection, based on a self-supervised learning procedure, of PIM/anomalies.
• interference, sampled at intervals of 15 minutes or less, may be input to an AI/ML based PIM/anomaly detection node, which may operate according to existing methods, as described in [1-4, 7].
  • This AI/ML based PIM/anomaly detection method may comprise an ensemble of approaches that may be able to detect PIM as it may happen, or post-occurrence.
  • This output may also be correlated at 803 with fault traces based on fault trace parsing, that may have been input at 802.
  • the marked anomalies may be fed, according to Action 301, to a supervised training module managed by the first node 111 , which may, according to Action 302, learn to predict occurrences of PIM based on load, or any other factor.
  • the neural network may learn non-linear mappings based on training data.
  • the supervised training module may be performed with cross-entropy loss to map the input sequence to a binarized PIM/anomaly indicator.
  • the self-supervised reinforcement learning modules may then, according to Action 302, learn patterns in the data that may potentially give rise to PIM or local anomalies.
• the determining in Action 302 may further comprise iterating a training of the predictive model using a self-supervised Q-learning block to get TD error updates.
  • the determining in Action 302 may further comprise iterating a training of the predictive model using a self-supervised actor-critic, wherein a supervised neural network model acts as the actor, and the critic measures a goodness of actions taken by the actor, and the feedback from the critic is used as a stop gradient to stop the training.
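• a tabular TD(0) update is a minimal stand-in for the self-supervised Q-learning block getting TD error updates; the state encoding, learning rate and discount factor below are assumptions for illustration:

```python
from collections import defaultdict

Q = defaultdict(float)    # Q[(state, action)] -> estimated value
alpha, gamma = 0.5, 0.9   # learning rate and discount factor (assumed)

def td_update(state, action, reward, next_state, actions=(0, 1)):
    # TD error: target (reward plus discounted best next value) minus current Q
    target = reward + gamma * max(Q[(next_state, a)] for a in actions)
    td_error = target - Q[(state, action)]
    Q[(state, action)] += alpha * td_error
    return td_error

err1 = td_update(state="high", action=1, reward=1.0, next_state="high")
err2 = td_update(state="high", action=1, reward=1.0, next_state="high")
```

• the TD error shrinks as the Q-value approaches the target, which is what makes it usable as an update (or stopping) signal during the iterated training.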
  • the first node 111 may be enabled to forecast an anomaly before it may happen, based on historical trends.
  • the forecasts may be validated, according to Action 304, once the real data may arrive in time, according to Action 303, using, according to Action 304, the band-depth based validation described earlier.
  • the real data may comprise primary cell and neighbors’ interference as time series, which may be converted into a sequence of quantized interference.
• the validated samples may be used to re-train the neural networks to further improve the accuracy of the predicted model determined in Action 302, and also to take care of concept drifts that may happen in a real-time setting.
  • the validated samples may be fed back, according to Action 305 for meta-learning.
  • Embodiments herein may be understood to be the first proposition of Artificial Intelligence (Al)/Machine Learning (ML)-driven forecasting of an anomalous event such as PIM artifacts, from historical KPIs. Furthermore, this may be understood to be the first attempt to forecast an anomalous event such as PIM, based on self-supervised reinforcement learning. While PIM has been used as an illustrative example of an anomalous event in embodiments herein, embodiments herein may be understood to not only be applicable for PIM but may be equally applicable to other anomalies in a time series, where anomalies may, e.g., be defined by sharp peaks in a time series. Certain embodiments disclosed herein may provide one or more of the following technical advantage(s), which may be summarized as follows.
• embodiments herein may be understood to be able to “forecast” an anomaly before it happens, which may be understood to make it an augmented module for any other kind of learning-based approach. That is, embodiments herein may be used with any anomaly detection method, e.g., offline or online, to help forecast an anomaly. Embodiments herein may work with other forms of learning, such as supervised learning or online learning based anomaly detection.
• embodiments herein may be understood to be amenable to online learning, as they may be understood to be based on a windowed approach in time.
  • embodiments herein may be understood to provide a validation strategy based on a band-depth approach. The correctly validated samples may be used for re-training in a meta-learning framework.
• embodiments herein may be understood to provide a lightweight method, hence amenable to being implemented in a distributed node and cloud implementation for the reinforcement learning module.
  • Various distributed processing options that may suit data source, storage, compute and coordination needs may be possible.
  • data sampling may be done at the node, which may be referred to as a worker, with data analysis, inference, model creation, model sharing and channel state estimation being done at a cloud server, which may be referred to as a master.
  • Figure 10 depicts two different examples in panels a) and b), respectively, of the arrangement that the first node 111 may comprise to perform the method actions described above in relation to Figure 3, and/or Figures 8-9.
  • the first node 111 may comprise the following arrangement depicted in Figure 10a.
  • the first node 111 may be understood to be for handling the anomalous event.
  • the first node 111 is configured to operate in the communications system 100.
  • the first node 111 is configured to, e.g. by means of an obtaining unit 1001 within the first node 111 configured to, obtain the first set of data configured to comprise the first sequence of events over the first time period.
  • the first sequence of events is configured to have been categorized as normal or anomalous.
  • the first node 111 is also configured to, e.g. by means of a determining unit 1002 within the first node 111 configured to, determine, using machine self-supervised reinforcement learning, the predictive model of the anomalous event based on the first set of data configured to be obtained.
  • the first node 111 is further configured to, e.g. by means of a providing unit 1003 within the first node 111 configured to, provide the first indication based on the predictive model configured to be determined to the second node 112 configured to operate in the communications system 100.
  • the first indication may be configured to indicate the probability of occurrence of the anomalous event over another time period.
  • the self-supervised reinforcement learning may be configured to use: a) as the state, the first quantized values, from the first set of data configured to be obtained, of sequential interference the first cell 141 may be configured to be experiencing, b) as actions, the second quantized values, from the first set of data configured to be obtained, configured to indicate the occurrence of the anomalous event and c) as the reward, the one or more key performance indicators of the radio access network.
  • the determining may be configured to further comprise iterating a training of the predictive model using at least one of: a) cross-entropy loss to rank the sequence of events to the binarized indicator of the occurrence or absence of the anomalous event, b) self-supervised Q-learning to determine which factors to give more weightage to in the predictive model, and c) self-supervised actor-critic, wherein the supervised neural network model may be configured to act as the actor, and the critic may be configured to measure the goodness of actions taken by the actor. Feedback from the critic may be configured to be used as a stop gradient to stop the training.
  • the first node 111 may be further configured to, e.g. by means of the obtaining unit 1001 within the first node 111 configured to, obtain the second set of data configured to comprise the second sequence of events over the second time period.
  • the second sequence of events may be configured to lack a categorization.
  • the first node 111 may be further configured to, e.g. by means of a validating unit 1004 within the first node 111 configured to, validate the predictive model using the second set of data configured to be obtained by using tracking trend discrepancies between the time series of the forecasted sequence of anomalous events and the first sequence of events, to categorize the second sequence of events as normal or anomalous.
  • the first node 111 may be further configured to, e.g. by means of a using unit 1005 within the first node 111 configured to, use the categorized second set of data to further train the predictive model.
  • the first set of data and the second set of data may be configured to comprise three series of quantized values configured to comprise: a) the first series of quantized values configured to indicate first interference in a first cell 141, b) the second series of quantized values configured to indicate the second interference in one or more neighbor cells 142 of the first cell 141 , and c) the third series of quantized values configured to indicate signal to noise ratio in the first cell 141.
  • the validating may be configured to comprise: a) detecting outlier subsets of values in the first series with respect to the second series, b) smoothing the outlier subsets of values configured to be detected and corresponding subsets of values in the third series using a non-parametric approach, c) determining the respective sign of the change in the outlier subsets of values configured to be detected and the corresponding subsets of values in the third series, d) adding the changes with the respective sign configured to be determined, and e) identifying the subsets of zeros thereby identifying occurrence of the anomalous event.
• the first set of data may be configured to comprise anomalous events configured to be detected in historical data with machine-learning.
  • the first set of data may be configured to comprise a correlation between the anomalous events configured to be detected in the historical data with machine-learning, and fault trace parsing configured to be based on machine-learning of the same historical data.
  • the embodiments herein may be implemented through one or more processors, such as a processor 1006 in the first node 111 depicted in Figure 10, together with computer program code for performing the functions and actions of the embodiments herein.
• the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the first node 111.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the first node 111.
  • the first node 111 may further comprise a memory 1007 comprising one or more memory units.
  • the memory 1007 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the first node 111.
  • the first node 111 may receive information from, e.g., the second node 112, the radio network node 130, the device 150, and/or another node through a receiving port 1008.
  • the receiving port 1008 may be, for example, connected to one or more antennas in the first node 111.
  • the first node 111 may receive information from another structure in the communications system 100 through the receiving port 1008. Since the receiving port 1008 may be in communication with the processor 1006, the receiving port 1008 may then send the received information to the processor 1006.
  • the receiving port 1008 may also be configured to receive other information.
  • the processor 1006 in the first node 111 may be further configured to transmit or send information to e.g., the second node 112, the radio network node 130, the device 150, another node, and/or another structure in the communications system 100, through a sending port 1009, which may be in communication with the processor 1006, and the memory 1007.
  • any of the units 1001-1005 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 1006, perform as described above.
  • processors as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • any of the units 1001-1005 described above may be the processor 1006 of the first node 111 , or an application running on such processor.
  • the methods according to the embodiments described herein for the first node 111 may be respectively implemented by means of a computer program 1010 product, comprising instructions, i.e. , software code portions, which, when executed on at least one processor 1006, cause the at least one processor 1006 to carry out the actions described herein, as performed by the first node 111.
  • the computer program 1010 product may be stored on a computer-readable storage medium 1011.
  • the computer-readable storage medium 1011, having stored thereon the computer program 1010 may comprise instructions which, when executed on at least one processor 1006, cause the at least one processor 1006 to carry out the actions described herein, as performed by the first node 111.
• the computer-readable storage medium 1011 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or cloud storage.
  • the computer program 1010 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1011 , as described above.
  • the first node 111 may comprise an interface unit to facilitate communications between the first node 111 and other nodes or devices, e.g., the second node 112, the radio network node 130, the device 150, another node, and/or another structure in the communications system 100.
  • the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
  • the first node 111 may comprise the following arrangement depicted in Figure 10b.
  • the first node 111 may comprise a processing circuitry 1006, e.g., one or more processors such as the processor 1006, in the first node 111 and the memory 1007.
  • the first node 111 may also comprise a radio circuitry 1012, which may comprise e.g., the receiving port 1008 and the sending port 1009.
  • the processing circuitry 1006 may be configured to, or operable to, perform the method actions according to Figure 3, and/or Figures 8-9, in a similar manner as that described in relation to Figure 10a.
  • the radio circuitry 1012 may be configured to set up and maintain at least a wireless connection with the second node 112, the radio network node 130, the device 150, another node, and/or another structure in the communications system 100.
  • embodiments herein also relate to the first node 111 operative for handling the anomalous event, the first node 111 being operative to operate in the communications system 100.
  • the first node 111 may comprise the processing circuitry 1006 and the memory 1007, said memory 1007 containing instructions executable by said processing circuitry 1006, whereby the first node 111 is further operative to perform the actions described herein in relation to the first node 111 , e.g., in Figure 3, and/or Figures 8-9.
  • Figure 11 depicts two different examples in panels a) and b), respectively, of the arrangement that the second node 112 may comprise to perform the method actions described above in relation to Figure 6, and/or Figures 8-9.
  • the second node 112 may comprise the following arrangement depicted in Figure 11a.
  • the second node 112 may be understood to be for handling the anomalous event.
  • the second node 112 is configured to operate in the communications system 100.
  • the second node 112 is configured to, e.g. by means of a receiving unit 1101 within the second node 112 configured to, receive the first indication from the first node 111 configured to operate in the communications system 100.
  • the first indication is configured to indicate the future occurrence of the anomalous event in the communications system 100.
  • the first indication is configured to be based on the predictive model configured to be determined using machine self-supervised reinforcement learning.
  • the second node 112 is also configured to, e.g. by means of an initiating unit 1102 within the second node 112 configured to, initiate performing the action to prevent the anomalous event.
  • the first indication may be configured to indicate the probability of occurrence of the anomalous event over the another time period.
  • the embodiments herein may be implemented through one or more processors, such as a processor 1103 in the second node 112 depicted in Figure 11, together with computer program code for performing the functions and actions of the embodiments herein.
• the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the second node 112.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the second node 112.
  • the second node 112 may further comprise a memory 1104 comprising one or more memory units.
  • the memory 1104 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the second node 112.
  • the second node 112 may receive information from, e.g., the first node 111, the radio network node 130, the device 150, and/or another node, through a receiving port 1105.
  • the receiving port 1105 may be, for example, connected to one or more antennas in the second node 112.
  • the second node 112 may receive information from another structure in the communications system 100 through the receiving port 1105. Since the receiving port 1105 may be in communication with the processor 1103, the receiving port 1105 may then send the received information to the processor 1103.
  • the receiving port 1105 may also be configured to receive other information.
  • the processor 1103 in the second node 112 may be further configured to transmit or send information to, e.g., the first node 111, the radio network node 130, the device 150, another node, and/or another structure in the communications system 100, through a sending port 1106, which may be in communication with the processor 1103, and the memory 1104.
  • the units 1101-1102 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 1103, perform as described above.
  • processors, as well as other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • the units 1101-1102 described above may be the processor 1103 of the second node 112, or an application running on such processor.
  • the methods according to the embodiments described herein for the second node 112 may be respectively implemented by means of a computer program 1107 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1103, cause the at least one processor 1103 to carry out the actions described herein, as performed by the second node 112.
  • the computer program 1107 product may be stored on a computer-readable storage medium 1108.
  • the computer-readable storage medium 1108, having stored thereon the computer program 1107, may comprise instructions which, when executed on at least one processor 1103, cause the at least one processor 1103 to carry out the actions described herein, as performed by the second node 112.
  • the computer-readable storage medium 1108 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or a cloud storage space.
  • the computer program 1107 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1108, as described above.
  • the second node 112 may comprise an interface unit to facilitate communications between the second node 112 and other nodes or devices, e.g., the first node 111, the radio network node 130, the device 150, another node, and/or another structure in the communications system 100.
  • the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
  • the second node 112 may comprise the following arrangement depicted in Figure 11b.
  • the second node 112 may comprise a processing circuitry 1103, e.g., one or more processors such as the processor 1103, in the second node 112 and the memory 1104.
  • the second node 112 may also comprise a radio circuitry 1109, which may comprise e.g., the receiving port 1105 and the sending port 1106.
  • the processing circuitry 1103 may be configured to, or operable to, perform the method actions according to Figure 6 and/or Figures 8-9, in a similar manner as that described in relation to Figure 11a.
  • the radio circuitry 1109 may be configured to set up and maintain at least a wireless connection with the first node 111, the radio network node 130, the device 150, another node, and/or another structure in the communications system 100.
  • embodiments herein also relate to the second node 112 operative for handling the anomalous event, the second node 112 being operative to operate in the communications system 100.
  • the second node 112 may comprise the processing circuitry 1103 and the memory 1104, said memory 1104 containing instructions executable by said processing circuitry 1103, whereby the second node 112 is further operative to perform the actions described herein in relation to the second node 112, e.g., in Figure 6 and/or Figures 8-9.
  • Figure 12 depicts two different examples in panels a) and b), respectively, of the arrangement that the communications system 100 may comprise to perform the method actions described above in relation to Figure 7 and/or Figure 9.
  • the arrangement depicted in panel a) corresponds to that described in relation to panel a) in Figure 10 and Figure 11 for each of the first node 111 and the second node 112, respectively.
  • the arrangement depicted in panel b) corresponds to that described in relation to panel b) in Figure 10 and Figure 11 for each of the first node 111 and the second node 112, respectively.
  • the communications system 100 may be for handling the anomalous event.
  • the communications system 100 is configured to comprise the first node 111 and the second node 112.
  • the communications system 100 is configured to, e.g. by means of the obtaining unit 1001 within the first node 111 configured to, obtain, by the first node 111, the first set of data configured to comprise the first sequence of events over the first time period.
  • the first sequence of events is configured to have been categorized as normal or anomalous.
  • the communications system 100 is also configured to, e.g. by means of the determining unit 1002 within the first node 111 configured to, determine, by the first node 111, using machine self-supervised reinforcement learning, the predictive model of the anomalous event based on the first set of data configured to be obtained.
  • the communications system 100 is configured to, e.g. by means of the providing unit 1003 within the first node 111 configured to, provide, by the first node 111, the first indication based on the predictive model configured to be determined to the second node 112 configured to operate in the communications system 100.
  • the first indication may be configured to indicate the probability of occurrence of the anomalous event over another time period.
  • the self-supervised reinforcement learning may be configured to use: a) as the state, the first quantized values, from the first set of data configured to be obtained, of sequential interference that the first cell 141 may be configured to be experiencing, b) as actions, the second quantized values, from the first set of data configured to be obtained, configured to indicate the occurrence of the anomalous event and c) as the reward, the one or more key performance indicators of the radio access network.
  • the determining may be configured to further comprise iterating a training of the predictive model using at least one of: a) cross-entropy loss to rank the sequence of events to the binarized indicator of the occurrence or absence of the anomalous event, b) self-supervised Q-learning to determine which factors to give more weightage to in the predictive model, and c) self-supervised actor-critic, wherein the supervised neural network model may be configured to act as the actor, and the critic may be configured to measure the goodness of actions taken by the actor. Feedback from the critic may be configured to be used as a stop gradient to stop the training.
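The self-supervised Q-learning iteration in the bullet above can be sketched in a minimal tabular form. This is an illustrative assumption rather than the claimed implementation: states are quantized interference levels, actions are the binary anomaly indicator, and the reward is a KPI sample that is negated whenever the chosen action disagrees with the logged indicator, which supplies the self-supervision signal. All function and parameter names (`train_q_table`, `predict`, `eps`, and so on) are hypothetical.

```python
import numpy as np

def train_q_table(states, labels, kpis, n_states=4,
                  alpha=0.1, gamma=0.9, eps=0.1, epochs=200, seed=0):
    """Tabular self-supervised Q-learning sketch.

    states: quantized interference levels (ints in 0..n_states-1)
    labels: logged binary anomaly indicators (0 = normal, 1 = anomalous)
    kpis:   per-step KPI samples used as the reward magnitude
    """
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, 2))  # Q-values for actions {0, 1} per state
    for _ in range(epochs):
        for t in range(len(states) - 1):
            s, s_next = states[t], states[t + 1]
            # epsilon-greedy action: predict "anomalous" (1) or "normal" (0)
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[s]))
            # self-supervision: positive KPI reward if the prediction matches the log
            r = kpis[t] if a == labels[t] else -kpis[t]
            # standard Q-learning update
            q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
    return q

def predict(q, state):
    """Greedy anomaly indicator for a quantized interference level."""
    return int(np.argmax(q[state]))
```

With this toy reward, states whose logged indicator is consistently 1 end up with a higher Q-value for action 1, so the greedy policy reproduces the logged categorization; the cross-entropy ranking and actor-critic variants in the bullet above would replace the table with a neural model and use the critic's feedback as the stop-gradient criterion.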
  • the first node 111 may be further configured to, e.g. by means of the obtaining unit 1001 within the first node 111 configured to, obtain the second set of data configured to comprise the second sequence of events over the second time period.
  • the second sequence of events may be configured to lack a categorization.
  • the communications system 100 may also be configured to, e.g. by means of the validating unit 1004 within the first node 111 configured to, validate, by the first node 111, the predictive model using the second set of data configured to be obtained, by tracking trend discrepancies between the time series of the forecasted sequence of anomalous events and the first sequence of events, to categorize the second sequence of events as normal or anomalous.
  • the communications system 100 may be configured to, e.g. by means of the using unit 1005 within the first node 111 configured to, use, by the first node 111, the categorized second set of data to further train the predictive model.
  • the first set of data and the second set of data may be configured to comprise three series of quantized values configured to comprise: a) the first series of quantized values configured to indicate first interference in a first cell 141 , b) the second series of quantized values configured to indicate the second interference in one or more neighbor cells 142 of the first cell 141 , and c) the third series of quantized values configured to indicate signal to noise ratio in the first cell 141.
  • the validating may be configured to comprise: a) detecting outlier subsets of values in the first series with respect to the second series, b) smoothing the outlier subsets of values configured to be detected and corresponding subsets of values in the third series using a non-parametric approach, c) determining the respective sign of the change in the outlier subsets of values configured to be detected and the corresponding subsets of values in the third series, d) adding the changes with the respective sign configured to be determined, and e) identifying the subsets of zeros thereby identifying occurrence of the anomalous event.
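One possible reading of validation steps a) to e) above, sketched under stated assumptions: outliers in the first (cell-interference) series are detected relative to the second (neighbour-interference) series with a simple z-score rule, the interference and signal-to-noise series are smoothed with a moving median as the non-parametric approach, and positions where the signed changes of the two smoothed series add to zero at a detected outlier are flagged as the anomalous event. The threshold `z`, window `win`, and all names are hypothetical choices, not taken from the source.

```python
import numpy as np

def flag_anomalies(cell_intf, nbr_intf, sinr, z=2.0, win=5):
    """Sketch of validation steps a)-e): returns indices flagged as anomalous."""
    cell_intf = np.asarray(cell_intf, dtype=float)
    sinr = np.asarray(sinr, dtype=float)
    # a) outlier subsets in the cell-interference series relative to neighbour cells
    resid = cell_intf - np.asarray(nbr_intf, dtype=float)
    outlier = np.abs(resid - resid.mean()) > z * resid.std()

    # b) non-parametric smoothing: centred moving median
    def med_smooth(x):
        pad = win // 2
        xp = np.pad(x, pad, mode="edge")
        return np.array([np.median(xp[i:i + win]) for i in range(len(x))])

    intf_s, sinr_s = med_smooth(cell_intf), med_smooth(sinr)
    # c) sign of the change in each smoothed series
    d_intf = np.sign(np.diff(intf_s, prepend=intf_s[0]))
    d_sinr = np.sign(np.diff(sinr_s, prepend=sinr_s[0]))
    # d) add the signed changes; e) zeros at outlier positions mark the anomalous event
    total = d_intf + d_sinr
    return np.where(outlier & (total == 0))[0]
```

In this reading, a zero sum at an outlier means the smoothed changes cancel, e.g. interference rising while the signal-to-noise ratio falls, which is one plausible signature of an interference-driven anomaly.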
  • the first set of data may be configured to comprise anomalous events configured to be detected in historical data with machine-learning.
  • the first set of data may be configured to comprise a correlation between the anomalous events configured to be detected in the historical data with machine-learning, and fault trace parsing configured to be based on machine-learning of the same historical data.
  • the communications system 100 is also configured to, e.g. by means of the receiving unit 1101 within the second node 112 configured to, receive, by the second node 112, the first indication from the first node 111 configured to operate in the communications system 100.
  • the first indication is configured to indicate the future occurrence of the anomalous event in the communications system 100.
  • the first indication is configured to be based on the predictive model configured to be determined using machine self-supervised reinforcement learning.
  • the communications system 100 is further configured to, e.g. by means of the initiating unit 1102 within the second node 112 configured to, initiate, by the second node 112, performing the action to prevent the anomalous event.
  • the methods according to the embodiments described herein for the communications system 100 may be respectively implemented by means of a computer program 1201 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1006, 1103, cause the at least one processor 1006, 1103 to carry out the actions described herein, as performed by the communications system 100.
  • the computer program 1201 product may be stored on a computer-readable storage medium 1202.
  • the computer-readable storage medium 1202, having stored thereon the computer program 1201, may comprise instructions which, when executed on at least one processor 1006, 1103, cause the at least one processor 1006, 1103 to carry out the actions described herein, as performed by the communications system 100.
  • the computer-readable storage medium 1202 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or a cloud storage space.
  • the computer program 1201 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1202, as described above.
  • the actions performed by the first node 111 and the second node 112 in relation to Figure 12 may be understood to correspond to those described in Figure 10 and Figure 11, respectively, and to be performed, e.g., by means of the corresponding units and arrangements described in Figure 10 and Figure 11, which will not be repeated here.
  • the expression “at least one of:” followed by a list of alternatives separated by commas, and wherein the last alternative is preceded by the “and” term, may be understood to mean that only one of the list of alternatives may apply, more than one of the list of alternatives may apply or all of the list of alternatives may apply.
  • This expression may be understood to be equivalent to the expression “at least one of:” followed by a list of alternatives separated by commas, and wherein the last alternative is preceded by the “or” term.
  • processor and circuitry may be understood herein as a hardware component.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A computer-implemented method performed by a first node (111) is provided. The method is for handling an anomalous event. The first node (111) operates in a communications system (100). The first node (111) performs the following operations: obtaining (301) a first set of data comprising a first sequence of events over a first time period, the first sequence of events having been categorized as normal or anomalous; based on the obtained first set of data, using machine self-supervised reinforcement learning, determining (302) a predictive model of the anomalous event; and, based on the determined predictive model, providing (306) a first indication to a second node (112) operating in the communications system (100). The second node (112) receives the first indication and initiates an action to prevent the anomalous event.
PCT/IN2021/051050 2021-11-05 2021-11-05 First node, second node, communications system and methods performed thereby for handling an anomalous event WO2023079567A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IN2021/051050 WO2023079567A1 (fr) 2021-11-05 2021-11-05 First node, second node, communications system and methods performed thereby for handling an anomalous event

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IN2021/051050 WO2023079567A1 (fr) 2021-11-05 2021-11-05 First node, second node, communications system and methods performed thereby for handling an anomalous event

Publications (1)

Publication Number Publication Date
WO2023079567A1 true WO2023079567A1 (fr) 2023-05-11

Family

ID=86240733

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2021/051050 WO2023079567A1 (fr) 2021-11-05 2021-11-05 Premier nœud, second nœud, système de communication et procédés exécutés par ceux-ci pour traiter un événement anormal

Country Status (1)

Country Link
WO (1) WO2023079567A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160034823A1 (en) * 2014-07-31 2016-02-04 Collision Communications, Inc. Methods, Systems, And Computer Program Products For Optimizing A Predictive Model For Mobile Network Communications Based On Historical Context Information
WO2016019832A1 (fr) * 2014-08-08 2016-02-11 Telefonaktiebolaget L M Ericsson (Publ) Procédé d'arrêt de rapport de mesure et nœud b évolué utilisant ce dernier
US20210160746A1 (en) * 2018-04-20 2021-05-27 Telefonaktiebolaget Lm Ericsson (Publ) Automated observational passive intermodulation (pim) interference detection in cellular networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG WUYANG; FORD RUSSELL; CHO JOONYOUNG; ZHANG CHARLIE JIANZHONG; ZHANG YANYONG; RAYCHAUDHURI DIPANKAR: "Self-Organizing Cellular Radio Access Network with Deep Learning", IEEE INFOCOM 2019 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), IEEE, 29 April 2019 (2019-04-29), pages 429 - 434, XP033619536, DOI: 10.1109/INFCOMW.2019.8845280 *

Similar Documents

Publication Publication Date Title
CN109845310B (zh) 利用强化学习进行无线资源管理的方法和单元
US10966108B2 (en) Optimizing radio cell quality for capacity and quality of service using machine learning techniques
CN111466103B (zh) 用于网络基线的生成和适配的方法和系统
US20150331771A1 (en) Simulating Burst Errors in Mobile Data Communication Network System Level Simulations
US11122467B2 (en) Service aware load imbalance detection and root cause identification
US11096092B2 (en) Service aware coverage degradation detection and root cause identification
US20220149980A1 (en) Link adaptation optimized with machine learning
US10863400B1 (en) Wireless terminal roaming
CN107210852A (zh) 通过预测平滑的传输块大小来控制应用的操作的系统和方法
US20230198640A1 (en) Channel state information values-based estimation of reference signal received power values for wireless networks
CN114731524A (zh) 监测多个网络节点的性能
WO2021048742A1 (fr) Système et procédé de filtrage intelligent piloté par scénario pour surveillance de réseau
US20220210682A1 (en) SYSTEM AND METHOD FOR ARTIFICIAL INTELLIGENCE (AI) DRIVEN VOICE OVER LONG-TERM EVOLUTION (VoLTE) ANALYTICS
Bartoli et al. CQI prediction through recurrent neural network for UAV control information exchange under URLLC regime
US20230254709A1 (en) First node, third node, fourth node and methods performed thereby, for handling parameters to configure a node in a communications network
WO2023079567A1 (fr) Premier nœud, second nœud, système de communication et procédés exécutés par ceux-ci pour traiter un événement anormal
US20220377616A1 (en) Method and apparatus for service level agreement monitoring and violation mitigation in wireless communication networks
EP3744125A1 (fr) Procédé, appareil et programme informatique pour modification de critères de contrôle d'admission
US20240073720A1 (en) First node and methods performed thereby for handling anomalous values
US20240027567A1 (en) Increasing Wireless Network Performance Using Contextual Fingerprinting
US20230246791A1 (en) Methods and apparatuses for interference mitigation and related intelligent network management
EP4150861B1 (fr) Détermination de la mise à niveau de cellules
EP4366433A1 (fr) Dispositif électronique et procédé de fourniture d'informations de planification par apprentissage dans un système de communication sans fil
WO2023119304A1 (fr) Nœud et procédés mis en œuvre par celui-ci pour gérer la dérive dans des données
WO2023095150A1 (fr) Premier nœud, second nœud, système de communication et procédés exécutés par ceux-ci pour gérer des modèles prédictifs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21963181

Country of ref document: EP

Kind code of ref document: A1