WO2015144211A1 - Method and system for monitoring QoE - Google Patents

Method and system for monitoring QoE

Info

Publication number
WO2015144211A1
Authority
WO
WIPO (PCT)
Prior art keywords
qoe
service
data
values
network
Prior art date
Application number
PCT/EP2014/055972
Other languages
French (fr)
Inventor
Mohammad Abdul AWAL
MingXue Wang
Sidath Handurukande
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2014/055972
Publication of WO2015144211A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003Managing SLA; Interaction between SLA and QoS
    • H04L41/5009Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/142Network analysis or design using statistical or mathematical methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5061Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the interaction between service providers and their network customers, e.g. customer relationship management
    • H04L41/5067Customer-centric QoS measurements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/04Processing captured monitoring data, e.g. for logfile generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence

Definitions

  • the present invention relates to a method and system for monitoring Quality of Experience, QoE, of a service, and particularly but not exclusively to a method and system for monitoring QoE of a service transported over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure.
  • ISP Internet Service Provider
  • QoE Quality of Experience
  • QoE is a measure of a user's experience of a service. This is a subjective measure and could vary from user to user.
  • QoE consists of a number of measures for a given service, for example listenability of voice, usability, video impairments and the like.
  • One example of a QoE metric is Mean Opinion Score, MOS, which provides a numerical indication of the perceived quality of the service from the viewpoint of its users.
  • MOS represents voice quality of the received voice after encoding and transmission of voice.
  • voice quality is the primary indicator of the quality of user experience.
  • the voice quality measurement approaches can be classified into two categories: subjective and objective methods.
  • PESQ Perceptual Evaluation of Speech Quality
  • PSQM Perceptual Speech Quality Measure
  • PAMS Perceptual Analysis Measurement System
  • the model currently recommended by the ITU-T for network planning is the E-model (ITU-T Rec. G.107, 2005).
  • the E-model is mainly empirical by nature, and was developed on the basis of large amounts of subjective test data. This model takes into account a wide range of telephony-band impairments, for example impairment due to low bit-rate coding devices and one-way delay, as well as the "classical" telephony impairments of loss, noise and echo.
  • the E-model can be applied to assess the voice quality of Local Area Network, LAN, and wireless scenarios, based on circuit-switched and packet-switched technology.
  • the primary output of the E-model calculations is a scalar quality rating value known as the Transmission Rating Scale or R-scale.
  • the R-scale ranges from 0 to 100, with 100 corresponding to optimum quality.
  • the R-scale is then mapped to the MOS scale using a codec-specific mapping.
  • the E-model uses the following N-KPIs to assess voice quality: packet loss rate, end-to-end delay, and jitter (delay variation).
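  • As an illustration of the R-scale to MOS mapping mentioned above, the sketch below implements the generic (codec-independent) conversion curve from ITU-T G.107; a real deployment would apply a codec-specific mapping on top of this:

        def r_to_mos(r: float) -> float:
            """Generic E-model R-scale to MOS conversion (ITU-T G.107).

            The clamps at R <= 0 and R >= 100 follow the standard definition.
            """
            if r <= 0:
                return 1.0
            if r >= 100:
                return 4.5
            return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6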
  • the values of these N-KPIs are determined by exploiting the way in which the data is packetized into packets before being transmitted over the network.
  • the values of the N-KPIs are determined through the different network protocol headers used to transport the voice signal from source to destination.
  • packet structure and header information in an Ethernet/IP network
  • the packet structure comprises an Ethernet header; an IP header; a User Datagram Protocol, UDP, header; and a Real-time Transport Protocol, RTP, header.
  • the Ethernet header contains the source and destination Media Access Control, MAC, addresses of local link.
  • the IP header contains the source and destination IP addresses to enable routing.
  • the UDP header contains the source and destination ports of the sending and receiving applications, respectively.
  • the RTP header contains the timestamp of the data packet, and the sequence number of the RTP header indicates whether packets are missing or have arrived out of order.
  • the payload type field of the RTP header indicates the type of payload, for example a voice signal encoded with codec G.711.
  • RTP was designed to support the determination of different network KPIs from the received voice packet.
  • When using UDP and RTP transport protocols, a number of UDP ports must be opened, as each data stream requires two ports: one for data and one for control. This creates a sizeable problem, as Internet firewalls and/or routers may not open these ports. It is also normal for Internet firewalls and/or routers to filter or ignore UDP packets. To avoid this problem, it is known to use RTP, or an RTP-like protocol, over Transmission Control Protocol, TCP. This is commonly used for Voice over IP, VoIP, services.
  • TCP Transmission Control Protocol
  • In many deployments, communications can only take place through Hypertext Transfer Protocol (HTTP) proxies that allow traffic to and from organizations/enterprises. As a result, any service that is provided over the network must be provided over HTTP.
  • HTTP Hypertext Transfer Protocol
  • OTT services are an important subset of services delivered using the telecommunications and ISP networks. Initially, OTT services were used only for audio and video content, but more recently this has expanded to encompass any form of data. The distinguishing feature of OTT services is that such services are not provided by the network service provider. Whilst the network service provider is aware of the data packets and their contents, it does not have any control over the OTT service. However, in order to maintain the end-user quality and QoE for customers, network operators want to monitor the status of the OTT services at different points of their network. Enabling QoE monitoring in this kind of situation presents the following challenges: a) monitoring services independently from the service providers; and b) end devices are not synchronized in terms of clock.
  • a method, implemented within a network, for generating a mapping between Network Key Performance Indicators, N-KPIs, and Quality of Experience, QoE, of a service implemented on the network, the method comprising:
  • Data traffic associated with said service may be transported over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure.
  • the step of generating a mapping between the calculated N-KPI values and determined QoE values may comprise pairing the data traffic that has been intercepted and logged with corresponding data files recorded by or at the UEs.
  • the key may comprise one or more of the following metrics: session start time; session end time; host address; port numbers; service provider name.
  • the step of receiving and logging data representative of intercepted data traffic associated with a service may comprise receiving and logging said one or more metrics.
  • the step of receiving and logging data files recorded by or at UEs may comprise extracting and logging said one or more metrics.
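  • As a hedged sketch of this key-based pairing (all field and function names here are illustrative, not taken from the patent), the intercepted-traffic logs and UE-recorded files could be matched as follows:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class SessionKey:
            # The metrics listed above; field names are illustrative.
            start_time: float
            end_time: float
            host_address: str
            ports: tuple       # (source port, destination port)
            provider: str

        def pair_records(traffic_logs: dict, ue_files: dict) -> list:
            """Pair intercepted-traffic logs with UE-recorded data files
            that share the same SessionKey."""
            return [(traffic_logs[key], ue_files[key])
                    for key in traffic_logs.keys() & ue_files.keys()]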
  • the step of receiving and logging data traffic associated with the service may comprise receiving data representative of traffic that has been intercepted in its transit from a first UE to a second UE.
  • the step of receiving and logging data files recorded by or at the UEs may comprise receiving and logging data files recorded by or at the first and second UEs.
  • the step of determining QoE values may comprise comparing the data files recorded by or at the first UE and with data files recorded by or at the second UE.
  • the step of generating a mapping between the calculated N-KPI values and determined QoE values may comprise mapping the N-KPI values calculated from the data traffic that has been intercepted or logged in its transit from the first UE to the second UE onto the QoE values that were determined from the data files recorded by or at the first and second UEs.
  • the method may comprise determining identifying information from the data packets that have been received and logged and combining related data into session records.
  • the method may comprise intercepting data traffic associated with the service prior to the step of receiving and logging data representative of said data traffic associated with the service.
  • the step of intercepting data traffic associated with the service may be implemented through intermediate packet inspection probes.
  • the intermediate packet inspection probes may comprise shallow packet inspection probes.
  • the intermediate packet inspection probes may comprise deep packet inspection probes.
  • the step of determining a QoE value from the data files recorded by or at the UEs may comprise objective measurement of QoE.
  • the step of determining a QoE value from the data files recorded by or at the UEs may comprise implementation of a Full Reference, FR, algorithm.
  • FR Full Reference
  • Such an algorithm utilizes the signal sent by a sender UE, and hence recorded at said sender UE, as a reference signal.
  • the FR algorithm then compares this signal to the signal received by the recipient UE, and hence recorded at said recipient UE.
  • the step of determining a QoE value from the data files recorded by or at the UEs may comprise Perceptual Objective Listening Quality Assessment, POLQA.
  • the step of determining a QoE value from the data files recorded from the UEs may comprise Perceptual Evaluation of Speech Quality, PESQ.
  • the step of determining a QoE value from the data files recorded by or at the UEs may comprise Perceptual Evaluation of Video Quality, PEVQ.
  • the N-KPIs may relate to one or more of the following properties:
  • these new N-KPIs provide hints about a variety of network characteristics which cannot otherwise be obtained when a service uses an unconventional or encrypted transport protocol.
  • a Transmission Control Protocol, TCP retransmission event provides information about possible packet loss and delay.
  • the variance of TCP transmission rate provides an indication of the burst size of packet loss.
  • the N-KPIs may also provide information about congestion, variable delay, loss and the like.
  • the N-KPIs may be calculated as:
  • one N-KPI may be the variance of TCP transmission rate.
  • another N-KPI may be the gap duration of the out-of-order packet rate.
  • the service may comprise an Over The Top, OTT, service.
  • the service may comprise a Voice over IP, VoIP, service.
  • the data traffic may be encrypted.
  • the method may comprise decrypting the data that has been intercepted, which may be carried out prior to logging the data.
  • the above-described method can be considered as a learning phase within a method for quantitatively estimating Quality of Experience, QoE, of services implemented on the network.
  • QoE Quality of Experience
  • the method comprises a learning phase and a live monitoring phase
  • the learning phase comprises a method for generating a mapping between Network Key Performance Indicators, N-KPIs, and QoE of a service implemented on the network in accordance with the first aspect and various embodiments of the first aspect
  • the live monitoring phase comprises receiving and logging data representative of data traffic associated with a service; calculating N-KPI values from the data traffic that has been intercepted and/or logged; and quantitatively estimating QoE of the service by using said stored mapping between the N-KPI values and QoE values.
  • the live monitoring phase comprises the steps of determining identifying information from the data packets that have been received and logged, combining related data into session records, and calculating the N-KPI values based on the session records.
  • the mapping generated and stored during the learning phase is then used during the live monitoring phase for estimating a QoE value of the service.
  • the learning phase requires data to be recorded by or at the UEs.
  • the step of monitoring data traffic may comprise intermediate packet inspection such as deep packet inspection and/or shallow packet inspection.
  • monitoring the data traffic in this way avoids reliance on terminal reports or feedback from the end-user about the service quality.
  • the learning phase may be repeated periodically. Alternatively or additionally, the learning phase may be repeated on demand, whenever it is deemed necessary.
  • the method may be implemented on a Network Element, NE.
  • the method may be implemented on a QoE monitoring system.
  • N-KPIs Network Key Performance Indicators
  • QoE Quality of Experience
  • a packet analyzer for receiving and logging data representative of intercepted data traffic associated with the service
  • N-KPI Network Key Performance Indicator
  • a machine learning engine for generating a mapping between the N-KPI values calculated by the N-KPI engine and QoE values determined by the QoE engine
  • a memory for storing the mapping generated by the machine learning engine.
  • a packet analyzer may also be known as a network analyzer, protocol analyzer or packet sniffer.
  • the monitoring system may comprise a packet classification engine for determining identifying information from the data packets received and logged by the packet analyzer and combining related data into session records.
  • the session records may be input into the N-KPI engine to enable the N-KPI engine to calculate values of the N-KPIs.
  • the monitoring system may comprise a workstation terminal for allowing a human network operator to access data stored in the memory.
  • the monitoring system may be arranged for implementing the learning phase of the above-described method.
  • the monitoring system may also be arranged for implementing the live monitoring phase of the above-described method.
  • the monitoring system may comprise a QoE monitoring engine for receiving N-KPI values from the N-KPI engine and consulting the stored mapping to estimate QoE values.
  • a system for generating a mapping between Network Key Performance Indicators (N-KPIs) and Quality of Experience (QoE) of a service implemented on a network comprising a processor and a memory, said memory containing instructions executable by said processor whereby said system is operative to:
  • the system may consist of a single network element.
  • the data traffic may be transported between UEs over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure
  • QoE Quality of Experience
  • the system comprising a processor and a memory, said memory containing instructions executable by said processor whereby said system is operative to implement a learning phase and a live monitoring phase, in the learning phase the system is operative to:
  • N-KPI Network Key Performance Indicator
  • in the live monitoring phase the system is operative to:
  • calculate N-KPI values from the data traffic that has been intercepted and/or logged; and use said stored mapping between the N-KPI values and QoE values to quantitatively estimate QoE of the service.
  • Figure 1 is a schematic illustration of a voice packet with its payload inside nested network headers added by different protocols;
  • Figure 2 is a block diagram of a network comprising a system for monitoring QoE of a service in accordance with an embodiment of the present invention
  • Figure 3 is a block diagram of the system for monitoring QoE of a service illustrated in figure 2;
  • Figure 4 is a block diagram of an N-KPI engine, which forms part of the system illustrated in figure 3;
  • Figure 5 is a block diagram of a machine learning engine, which forms part of the system illustrated in figure 3;
  • Figure 6 is a block diagram of a system for monitoring QoE of a service in accordance with another embodiment of the present invention.
  • Figure 7 is a flow chart illustrating a method for calculating positive and negative surface areas
  • Figure 8 is a graphical representation of an example of data rate time variation of a voice call;
  • Figure 9 is a diagram illustrating an (N+1)-state Markov process for transition frequency modeling; and
  • Figure 10 is a flow diagram illustrating a method for monitoring QoE of a service in accordance with an embodiment of the present invention.
  • FIG. 2 is a simplified block diagram illustrating a network 208 comprising a plurality of UEs 204-207.
  • the UEs 204-207 may include any computer system or device such as, for example, a personal computer, laptop computer, tablet computer, mobile device, smart phone, web-enabled phone, etc.
  • VoIP service providers 201-203 provide VoIP service to several UEs 204-207.
  • One UE may call another UE where both UEs have a subscription to the same VoIP service.
  • the network 208 may include, for example, any suitable wired or wireless computer or data network including, for example, the Internet, or a third generation (3G) or a fourth generation (4G) wireless network.
  • the bitrate used for each voice session is dependent upon the network 208 bandwidth assigned to the session or the client. The bandwidth may vary on a session-by-session basis and/or may vary during individual voice sessions.
  • the capabilities of UEs 204-207 may also affect the bitrate used for individual voice sessions.
  • the protocols that transport the voice data between UEs 204-207 through VoIP servers are selected based on the service provider, and/or UEs 204-207.
  • the VoIP service is supported, for example, on HTTP, HTTPS, a proprietary protocol on top of TCP or secured TCP.
  • a system 300 for monitoring OTT VoIP service is also provided within the network
  • the VoIP services are monitored by QoE monitoring system 300 as the data packets containing the UE voice pass through network 208.
  • UEs 204-207 are using VoIP services from many different VoIP service providers 201-203, which results in many different HTTP, HTTPS and secured TCP sessions through network 208.
  • Each of the VoIP sessions for the UEs 204-207 has different session duration and has different bandwidths available.
  • the talk period and silent period for that session may change multiple times.
  • the voice session may suffer from network losses, and/or congestion, which is likely to create gaps in the voice session. These issues make it difficult for the service provider to determine the QoE for subscribers.
  • the system 300 comprises a packet analyzer, 307, for receiving and logging data representative of intercepted data traffic associated with the service, an N-KPI engine, 400, for calculating values of N-KPIs from the data traffic that has been intercepted and/or logged.
  • the system 300 also comprises a receiver 306 for receiving data files associated with the service recorded at User Equipment, UEs, and a QoE engine, 312, for determining QoE values from the data files associated with the service and received from the UEs.
  • system 300 comprises a machine learning engine, 500, for generating a mapping between the N-KPI values calculated by the N-KPI engine and QoE values determined by the QoE engine and a memory, 313, for storing the mapping generated by the machine learning engine.
  • FIG 3 is a block diagram of the QoE monitoring system 300 illustrated in figure 2.
  • QoE monitoring system 300 works in two phases: a "learning phase" and a "live monitoring phase".
  • In the learning phase, the QoE monitoring system 300 collects recorded voice files from the UEs 204-207 and tracks the related data packets to determine N-KPIs.
  • the monitoring system uses PESQ to generate the QoE (in terms of MOS value) of the recorded voice file.
  • the QoE monitoring system 300 then correlates the QoE and N-KPIs to generate QoE models.
  • In the live monitoring phase, the monitoring system can identify the QoE of the voice session using the QoE models and the N-KPIs extracted from the voice data.
  • the system 300 does not rely on terminal reports or any sort of feedback from the end-users about their service quality (i.e., reports about VoIP quality itself).
  • the system 300 only relies on N-KPIs, which may be calculated from data representative of intercepted data traffic.
  • the network service provider may use this QoE information to adjust the network services available to UEs 204-207, such as the bandwidth assigned to each user and the routing of data packets through network 208.
  • the QoE monitoring system 300 comprises a packet analyzer 307, packet classification engine 308, N-KPI engine 400, a QoE engine 312 and a machine learning engine 500.
  • the packet analyzer 307 is configured to intercept and capture data packets from network 208, including data for voice sessions 301.
  • the packet analyzer 307 is further configured to log data indicative of this intercepted data.
  • the packet analyzer may be configured to passively receive data indicative of intercepted data traffic, and log all or part of this received data.
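  • As a rough sketch of such passive reception and logging, assuming a Python probe built on the scapy packet library (an assumption; the patent does not name any capture tool):

        from scapy.all import sniff, IP, TCP  # assumes scapy is installed

        def log_packet(pkt):
            # Log only the header fields the N-KPI engine needs, not the payload.
            if pkt.haslayer(IP) and pkt.haslayer(TCP):
                print(pkt.time, pkt[IP].src, pkt[IP].dst,
                      pkt[TCP].sport, pkt[TCP].dport,
                      pkt[TCP].seq, pkt[TCP].ack, len(pkt[TCP].payload))

        # Passive capture: packets are observed and logged, never modified.
        sniff(filter="tcp", prn=log_packet, store=False)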
  • the packet classification engine 308 is configured to identify information from the data logged by the packet analyzer 307 and combine related data into session records.
  • the session records may comprise session start time, session end time, VoIP service provider name, host addresses and port numbers of the session, extracted N-KPIs, extracted service QoE during the training phase, monitored service QoE during the live phase, etc.
  • the session records are stored in the database 418, which can be queried by the N-KPI engine 400.
  • the N-KPI engine 400 is configured to calculate numerous N-KPIs, which reflect network characteristics such as congestion, variable delay, loss, etc.
  • the N-KPI values are stored in the database 418 for the corresponding session record.
  • the N-KPI engine 400 is illustrated and described in greater detail in figure 4.
  • the QoE engine 312 is configured to collect session start time, session end time, host addresses, port numbers, service provider name, and recorded data files from a subset of UEs 204-207. In the case of VoIP, the data files will be "voice files" representative of a telephone conversation.
  • the QoE engine 312 uses PESQ to generate the QoE (in terms of MOS value) of the recorded voice file.
  • the extracted QoE values are then stored in the database 418 for the corresponding session record, using the collected session start time, session end time, host addresses, port numbers and service provider name as the key.
  • Advanced tools such as POLQA, or others, may be used instead if available.
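  • A minimal sketch of this scoring step, assuming the open-source Python pesq and soundfile packages as stand-ins for a full ITU-T P.862 implementation (the file names are hypothetical):

        import soundfile as sf   # any WAV reader would do
        from pesq import pesq    # open-source P.862 implementation

        # Reference signal recorded at the sender UE; degraded signal
        # recorded at the receiver UE.
        ref, fs = sf.read("sender_ue.wav")
        deg, _ = sf.read("receiver_ue.wav")

        mos = pesq(fs, ref, deg, 'wb')   # wideband mode; fs must be 16 kHz
        print(mos)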
  • the machine learning engine 500 is configured to correlate the QoE and N-KPIs relating to each specific VoIP service provider 201-203 to generate QoE models. These QoE models are then stored in the database 418.
  • the machine learning engine 500 can identify the QoE of the voice session using QoE models and the N-KPIs determined from the voice data.
  • the session data is provided as a monitoring feed 314 to QoE monitoring application 311.
  • Database 418 may also store subscriber information and client device data. A network operator may access the real-time or stored session data via workstation terminal 315. Data stored to database 418 can be queried by the service provider, for example, on a per-session, per-user, per-device, or per service basis.
  • FIG. 4 is a block diagram that illustrates the N-KPI engine 400 in greater detail.
  • data packets are first classified into sessions by a packet classification engine 401.
  • the various metrics produced by the N-KPI engine 400 may be session-independent, in which case a packet classification engine 401 will not be required.
  • the N-KPI engine 400 preferably includes a retransmission rate calculation method 408, out-of-order rate calculation method 409, duplicate ACK (acknowledgement) rate calculation method 410, flight rate calculation method 411, ACK RTT (round trip time) calculation method 412, data rate calculation method 413, and packet rate calculation method 414.
  • Each of these methods 408-414 produces an array of values with respective TCP (Transmission Control Protocol) level parameters.
  • the array size is 20 to 60 for a session whose duration is at least 20 seconds.
  • the retransmission rate calculation method 408 calculates the number of data packet retransmissions in each second.
  • the out-of-order rate calculation method 409 calculates the number of out-of-order packets in each second.
  • the duplicate ACK rate calculation method 410 calculates the number of duplicate ACK packets in each second.
  • the flight rate calculation method 411 calculates the flight size in bytes in each second.
  • the ACK RTT calculation method 412 calculates the average ACK RTT in each second.
  • the data rate calculation method 413 calculates the number of data bytes in each second.
  • the packet rate calculation method 414 calculates the number of data packets in each second.
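  • A simple way to picture the common output of methods 408-414 is one bin per second of the session, as in this illustrative sketch (names are not from the patent):

        def per_second_series(events, session_start, session_end):
            """Bin per-packet events (timestamp, value) into one value per
            second, as each of the methods 408-414 does for its metric."""
            n = int(session_end - session_start) + 1
            series = [0.0] * n
            for t, value in events:
                series[min(int(t - session_start), n - 1)] += value
            return series   # e.g. 20-60 entries for a 20-60 second session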
  • Retransmission rate calculation 408 - A packet is considered to have been retransmitted if a TCP sender does not receive any ACK of the packet within the TCP retransmission timeout (RTO) period.
  • RTO retransmission timeout
  • the RTO period is 4 times the end-to-end round trip time (RTT) value.
  • the number of retransmissions is calculated by observing the data packet sequence number.
  • the method examines the TCP header of the packet and extracts the sequence number, which can be expected (in the absence of data transfer errors) to follow on from the sequence number of the previous packet within a current communications session. The method compares the extracted sequence number of the current TCP packet with the highest sequence number previously observed within the current communications session.
  • If the extracted sequence number is higher than the highest previously observed sequence number, the present TCP packet is deemed to be a regular packet. Otherwise, the present TCP packet is deemed to be a retransmitted packet.
  • the N-KPI in this case is the number of retransmitted packets which occur within a given time interval.
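  • The following sketch shows one plausible implementation of this highest-sequence-number test (an illustration, not the patented code):

        def count_retransmissions(sequence_numbers):
            """Count packets whose sequence number does not advance beyond
            the highest sequence number already seen in the session."""
            highest_seen = -1
            retransmissions = 0
            for seq in sequence_numbers:   # TCP sequence numbers in capture order
                if seq > highest_seen:
                    highest_seen = seq     # deemed a regular packet
                else:
                    retransmissions += 1   # deemed a retransmitted packet
            return retransmissions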
  • Out-of-order rate calculation 409 - The determination of the out-of-order rate involves using sequence numbers.
  • the method comprises receiving both a data packet and a corresponding ACK packet.
  • When the data packet is received, its sequence number and its payload size are extracted from the TCP header.
  • the payload size is added to the sequence number to determine an expected sequence number for an expected ACK packet.
  • When an ACK packet is received, its sequence number is extracted and compared with the expected sequence number calculated from the data packet. If the sequence number extracted from the ACK packet matches the expected sequence number, this indicates an in-order transmission; if it does not match, this indicates an out-of-order transmission.
  • the N-KPI in this case is the number of out of order transmissions detected within a given time interval.
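  • A hedged sketch of this expected-ACK comparison, assuming data packets are presented as (sequence number, payload size) pairs and ACKs as a list of acknowledgement numbers:

        def count_out_of_order(data_packets, ack_numbers):
            """An ACK whose number matches no expected sequence number
            (data sequence number + payload size) marks an out-of-order
            transmission."""
            expected = {seq + size for seq, size in data_packets}
            return sum(1 for ack in ack_numbers if ack not in expected)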
  • Duplicate ACK rate calculation 410 - Duplicate ACKs can be detected using the acknowledgement sequence number.
  • this calculation method comprises keeping track of acknowledgement sequence numbers from received ACKs, and incrementing a counter each time an ACK is received which has an acknowledgement sequence number which is the same as that of a previously received acknowledgment.
  • the N-KPI in this case is the number of duplicate ACKs detected within a given time interval.
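  • For example, such a counter of repeated acknowledgement numbers could be implemented as follows (illustrative only):

        def count_duplicate_acks(ack_numbers):
            """Increment a counter whenever an ACK carries an acknowledgement
            number already seen earlier in the session."""
            seen, duplicates = set(), 0
            for ack in ack_numbers:
                if ack in seen:
                    duplicates += 1
                else:
                    seen.add(ack)
            return duplicates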
  • Flight rate calculation 411 - Flight rate is the amount or proportion of the data within the current time window which has not been acknowledged.
  • One way of calculating this is, upon reception of an ACK packet, to extract its current acknowledgement sequence number from its TCP header. The highest seen sequence number is also tracked, in the same manner as for the retransmission calculation. The flight size is then calculated as the difference between the highest previously observed sequence number and the sequence number of the current ACK packet. The flight sizes calculated in this way are indicative of the amount of data which has not yet been acknowledged, and can be accumulated over a period to give a flight rate for that period.
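  • A sketch of this flight-size bookkeeping, under the assumption that data and ACK packets arrive as a single capture-ordered stream:

        def flight_sizes(events):
            """events: capture-ordered ('data', seq) and ('ack', ack_no)
            tuples. On each ACK, the flight size is the gap between the
            highest data sequence number seen and the number the ACK
            confirms."""
            highest_seq, sizes = 0, []
            for kind, number in events:
                if kind == 'data':
                    highest_seq = max(highest_seq, number)
                else:
                    sizes.append(highest_seq - number)
            return sizes   # accumulate per period to obtain a flight rate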
  • ACK RTT calculation 412 - This can be determined as the time delay between the time stamp on a transmission (set by the transmitting device) and the time stamp on the ACK (set by the receiving device). This delay could be determined for each transmission/ACK pairing within a time interval (e.g. one second) and then averaged. If the TCP timestamps option is not enabled at both the transmitting and receiving devices, the ACK RTT is computed in the probe, based on the difference between a time of capture of a data packet and the time of capture of the corresponding acknowledgement packet.
  • Data rate calculation 413 - This is the total number of non-duplicate data bytes conveyed by the TCP transmission packets per second. This can be readily determined by interrogating TCP headers to identify the payload size of conveyed packets.
  • Packet rate calculation 414 - This is the total number of non-duplicate transmission packets conveyed per second. This can be readily determined by the probe, and by ignoring duplicate packets identified by having the same sequence number.
  • the core component 402 of N-KPI engine 400 includes a variance calculation method 403, surface area calculation method 404, transition frequency calculation method 405, gap duration calculation method 406 and number of gaps calculation method 407.
  • Each of these methods 403-407 produces a list of N-KPIs taking input from any of the above-described methods 408-414, namely the retransmission rate calculation method 408, out-of-order rate calculation method 409, duplicate ACK rate calculation method 410, flight rate calculation method 411, ACK RTT calculation method 412, data rate calculation method 413, and packet rate calculation method 414.
  • the variance calculation method 403 provides a measure of how diverse the values of a specific N-KPI are, and how far they lie from the mean value. A low variance indicates that the data points tend to be very close to the mean, whereas a high variance indicates that the data points are spread out over a large range of values.
  • the variance of an array of values X is calculated as Equation 1: $\mathrm{Var}(X) = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2$, where $n$ is the size of the array X, $x_i$ is the value at the $i$-th index of X, and $\mu$ is the average value of array X, calculated as Equation 2: $\mu = \frac{1}{n}\sum_{i=1}^{n} x_i$.
  • the variances of different N-KPIs are important when N-KPIs fluctuate in an error-prone network, such as a wireless network with its inherent interference.
  • the new N-KPIs produced from variance calculation method 403 would be the variance of retransmission rate, of out-of-order rate, of duplicate ACK rate, of flight rate, of ACK RTT rate, of data rate, and of packet rate, which are saved to database 418 through feed 415.
  • the surface area calculation method 404 provides a snapshot of a particular N-KPI k for a specific period of time. This period of time is referred to as a "window size" w.
  • the current status of a N-KPI k(t) could be high or low compared to the average value of the N-KPI k_avg.
  • the average value could be an average taken over any specified period of time which is longer than the window size. For example, the average could be assessed for the data array X, or for a longer period than this.
  • the surface area metric gives us the status of the N-KPI k over a period of time w. If the status of k over the window w is higher than the average k_avg, this is referred to as a positive surface area (PSA).
  • PSA positive surface area
  • the method takes w consecutive values (i.e. window size) at a time and determines the surface area.
  • Figure 7 schematically illustrates the surface area measurements according to one possible implementation.
  • the algorithm is commenced on the basis of a particular array of N-KPI values X, and on the basis of a selected or predetermined window size w.
  • the algorithm is initialized by setting a loop control variable n to match the number of elements in the array X, by setting a PSA variable to zero, by setting an NSA variable to zero and by setting a loop tracking variable i to zero.
  • At a step S3 it is determined whether the value of the loop tracking variable i is less than the loop control variable n. This step merely constrains the algorithm to function only while there are values still to obtain from the data array X. If the step S3 is answered in the affirmative, then the algorithm proceeds to a step S4, where an accumulation variable sum is initialized to zero, and another loop tracking variable j is set to the current value of i.
  • the algorithm of Figure 7 effectively considers each consecutive set of w data values (window) in the array X, before moving on to consider the next consecutive set of w data values (window) in the array X.
  • the step S4 will be carried out for each window to set the starting variable in the window, and to initialise the variable sum required to determine the surface area state within that window.
  • At step S5 it is determined whether the current position in the data array falls within the current window. If the step S5 is answered in the affirmative, then at a step S6 the magnitude of the difference between the current data value in the array X and the average value for the N-KPI is calculated, and it is determined whether the calculated magnitude is greater than a threshold $S_{th}$. If so, the algorithm progresses to a step S7, where the value of $(x_j - \mu)$, which will be a positive number if $x_j > \mu$ and a negative number if $x_j < \mu$, is added to the variable sum. Then, at a step S8, the value of j is incremented to permit the next value in the data array to be evaluated at the step S5.
  • step S6 determines whether a value in the data array X is close to the average (S6 answered in the negative) or deviates substantially from the average (S6 answered in the affirmative), and only values in the array which deviate substantially from the average are permitted to influence the sum variable.
  • the steps S5 to S8 continue in a loop until all data values of the array X within the current window have been considered. Then, at a step S9, the loop tracking variable i is incremented by the window size w to set a new start point for the next window. Then, at a step S10 it is determined whether the value of sum is greater than or less than zero.
  • If the value of sum is greater than zero, the PSA variable will be increased by the amount of the variable sum, and the process will return to the step S3 for the next window to be evaluated.
  • If the value of sum is less than zero, the NSA variable will be increased by the amount of the variable sum (given that the variable sum is in this case a negative number, this will result in NSA being reduced; the NSA metric could equally be kept as a positive value by adding to it the magnitude of the variable sum).
  • This process will continue until the entirety of the data array X has been considered, at which point the step S3 will be answered in the negative and the process will terminate at a step S13.
  • This algorithm has the effect of determining either a positive or negative sum for each window, which is indicative of whether, during that time window, the value of the N-KPI deviated substantially from an average value, and whether it deviated upwards or downwards from that average.
  • the PSA or NSA variable is accumulated with the variable sum. Over the course of the entire data array X, it will be appreciated that for some time windows the PSA variable will increase in magnitude, while in other time windows the NSA variable will increase in magnitude.
  • In this way, a PSA value and an NSA value are calculated.
  • the relative magnitudes of the PSA and NSA values are indicative of whether the N-KPI is generally in excess of its average value (PSA is greater than NSA), or generally below its average value (NSA is greater than PSA). Moreover, the absolute values of PSA and NSA provide some indication of how great the deviation from the average is.
  • the surface area calculation method 404 provides the Positive Surface Area, PSA, and Negative Surface Area, NSA, of N- KPIs during the session.
  • the described algorithm takes as an input a window size w and an array X of values produced by any of the methods 408-414.
  • $\mu$ is the average value of either the array or a longer period of time.
  • $S_{th}$ is a threshold value defined in Equation 3: $S_{th} = p\,\mu$, where $p$ is a coefficient of $\mu$ with $0 < p < 1$; hence $S_{th}$ is a proportion $p$ of $\mu$.
  • the new N-KPIs produced from surface area calculation method 404 would be the PSA and NSA of retransmission rate, of out-of-order rate, of duplicate ACK rate, of flight rate, of ACK RTT rate, of data rate, and of packet rate, which are saved to database 418 through feed 415.
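  • Pulling the steps of figure 7 together, a compact implementation of the PSA/NSA computation might look as follows (a sketch of the described algorithm, with $\mu$ and $S_{th}$ supplied by the caller):

        def surface_areas(x, w, mu, s_th):
            """PSA/NSA per figure 7: for each window of w values, sum the
            deviations (x[j] - mu) whose magnitude exceeds s_th; positive
            window sums accumulate into PSA, negative sums into NSA."""
            psa, nsa, i, n = 0.0, 0.0, 0, len(x)
            while i < n:                              # step S3
                window_sum = 0.0                      # step S4
                for j in range(i, min(i + w, n)):     # steps S5 and S8
                    if abs(x[j] - mu) > s_th:         # step S6
                        window_sum += x[j] - mu       # step S7
                i += w                                # step S9
                if window_sum > 0:                    # step S10
                    psa += window_sum
                else:
                    nsa += window_sum  # negative sums drive NSA downwards
            return psa, nsa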
  • the transition frequency calculation method 405 provides the frequency of changes of N-KPI values produced from any of the methods 408-414.
  • an (N+1)-state Markov chain is used, as shown in figure 9, to model the transitions.
  • the model is generated in the learning phase by stepping through and observing changes in a data array corresponding to an N-KPI being modelled.
  • the state 0 is the initial state which represents good network conditions.
  • the Markov chain starts at the state 0 at the time the first value in the data array X is evaluated, but can also be returned to at any time that a current value of the data array X is indicative of good network conditions.
  • the states 1 to N represent bad network conditions.
  • the threshold $F_{th}$ is determined according to Equation 4: $F_{th} = q\,\mu$, where $q$ is a coefficient of $\mu$ with $0 < q < 1$; hence $F_{th}$ is a proportion $q$ of $\mu$.
  • the state transition probability $P_{01}$ represents the transition from the good state 0 to the bad state 1.
  • $P_{01}$ is the probability of a next value in the data array representing bad network conditions if the current value in the data array represents good network conditions.
  • the state transition probability $1 - P_{01}$ represents the transition from a good state to a good state, that is, the probability of a next value in the data array representing good network conditions if the current value in the data array represents good network conditions.
  • the state transition probability $P_{12}$ is the probability of a next value in the data array representing bad network conditions from the state arrived at following the state transition $P_{01}$.
  • the state transition probability $P_{01}$ provides an indication of the likelihood of poor network conditions occurring for at least one sample in duration.
  • the state transition probabilities $P_{01}$ and $P_{12}$ together provide an indication of the probability of poor network conditions prevailing for at least two samples in duration.
  • for each state, a state probability indicative of the likelihood of the Markov chain being in that state can be calculated.
  • a state probability $\pi_0$ represents the likelihood of the N-KPI indicating that the network is in a good state.
  • a state probability $\pi_i$ represents the likelihood of the N-KPI indicating that the network has been in a poor state for exactly $i$ samples.
  • a higher number of bad states provides an indication of the burst size of bad network conditions. With only one bad state it is possible to identify the occurrence of at least one consecutive bad burst in the network. With 5 bad states, for example, it is possible to identify that there was a bad burst of at least 5 consecutive samples in the network if the state probability $\pi_5$ is non-zero.
  • the number of transitions at state $i$, $n_i$, and the number of transitions from state $i$ to state $j$, $n_{ij}$, are calculated by stepping through the array X and comparing each value with the threshold value $F_{th}$ of Equation 4; the total number of transitions is $n$, the size of (number of elements in) array X. The state probability is then given by Equation 5: $\pi_i = n_i / n$, and the transition probability by Equation 6: $P_{ij} = n_{ij} / n_i$.
  • the new KPIs produced from transition frequency calculation method 405 would be the state probabilities $\pi_i$ and transition probabilities $P_{ij}$ of retransmission rate, the state and transition probabilities of out-of-order rate, of duplicate ACK rate, of flight rate, of ACK RTT rate, of data rate, and of packet rate, which are saved to database 418 through feed 415.
  • the transition frequency KPIs comprise a set of state and transition probabilities for each of the properties measured by the methods 408 to 414 of Figure 4.
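  • The sketch below estimates these state and transition probabilities from an N-KPI array. The walk rule (a good sample returns the chain to state 0, a bad sample advances it one state, capped at N) is an assumption consistent with figure 9, not a quotation of the patent:

        def markov_probabilities(x, f_th, n_bad_states):
            """Estimate state probabilities (Equation 5) and transition
            probabilities (Equation 6) for the (N+1)-state chain."""
            state, visits, transitions = 0, {}, {}
            for value in x:
                nxt = 0 if value < f_th else min(state + 1, n_bad_states)
                visits[state] = visits.get(state, 0) + 1
                transitions[(state, nxt)] = transitions.get((state, nxt), 0) + 1
                state = nxt
            pi = {s: c / len(x) for s, c in visits.items()}
            p = {sj: c / visits[sj[0]] for sj, c in transitions.items()}
            return pi, p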
  • the gap duration calculation method 406 provides duration of gaps of N-KPI values produced from any of the methods 408-414.
  • a gap is a period of time during which a particular N-KPI drops below a certain value, perhaps due to a silence in a conversation carried over VoIP.
  • a rate of an N-KPI can be calculated as the measured value of the N-KPI divided by the duration of the flow or measurement. The rate is calculated to normalize the N-KPIs for flows of different durations. The rate of an N-KPI differs distinctly between talk spurts and silence gaps in the voice. If the rate is below a certain threshold $D_{th}$, we can mark that duration as a silence gap in the voice conversation.
  • the gap duration of an array of values X is calculated as Equation 7: $\mathrm{GapDuration}(X) = \sum_{i=1}^{n} a_i$, where $n$ is the size of the array X. The variable $a_i$ is calculated as Equation 8: $a_i = 1$ if $x_i < D_{th}$ and $a_i = 0$ otherwise, where $x_i$ is the value at the $i$-th index of X. The threshold $D_{th}$ is calculated as Equation 9: $D_{th} = r\,\mu$, where $r$ is a coefficient of $\mu$ with $0 < r < 1$; hence $D_{th}$ is a proportion $r$ of $\mu$.
  • the new N-KPIs produced from gap duration calculation method 406 would be the gap duration of retransmission rate, of out-of-order rate, of duplicate ACK rate, of flight rate, of ACK RTT rate, of data rate, and of packet rate, which are saved to database 418 through feed 415.
  • the number of gaps calculation method 407 provides the number of gaps found in the N-KPIs of the conversation. Whenever the rate of an N-KPI falls below the threshold $D_{th}$ of Equation 9 and subsequently rises above it, this is counted as one gap.
  • the new N-KPIs produced from number of gaps calculation method 407 would be the number of gaps of retransmission rate, of out-of-order rate, of duplicate ACK rate, of flight rate, of ACK RTT rate, of data rate, and of packet rate, which are saved to database 418 through feed 415.
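  • Both gap metrics can be derived in a single pass over an N-KPI array, as in this illustrative sketch:

        def gap_metrics(x, d_th):
            """Total silence-gap duration (samples below D_th, per
            Equations 7-9) and the number of gaps (below-threshold runs
            that subsequently rise above D_th)."""
            duration = sum(1 for v in x if v < d_th)
            gaps, in_gap = 0, False
            for v in x:
                if v < d_th:
                    in_gap = True
                elif in_gap:
                    gaps += 1      # rate went back above D_th: one gap
                    in_gap = False
            return duration, gaps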
  • the average calculation method 416 provides the average of the array of values coming from methods 408-414.
  • the average values are the average retransmission rate, average out-of-order rate, average duplicate ACK rate, average flight rate, average ACK RTT rate, average data rate and average packet rate, which are saved to database 418 through feed 417.
  • Taking average retransmission rate as an example: the same values as are used to generate the average retransmission rate also go to block 402, where more complex functions are used to calculate further N-KPIs. The retransmission rate goes from box 408 to box 403, where the variance of retransmission rate is calculated as an N-KPI. In this way, from the retransmission rate alone, two N-KPIs are produced: average retransmission rate and variance of retransmission rate, both of which can be valuable indicators of the QoE likely to be experienced on the network.
  • FIG. 5 is a block diagram that illustrates the machine learning engine 500 in greater detail.
  • the machine learning engine 500 works in two phases: the learning phase and the live monitoring phase.
  • the learning phase corresponds to the functional block 501 and live monitoring phase corresponding to the functional block 502.
  • the machine learning engine 500 works on a per-VoIP-service basis.
  • the feature selection and correlation method 503 takes N-KPIs and corresponding QoE values for a VoIP service from database 418. The method then selects a subset of N-KPIs that provide the best correlation to the QoE.
  • the QoE model generation method 504 uses the correlated N-KPIs and QoE values and, using machine learning algorithms, generates a QoE model which is saved back to the database 418.
  • FIG. 6 illustrates a QoE monitoring system 600 in accordance with an alternative embodiment of the present invention.
  • the system 600 comprises a processor 601 and a memory 602.
  • the memory 602 contains instructions executable by the processor 601 , whereby said system is operative to implement the method illustrated in figure 10 and described in detail below.
  • Figure 10 illustrates a method 100 for monitoring QoE of a service in accordance with an embodiment of the present invention.
  • the method calculates, 113, N-KPI values from the data traffic that has been intercepted and/or logged.
  • the method also comprises receiving and logging, 114, data files recorded by or at User Equipment, UEs, participating in the service and determining, 115, QoE values from the data files received from the UEs.
  • the method further comprises generating and storing, 116, a mapping between the calculated N-KPI values and determined QoE values.
  • the method comprises a learning phase and a live monitoring phase
  • the learning phase comprises a method for generating a mapping between Network Key Performance Indicators, N-KPIs, and QoE of a service implemented on the network in accordance with various embodiments of the method involving operations 111-116.
  • the live monitoring phase comprises receiving and logging, 121, data representative of data traffic associated with a service and calculating, 123, N-KPI values from the data traffic that has been intercepted and/or logged.
  • the method comprises quantitatively estimating, 124, QoE of the service by using said stored mapping between the N-KPI values and QoE values.
  • the method 100 comprises a learning phase 110 and a live monitoring phase 120, which will be described below in more detail.
  • the learning phase 110 comprises receiving and logging data representative of intercepted data traffic associated with a VoIP service at step 111.
  • the data traffic may be intercepted by means of intermediate packet inspection probes. This step may be implemented by a packet analyzer such as that illustrated in figure 3.
  • the learning phase 110 preferably comprises classifying the data that has been intercepted and/or logged into session records, which may then be stored within the memory 602. This step may be implemented by a packet classification engine, such as that illustrated in figure 3.
  • the learning phase 110 then comprises, at step 113, calculating N-KPI values from the session records.
  • This step may be implemented by an N-KPI engine such as that illustrated in figures 3 and 4.
  • steps 111-113 relate to the calculation of N-KPI values. It is envisaged that the learning phase 110 will comprise steps 114 and 115 relating to the determination of QoE values, which may be implemented concurrently with steps 111-113.
  • Step 114 comprises receiving and logging data files recorded by or at the UEs participating in the service. This step may include normalizing the received data files and/or clipping the received data files when some portion of a received data file does not contain any voice signals.
  • Step 115 comprises determining QoE values from the data files received at step 114. This step may be implemented by a QoE engine such as that illustrated in figure 3.
  • the final step 116 of the learning phase 110 is generating a mapping between the N-KPI values calculated at step 113 and the QoE values determined at step 115.
  • This step may be implemented by a machine learning engine such as that illustrated in figures 3 and 5.
  • the generated mapping may then be stored within the memory 602.
  • the mapping may be of the generic form:
  • QoE = f(N-KPI1, N-KPI2, ..., N-KPIj), where each of the N-KPIs may be weighted in dependence on the outcome of the learning phase.
  • There are many different machine learning algorithms available. Depending on the application, scenario or acceptable complexity, any of them could be used to generate the model. Some example machine learning algorithms are described in References [1] to [5] identified below. At its simplest level, the determined QoE value from a particular time is mapped onto (matched with) the N-KPI value combinations present in the network at the same time.
  • Such mappings allow for later estimation of QoE values simply by determining N-KPIs and finding in the database the QoE mapped onto that specific combination of N-KPI values.
  • the mapping which is arrived at based on the learning phase can be described by a function which relates selected N-KPI values (with appropriate weightings) to a QoE value indicative of a Quality of Experience which could be expected when the network is operating with those N-KPI values.
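  • As a concrete but non-authoritative example, assuming scikit-learn and a plain linear model (the patent leaves the choice of learner open; References [1] to [5] describe alternatives), the two phases reduce to a fit call and a predict call; all numbers below are invented:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Learning phase: one row of N-KPI values per session, with the
        # PESQ-derived MOS as the target.
        X_train = np.array([[0.01, 120.0, 2.0],
                            [0.08,  80.0, 9.0],
                            [0.02, 110.0, 3.0]])
        y_train = np.array([4.2, 2.1, 3.9])

        model = LinearRegression().fit(X_train, y_train)
        print(model.coef_)   # the learned per-N-KPI weightings

        # Live monitoring phase: estimate QoE from fresh N-KPI values alone.
        print(model.predict(np.array([[0.03, 100.0, 4.0]])))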
  • the live-monitoring phase 120 comprises receiving and logging data representative of data traffic associated with the VoIP service at step 121.
  • This step is analogous to step 111 of the learning phase 110 and may be implemented by a packet analyzer such as that illustrated in figure 3.
  • the live monitoring phase 120 preferably comprises classifying the data that has been intercepted and/or logged into session records. This step is analogous to step 112 of the learning phase 110 and may be implemented by a packet classification engine such as that illustrated in figure 3.
  • the live monitoring phase 120 preferably comprises calculating N-KPI values from the session records. This step is analogous to step 113 of the learning phase and may be implemented by an N-KPI engine such as that illustrated in figures 3 and 4.
  • the live monitoring phase 120 comprises estimating QoE of the VoIP service using the mapping that was generated at step 116.
  • the learning phase 1 10 may be repeated periodically or on-demand in order to ensure that the mapping remains up to date.
  • the QoE monitoring system 600 illustrated in figure 6, comprising the processor 601 and the memory 602, may in one embodiment contain in said memory instructions executable by the processor which make said system operative to implement only the learning phase, as discussed above in relation to the embodiments of the method illustrated in figure 10.
  • the memory, 602 may contain instructions for implementing both the learning phase and the live monitoring phase as described above in relation to the embodiments of the method illustrated in figure 10.
  • the present invention provides an effective means of monitoring QoE.
  • the present invention enables a network operator to determine the QoE of a service from the network monitoring without any need for cooperation from the end devices during a live monitoring phase.
  • the invention also enables a network operator to determine the QoE of a service even when the service data and protocol information are encrypted and not available to the operators.
  • Another advantage of the present invention is that a network operator does not have to predefine network resources that are used by end user services.
  • the invention enables automatic identification of the network resources that are significantly impacting upon the performance of a service.
  • the method is generic, independent of network architecture and can be applied to various types of network (mobile, fixed, satellite, etc.).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method, comprising a machine learning phase and a live-monitoring phase, for monitoring QoE of an OTT or other service. The machine learning phase comprises: (a) receiving and logging data representative of intercepted data traffic associated with the service; (b) calculating values of Network Key Performance Indicators, N-KPIs, from the data traffic that has been intercepted and/or logged; (c) receiving and logging data files recorded by or at User Equipment, UEs, participating in the service; (d) determining QoE values from the data files received from the UEs; and, (e) generating and storing a mapping between the calculated N-KPI values and determined QoE values. The live monitoring phase comprises steps (a) and (b), followed by the step of quantitatively estimating QoE of the service by using said stored mapping between the N-KPI values and QoE values.

Description

METHOD AND SYSTEM FOR MONITORING QoE
Technical Field
The present invention relates to a method and system for monitoring Quality of Experience, QoE, of a service, and particularly but not exclusively to a method and system for monitoring QoE of a service transported over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure.
Background
A wide variety of different services are delivered to users using telecommunication and Internet Service Provider, ISP, networks. The quality of such services may be impaired by issues and problems within the network. From both the user and service provider perspectives it is important to monitor the service quality delivered to the end user so that any issues can be observed and rectified.
Quality of Experience, QoE, is a measure of a user's experience of a service. This is a subjective measure and can vary from user to user. Generally, QoE consists of a number of measures for a given service, for example listenability of voice, usability, video impairments and the like. One example of a QoE metric is Mean Opinion Score, MOS, which provides a numerical indication of the perceived quality of the service from the viewpoint of its users. Consider the example of voice telephony: in this context, MOS represents the quality of the received voice after encoding and transmission. A major difficulty with monitoring QoE is that it is far from trivial to obtain QoE for every network, user equipment and software configuration without the user's personal feedback. In other words, without some sort of human intervention it is difficult to measure the QoE of a specific service set-up. Since getting user feedback is not always possible, and such feedback is often very limited, other methods are used to estimate the user's QoE for a given service. These methods are dependent on the specific service and vary from service to service.
Considering the example of voice transmission, voice quality is the primary indicator of the quality of user experience. Voice quality measurement approaches can be classified into two categories: subjective and objective methods.
In subjective measurement methods, human test subjects are asked to assess the quality of a particular voice sample. The subjects make their judgment on the voice based on a rating scale. In telecommunications, Absolute Category Rating, ACR, tests are most commonly used to assess voice quality (ITU-T Rec. P.800, 1996). The scale most frequently used is the 5-point ACR quality scale, where 1 represents bad and 5 represents excellent voice quality. The mean of the individual ratings is then calculated, to give a Mean Opinion Score, MOS.
As subjective speech quality tests are time-consuming and costly, objective quality measurement methods have been developed. One of the well-accepted objective approaches is a signal-based model which measures the quality from a paired-comparison situation. This model provides a quality estimate based on the comparison of a source voice signal (as reference) with the voice signal received over the network or similar system under test. After loudness equalization and time-alignment, the two signals are transformed into an internal, psychoacoustics-based representation. Then, the difference or similarity vectors are determined from the two representations, which are time-averaged and mapped onto a MOS-equivalent quality estimate. The model currently recommended by the ITU Telecommunications Standardization Sector, ITU-T, is Perceptual Evaluation of Speech Quality, PESQ (ITU-T Rec. P.862, 2001). PESQ is an optimized combination of two algorithms: Perceptual Speech Quality Measure, PSQM, and Perceptual Analysis Measurement System, PAMS. A wideband version of PESQ has more recently been defined in ITU-T Rec. P.862.2 (2005).
For network planning, before setting up a network, it is desirable to estimate the quality of services that may be transmitted over the network. Such quality predictions can significantly support the selection of certain components or configurations. Different models have been developed for this purpose, but all models share the common attribute that measurable characteristics of the system are used to estimate QoE. The model currently recommended by the ITU-T for network planning is the E-model (ITU-T Rec. G.107, 2005). The E-model is mainly empirical in nature, and was developed on the basis of large amounts of subjective test data. This model takes into account a wide range of telephony-band impairments, for example impairment due to low bit-rate coding devices and one-way delay, as well as the "classical" telephony impairments of loss, noise and echo. The E-model can be applied to assess the voice quality of Local Area Network, LAN, and wireless scenarios, based on circuit-switched and packet-switched technology. The primary output of the E-model calculations is a scalar quality rating value known as the Transmission Rating Scale or R-scale. The R-scale ranges from 0 to 100, with 100 corresponding to optimum quality. The R-scale is then mapped to the MOS scale using a codec-specific mapping.
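For background, one widely cited form of the R-to-MOS conversion associated with the E-model (as given in ITU-T Rec. G.107) is reproduced below for illustration only; it is quoted as context rather than asserted to be part of the method described herein:

$$\mathrm{MOS} = \begin{cases} 1, & R \le 0 \\ 1 + 0.035\,R + 7\times10^{-6}\,R\,(R-60)(100-R), & 0 < R < 100 \\ 4.5, & R \ge 100 \end{cases}$$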
The E-model uses the following N-KPIs to assess voice quality: packet loss rate, end-to-end delay, and jitter (delay variation). The values of these N-KPIs are determined by exploiting the way in which the data is packetized into packets before being transmitted over the network. In particular, the values of the N-KPIs are determined through the different network protocol headers used to transport the voice signal from source to destination. As an example, packet structure and header information (in an Ethernet/IP network) is shown in simplified form in figure 1. With reference to figure 1, the packet structure comprises an Ethernet header; an IP header; a User Datagram Protocol, UDP, header; and a Real-time Transport Protocol, RTP, header. The Ethernet header contains the source and destination Media Access Control, MAC, addresses of the local link. The IP header contains the source and destination IP addresses to enable routing. The UDP header contains the source and destination ports of the sending and receiving applications, respectively. The RTP header contains the timestamp of the data packet, and the sequence number of the RTP header indicates whether packets are missing or have arrived out of order. The payload type field of the RTP header indicates the type of payload, for example a voice signal encoded with codec G.711. RTP was designed to support the determination of different network KPIs from the received voice packet.
When using UDP and RTP transport protocols, UDP requires a number of UDP ports to be opened, as each data stream requires two ports for data and control. This creates a sizeable problem as Internet firewalls and/or routers may not open these ports. It is also normal for Internet firewalls and/or routers to filter or ignore UDP packets. To avoid this problem, it is known to use RTP or an RTP-like protocol over Transmission Control Protocol, TCP. This is commonly used for Voice over IP, VoIP, services. Furthermore, in certain networks, communications can only take place through Hypertext Transfer Protocol (HTTP) proxies that allow traffic to and from organizations/enterprises. As a result, any service that is provided over the network must be provided over HTTP.
The trend therefore appears to be a shift away from UDP and RTP transport protocols and towards protocols such as Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and Hypertext Transfer Protocol Secure. In these cases, it is extremely difficult to know the payload type, codec rate, etc., and to extract the N-KPIs necessary for E-model based QoE estimation from network probes in real-time. For example, in contrast to UDP/RTP, when TCP/HTTP is used as the transport protocol there is no such N-KPI/measurement as packet loss. This is because of the retransmission nature of TCP. However, retransmissions may not necessarily help in real-time applications such as VoIP. Retransmissions lead to late arrival of packets and, due to the nature of VoIP, these late-arriving packets may no longer be useful. In other words, in certain cases the late-arriving packets can be considered as lost packets due to the real-time nature of VoIP. In view of the differences between protocols, traditional QoE models such as the E-model cannot be used to measure the QoE when transport protocols such as TCP, secure TCP, HTTP and secure HTTP (HTTPS) are used.
Furthermore, current techniques for estimating QoE encounter difficulties in relation to Over The Top, OTT, services. OTT services are an important subset of services delivered using the telecommunications and ISP networks. Initially OTT services were used only for audio and video content, but more recently this has expanded to encompass any form of data. The distinguishing feature of OTT services is that such services are not provided by the network service provider. Whilst the network service provider is aware of the data packets and their contents, it does not have any control over the OTT service. However, in order to maintain the end-user quality and QoE for customers, the network operators want to monitor the status of the OTT services at different points of their network. Enabling QoE monitoring in this kind of situation presents the following challenges:
a) Monitoring services independently of the service providers
I. The service traffic is often encrypted
II. QoE reports are only available to the OTT service provider (for example, the VoIP provider), not to the network operator
b) Coping with dynamic and emerging estimation models
I. New codecs, protocols and applications
II. End-to-end monitoring is not always feasible
c) Coping with incomplete data-sets
I. Partially received data due to network issues
II. End devices are not synchronized in terms of clock
Summary
In accordance with the present invention, as seen from a first aspect, there is provided a method, implemented within a network, for generating a mapping between Network Key Performance Indicators, N-KPIs, and Quality of Experience, QoE, of a service implemented on the network, the method comprising:
receiving and logging data representative of intercepted data traffic associated with a service;
calculating N-KPI values from the data traffic that has been intercepted and/or logged;
receiving and logging data files recorded by or at User Equipment, UEs, participating in the service;
determining QoE values from the data files received from the UEs; and, generating and storing a mapping between the calculated N-KPI values and determined QoE values.
Data traffic associated with said service may be transported over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure. The step of generating a mapping between the calculated N-KPI values and determined QoE values may comprise pairing the data traffic that has been intercepted and logged with corresponding data files recorded by or at the UEs, using a key. The key may comprise one or more of the following metrics: session start time; session end time; host address; port numbers; service provider name. The step of receiving and logging data representative of intercepted data traffic associated with a service may comprise receiving and logging said one or more metrics. Similarly, the step of receiving and logging data files recorded by or at UEs may comprise extracting and logging said one or more metrics.
The step of receiving and logging data traffic associated with the service may comprise receiving data representative of traffic that has been intercepted in its transit from a first UE to a second UE. The step of receiving and logging data files recorded by or at the UEs may comprise receiving and logging data files recorded by or at the first and second UEs. The step of determining QoE values may comprise comparing the data files recorded by or at the first UE with data files recorded by or at the second UE. The step of generating a mapping between the calculated N-KPI values and determined QoE values may comprise mapping the N-KPI values calculated from the data traffic that has been intercepted or logged in its transit from the first UE to the second UE onto the QoE values that were determined from the data files recorded by or at the first and second UEs.
The method may comprise determining identifying information from the data packets that have been received and logged and combining related data into session records.
The method may comprise intercepting data traffic associated with the service prior to the step of receiving and logging data representative of said data traffic associated with the service.
The step of intercepting data traffic associated with the service may be implemented through intermediate packet inspection probes. The intermediate packet inspection probes may comprise shallow packet inspection probes. Alternatively or additionally, the intermediate packet inspection probes may comprise deep packet inspection probes.
The step of determining a QoE value from the data files recorded by or at the UEs may comprise objective measurement of QoE.
The step of determining a QoE value from the data files recorded by or at the UEs may comprise implementation of a Full Reference, FR, algorithm. Such an algorithm utilizes the signal sent by a sender UE, and hence recorded at said sender UE, as a reference signal. The FR algorithm then compares this signal to the signal received by the recipient UE, and hence recorded at said recipient UE.
For voice applications, the step of determining a QoE value from the data files recorded by or at the UEs may comprise Perceptual Objective Listening Quality Assessment, POLQA. Alternatively or additionally, the step of determining a QoE value from the data files recorded from the UEs may comprise Perceptual Evaluation of Speech Quality, PESQ.
For video applications, the step of determining a QoE value from the data files recorded by or at the UEs may comprise Perceptual Evaluation of Video Quality, PEVQ.
The N-KPIs may relate to one or more of the following properties:
(a) Transmission Control Protocol, TCP, retransmission rate;
(b) Duplicate acknowledgement, ACK, rate;
(c) packet rate;
(d) data rate;
(e) out-of-order packet rate;
(f) flight rate;
(g) ACK round-trip-time.
Advantageously, these new N-KPIs provide insight into a variety of network characteristics which cannot otherwise be observed when a service uses an unconventional protocol or an encrypted transport protocol. To give an example, for a particular network setup, a Transmission Control Protocol, TCP, retransmission event provides information about possible packet loss and delay. The variance of the TCP transmission rate provides an indication of the burst size of packet loss. The N-KPIs may also provide information about congestion, variable delay, loss and the like.
The N-KPIs may be calculated as:
(i) average;
(ii) variance;
(iii) surface area;
(iv) transition frequency;
(v) gap duration;
(vi) number of gaps,
of any properties of the service, for example properties (a) to (g) listed above. For example, one N-KPI may be the variance of TCP transmission rate. Another N-KPI may be the gap duration of out-of-order packet rate.
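Purely for illustration, the following Python sketch enumerates the feature space that these combinations imply, applying each statistic (i) to (vi) to each property (a) to (g); the identifier names are assumptions introduced here and do not appear in the patent:

```python
# Enumerate candidate N-KPIs as (statistic, property) combinations.
properties = ["retransmission_rate", "duplicate_ack_rate", "packet_rate",
              "data_rate", "out_of_order_rate", "flight_rate", "ack_rtt"]
statistics = ["average", "variance", "surface_area",
              "transition_frequency", "gap_duration", "number_of_gaps"]

# e.g. "variance_of_data_rate", "gap_duration_of_out_of_order_rate", ...
n_kpis = [f"{stat}_of_{prop}" for prop in properties for stat in statistics]
```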
The service may comprise an Over The Top, OTT, service. In one embodiment, the service may comprise a Voice over IP, VoIP, service.
The data traffic may be encrypted. In this case, the method may comprise decrypting the data that has been intercepted, which may be carried out prior to logging the data.
The above-described method can be considered as a learning phase within a method for quantitatively estimating Quality of Experience, QoE, of services implemented on the network. In accordance with the present invention, as seen from a second aspect, there is provided a method, implemented within a network, for monitoring Quality of Experience, QoE, of a service implemented on the network. The method comprises a learning phase and a live monitoring phase, the learning phase comprises a method for generating a mapping between Network Key Performance Indicators, N-KPIs, and QoE of a service implemented on the network in accordance with the first aspect and various embodiments of the first aspect, whereas the live monitoring phase comprises receiving and logging data representative of data traffic associated with a service; calculating N-KPI values from the data traffic that has been intercepted and/or logged; and quantitatively estimating QoE of the service by using said stored mapping between the N-KPI values and QoE values.
Preferably, the live monitoring phase comprises the steps of determining identifying information from the data packets that have been received and logged, combining related data into session records, and calculating the N-KPI values based on the session records.
It will be appreciated that the mapping generated and stored during the learning phase is then used during the live monitoring phase for estimating a QoE value of the service. The learning phase requires data to be recorded by or at the UEs. However, once the learning phase has generated a sufficiently detailed mapping between the N-KPIs and QoE values, this recordal of data at or by the UEs is no longer necessary: the step of monitoring data traffic may comprise intermediate packet inspection such as deep packet inspection and/or shallow packet inspection. Advantageously, monitoring the data traffic in this way avoids reliance on terminal reports or feedback from the end-user about the service quality.
Over time, the mapping between the N-KPI values and QoE values may become outdated. Accordingly, the learning phase may be repeated periodically. Alternatively or additionally, the learning phase may be repeated on demand, whenever it is deemed necessary.
The method may be implemented on a Network Element, NE. In particular, the method may be implemented on a QoE monitoring system.
In accordance with the present invention, as seen from a third aspect, there is provided a system for generating a mapping between Network Key Performance Indicators (N-KPIs) and Quality of Experience (QoE) of a service implemented on a network, the system comprising:
a packet analyzer for receiving and logging data representative of intercepted data traffic associated with the service;
a Network Key Performance Indicator, N-KPI, engine for calculating values of N-KPIs from the data traffic that has been intercepted and/or logged;
a receiver for receiving data files associated with the service recorded at User Equipment, UEs;
a QoE engine for determining QoE values from the data files associated with the service and received from the UEs;
a machine learning engine for generating a mapping between the N-KPI values calculated by the N-KPI engine and QoE values determined by the QoE engine; and,
a memory for storing the mapping generated by the machine learning engine.
As those skilled in the art will be aware, a packet analyzer may also be known as a network analyzer, protocol analyzer or packet sniffer.
The monitoring system may comprise a packet classification engine for determining identifying information from the data packets received and logged by the packet analyzer and combining related data into session records. The session records may be input into the N-KPI engine to enable the N-KPI engine to calculate values of the N-KPIs.
The monitoring system may comprise a workstation terminal for allowing a human network operator to access data stored in the memory.
The monitoring system may be arranged for implementing the learning phase of the above-described method. The monitoring system may also be arranged for implementing the live monitoring phase of the above-described method. In this embodiment, the monitoring system may comprise a QoE monitoring engine for receiving N-KPI values from the N-KPI engine and consulting the stored mapping to estimate QoE values.
In accordance with the present invention, as seen from a fourth aspect, there is provided a system for generating a mapping between Network Key Performance Indicators (N-KPIs) and Quality of Experience (QoE) of a service implemented on a network, the system comprising a processor and a memory, said memory containing instructions executable by said processor whereby said system is operative to:
receive and log data representative of intercepted data traffic associated with a service; calculate N-KPI values from the data traffic that has been intercepted and/or logged;
receive and log data files recorded by or at User Equipment, UEs, participating in the service;
determine QoE values from the data files received from the UEs; and, generate and store a mapping between the calculated N-KPI values and determined QoE values.
The system may consist of a single network element.
The data traffic may be transported between UEs over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure.
In accordance with the present invention, as seen from a fifth aspect, there is provided a system for monitoring Quality of Experience (QoE) of a service implemented on a network, the system comprising a processor and a memory, said memory containing instructions executable by said processor whereby said system is operative to implement a learning phase and a live monitoring phase, in the learning phase the system is operative to:
receive and log data representative of intercepted data traffic associated with a service;
calculate Network Key Performance Indicator, N-KPI, values from the data traffic that has been intercepted and/or logged;
receive and log data files recorded by or at User Equipment, UEs, participating in the service;
determine QoE values from the data files received from the UEs; and,
generate and store a mapping between the calculated N-KPI values and determined QoE values,
in the live monitoring phase the system is operative to:
receive and log data representative of data traffic associated with a service;
calculate N-KPI values from the data traffic that has been intercepted and/or logged; and, use said stored mapping between the N-KPI values and QoE values to quantitatively estimate QoE of the service.
Brief description of the drawings
Embodiments of the present invention will now be described by way of example only and with reference to the accompanying drawings, in which:
Figure 1 is a schematic illustration of a voice packet with payload inside nested network headers by different protocols;
Figure 2 is a block diagram of a network comprising a system for monitoring QoE of a service in accordance with an embodiment of the present invention;
Figure 3 is a block diagram of the system for monitoring QoE of a service illustrated in figure 2;
Figure 4 is a block diagram of an N-KPI engine, which forms part of the system illustrated in figure 3; Figure 5 is a block diagram of a machine learning engine, which forms part of the system illustrated in figure 3;
Figure 6 is a block diagram of a system for monitoring QoE of a service in accordance with another embodiment of the present invention;
Figure 7 is a flow chart illustrating a method for calculating positive and negative surface areas;
Figure 8 is a graphical representative of an example of data rate time variation of a voice call;
Figure 9 is a diagram illustrating an (N+1)-state Markov process for transition frequency modeling; and, Figure 10 is a flow diagram illustrating a method for monitoring QoE of a service in accordance with an embodiment of the present invention.
Detailed description
The invention will be described in relation to a VoIP service transported over TCP or secured TCP protocol. However, it will be appreciated that the invention is also applicable to other OTT services or indeed non-OTT services. Furthermore, the invention is applicable to other transport protocols such as HTTP and/or secured HTTP. Figure 2 is a simplified block diagram illustrating a network 208 comprising a plurality of UEs 204-207. The UEs 204-207 may include any computer system or device such as, for example, a personal computer, laptop computer, tablet computer, mobile device, smart phone, web-enabled phone, etc. Several VoIP service providers 201-203 provide VoIP service to several UEs 204-207, which may be of different types, through network 208. One UE may call another UE where both UEs have a subscription to the same VoIP service.
The network 208 may include, for example, any suitable wired or wireless computer or data network including, for example, the Internet, or a third generation (3G) or a fourth generation (4G) wireless network. The bitrate used for each voice session is dependent upon the network 208 bandwidth assigned to the session or the client. The bandwidth may vary on a session-by-session basis and/or may vary during individual voice sessions. The capabilities of UEs 204-207 may also affect the bitrate used for individual voice sessions.
The protocols that transport the voice data between UEs 204-207 through VoIP servers are selected based on the service provider, and/or UEs 204-207. The VoIP service is supported, for example, on HTTP, HTTPS, a proprietary protocol on top of TCP or secured TCP.
A system 300 for monitoring an OTT VoIP service is also provided within the network 208. The VoIP services are monitored by the QoE monitoring system 300 as the data packets containing the UE voice pass through network 208. At any particular time, UEs 204-207 are using VoIP services from many different VoIP service providers 201-203, which results in many different HTTP, HTTPS and secured TCP sessions through network 208. Each of the VoIP sessions for the UEs 204-207 has a different session duration and has different bandwidths available. During each VoIP session, the talk period and silent period for that session may change multiple times. Also, the voice session may suffer from network losses and/or congestion, which is likely to create gaps in the voice session. These issues make it difficult for the service provider to determine the QoE for subscribers.
With reference to figure 3, a system 300 for generating a mapping between Network Key Performance Indicators (N-KPIs) and Quality of Experience (QoE) of a service implemented on a network is illustrated in one embodiment of the present invention. The system 300 comprises a packet analyzer 307 for receiving and logging data representative of intercepted data traffic associated with the service, and an N-KPI engine 400 for calculating values of N-KPIs from the data traffic that has been intercepted and/or logged. The system 300 also comprises a receiver 306 for receiving data files associated with the service recorded at User Equipment, UEs, and a QoE engine 312 for determining QoE values from the data files associated with the service and received from the UEs. Further, the system 300 comprises a machine learning engine 500 for generating a mapping between the N-KPI values calculated by the N-KPI engine and QoE values determined by the QoE engine, and a memory 313 for storing the mapping generated by the machine learning engine.
Figure 3 is a block diagram of the QoE monitoring system 300 illustrated in figure 2. To evaluate QoE for subscribers, the QoE monitoring system 300 works in two phases: a "learning phase" and a "live monitoring phase". During the learning phase, the QoE monitoring system 300 collects recorded voice files from the UEs 204-207 and tracks the related data packets to determine N-KPIs. The monitoring system uses PESQ to generate the QoE (in terms of MOS value) of the recorded voice file. The QoE monitoring system 300 then correlates the QoE and N-KPIs to generate QoE models. During the live monitoring phase, the monitoring system can identify the QoE of the voice session using the QoE models and the N-KPIs extracted from the voice data. In this live monitoring phase the system 300 does not rely on terminal reports or any sort of feedback from the end-users about their service quality (i.e., reports about VoIP quality itself). The system 300 relies only on N-KPIs, which may be calculated from data representative of intercepted data traffic. The network service provider may use this QoE information to adjust the network services available to UEs 204-207, such as the bandwidth assigned to each user and the routing of data packets through network 208.
The QoE monitoring system 300 comprises a packet analyzer 307, packet classification engine 308, N-KPI engine 400, a QoE engine 312 and a machine learning engine 500.
The packet analyzer 307 is configured to intercept and capture data packets from network 208, including data for voice sessions 301. The packet analyzer 307 is further configured to log data indicative of this intercepted data. Alternatively, the packet analyzer may be configured to passively receive data indicative of intercepted data traffic, and log all or part of this received data.
The packet classification engine 308 is configured to identify information from the data logged by the packet analyser 307 and combine related data into session records. The session records may comprise session start time, session end time, VoIP service provider name, host addresses and port numbers of the session, extracted N-KPIs, extracted service QoE during the learning phase, monitored service QoE during the live phase, etc. The session records are stored in the database 418, which can be queried by the N-KPI engine 400.
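As a hedged illustration of this grouping, the sketch below accumulates logged packets into session records keyed by host addresses and port numbers; the packet fields and function name are assumptions for this example only:

```python
from collections import defaultdict

def build_session_records(packets):
    """Group logged packets into session records keyed by host/port pairs."""
    sessions = defaultdict(lambda: {"start": None, "end": None, "packets": []})
    for pkt in packets:  # pkt: dict with "src", "dst", "sport", "dport", "time"
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"])
        rec = sessions[key]
        if rec["start"] is None:
            rec["start"] = pkt["time"]   # session start time
        rec["end"] = pkt["time"]         # session end time seen so far
        rec["packets"].append(pkt)
    return sessions
```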
The N-KPI engine 400 is configured to calculate numerous N-KPIs, which reflect network characteristics such as congestion, variable delay, loss, etc. The N-KPI values are stored in the database 418 for the corresponding session record. The N-KPI engine 400 is illustrated in greater detail in figure 4.
The QoE engine 312 is configured to collect session start time, session end time, host addresses, port numbers, service provider name, and recorded data files from a subset of UEs 204-207. In the case of VoIP, the data files will be "voice files" representative of a telephone conversation. The QoE engine 312 uses PESQ to generate the QoE (in terms of MOS value) of the recorded voice file. The extracted QoE values are then stored in the database 418 for the corresponding session record, using the collected session start time, session end time, host addresses, port numbers and service provider name as a key. To extract the QoE, it is also possible to use more advanced tools such as POLQA, if available.
During the learning phase, the machine learning engine 500 is configured to correlate the QoE and N-KPIs relating to each specific VoIP service provider 201-203 to generate QoE models. These QoE models are then stored in the database 418. During the live monitoring phase, the machine learning engine 500 can identify the QoE of the voice session using QoE models and the N-KPIs determined from the voice data. The session data is provided as a monitoring feed 314 to QoE monitoring application 311. Database 418 may also store subscriber information and client device data. A network operator may access the real-time or stored session data via workstation terminal 315. Data stored to database 418 can be queried by the service provider, for example, on a per-session, per-user, per-device, or per-service basis.
Figure 4 is a block diagram that illustrates the N-KPI engine 400 in greater detail. In the present embodiment, data packets are first classified into sessions by a packet classification engine 401. However, in alternative implementations the various metrics produced by the N-KPI engine 400 may be session-independent, in which case a packet classification engine 401 will not be required. The N-KPI engine 400 preferably includes a retransmission rate calculation method 408, out-of-order rate calculation method 409, duplicate ACK (acknowledgement) rate calculation method 410, flight rate calculation method 411, ACK RTT (round trip time) calculation method 412, data rate calculation method 413, and packet rate calculation method 414. Each of these methods 408-414 produces an array of values with respective TCP (Transmission Control Protocol) level parameters. The array size is 20 to 60 for a session whose duration is at least 20 seconds. The retransmission rate calculation method 408 calculates the number of data packet retransmissions in each second. The out-of-order rate calculation method 409 calculates the number of out-of-order packets in each second. The duplicate ACK rate calculation method 410 calculates the number of duplicate ACK packets in each second. The flight rate calculation method 411 calculates the flight size in bytes in each second. The ACK RTT calculation method 412 calculates the average ACK RTT in each second. The data rate calculation method 413 calculates the number of data bytes in each second. The packet rate calculation method 414 calculates the number of data packets in each second.
Taking each of these in turn:
Retransmission rate calculation 408 - A packet is considered to have been retransmitted if a TCP sender does not receive any ACK of the packet within the TCP retransmission timeout (RTO) period. Usually the RTO period is 4 times the end-to-end round trip time (RTT) value. The number of retransmissions is calculated by observing the data packet sequence number. In particular, as TCP packets are received, the method examines the TCP header of the packet and extracts the sequence number, which can be expected (in the absence of data transfer errors) to follow on from the sequence number of the previous packet within a current communications session. The method compares the extracted sequence number of the current TCP packet with the highest sequence number previously observed within the current communications session. If the extracted sequence number is higher than the highest previously observed sequence number then the present TCP packet is deemed to relate to a regular packet. Otherwise, the present TCP packet is deemed to be a retransmitted packet. The N-KPI in this case is the number of retransmitted packets which occur within a given time interval.
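A minimal Python sketch of this sequence-number test, assuming packets are available as (timestamp, sequence number) pairs for a single session; the representation and function name are illustrative rather than taken from the patent:

```python
from collections import defaultdict

def count_retransmissions_per_second(packets):
    """Return {second: retransmission count} for one TCP session."""
    highest_seq = -1
    retrans = defaultdict(int)
    for timestamp, seq in sorted(packets):
        if seq <= highest_seq:
            # Sequence number does not advance: deemed a retransmission.
            retrans[int(timestamp)] += 1
        else:
            highest_seq = seq            # regular packet advancing the stream
    return dict(retrans)
```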
Out-of-order rate calculation 409 - The determination of the out-of-order rate involves using sequence numbers. The method comprises receiving both a data packet and a corresponding ACK packet. When the data packet is received, its sequence number and its payload size are extracted from the TCP header. The payload size is added to the sequence number to determine an expected sequence number for an expected ACK packet. When an ACK packet is received, its sequence number is extracted and compared with the expected sequence number calculated based on the data packet. If the sequence number extracted from the ACK packet matches the expected sequence number then this indicates an in order transmission, whereas if the sequence number extracted from the ACK packet does not match the expected sequence number then this indicates an out-of-order transmission. The N-KPI in this case is the number of out of order transmissions detected within a given time interval.
Duplicate ACK rate calculation 410 - Duplicate ACKs can be detected using the acknowledgement sequence number. In particular, if two packets have the same acknowledgement sequence number, the second packet is considered to be a duplicate ACK packet. Accordingly, this calculation method comprises keeping track of acknowledgement sequence numbers from received ACKs, and incrementing a counter each time an ACK is received which has an acknowledgement sequence number which is the same as that of a previously received acknowledgment. The N-KPI in this case is the number of duplicate ACKs detected within a given time interval.
Flight rate calculation 411 - Flight rate is the amount or proportion of the data within the current time window which has not been acknowledged. One way of calculating this is, upon reception of an ACK packet, to extract its current acknowledgement sequence number from its TCP header. The highest seen sequence number is also tracked, in the same manner as for the retransmission calculation. The flight size is then calculated as the difference between the highest previously observed sequence number and the sequence number of the current ACK packet. The flight sizes calculated in this way are indicative of the amount of data which has not yet been acknowledged, and can be accumulated over a period to give a flight rate for that period.
ACK RTT calculation 412 - This can be determined as the time delay between the time stamp on a transmission (set by the transmitting device) and the time stamp on the ACK (set by the receiving device). This delay could be determined for each transmission/ACK pairing within a time interval (e.g. one second) and then averaged. If the TCP timestamps option is not enabled at both the transmitting and receiving devices, the ACK RTT is computed in the probe, based on the difference between the time of capture of a data packet and the time of capture of the corresponding acknowledgement packet.
Data rate calculation 413 - This is the total number of non-duplicate data bytes conveyed by the TCP transmission packets per second. This can be readily determined by interrogating TCP headers to identify the payload size of conveyed packets.
Packet rate calculation 414 - This is the total number of non-duplicate transmission packets conveyed per second. This can be readily determined by the probe, and by ignoring duplicate packets identified by having the same sequence number.
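By way of illustration, the per-second bucketing performed by these methods might look like the sketch below for the data rate (413) and packet rate (414) calculations; the packet fields used here are assumptions:

```python
from collections import defaultdict

def per_second_rates(packets, session_start):
    """Produce per-second arrays of non-duplicate packet and byte counts."""
    packet_rate = defaultdict(int)   # method 414: packets per second
    data_rate = defaultdict(int)     # method 413: payload bytes per second
    seen_seqs = set()
    for pkt in packets:              # pkt: dict with "seq", "time", "payload_len"
        if pkt["seq"] in seen_seqs:
            continue                 # ignore duplicates, as described above
        seen_seqs.add(pkt["seq"])
        second = int(pkt["time"] - session_start)
        packet_rate[second] += 1
        data_rate[second] += pkt["payload_len"]
    return packet_rate, data_rate
```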
The core component 402 of N-KPI engine 400 includes variance calculation method 403, surface area calculation method 404, transition frequency calculation method 405, gap duration calculation method 406, number of gaps calculation method 407. Each of these methods 403-407 produces a list of N-KPIs taking input from any of the above-described methods 408-414, namely the retransmission rate calculation method 408, out-of-order rate calculation method 409, duplicate ACK rate calculation method 410, flight rate calculation method 41 1 , ACK RTT calculation method 412, data rate calculation method 413, and packet rate calculation method 414.
The variance calculation method 403 provides a measure of how widely a specific N-KPI varies and how far it deviates from its mean value. A low variance indicates that the data points tend to be very close to the mean, whereas a high variance indicates that the data points are spread out over a large range of values. The variance of an array of values X is calculated as Equation 1, where n is the size of the array X, x_i is the value at the i-th index of X, and μ is the average value of the array X calculated as Equation 2.

$$\sigma^2(X) = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2 \quad \text{(Equation 1)}$$

$$\mu = \frac{1}{n}\sum_{i=1}^{n} x_i \quad \text{(Equation 2)}$$
The variances of the different N-KPIs are important when N-KPIs fluctuate in an error-prone network, such as a wireless network with its inherent interference. The new N-KPIs produced from the variance calculation method 403 would be variance of retransmission rate, variance of out-of-order rate, variance of duplicate ACK rate, variance of flight rate, variance of ACK RTT rate, variance of data rate, and variance of packet rate, which are saved to database 418 through feed 415.
The surface area calculation method 404 provides a snapshot of a particular N-KPI k for a specific period of time. This period of time is referred to as a "window size" w. If we take one specific time t, the current status of an N-KPI k(t) could be high or low compared to the average value of the N-KPI, k_avg. The average value could be an average taken over any specified period of time which is longer than the window size. For example, the average could be assessed for the data array X, or for a longer period than this. The surface area metric gives us the status of the N-KPI k over a period of time w. If the status of k over the window w is higher than the average k_avg, this is referred to as a positive surface area (PSA). If the status of k over the window w is lower than the average k_avg, this is referred to as a negative surface area (NSA). For each data array X, the method takes w consecutive values (i.e. the window size) at a time and determines the surface area. Figure 7 schematically illustrates the surface area measurements according to one possible implementation. At a step S1, the algorithm is commenced on the basis of a particular array of N-KPI values X, and on the basis of a selected or predetermined window size w. At a step S2, the algorithm is initialized by setting a loop control variable n to match the number of elements in the array X, by setting a PSA variable to zero, by setting an NSA variable to zero and by setting a loop tracking variable i to zero. At a step S3 it is determined whether the value of loop tracking variable i is less than the loop control variable n. This step merely constrains the algorithm to function only while there are values still to obtain from the data array X. If the step S3 is answered in the affirmative then the algorithm proceeds to a step S4, where an accumulation variable sum is initialized to zero, and another loop tracking variable j is set to the current value of i. As will be explained below, the algorithm of Figure 7 effectively considers each consecutive set of w data values (window) in the array X, before moving on to consider the next consecutive set of w data values (window) in the array X. The step S4 will be carried out for each window to set the starting variable in the window, and to initialise the variable sum required to determine the surface area state within that window.
At a step S5, it is determined whether the current position in the data array falls within the current window. If the step S5 is answered in the affirmative, then at a step S6 the magnitude of the difference between the current data value in the array X and the average value for the N-KPI is calculated, and it is determined whether the calculated magnitude is greater than a threshold S_th. If so, the algorithm progresses to a step S7, where the value of (x_j - μ), which will be a positive number if x_j > μ and a negative number if x_j < μ, is added to the variable sum. Then, at a step S8, the value of j is incremented to permit the next value in the data array to be evaluated at the step S5. If the step S6 is answered in the negative, then the algorithm will progress directly to the step S8. In effect, the step S6 determines whether a value in the data array X is close to the average (S6 answered in the negative) or deviates substantially from the average (S6 answered in the affirmative), and only values in the array which deviate substantially from the average are permitted to influence the sum variable. The steps S5 to S8 continue in a loop until all data values of the array X within the current window have been considered. Then, at a step S9, the loop tracking variable i is incremented by the window size w to set a new start point for the next window. Then, at a step S10 it is determined whether the value of sum is greater than or less than zero. In the case where it is determined to be greater than zero, the PSA variable will be increased by the amount of the variable sum, and the process will return to the step S3 for the next window to be evaluated. In the case where sum is determined to be less than zero, the NSA variable will be increased by the amount of the variable sum (although, given that in this case the variable sum is a negative number, this will result in NSA being reduced; the NSA metric could equally be kept as a positive value by adding to it the magnitude of the variable sum). Once the step S12 has been completed, the process will return to the step S3 for the next window to be evaluated. This process will continue until the entirety of the data array X has been considered, at which point the step S3 will be answered in the negative and the process will terminate at a step S13. This algorithm has the effect of determining either a positive or negative sum for each window, which is indicative of whether, during that time window, the value of the N-KPI deviated substantially from an average value, and whether it deviated more up or down from that average. For each time window, either the PSA or the NSA variable is accumulated with the variable sum. Over the course of the entire data array X, it will be appreciated that for some time windows the PSA variable will increase in magnitude, while in other time windows the NSA variable will increase in magnitude. For each data array X, a PSA value and an NSA value are calculated. The relative magnitudes of the PSA and NSA values are indicative of whether the N-KPI is generally in excess of its average value (PSA is greater than NSA), or generally below its average value (NSA is greater than PSA). Moreover, the absolute values of PSA and NSA provide some indication of how great the deviation from the average is.
It will therefore be appreciated that the surface area calculation method 404 provides the Positive Surface Area, PSA, and Negative Surface Area, NSA, of N-KPIs during the session. The described algorithm takes as an input a window size w and an array X of values produced by any of the methods 408-414. μ is the average value of either the array or a longer period of time, and S_th is a threshold value defined in Equation 3, where p is a coefficient of μ with 0 < p ≤ 1, hence S_th is a proportion p of μ.
$$S_{th} = p\mu \quad \text{(Equation 3)}$$
To give an example of surface area, we use the array of data rates coming from the data rate calculation method 413. We use the N-KPIs PSA of data rates and NSA of data rates as possible indicators of play-out buffer state. During network congestion the data packets may arrive at the receiver at irregular intervals and the data rates vary over time. This phenomenon may overflow or underflow the play-out buffer of the VoIP client, and hence the quality of the voice is affected. Figure 8 shows the data rate (kbps) fluctuation over time of a voice call captured by the network probe. The figure also shows the average data rate of the whole voice flow. There is a high probability that the play-out buffer of the VoIP clients will be in a steady state during the beginning of the flow (i.e. from 0 to 10 seconds). During the period from 10 seconds to 15 seconds, the play-out buffer may underflow, and during the period from 15.5 to 17.5 seconds, the buffer may overflow.
The new N-KPIs produced from the surface area calculation method 404 would be PSA and NSA of retransmission rate, PSA and NSA of out-of-order rate, PSA and NSA of duplicate ACK rate, PSA and NSA of flight rate, PSA and NSA of ACK RTT rate, PSA and NSA of data rate, and PSA and NSA of packet rate, which are saved to database 418 through feed 415.
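A compact sketch of the PSA/NSA computation of steps S1 to S13, using the threshold rule of Equation 3; the function name and default coefficient are assumptions for illustration:

```python
def surface_areas(values, window, mu, p=0.1):
    """Return (PSA, NSA) for one N-KPI array, given window size and mean mu."""
    s_th = p * mu                       # Equation 3: threshold as a fraction of mu
    psa, nsa = 0.0, 0.0
    for start in range(0, len(values), window):
        total = 0.0
        for x in values[start:start + window]:
            if abs(x - mu) > s_th:      # only substantial deviations count (step S6)
                total += x - mu
        if total > 0:
            psa += total                # window was mostly above the mean
        elif total < 0:
            nsa += abs(total)           # mostly below the mean (kept positive here)
    return psa, nsa
```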
The transition frequency calculation method 405 provides the frequency of changes of N-KPI values produced from any of the methods 408-414. An (N+1)-state Markov chain is used, as shown in figure 9, to model the transitions. The model is generated in the learning phase by stepping through and observing changes in a data array corresponding to an N-KPI being modelled. The state 0 is the initial state, which represents good network conditions. The Markov chain starts at the state 0 at the time the first value in the data array X is evaluated, but can also be returned to at any time that a current value of the data array X is indicative of good network conditions. The states 1 to N represent bad network conditions. We characterize the network condition as "good" if the current value (in the data array X) of an N-KPI from any of the methods 408-414 is above a threshold F_th; otherwise, the network condition is considered "bad". The threshold F_th is determined according to Equation 4, where q is a coefficient of μ with 0 < q ≤ 1, hence F_th is a proportion q of μ.
$$F_{th} = q\mu \quad \text{(Equation 4)}$$
The state transition probability P_01 represents the transition from the good state 0 to the bad state 1. In other words, P_01 is the probability of the next value in the data array representing bad network conditions if the current value in the data array represents good network conditions. The state transition probability 1 - P_01 represents the transition from a good state to a good state, that is, the probability of the next value in the data array representing good network conditions if the current value in the data array represents good network conditions. The state transition probability P_12 is the probability of the next value in the data array representing bad network conditions from the state arrived at following the state transition P_01. It will be appreciated that the state transition probability P_01 provides an indication of the likelihood of poor network conditions occurring for at least one sample in duration, while the state transition probabilities P_01 and P_12 together provide an indication of the probability of poor network conditions prevailing for at least two samples in duration. More generally, the state transition probability P_i(i+1) represents a transition from a bad state to the next bad state after i consecutive bad-to-bad transitions, where i = 0, 1, ..., N. In this way, it is possible to model the likelihood of poor network conditions prevailing for various different durations. For each state (0, 1, 2, ..., N) a state probability indicative of the likelihood of the Markov chain being in that state can be calculated. A state probability π_0 represents the likelihood of the N-KPI indicating that the network is in a good state. A state probability π_i represents the likelihood of the N-KPI indicating that the network has been in a poor state for exactly i samples. A higher number of bad states provides an indication of the burst size of bad network conditions. With only one bad state it is possible to identify the occurrence of at least one consecutive bad burst in the network. With 5 bad states, for example, it is possible to identify that there were at least 5 consecutive bad bursts in the network if the state probability π_5 is non-zero. The state probabilities π_i and transition probabilities P_ij are calculated according to Equation 5 and Equation 6, where i = 0, 1, ..., N and j = 0, 1, ..., N.

$$\pi_i = \frac{\text{number of transitions to state } i}{\text{total number of transitions}} \quad \text{(Equation 5)}$$

$$P_{ij} = \frac{\text{number of transitions from state } i \text{ to state } j}{\text{total number of transitions}} \quad \text{(Equation 6)}$$
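For illustration, the sketch below estimates the state probabilities (Equation 5) and transition probabilities (Equation 6) by stepping through an N-KPI data array, treating values above F_th as good conditions as described above; the function name and default coefficients are assumptions:

```python
from collections import Counter

def markov_probabilities(values, mu, q=0.5, n_bad_states=5):
    """Estimate state and transition probabilities for the (N+1)-state chain."""
    f_th = q * mu                        # Equation 4
    states, state = [], 0
    for x in values:
        if x > f_th:
            state = 0                    # good condition: return to state 0
        else:
            state = min(state + 1, n_bad_states)  # one more consecutive bad sample
        states.append(state)
    total = len(states) or 1
    pi = {i: c / total for i, c in Counter(states).items()}            # Equation 5
    p = {ij: c / total
         for ij, c in Counter(zip(states, states[1:])).items()}        # Equation 6
    return pi, p
```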
Within the context of a data array X, the total number of transitions in Equations 5 and 6 is the size of (number of elements in) the array X. The number of transitions to state i (Equation 5), and the number of transitions from state i to state j (Equation 6), are calculated by stepping through the array X and comparing each value with the threshold value F_th as in Equation 4. The new N-KPIs produced from the transition frequency calculation method 405 would be the state probabilities π_i and transition probabilities P_ij of retransmission rate, state and transition probabilities of out-of-order rate, state and transition probabilities of duplicate ACK rate, state and transition probabilities of flight rate, state and transition probabilities of ACK RTT rate, state and transition probabilities of data rate, and state and transition probabilities of packet rate, which are saved to database 418 through feed 415. In other words, the transition frequency N-KPIs comprise a set of state and transition probabilities for each of the properties measured by the methods 408 to 414 of Figure 4.
The gap duration calculation method 406 provides the duration of gaps in the N-KPI values produced from any of the methods 408-414. A gap is a period of time during which a particular N-KPI drops below a certain value, perhaps due to a silence in a conversation carried over VoIP. A rate of an N-KPI can be calculated as the measured value of the N-KPI divided by the duration of the flow or measurement. The rate is calculated to normalize the N-KPIs for flows of different durations. The rate of N-KPIs differs distinctively between talk spurts and silence gaps in the voice. If the rate is below a certain threshold D_th, we can mark that duration as a silence gap in the voice conversation. The gap duration of an array of values X is calculated as Equation 7, where n is the size of the array X. The variable a_i is calculated as Equation 8, where x_i is the value at the i-th index of X. The threshold D_th is calculated as Equation 9, where r is a coefficient of μ with 0 < r < 1, hence D_th is a proportion r of μ.
$$D(X) = \sum_{i=0}^{n} a_i \quad \text{(Equation 7)}$$

$$a_i = \begin{cases} 1, & \text{if } x_i < D_{th} \\ 0, & \text{otherwise} \end{cases} \quad \text{(Equation 8)}$$

$$D_{th} = r\mu \quad \text{(Equation 9)}$$
The new N-KPIs produced from the gap duration calculation method 406 would be gap duration of retransmission rate, gap duration of out-of-order rate, gap duration of duplicate ACK rate, gap duration of flight rate, gap duration of ACK RTT rate, gap duration of data rate, and gap duration of packet rate, which are saved to database 418 through feed 415.
The number of gaps calculation method 407 provides the number of gaps found in the N-KPIs of the conversation. Whenever the rate of an N-KPI falls below the threshold D_th as in Equation 9 and subsequently rises above the threshold D_th, this is counted as one gap. The new N-KPIs produced from the number of gaps calculation method 407 would be number of gaps of retransmission rate, number of gaps of out-of-order rate, number of gaps of duplicate ACK rate, number of gaps of flight rate, number of gaps of ACK RTT rate, number of gaps of data rate, and number of gaps of packet rate, which are saved to database 418 through feed 415.
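The gap duration (Equations 7 to 9) and gap count calculations might be sketched together as follows; again, the names and the default coefficient are illustrative assumptions:

```python
def gap_metrics(values, mu, r=0.25):
    """Return (gap duration, number of gaps) for one N-KPI array."""
    d_th = r * mu                                        # Equation 9
    gap_duration = sum(1 for x in values if x < d_th)    # Equations 7 and 8
    gap_count, in_gap = 0, False
    for x in values:
        if x < d_th:
            in_gap = True
        elif in_gap:
            gap_count += 1               # rate rose back above the threshold
            in_gap = False
    return gap_duration, gap_count
```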
The average calculation method 416 provides the average of the array of values coming from methods 408-414. The average values are average retransmission rate, average out-of-order rate, average duplicate ACK rate, average flight rate, average ACK RTT rate, average data rate, and average packet rate, which are saved to database 418 through feed 417. In this way we can record average retransmission rate (for example) as one N-KPI. However, the same values as used to generate the average retransmission rate go to block 402, where more complex functions are used to calculate further N-KPIs. For example, the retransmission rate goes from box 408 to box 403, and then the variance of retransmission rate is calculated as an N-KPI. In this way, from the retransmission rate alone we produce two N-KPIs: average retransmission rate and variance of retransmission rate, both of which can be valuable indicators of the likely QoE which will be experienced on the network.
Figure 5 is a block diagram that illustrates the machine learning engine 500 in greater detail. As described previously, the machine learning engine 500 works in two phases: the learning phase and the live monitoring phase. The learning phase corresponds to the functional block 501 and the live monitoring phase corresponds to the functional block 502. The machine learning engine 500 works on a per-VoIP-service basis. During the learning phase, the feature selection and correlation method 503 takes N-KPIs and corresponding QoE values for a VoIP service from database 418. The method then selects a subset of N-KPIs that provide the best correlation to the QoE. The QoE model generation method 504 uses the correlated N-KPIs and QoE values and, using machine learning algorithms, generates a QoE model which is saved back to the database 418. As the services and network scenario change over time, the QoE models are prone to become obsolete. The learning phase 501 is therefore repeated periodically or on-demand whenever necessary. During the live monitoring phase, the QoE mapping method 505 uses the QoE model saved in the database 418 and the selected N-KPIs to provide QoE for the VoIP session. The QoE value is saved back to the database 418 to be queried by the QoE monitoring application 507, for example, on a per-session, per-user, per-device, or per-service basis.
Figure 6 illustrates a QoE monitoring system 600 in accordance with an alternative embodiment of the present invention. The system 600 comprises a processor 601 and a memory 602. The memory 602 contains instructions executable by the processor 601, whereby said system is operative to implement the method illustrated in figure 10 and described in detail below.
Figure 10 illustrates a method 100 for monitoring QoE of a service in accordance with an embodiment of the present invention.
With reference to figure 10, a method for generating a mapping between Network Key Performance Indicators, N-KPIs, and Quality of Experience, QoE, of a service implemented on a network is illustrated in one embodiment of the present invention. The method, implemented within the network, comprises receiving, 111, and logging data representative of intercepted data traffic associated with a service. In the following operation the method calculates, 113, N-KPI values from the data traffic that has been intercepted and/or logged. The method also comprises receiving and logging, 114, data files recorded by or at User Equipment, UEs, participating in the service and determining, 115, QoE values from the data files received from the UEs. The method further comprises generating and storing, 116, a mapping between the calculated N-KPI values and determined QoE values.
With reference to figure 10, a method, implemented within a network, for monitoring Quality of Experience, QoE, of a service implemented on the network is illustrated in an alternative embodiment of the present invention. The method comprises a learning phase and a live monitoring phase. The learning phase comprises a method for generating a mapping between Network Key Performance Indicators, N-KPIs, and QoE of a service implemented on the network in accordance with the embodiments of the method involving operations 111-116. The live monitoring phase comprises receiving and logging, 121, data representative of data traffic associated with a service and calculating, 123, N-KPI values from the data traffic that has been intercepted and/or logged. In the final operation the method comprises quantitatively estimating, 124, the QoE of the service by using said stored mapping between the N-KPI values and QoE values.
As explained above, the method 100 comprises a learning phase 110 and a live monitoring phase 120, which will be described below in more detail.
The learning phase 110 comprises receiving and logging data representative of intercepted data traffic associated with a VoIP service at step 111. The data traffic may be intercepted by means of intermediate packet inspection probes. This step may be implemented by a packet analyzer such as that illustrated in figure 3.
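As an illustration of such a probe, the following sketch uses Scapy to capture and log TCP header fields; this choice of library is an assumption for demonstration purposes only, since a deployed probe would typically be a passive tap or dedicated inspection hardware:

```python
from scapy.all import IP, TCP, sniff

def log_packet(pkt):
    """Record only the transport-level facts the N-KPI engine needs;
    payloads may be encrypted and are never inspected."""
    if IP in pkt and TCP in pkt:
        print(pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport,
              pkt[TCP].seq, pkt[TCP].ack, len(pkt))

# Capture 100 TCP packets (normally requires elevated privileges).
sniff(filter="tcp", prn=log_packet, count=100)
```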
At step 112, the learning phase 110 preferably comprises classifying the data that has been intercepted and/or logged into session records, which may then be stored within the memory 602. This step may be implemented by a packet classification engine, such as that illustrated in figure 3.
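A minimal sketch of such classification, assuming each logged packet is represented as a Python dict with the fields shown (an illustrative data model, not mandated by the embodiment), groups packets into session records keyed by a direction-agnostic 5-tuple:

```python
from collections import defaultdict

def classify_into_sessions(packets):
    """Group logged packets into session records (step 112).

    Each packet is assumed to be a dict with "src", "dst", "sport",
    "dport" and "proto" fields; the direction-agnostic 5-tuple serves
    as the session key, so both directions of a flow land in the same
    session record.
    """
    sessions = defaultdict(list)
    for pkt in packets:
        endpoints = frozenset([(pkt["src"], pkt["sport"]),
                               (pkt["dst"], pkt["dport"])])
        sessions[(endpoints, pkt["proto"])].append(pkt)
    return sessions
```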
The learning phase 110 then comprises, at step 113, calculating N-KPI values from the session records. This step may be implemented by an N-KPI engine such as that illustrated in figures 3 and 4.
The above-described steps 111-113 relate to the calculation of N-KPI values. It is envisaged that the learning phase 110 will also comprise steps 114 and 115, relating to the determination of QoE values, which may be implemented concurrently with steps 111-113. Step 114 comprises receiving and logging data files recorded by or at the UEs participating in the service. This step may include normalizing the received data files and/or clipping a received data file when some portion of it does not contain any voice signal. Step 115 comprises determining QoE values from the data files received at step 114. This step may be implemented by a QoE engine such as that illustrated in figure 3.
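The optional normalization and clipping of step 114 might, purely for illustration, be realized as below; the frame length and energy threshold are assumed values, and the clipped reference/degraded pair would then be passed to a Full Reference scorer such as PESQ or POLQA in step 115:

```python
import numpy as np

def normalize_and_clip(signal, frame=160, threshold=1e-4):
    """Peak-normalize a recorded waveform and clip away leading and
    trailing frames whose energy suggests they carry no voice signal.

    `signal` is a mono PCM waveform recorded by or at a UE; the frame
    length (160 samples = 20 ms at 8 kHz) and the energy threshold are
    illustrative values only.
    """
    signal = np.asarray(signal, dtype=float)
    peak = np.max(np.abs(signal)) if signal.size else 1.0
    signal = signal / (peak or 1.0)
    frames = [signal[i:i + frame] for i in range(0, len(signal), frame)]
    energies = [float(np.mean(f ** 2)) for f in frames]
    voiced = [i for i, e in enumerate(energies) if e > threshold]
    if not voiced:
        return signal[:0]                    # nothing but silence
    return np.concatenate(frames[voiced[0]:voiced[-1] + 1])
```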
The final step 116 of the learning phase 110 is generating a mapping between the N-KPI values calculated at step 113 and the QoE values determined at step 115. This step may be implemented by a machine learning engine such as that illustrated in figures 3 and 5. The generated mapping may then be stored within the memory 602. The mapping may be of the generic form:
QoE = f(N-KPI1, N-KPI2, ..., N-KPIj), where each of the N-KPIs may be weighted in dependence on the outcome of the learning phase. In practice, many different machine learning algorithms are available and, depending on the application, scenario or acceptable complexity, any of them could be used to generate the model. Some example machine learning algorithms are described in References [1] to [5] identified below. At its simplest level, the QoE value determined at a particular time is mapped onto (matched with) the combination of N-KPI values present in the network at the same time. A large database of such mappings allows QoE values to be estimated later simply by determining the N-KPIs and finding in the database the QoE that is mapped onto that specific combination of N-KPI values. In some cases, the mapping arrived at in the learning phase can be described by a function which relates selected N-KPI values (with appropriate weightings) to a QoE value indicative of the Quality of Experience which could be expected when the network is operating with those N-KPI values.
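Read literally, the database-lookup form of the mapping can be sketched as a nearest-neighbour search over recorded N-KPI combinations. This illustrative snippet assumes NumPy arrays in place of the database 418; in the functional form of the mapping, a learned function f with weightings would replace the search:

```python
import numpy as np

def lookup_qoe(mapped_nkpis, mapped_qoe, live_nkpis):
    """Estimate QoE from the stored mapping in its simplest form.

    mapped_nkpis : shape (n_records, n_kpis), N-KPI combinations
                   recorded during the learning phase.
    mapped_qoe   : shape (n_records,), the QoE values matched with them.
    live_nkpis   : shape (n_kpis,), the combination measured live.
    """
    distances = np.linalg.norm(mapped_nkpis - live_nkpis, axis=1)
    return mapped_qoe[np.argmin(distances)]
```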
The live monitoring phase 120 comprises receiving and logging data representative of data traffic associated with the VoIP service at step 121. This step is analogous to step 111 of the learning phase 110 and may be implemented by a packet analyzer such as that illustrated in figure 3.
At step 122, the live monitoring phase 120 preferably comprises classifying the data that has been intercepted and/or logged into session records. This step is analogous to step 112 of the learning phase 110 and may be implemented by a packet classification engine such as that illustrated in figure 3.
At step 123, the live monitoring phase 120 preferably comprises calculating N-KPI values from the session records. This step is analogous to step 113 of the learning phase and may be implemented by an N-KPI engine such as that illustrated in figures 3 and 4.
Finally, at step 124, the live monitoring phase 120 comprises estimating the QoE of the VoIP service using the mapping that was generated at step 116. The learning phase 110 may be repeated periodically or on-demand in order to ensure that the mapping remains up to date.
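Combining the pieces above, the live monitoring phase for a single session might be sketched as follows; `nkpis_from_session` is a hypothetical helper standing in for steps 121-123, and `model` and `selected` come from the learning-phase sketch:

```python
import numpy as np

def monitor_session(model, selected, nkpis_from_session, packets):
    """Estimate the QoE of one live VoIP session (steps 121-124).

    `model` and `selected` come from learn_qoe_model() above;
    `nkpis_from_session` is a hypothetical helper standing in for
    steps 121-123 (logging, classification into a session record and
    N-KPI calculation). No UE-side data files are needed here.
    """
    nkpis = np.asarray(nkpis_from_session(packets))       # steps 121-123
    qoe = model.predict(nkpis[selected].reshape(1, -1))   # step 124
    return float(qoe[0])
```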
In one embodiment, the memory 602 of the QoE monitoring system 600 illustrated in figure 6 may contain instructions executable by the processor 601 which make the system operative to implement only the learning phase, as discussed above in relation to the embodiments of the method illustrated in figure 10. Alternatively, the memory 602 may contain instructions for implementing both the learning phase and the live monitoring phase as described above in relation to the embodiments of the method illustrated in figure 10.
From the foregoing, it is evident that the present invention provides an effective means of monitoring QoE. For example, the present invention enables a network operator to determine the QoE of a service from network monitoring alone, without any need for cooperation from the end devices during the live monitoring phase. The invention also enables a network operator to determine the QoE of a service even when the service data and protocol information are encrypted and not available to the operator. Another advantage of the present invention is that a network operator does not have to predefine the network resources that are used by end-user services. Furthermore, the invention enables automatic identification of the network resources that significantly impact the performance of a service. The method is generic, independent of network architecture and can be applied to various types of network (mobile, fixed, satellite, etc.).
References
[1] "Why is retransmission representative of congestion on your network? What is its impact on user experience?", Website: http://blog.securactive.net/?p=530
[2] Alexander Raake, "Speech Quality of VoIP: Assessment and Prediction", 2006, John Wiley and Sons, Ltd.; Section 4.2: Macroscopic Loss Behavior
[3] David Soldani, Man Li, Renaud Cuny, "QoS and QoE Management in UMTS Cellular Systems", 2006, John Wiley and Sons, Ltd.; Section 9.4: Post-processing and statistical methods
[4] Szymon Fedor, Sidath Handurukande, "Apparatus and Method for Monitoring Performance in a Communications Network", patent application WO2013/091715A1, June 2013.
[5] Szymon Fedor, Sidath Handurukande, "Service performance in communications network", patent application WO2012055449A1, May 2012.

Claims
1. A method, implemented within a network, for generating a mapping between Network Key Performance Indicators, N-KPIs, and Quality of Experience, QoE, of a service implemented on the network, the method comprising:
receiving and logging data representative of intercepted data traffic associated with a service;
calculating N-KPI values from the data traffic that has been intercepted and/or logged;
receiving and logging data files recorded by or at User Equipment, UEs, participating in the service;
determining QoE values from the data files received from the UEs; and,
generating and storing a mapping between the calculated N-KPI values and determined QoE values.
2. A method as claimed in claim 1, wherein data traffic associated with said service is transported over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure.
3. A method as claimed in claim 1 or claim 2, wherein the step of generating a mapping between the calculated N-KPI values and determined QoE values comprises pairing the data traffic that has been intercepted and/or logged with corresponding data files recorded by or at the UEs.
4. A method as claimed in claim 3, wherein the pairing uses a key comprising one or more of the following metrics: session start time; session end time; host address; port numbers; service provider name.
5. A method as claimed in any preceding claim, further comprising the step of determining identifying information from the data packets that have been received and logged and combining related data into session records.
6. A method as claimed in any preceding claim, further comprising the step of intercepting data traffic associated with the service.
7. A method as claimed in claim 6, wherein the step of intercepting data traffic associated with the service comprises intercepting the data by means of shallow and/or deep packet inspection probes.
8. A method as claimed in any preceding claim, wherein the step of determining QoE values from the data files recorded by or at the UEs comprises objective measurement of QoE through a Full Reference algorithm.
9. A method as claimed in any preceding claim, wherein the step of determining QoE values from the data files recorded by or at the UEs comprises implementation of one or more of: Perceptual Objective Listening Quality Assessment, POLQA; Perceptual Evaluation of Speech Quality, PESQ; Perceptual Evaluation of Video Quality, PEVQ.
10. A method as claimed in any preceding claim, wherein the N-KPIs relate to one or more of the following properties:
a) Transmission Control Protocol, TCP, retransmission rate;
b) Duplicate acknowledgement, ACK, rate;
c) packet rate;
d) data rate;
e) out-of-order packet rate;
f) flight rate;
g) ACK round-trip-time.
11. A method as claimed in any preceding claim, wherein the N-KPIs are calculated as:
(i) average; or
(ii) variance; or
(iii) surface area; or
(iv) transition frequency; or
(v) gap duration; or
(vi) number of gaps,
of any properties of the service.
12. A method as claimed in any preceding claim, wherein the service comprises an Over The Top, OTT, service.
13. A method, implemented within a network, for monitoring Quality of Experience, QoE, of a service implemented on the network, the method comprising a learning phase and a live monitoring phase, the learning phase comprising a method for generating a mapping between Network Key Performance Indicators, N-KPIs, and QoE of a service implemented on the network in accordance with any one of claims 1 to 12, the live monitoring phase comprising:
receiving and logging data representative of data traffic associated with a service;
calculating N-KPI values from the data traffic that has been intercepted and/or logged; and,
quantitatively estimating QoE of the service by using said stored mapping between the N-KPI values and QoE values.
14. A method according to claim 13, wherein the live monitoring phase comprises the steps of determining identifying information from the data packets that have been received and logged, combining related data into session records, and calculating the N-KPI values based on the session records.
15. A method as claimed in claim 13, wherein the learning phase is repeated periodically and/or on demand.
16. A system for generating a mapping between Network Key Performance Indicators, N-KPIs, and Quality of Experience, QoE, of a service implemented on a network, the system comprising:
a packet analyzer for receiving and logging data representative of intercepted data traffic associated with the service;
a Network Key Performance Indicator, N-KPI, engine for calculating values of N-KPIs from the data traffic that has been intercepted and/or logged;
a receiver for receiving data files associated with the service recorded at User Equipment, UEs;
a QoE engine for determining QoE values from the data files associated with the service and received from the UEs;
a machine learning engine for generating a mapping between the N-KPI values calculated by the N-KPI engine and QoE values determined by the QoE engine; and,
a memory for storing the mapping generated by the machine learning engine.
17. A monitoring system as claimed in claim 16, further comprising a packet classification engine for determining identifying information from the data packets that have been received and logged by the packet analyzer and combining related data into session records.
18. A monitoring system as claimed in claim 16 or claim 17, further comprising a workstation terminal for allowing a human network operator to access data stored in the memory.
19. A monitoring system as claimed in any one of claims 16 to 18, wherein the system is a single network element.
20. A monitoring system as claimed in any one of claims 16 to 19, wherein the data traffic is transported between UEs over Transmission Control Protocol, Secure Transmission Control Protocol, Hypertext Transfer Protocol, and/or Hypertext Transfer Protocol Secure.
21. A monitoring system as claimed in any one of claims 16 to 20, wherein the monitoring system is arranged for implementing the method of any one of claims 1 to 12.
22. A monitoring system as claimed in any one of claims 16 to 20, wherein the monitoring system is arranged for implementing the method of claim 13 or claim 14, the monitoring system further comprising a QoE monitoring engine for receiving N-KPI values from the N-KPI engine and consulting the stored mapping to estimate QoE values.
23. A system for generating a mapping between Network Key Performance Indicators, N-KPIs, and Quality of Experience, QoE, of a service implemented on a network, the system comprising a processor and a memory, said memory containing instructions executable by said processor whereby said system is operative to:
receive and log data representative of intercepted data traffic associated with a service;
calculate N-KPI values from the data traffic that has been intercepted and/or logged;
receive and log data files recorded by or at User Equipment, UEs, participating in the service;
determine QoE values from the data files received from the UEs; and,
generate and store a mapping between the calculated N-KPI values and determined QoE values.
24. A system as claimed in claim 23, wherein the system is a single network element.
25. A system for monitoring Quality of Experience, QoE, of a service implemented on a network, the system comprising a processor and a memory, said memory containing instructions executable by said processor whereby said system is operative to implement a learning phase and a live monitoring phase;
in the learning phase the system is operative to:
receive and log data representative of intercepted data traffic associated with a service;
calculate Network Key Performance Indicator, N-KPI, values from the data traffic that has been intercepted and/or logged;
receive and log data files recorded by or at User Equipment, UEs, participating in the service;
determine QoE values from the data files received from the UEs; and,
generate and store a mapping between the calculated N-KPI values and the determined QoE values,
in the live monitoring phase the system is operative to:
receive and log data representative of data traffic associated with a service;
calculate N-KPI values from the data traffic that has been intercepted and/or logged; and,
use said stored mapping between the N-KPI values and QoE values to quantitatively estimate QoE of the service.
26. A system as claimed in claim 25, wherein in the live monitoring phase the system is operative to determine identifying information from the data packets that have been received and logged, combine related data into session records, and calculate the N-KPI values based on the session records.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/055972 WO2015144211A1 (en) 2014-03-25 2014-03-25 Method and system for monitoring qoe

Publications (1)

Publication Number Publication Date
WO2015144211A1 true WO2015144211A1 (en) 2015-10-01

Family

ID=50391165

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/055972 WO2015144211A1 (en) 2014-03-25 2014-03-25 Method and system for monitoring qoe

Country Status (1)

Country Link
WO (1) WO2015144211A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130290525A1 (en) * 2010-10-29 2013-10-31 Telefonaktiebolaget L M Ericsson (Publ) Service Performance in Communications Network
US20140033242A1 (en) * 2012-07-24 2014-01-30 Srinivasa Rao Video service assurance systems and methods in wireless networks
WO2014040646A1 (en) * 2012-09-14 2014-03-20 Huawei Technologies Co., Ltd. Determining the function relating user-centric quality of experience and network performance based quality of service

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107026750B (en) * 2016-02-02 2020-05-26 中国移动通信集团广东有限公司 User Internet QoE evaluation method and device
CN107026750A (en) * 2016-02-02 2017-08-08 中国移动通信集团广东有限公司 A kind of user's online QoE evaluation methods and device
EP3226472A1 (en) * 2016-04-01 2017-10-04 Thomson Licensing Method for predicting a level of qoe of an application intended to be run on a wireless user equipment
EP3226474A1 (en) * 2016-04-01 2017-10-04 Thomson Licensing Method for predicting a level of qoe of an application intended to be run on a wireless user equipment
CN107360017A (en) * 2016-04-01 2017-11-17 汤姆逊许可公司 For the method for the QoE grades for predicting the application for being intended to be run on wireless user equipment
CN115021837A (en) * 2016-04-01 2022-09-06 艾尔泰斯比利时公司 Method for predicting QoE level of an application intended to run on a wireless user equipment
US20220247646A1 (en) * 2016-04-01 2022-08-04 Airties Belgium Sprl METHOD FOR PREDICTING A LEVEL OF QoE OF AN APPLICATION INTENDED TO BE RUN ON A WIRELESS USER EQUIPMENT
EP4009548A1 (en) * 2016-04-01 2022-06-08 AirTies Belgium SPRL Method for predicting a level of qoe of an application intended to be run on a wireless user equipment
US11316759B2 (en) 2016-04-01 2022-04-26 Airties Belgium Sprl Method for predicting a level of QoE of an application intended to be run on a wireless user equipment
CN107360017B (en) * 2016-04-01 2022-04-15 艾尔泰斯比利时公司 Method for predicting QoE level of an application intended to run on a wireless user equipment
US11196652B2 (en) 2017-02-07 2021-12-07 Telefonaktiebolaget Lm Ericsson (Publ) Transport layer monitoring and performance assessment for OTT services
US11234048B2 (en) 2017-07-31 2022-01-25 Zhilabs S.L. Determination of QOE in encrypted video streams using supervised learning
EP3439308A1 (en) * 2017-07-31 2019-02-06 Zhilabs S.L. Determination of qoe in encrypted video streams using supervised learning
US10963803B2 (en) 2017-09-06 2021-03-30 InfoVista Sweden AB System and method for machine learning based QoE prediction of voice/video services in wireless networks
WO2019051119A1 (en) * 2017-09-06 2019-03-14 InfoVista Sweden AB System and Method for Machine Learning Based QoE Prediction of Voice/Video Services in Wireless Networks
US11748643B2 (en) 2017-09-06 2023-09-05 Info Vista Sweden AB System and method for machine learning based QoE prediction of voice/video services in wireless networks
CN107733705A (en) * 2017-10-10 2018-02-23 锐捷网络股份有限公司 A kind of user experience quality assessment models method for building up and equipment
EP3866483A4 (en) * 2018-11-22 2021-10-20 Huawei Technologies Co., Ltd. Network performance bottleneck value determination method and apparatus
US11178056B2 (en) 2019-04-08 2021-11-16 Electronics And Telecommunications Research Institute Communication method and apparatus for optimizing TCP congestion window
WO2023048617A1 (en) * 2021-09-21 2023-03-30 Telefonaktiebolaget Lm Ericsson (Publ) Wireless terminal, network node and methods in a wireless communications network
CN114422386A (en) * 2022-01-20 2022-04-29 南方电网数字电网研究院有限公司 Monitoring method and device for micro-service gateway
CN114422386B (en) * 2022-01-20 2023-08-11 南方电网数字电网研究院有限公司 Monitoring method and device for micro-service gateway
US11665261B1 (en) 2022-03-17 2023-05-30 Cisco Technology, Inc. Reporting path measurements for application quality of experience prediction using an interest metric
CN115277581A (en) * 2022-07-21 2022-11-01 腾讯科技(深圳)有限公司 Network transmission control method and device, computer equipment and storage medium
CN115277581B (en) * 2022-07-21 2024-04-30 腾讯科技(深圳)有限公司 Control method and device for network transmission, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2015144211A1 (en) Method and system for monitoring qoe
Mazhar et al. Real-time video quality of experience monitoring for https and quic
CN100473025C (en) Methods and systems for coordinated monitoring of network transmission events
EP1793528B1 (en) Method of monitoring the quality of a realtime communication
EP2915303B1 (en) Detection of periodic impairments in media streams
US6850525B2 (en) Voice over internet protocol (VoIP) network performance monitor
EP2678990B1 (en) Voip quality measurement enhancements using the internet control message protocol
US10963803B2 (en) System and method for machine learning based QoE prediction of voice/video services in wireless networks
Ickin et al. The effects of packet delay variation on the perceptual quality of video
CN111164947A (en) Method and device for encoding audio and/or video data
KR100954593B1 (en) Method for measuring qos of voip network
Carofiglio et al. Characterizing the relationship between application QoE and network QoS for real-time services
Ammar et al. Exploring the usefulness of machine learning in the context of WebRTC performance estimation
Collange et al. User impatience and network performance
Adibi Traffic Classification – Packet-, Flow-, and Application-based Approaches
EP2369807A1 (en) Impairment detection and recording of isochronous media streams
Toral et al. Self-similarity, packet loss, jitter, and packet size: Empirical relationships for VoIP
Kim et al. End-to-end qos monitoring tool development and performance analysis for NGN
US7848243B2 (en) Method and system for estimating modem and fax performance over packet networks
Jaish et al. Quality of experience for voice over internet protocol (voip)
Ivanovici et al. User-perceived quality assessment for multimedia applications
Orosz et al. VoicePerf: A Quality Estimation Approach for No-reference IP Voice Traffic
US8284676B1 (en) Using measurements from real calls to reduce the number of test calls for network testing
Shirmohamadi et al. Bridging between quality of experience and quality of service through TCP flag ratios
Dolezal et al. Improving QoE of SIP-based automated voice interaction in mobile networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 14713809
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
122 Ep: pct application non-entry in european phase
Ref document number: 14713809
Country of ref document: EP
Kind code of ref document: A1