WO2021171063A1 - Adaptive method for measuring service level consistency - Google Patents

Adaptive method for measuring service level consistency

Info

Publication number
WO2021171063A1
Authority
WO
WIPO (PCT)
Prior art keywords
consistency
service
time window
quality
services
Application number
PCT/IB2020/051727
Other languages
French (fr)
Inventor
Adam ZLATNICZKI
Alexander Biro
Zoltan NYESEV
Norbert PURGER
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2020/051727 priority Critical patent/WO2021171063A1/en
Publication of WO2021171063A1 publication Critical patent/WO2021171063A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/08: Testing, supervising or monitoring using real traffic

Definitions

  • the solution presented herein relates generally to communications evaluations, and more particularly to evaluating the consistency of a quality of the communications, e.g., wireless communications.
  • Subscribers pay service providers to provide a service. For example, in wireless communications, a subscriber may pay a service provider for a video subscription service that provides requested videos to the subscriber. Because revenues for a particular service are generally tied to customer satisfaction regarding that service, it is in the service provider’s best interest to make sure its subscribers are satisfied with the provided service. High customer satisfaction, for example, generally enables the service provider to maintain and/or expand its customer base, which allows for further successful operation and increasing revenue. Service providers therefore have an interest in accurately assessing customer satisfaction.
  • a KPI may be any indicator that provides insight into the performance of a service, where many KPIs are service-specific.
  • Exemplary KPIs include, but are not limited to, throughput, delay, video stall ratio, setup time, call drops, voice quality, video quality, etc.
  • while KPIs provide an important metric for the evaluation of customer satisfaction, KPIs alone cannot provide a complete picture of the quality of the provided service or a subscriber’s satisfaction with that quality.
  • a KPI indicating acceptable throughput over a time window does not indicate whether that throughput is acceptable because over the time window the good and bad throughput instances average out to an acceptable throughput, or because the throughput is consistently acceptable over that time window. While the latter situation is likely to produce good customer satisfaction, the former is likely to produce low customer satisfaction. As such, there remains a need for improved ways to more accurately assess customer satisfaction.
  • the solution presented herein determines a consistency of a quality of a service provided to a subscriber. To that end, the solution presented herein evaluates quality metrics over a time window to determine how much those quality values change over the time window, and thus to determine how consistent a particular quality metric is for that service. In so doing, the solution presented herein assesses the consistency of a quality of a particular service provided to a subscriber, enabling the service provider to more accurately assess customer satisfaction and to better identify areas that are negatively or positively impacting customer satisfaction.
  • One exemplary embodiment comprises a method of evaluating a consistency of a quality of service of one or more services provided to a subscriber.
  • the method comprises deriving one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window.
  • the method further comprises calculating a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively.
  • the method further comprises outputting the consistency score to the subscriber and/or a service provider.
  • the calculation of the consistency score comprises calculating, for each of the one or more services, a consistency deviation using the corresponding one or more second quality values and the corresponding one or more model parameters, and calculating the consistency score for the one or more services for the second time window responsive to the calculated consistency deviations.
  • the method further comprises determining a service weight for each of the one or more services provided to the subscriber during the second time window responsive to the corresponding one or more second quality values, where determining the consistency score comprises determining the consistency score for the one or more services for the second time window responsive to the service weights and the consistency deviations.
  • One exemplary embodiment comprises a consistency evaluation apparatus comprising one or more processing circuits configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window, and calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively.
  • the consistency evaluation apparatus is further configured to output the consistency score to the subscriber and/or a service provider.
  • One exemplary embodiment comprises a computer program product for controlling a consistency evaluation apparatus.
  • the computer program product comprises software instructions which, when run on at least one processing circuit in the consistency evaluation apparatus, causes the consistency evaluation apparatus to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window, and calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively.
  • the software instructions, when run on the at least one processing circuit, further cause the consistency evaluation apparatus to output the consistency score to the subscriber and/or a service provider.
  • a computer readable medium may comprise the computer program product.
  • the computer readable medium may comprise a non-transitory computer readable medium.
  • One exemplary embodiment comprises a consistency evaluation apparatus for evaluating a consistency of a quality of service of one or more services provided to a subscriber.
  • the consistency evaluation apparatus comprises a model parameter unit and a consistency unit.
  • the model parameter unit is configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window.
  • the consistency unit is configured to calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively.
  • the consistency unit is further configured to output the consistency score to the subscriber and/or a service provider.
  • the consistency unit calculates the consistency score by calculating, for each of the one or more services, a consistency deviation using the corresponding one or more second quality values and the corresponding one or more model parameters, and calculating the consistency score for the one or more services for the second time window responsive to the calculated consistency deviations.
  • the consistency evaluation apparatus further comprises a weight unit configured to determine a service weight for each of the one or more services provided to the subscriber during the second time window responsive to the corresponding one or more second quality values, where the consistency unit determines the consistency score by determining the consistency score for the one or more services for the second time window responsive to the service weights and the consistency deviations.
  • Figure 1 shows a consistency evaluation method according to exemplary embodiments of the solution presented herein.
  • Figure 2 shows a block diagram of a consistency evaluation apparatus according to exemplary embodiments of the solution presented herein.
  • Figure 3 shows a block diagram of another consistency evaluation apparatus according to exemplary embodiments of the solution presented herein.
  • Figure 4 shows an exemplary wireless network using the solution presented herein.
  • Figure 5 shows another consistency evaluation method according to exemplary embodiments of the solution presented herein.
  • Figure 6 shows an exemplary consistency report generated according to the solution presented herein.
  • Figure 7 shows another exemplary consistency report generated according to the solution presented herein.
  • Figure 8 shows another exemplary consistency report generated according to the solution presented herein.
  • Figure 9 shows another exemplary consistency report generated according to the solution presented herein.
  • Figure 10 shows an exemplary consistency score distribution according to the solution presented herein.
  • the solution presented herein assesses consistency of a quality of one or more services provided to a subscriber.
  • the consistency of a particular service indicates how much a particular quality metric for that service changes, e.g., over time. For example, a service is said to be consistent if a quality metric for that service is uniform regardless of time, place, or occasion.
  • the solution presented herein provides a fully automated technique for determining the consistency of a service that is also configurable to provide representative results at any given time, and for any desired time window.
  • the following first describes a solution for determining the consistency of one or more services provided to a single subscriber. It will be appreciated, however, that the solution presented herein applies equally well to determining the consistency of one or more services provided to multiple subscribers, and that the determined consistency score may be subscriber-specific.
  • the solution presented herein is described generally in terms of the consistency of a quality of service defined by one or more quality values, where the consistency of the various quality values is determined for an evaluation time window relative to a fully automated model.
  • quality values include, but are not limited to, Key Performance Indicators (KPIs), Quality of Experience (QoE) metrics, Key Quality Indicators (KQIs), etc.
  • each provided service may be evaluated using one quality value or multiple quality values. While not required, many quality values may be specific to the provided service.
  • Table 1 shows exemplary QoE metrics that may be associated with various services.
  • Figure 1 shows an exemplary method 100 of evaluating a consistency of a quality of service of one or more services provided to a subscriber.
  • the method 100 comprises deriving one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window (block 110).
  • the method 100 further comprises calculating a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window (block 120), where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively.
  • the method further comprises outputting the consistency score to the service provider and/or subscriber (block 130).
  • the method 100 of Figure 1 may be implemented by any consistency evaluation apparatus 200, e.g., as shown in Figure 2, which may be part of a service provider or management node.
  • the consistency evaluation apparatus 200 may be implemented in any network node, e.g., a base station, core network node, network management node, etc. It will be appreciated, however, that the consistency evaluation apparatus 200 is not limited to wireless networks, and may be implemented in any system that provides a service to a subscriber and that uses quality values to evaluate the quality of the provided service.
  • the consistency evaluation apparatus 200 comprises a consistency circuit 210 and a model parameter circuit 220.
  • the model parameter circuit 220 is configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window.
  • the consistency circuit 210 is configured to calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively.
  • the consistency evaluation apparatus 200 outputs the calculated consistency score to an output circuit 240 of the subscriber and/or service provider.
  • the output circuit 240 is part of the consistency evaluation apparatus 200, and in other embodiments it is separate from the consistency evaluation apparatus 200.
  • the model parameter circuit 220 derives the model parameter(s) λs for each service s by applying one or more statistical evaluations to the first quality values obtained for the first time window for the corresponding service to determine one or more model parameters for each service. For example, the model parameter circuit 220 may determine the model parameter(s) λs for a particular service s by determining one or more of a mode of the first quality values for that service, a standard deviation of the first quality values for that service, a mean of the first quality values for that service, a median of the first quality values for that service, etc. It will be appreciated that the solution presented herein is not limited to these statistical evaluations, and that the model parameter circuit 220 may use other statistical evaluations to determine other statistical properties of the quality value(s).
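  • As a non-limiting illustration, the following sketch shows one way such a statistical derivation might look; the choice of mean and standard deviation as the two parameters (and the sample data) are assumptions for illustration, since any of the statistical evaluations named above may be used.

```python
# Illustrative sketch only: one way a model parameter circuit could derive
# per-service parameters (lambda_s) from first-window quality values.
# Using mean and standard deviation is an assumed choice, not a requirement.
import statistics

def derive_model_parameters(first_window_kpis):
    """first_window_kpis: dict mapping service -> list of KPI samples
    observed during the first (historical) time window.
    Returns a dict mapping service -> (lambda1, lambda2)."""
    params = {}
    for service, samples in first_window_kpis.items():
        lambda1 = statistics.mean(samples)   # central tendency (the "norm")
        lambda2 = statistics.stdev(samples)  # spread around the norm
        params[service] = (lambda1, lambda2)
    return params

# Example: a week of throughput samples (Mbps) for two services.
history = {
    "video": [8.2, 7.9, 8.4, 8.1, 7.8, 8.3],
    "voip": [0.10, 0.11, 0.09, 0.10, 0.12, 0.10],
}
print(derive_model_parameters(history))
```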
  • the consistency circuit 210 comprises a deviation circuit 212 and a consistency score circuit 214.
  • the deviation circuit 212 is configured to calculate, for each of one or more services being provided to the subscriber, a consistency deviation Qs using the corresponding one or more model parameters λs derived for the corresponding service and the second quality value(s) K2s obtained for the corresponding service.
  • the deviation circuit 212 determines how much each second quality value deviates from a “norm” for that quality value, where that norm is defined by the corresponding one or more model parameters λs.
  • the deviation circuit 212 may determine the consistency deviation for each service as a scoring function of the second quality value(s) and model parameter(s), i.e., Qs = Fs(K2s, λs), e.g., Qs = Σj∈I(s) Bj·((Kj − λ1j)/λ2j)², where:
  • I(s) represents the set of indices j of the one or more second quality values related to service s, Kj represents the jth second quality value for service s, λ1j represents the first model parameter for the jth second quality value related to service s, λ2j represents the second model parameter for the jth second quality value related to service s, and Bj represents a predetermined weight for the jth second quality value related to service s.
  • the predetermined weights may be configured to satisfy the constraint Σj∈I(s) Bj = 1.
  • the above equation has the benefits of being easy to compute, being highly scalable, and having only two model parameters per quality value, which makes it easy to update as desired. It will be appreciated that the above equation is exemplary, and that the solution presented herein is not limited to determining the consistency deviations using this equation.
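  • For illustration only, the sketch below implements one plausible scoring function with the properties described above (two model parameters per quality value and per-value weights Bj summing to 1); the weighted standardized squared deviation used here is an assumption consistent with the description, not necessarily the specific equation of the solution.

```python
# Hedged sketch of a deviation circuit: a weighted standardized squared
# deviation is assumed as the scoring function F_s. lambda1_j plays the
# role of the "norm" and lambda2_j the allowed spread for KPI j.
def consistency_deviation(second_window_values, model_params, weights):
    """second_window_values: {j: K_j} for one service s
    model_params: {j: (lambda1_j, lambda2_j)}
    weights: {j: B_j}, assumed to satisfy sum(B_j) == 1."""
    q_s = 0.0
    for j, k_j in second_window_values.items():
        lam1, lam2 = model_params[j]
        q_s += weights[j] * ((k_j - lam1) / lam2) ** 2  # standardized deviation
    return q_s

# Example: one service with two KPIs, equally weighted. A larger Q_s means
# the second-window values stray further from the historical norm.
print(consistency_deviation(
    {0: 7.0, 1: 0.15},
    {0: (8.1, 0.23), 1: (0.10, 0.01)},
    {0: 0.5, 1: 0.5},
))
```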
  • the consistency score circuit 214 is configured to calculate the consistency score CS for the one or more services for the second time window responsive to the calculated consistency deviations.
  • the consistency score CS for each service provided to the subscriber over the second time window may be equivalent to the corresponding consistency deviation Qs output by the deviation circuit 212.
  • the consistency score CS for multiple ones of the services provided to the subscriber may be calculated as a function of the corresponding consistency deviations, e.g., a sum of the corresponding consistency deviations, CS = Σs Qs, or a gamma function of the corresponding consistency deviations.
  • each consistency deviation used to calculate the consistency score equally impacts the consistency score.
  • a low consistency associated with a service provided infrequently to the subscriber and a high consistency associated with a service provided frequently to the subscriber will equally impact the overall consistency score.
  • another exemplary embodiment may include a weight circuit 230 (Figure 3) that determines a service weight Ws for each service.
  • the consistency circuit 210 includes the weights when determining the consistency score.
  • the consistency score circuit may determine the consistency score responsive to the consistency deviations and the weights, i.e., CS = Σs Ws·Qs.
  • Weight circuit 230 may determine the weights based on any number of factors related to providing services to a subscriber. For example, the weight calculator may determine a weight for each service that is proportional to how much of the second time window is spent providing the corresponding service to the subscriber. In this embodiment, for example, the weight circuit 230 may evaluate the second quality values obtained for the second time window to determine what percentage of the second time window was used by each service, and use the determined percentages as the weights, as in the sketch below. In other exemplary embodiments, the weight circuit 230 may additionally or alternatively determine the weights from other factors, e.g., subscriber preference, cost of service (e.g., higher cost services have a higher weight), etc. While not required, the service weights may be configured to satisfy the constraint Σs Ws = 1.
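  • A minimal sketch of such time-share-based weighting follows; the per-service usage figures are illustrative assumptions.

```python
# Minimal sketch of a weight circuit: each service's weight is its share of
# the second time window, so the weights naturally sum to 1.
def service_weights(seconds_used_per_service):
    """seconds_used_per_service: dict mapping service -> seconds the service
    was provided to the subscriber during the second time window."""
    total = sum(seconds_used_per_service.values())
    if total == 0:
        return {s: 0.0 for s in seconds_used_per_service}
    return {s: t / total for s, t in seconds_used_per_service.items()}

print(service_weights({"video": 5400, "voip": 600, "web": 3000}))
# -> {'video': 0.6, 'voip': 0.0666..., 'web': 0.3333...}
```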
  • the solution uses a first time window (for the model parameters) and a second time window (for the consistency score).
  • in some exemplary embodiments, the first time window overlaps some or all of the second time window. While it may be useful in some embodiments for the first time window to have the same duration as the second time window, the first time window will generally be longer than the second time window to better establish historical “norms” for the quality values.
  • the first time window may span a week, while the second time window may span a day, which may or may not be part of the week of the first time window.
  • the time frames for the first and/or second windows may be predefined for the consistency evaluation apparatus 200.
  • the time frames for the first and/or second windows may be variable, and may be configured by the service provider and/or by the subscriber. For example, the subscriber may want to see a consistency report each day, but may want the consistency model to be based on multiple weeks of quality values, e.g., the previous two weeks, the previous year, etc. For this example, the subscriber may set the first time window to the previous two weeks and the second time window to the previous day. It will be appreciated that the subscriber and/or service provider may configure the time windows used to calculate the consistency score to cover any time period where quality values for the service(s) in question are available.
  • Figure 4 shows one exemplary embodiment of the solution presented herein in a wireless network, where the consistency evaluation apparatus 200 may be that shown in Figure 3, for example.
  • the consistency evaluation apparatus, which may also be referred to as a score management node, receives Network Service Level Consistency (NSLC) data that is defined on technical network measurements, e.g., KPIs, which are collected by the Customer Experience Management (CEM) node for one or more time windows on the whole network or on specified geographical regions.
  • the input data contains data from occasions when a given network service was delivered to the subscribers.
  • the calculation’s further steps need source data for quality scoring and preference scoring.
  • quality scoring can be based on, e.g., call session setup success ratio, call drop ratio, throttled session ratio, and web access time, while usage scoring can be based on, e.g., the summarized session time for a specific service, call durations, and message traffic. Node calculations over the entire network may be referred to herein as “population statistics.”
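  • The following sketch illustrates, under assumed per-session record fields, how such quality scoring and usage scoring inputs might be derived from session-level source data.

```python
# Rough sketch of preparing the two source-data inputs mentioned above:
# a per-service quality score (here, the fraction of sessions that set up
# successfully and did not drop) and a per-service usage total (summed
# session time). The record field names are assumptions for illustration.
def quality_and_usage(sessions):
    """sessions: list of dicts like
    {"service": "voice", "setup_ok": True, "dropped": False, "seconds": 120}"""
    counts, usage = {}, {}
    for rec in sessions:
        svc = rec["service"]
        ok, total = counts.get(svc, (0, 0))
        good = rec["setup_ok"] and not rec["dropped"]
        counts[svc] = (ok + (1 if good else 0), total + 1)
        usage[svc] = usage.get(svc, 0) + rec["seconds"]
    quality = {svc: ok / total for svc, (ok, total) in counts.items()}
    return quality, usage

sessions = [
    {"service": "voice", "setup_ok": True, "dropped": False, "seconds": 120},
    {"service": "voice", "setup_ok": True, "dropped": True, "seconds": 30},
    {"service": "web", "setup_ok": True, "dropped": False, "seconds": 300},
]
print(quality_and_usage(sessions))
# -> ({'voice': 0.5, 'web': 1.0}, {'voice': 150, 'web': 300})
```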
  • An instance of the process described herein can be used to support consistent quality assurance of a service or network by projecting a larger population’s network statistics onto the subscribers and the services delivered to them during a time window.
  • An instance of this model will be described in terms of functionality in a “score management node,” which comprises the deviation circuit 212, consistency score circuit 214, model parameter circuit 220, and weight circuit 230.
  • although the term score management node is used here, it could be substituted by “score management system” or “consistency evaluation apparatus” or a similar term throughout this disclosure.
  • the solution presented herein proposes considering the consistency of each service in proportion to its usage, rather than enforcing consistency equally across services. If a service is only used by a minority of subscribers (but would require investments to become consistent), then it is not necessary for this service to be consistent for the network-level consistency to be in an acceptable range. For this reason, the solution presented herein uses a weight circuit 230 to determine a service weight for each service. In some embodiments, the weight circuit 230 calculates this weighting per subscriber because consistency is something the subscriber perceives, and as such, it should be relative to the subscriber’s experience.
  • the weight for subscriber i for service s will be denoted by Ws,i, which may be computed as a function Gs of the subscriber’s usage of service s, where usage is a metric derived from the underlying KPI values.
  • Model parameter circuit 220 incorporates automatic parameter tuning into the solution presented herein. There may be several reasons for incorporating such automatic parameter tuning. For example, depending on the scoring model used to quantify service consistency, a number of model parameters may be present, possibly relying on the underlying distributions of the input data. Because these distributions may very well change in time, automatic tuning enables the final consistency score to be meaningful and reliable. For each service s, the model parameter circuit 220 calculates the corresponding vector of model parameters λs based on the matrix K1s comprising the values of the KPIs related to service s for a first time window, with the help of a function gs, as formally represented by λs = gs(K1s).
  • the deviation circuit 212 is configured to provide a consistency deviation that measures the amount of divergence/distortion in the service level of a subscriber from a consistent experience.
  • the consistency deviation of a service s for a subscriber i may be given by Qs,i = Fs(Ks,i, λs), where Ks,i represents the vector of KPI values obtained for that service for subscriber i for the second time window, λs represents the vector of auto-tuned parameters for service s, and Fs represents a scoring function.
  • There are several functions that may be used to achieve such goals (e.g., measuring the amount of divergence/distortion), e.g., standard procedures from statistics that are used to find outliers.
  • the consistency score circuit 214 calculates the consistency score based on the service weights Ws,i and the consistency deviations Qs,i. The calculation generally takes into account all weight-distortion pairs, which measure (or differentiate) any distortion in a subscriber’s quality or usage in the population, e.g., CSi = f({(Ws,i, Qs,i)}).
  • the consistency score circuit 214 may perform a simple linear combination of the service weights and consistency deviations according to CSi = Σs Ws,i·Qs,i.
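  • A minimal sketch of this linear combination follows; the weights and deviations are illustrative values.

```python
# Sketch of the consistency score circuit's linear combination: the score
# for subscriber i is the weighted sum of that subscriber's per-service
# consistency deviations, CSi = sum over s of W(s,i) * Q(s,i).
def consistency_score(weights, deviations):
    """weights, deviations: dicts keyed by service for one subscriber."""
    return sum(weights[s] * deviations[s] for s in weights)

print(consistency_score({"video": 0.6, "web": 0.4}, {"video": 1.8, "web": 0.2}))
# -> 1.16, i.e., 0.6*1.8 + 0.4*0.2
```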
  • Figure 5 shows one exemplary method of the above-described mathematical details.
  • the main output of the consistency evaluation apparatus is a consistency score, which may be referred to as the Customer Experience Consistency Score (CECS).
  • a service is said to be consistent if it provides uniform quality regardless of time, place, and occasion.
  • the per-subscriber consistency scores may be used to form a distribution. The narrower this distribution, the better the overall consistency. This can be measured, for example, with kurtosis or an inter-percentile range, e.g., between the 95th and 5th percentiles of the distribution.
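  • The sketch below computes both narrowness measures over a set of per-subscriber scores using only the standard library; the sample scores are illustrative.

```python
# Sketch of the two narrowness measures named above, computed over the
# per-subscriber consistency score distribution: a 95th-minus-5th
# inter-percentile range and (Pearson) kurtosis.
import statistics

def interpercentile_range(scores, low=5, high=95):
    cuts = statistics.quantiles(scores, n=100)  # cuts[k-1] ~ kth percentile
    return cuts[high - 1] - cuts[low - 1]

def kurtosis(scores):
    m = statistics.fmean(scores)
    var = statistics.pvariance(scores, mu=m)
    return sum((x - m) ** 4 for x in scores) / (len(scores) * var ** 2)

scores = [1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 1.05, 0.95, 1.3, 0.7]
print(interpercentile_range(scores), kurtosis(scores))  # narrower is better
```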
  • the consistency scores may also be used to support Service Level Agreements (SLAs). To this end, service weights and consistency deviations may be determined for a particular service provided to a particular subscriber during multiple time periods.
  • the consistency scores computed from these service weights and consistency deviations form a distribution, which can tell the service provider how consistent the corresponding service is for the given subscriber over time; the service provider may use this information to enforce SLA-related sanctions, e.g., by determining how this distribution differs from that of a corresponding general population statistic.
  • a Network Outlier score may be defined, e.g., based on predefined thresholds that highlight benchmark points in the adaptive calculation, to see how far a subscriber’s ID-score is from the average service level. These calculations may be based, e.g., on appending to the database a predefined subscriber with static KPI values that characterize bad KPI values of the services.
  • the outlier score may be defined as a distance relative to the predefined subscriber’s score, where the bigger the gap, the greater the probability of being an outlier.
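  • The following hedged sketch shows one way such outlier flagging could work; the distance measure and the threshold policy are assumptions, with the predefined bad-KPI subscriber’s score serving only as a benchmark point.

```python
# Hedged sketch of a Network Outlier score: measure each subscriber's gap
# from the average service level, and use the score of a predefined
# subscriber seeded with static "bad" KPI values as a benchmark for what a
# clearly degraded experience looks like. The halfway threshold is assumed.
import statistics

def network_outlier_scores(subscriber_scores):
    mean = statistics.fmean(subscriber_scores.values())
    # Bigger gap from the average -> greater probability of being an outlier.
    return {sub: abs(s - mean) for sub, s in subscriber_scores.items()}

def flag_outliers(subscriber_scores, bad_benchmark_score):
    mean = statistics.fmean(subscriber_scores.values())
    threshold = abs(bad_benchmark_score - mean) / 2  # assumed policy
    gaps = network_outlier_scores(subscriber_scores)
    return [sub for sub, gap in gaps.items() if gap >= threshold]

scores = {"A": 1.0, "B": 4.6, "C": 1.2, "D": 0.9}
print(flag_outliers(scores, bad_benchmark_score=5.0))  # -> ['B']
```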
  • Network Level Customer Experience Consistency Scores (NL-CECS) may also be evaluated as a Network Comparison Measurement (or Market Comparison Measurement). As discussed above, the inter-percentile range between the 95th and 5th percentiles of the distribution can be interpreted as a basic NL-CECS of the network; given two different networks (markets), the lower this score is, the more consistent the service is. This comparison methodology also yields a natural definition of a comparative score for provided services (e.g., VoLTE calls) to rank them between operators.
  • the solution presented herein may further be improved by taking into consideration not just objective network KPI/QoE measurements, but also other customer-related databases to enrich the input, e.g.:
  • A billing database helps to filter out noise when a setting related to the end user or the associated plan can be blamed for network-level inconsistency, e.g., throttling
  • A pricing database can be a useful input as well to filter out noise
  • the consistency scores of the solution presented herein not only provide a good measurement of the network consistency, but also provide invaluable outputs in other important network measurements (e.g., outlier detection in subscriber experience).
  • the solution presented herein may be generalized to any given dimension, e.g.:
  • Network Level Service Consistency supports Service Level Agreements (SLAs). This perspective gives a spatio-temporal measurement of the network by providing a benchmark of the network status for a given time period:
    o Creates Cell ID and Cell Service Level Consistency pairs
    o Outlier detection: unusually outstanding KPI/QoE aggregated measurement values can detect cell or eNodeB malfunctions in the core network system
    o A network cell level benchmark index can be created that highlights spatial consistency in a given network (one can even consider the density of cells, or normalize by the distance across the entire network)
  • Network Level Service Consistency supports SLAs. This perspective gives a time-device or even a device-software-version measurement of the network by providing a benchmark of the network status for a given time period:
    o Creates Device ID and Device Service Level Consistency pairs
    o Outlier detection: unusually outstanding KPI/QoE aggregated measurement values can detect device, terminal, or software malfunctions in the network
    o A device level benchmark index can be created that gives the operator a metric of how optimal the given time period’s network configuration is for the devices
  • Figures 6-10 present exemplary results provided by the solution presented herein.
  • Figure 6 presents the basic report, which could be generated from the Network Level Customer Experience Consistency Score calculation methodology. Subscribers can be mapped by these values to categories, a.k.a. CECS “bins” (e.g., excellent, good, average, fair, and poor quality of service experience values), based on their network KPIs from the previous time period.
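  • As an illustration, the sketch below maps CECS values to such bins; the cut points are assumptions, since in practice the thresholds may be derived from the underlying metric distribution (compare Figure 10).

```python
# Simple sketch of mapping per-subscriber CECS values to report categories
# ("bins"). Lower deviation-based scores are assumed to mean a more
# consistent experience; the cut points are illustrative only.
def cecs_bin(score, cuts=(0.5, 1.0, 2.0, 4.0)):
    labels = ("excellent", "good", "average", "fair", "poor")
    for cut, label in zip(cuts, labels):
        if score < cut:
            return label
    return labels[-1]

for s in (0.2, 0.8, 1.5, 3.0, 6.0):
    print(s, cecs_bin(s))
```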
  • Figure 7 presents a basic aggregation report that can be created for the categorization. It represents the ratio of poorly to excellently performing customers in the network.
  • Figure 8 presents a report for given IDs with aggregated KPI and subscriber preference ratios, from which the NL-CECS was calculated.
  • Figure 9 presents a possible drill-down report of a given subscriber in a time window for its previous sessions.
  • Figure 10 shows possible CECS-score distribution over a set of subscribers, where the dashed lines show thresholds derived based on underlying metrics to show different levels of service consistency.
  • the solution presented herein takes various customer experience information (e.g., service metrics) to analyze consistency of service quality over multiple subscribers and the various services they use.
  • output reports based on the scores per subscriber, service, relevant aggregated views, and outliers, including:
    o Aggregated, network level consistency measures;
    o Consistency score for individual subscribers;
    o Any outliers based on these scores;
    o Most impacting service, KPI, or other root cause associated with the most impacting input.
  • Network-level values can be used to compare or benchmark regions or complete networks (in case the internal calculation methods, scoring, etc. are configured the same way across similar nodes);
  • the apparatuses described above may perform the methods herein and any other processing by implementing any functional means, modules, units, or circuitry.
  • the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures.
  • the circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory.
  • the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
  • the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
  • Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device.
  • This computer program product may be stored on a computer readable recording medium.
  • the computer-readable medium may comprise a non-transitory computer readable medium.

Abstract

A consistency is evaluated with respect to a quality of a service provided to a subscriber. To that end, quality values are evaluated over a time window to determine how much those quality values change over the time window, and thus to determine how consistent a particular quality metric is for that service. In so doing, the consistency of a quality of a particular service provided to a subscriber may be determined.

Description

ADAPTIVE METHOD FOR MEASURING SERVICE LEVEL CONSISTENCY
TECHNICAL FIELD
The solution presented herein relates generally to communications evaluations, and more particularly to evaluating the consistency of a quality of the communications, e.g., wireless communications.
BACKGROUND
Subscribers pay service providers to provide a service. For example, in wireless communications, a subscriber may pay a service provider for a video subscription service that provides requested videos to the subscriber. Because revenues for a particular service are generally tied to customer satisfaction regarding that service, it is in the service provider’s best interest to make sure its subscribers are satisfied with the provided service. High customer satisfaction, for example, generally enables the service provider to maintain and/or expand its customer base, which allows for further successful operation and increasing revenue. Service providers therefore have an interest in accurately assessing customer satisfaction.
One way to evaluate customer satisfaction is through the use of Key Performance Indicators (KPIs). A KPI may be any indicator that provides insight into the performance of a service, where many KPIs are service-specific. Exemplary KPIs include, but are not limited to, throughput, delay, video stall ratio, setup time, call drops, voice quality, video quality, etc. While KPIs provide an important metric for the evaluation of customer satisfaction, KPIs alone cannot provide a complete picture of the quality of the provided service or a subscriber’s satisfaction with that quality. For example, a KPI indicating acceptable throughput over a time window does not indicate whether that throughput is acceptable because over the time window the good and bad throughput instances average out to an acceptable throughput, or because the throughput is consistently acceptable over that time window. While the latter situation is likely to produce good customer satisfaction, the former is likely to produce low customer satisfaction. As such, there remains a need for improved ways to more accurately assess customer satisfaction.
SUMMARY
The solution presented herein determines a consistency of a quality of a service provided to a subscriber. To that end, the solution presented herein evaluates quality metrics over a time window to determine how much those quality values change over the time window, and thus to determine how consistent a particular quality metric is for that service. In so doing, the solution presented herein assesses the consistency of a quality of a particular service provided to a subscriber, enabling the service provider to more accurately assess customer satisfaction and to better identify areas that are negatively or positively impacting customer satisfaction. One exemplary embodiment comprises a method of evaluating a consistency of a quality of service of one or more services provided to a subscriber. The method comprises deriving one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window. The method further comprises calculating a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively. The method further comprises outputting the consistency score to the subscriber and/or a service provider.
In some exemplary embodiments, the calculation of the consistency score comprises calculating, for each of the one or more services, a consistency deviation using the corresponding one or more second quality values and the corresponding one or more model parameters, and calculating the consistency score for the one or more services for the second time window responsive to the calculated consistency deviations.
In some embodiments, the method further comprises determining a service weight for each of the one or more services provided to the subscriber during the second time window responsive to the corresponding one or more second quality values, where determining the consistency score comprises determining the consistency score for the one or more services for the second time window responsive to the service weights and the consistency deviations.
One exemplary embodiment comprises a consistency evaluation apparatus comprising one or more processing circuits configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window, and calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively. The consistency evaluation apparatus is further configured to output the consistency score to the subscriber and/or a service provider.
One exemplary embodiment comprises a computer program product for controlling a consistency evaluation apparatus. The computer program product comprises software instructions which, when run on at least one processing circuit in the consistency evaluation apparatus, cause the consistency evaluation apparatus to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window, and calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively. The software instructions, when run on the at least one processing circuit, further cause the consistency evaluation apparatus to output the consistency score to the subscriber and/or a service provider.
A computer readable medium may comprise the computer program product. The computer readable medium may comprise a non-transitory computer readable medium.
One exemplary embodiment comprises a consistency evaluation apparatus for evaluating a consistency of a quality of service of one or more services provided to a subscriber. The consistency evaluation apparatus comprises a model parameter unit and a consistency unit. The model parameter unit is configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window. The consistency unit is configured to calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively. The consistency unit is further configured to output the consistency score to the subscriber and/or a service provider.
In some exemplary embodiments, the consistency unit calculates the consistency score by calculating, for each of the one or more services, a consistency deviation using the corresponding one or more second quality values and the corresponding one or more model parameters, and calculating the consistency score for the one or more services for the second time window responsive to the calculated consistency deviations.
In some embodiments, the consistency evaluation apparatus further comprises a weight unit configured to determine a service weight for each of the one or more services provided to the subscriber during the second time window responsive to the corresponding one or more second quality values, where the consistency unit determines the consistency score by determining the consistency score for the one or more services for the second time window responsive to the service weights and the consistency deviations.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a consistency evaluation method according to exemplary embodiments of the solution presented herein.
Figure 2 shows a block diagram of a consistency evaluation apparatus according to exemplary embodiments of the solution presented herein.
Figure 3 shows a block diagram of another consistency evaluation apparatus according to exemplary embodiments of the solution presented herein.
Figure 4 shows an exemplary wireless network using the solution presented herein.
Figure 5 shows another consistency evaluation method according to exemplary embodiments of the solution presented herein.
Figure 6 shows an exemplary consistency report generated according to the solution presented herein.
Figure 7 shows another exemplary consistency report generated according to the solution presented herein.
Figure 8 shows another exemplary consistency report generated according to the solution presented herein.
Figure 9 shows another exemplary consistency report generated according to the solution presented herein.
Figure 10 shows an exemplary consistency score distribution according to the solution presented herein.
DETAILED DESCRIPTION
The solution presented herein assesses consistency of a quality of one or more services provided to a subscriber. The consistency of a particular service indicates how much a particular quality metric for that service changes, e.g., over time. For example, a service is said to be consistent if a quality metric for that service is uniform regardless of time, place, or occasion. It is useful for service providers to know that a Key Performance Indicator (KPI) indicates above average performance for a particular service over a time window. It is more useful for the service provider to know whether that above average performance happens consistently over the time window, or whether that above average performance is the result of multiple high performance instances and multiple low performance instances during the time window that average out to an “above average” performance. With this additional insight into the consistency of a service, the service provider can more easily identify problems, the cause of such problems, and solutions for such problems.
One way for service providers to determine consistency is for the service providers to gather information from the subscribers about how these subscribers perceive service consistency, e.g., by conducting appropriate surveys. Surveys, however, require a great deal of human interaction and may very well be ignored by the subscriber, e.g., due to subscriber disinterest. Even if a survey collects enough data, adequate evaluation of the collected data typically requires thorough examination by an expert statistician to ensure the collected data is sufficiently representative of the subscriber base. As such, not only do surveys require significant human interaction, but implementation and evaluation of such surveys also require considerable resources, e.g., time resources, money resources, manpower resources, etc. Further, survey results generally are not available right away, and thus, may not provide sufficiently timely data. The solution presented herein provides a fully automated technique for determining the consistency of a service that is also configurable to provide representative results at any given time, and for any desired time window. The following first describes a solution for determining the consistency of one or more services provided to a single subscriber. It will be appreciated, however, that the solution presented herein applies equally well to determining the consistency of one or more services provided to multiple subscribers, and that the determined consistency score may be subscriber-specific.
The solution presented herein is described generally in terms of the consistency of a quality of service defined by one or more quality values, where the consistency of the various quality values is determined for an evaluation time window relative to a fully automated model.
In so doing, the solution presented herein provides a fully automated consistency score for the service(s) provided to a subscriber. Exemplary quality values include, but are not limited to, Key Performance Indicators (KPIs), Quality of Experience (QoE) metrics, Key Quality Indicators (KQIs), etc. As discussed further below, it will be appreciated that each provided service may be evaluated using one quality value or multiple quality values. While not required, many quality values may be specific to the provided service. Table 1 shows exemplary QoE metrics that may be associated with various services.
TABLE 1
Figure 1 shows an exemplary method 100 of evaluating a consistency of a quality of service of one or more services provided to a subscriber. The method 100 comprises deriving one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window (block 110). The method 100 further comprises calculating a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window (block 120), where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively. The method further comprises outputting the consistency score to the service provider and/or subscriber (block 130).
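For concreteness, the following sketch walks through method 100 for a single service and a single subscriber; the statistic and scoring-function choices in it are illustrative assumptions rather than requirements of the method.

```python
# End-to-end sketch of method 100 (blocks 110-130) for one service and one
# subscriber. Mean/standard deviation as model parameters and a mean squared
# standardized deviation as the score are assumed choices for illustration.
import statistics

def method_100(first_window, second_window):
    # Block 110: derive model parameters from first-window quality values.
    lam1 = statistics.mean(first_window)
    lam2 = statistics.stdev(first_window)
    # Block 120: score second-window values against those parameters.
    cs = statistics.fmean(((k - lam1) / lam2) ** 2 for k in second_window)
    # Block 130: output the consistency score to the subscriber/provider.
    print(f"consistency score: {cs:.3f}")
    return cs

method_100(first_window=[8.2, 7.9, 8.4, 8.1, 7.8],
           second_window=[8.0, 8.2, 5.1])
```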
The method 100 of Figure 1 may be implemented by any consistency evaluation apparatus 200, e.g., as shown in Figure 2, which may be part of a service provider or management node. In a wireless communication system, for example, the consistency evaluation apparatus 200 may be implemented in any network node, e.g., a base station, core network node, network management node, etc. It will be appreciated, however, that the consistency evaluation apparatus 200 is not limited to wireless networks, and may be implemented in any system that provides a service to a subscriber and that uses quality values to evaluate the quality of the provided service.
As shown in Figure 2, the consistency evaluation apparatus 200 comprises a consistency circuit 210 and a model parameter circuit 220. The model parameter circuit 220 is configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window. The consistency circuit 210 is configured to calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, where each of the first and second quality values indicate a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively. The consistency evaluation apparatus 200 outputs the calculated consistency score to an output circuit 240 of the subscriber and/or service provider.
It will be appreciated that in some embodiments, the output circuit 240 is part of the consistency evaluation apparatus 200, and in other embodiments it is separate from the consistency evaluation apparatus 200.
In some exemplary embodiments, the model parameter circuit 220 derives the model parameter(s) λs for each service s by applying one or more statistical evaluations to the first quality values obtained for the first time window for the corresponding service to determine one or more model parameters for each service. For example, the model parameter circuit 220 may determine the model parameter(s) λs for a particular service s by determining one or more of a mode of the first quality values for that service, a standard deviation of the first quality values for that service, a mean of the first quality values for that service, a median of the first quality values for that service, etc. It will be appreciated that the solution presented herein is not limited to these statistical evaluations, and that the model parameter circuit 220 may use other statistical evaluations to determine other statistical properties of the quality value(s).
In some exemplary embodiments, the consistency circuit 210 comprises a deviation circuit 212 and a consistency score circuit 214. The deviation circuit 212 is configured to calculate, for each of one or more services being provided to the subscriber, a consistency deviation Qs using the corresponding one or more model parameters λs derived for the corresponding service and the second quality value(s) K2s obtained for the corresponding service. In general, the deviation circuit 212 determines how much each second quality value deviates from a “norm” for that quality value, where that norm is defined by the corresponding one or more model parameters λs. In one exemplary embodiment, the deviation circuit 212 may determine the consistency deviation for each service as a scoring function of the second quality value(s) and model parameter(s), Qs = Fs(K2s, λs). It will be appreciated that any number of scoring functions may be used, e.g., any statistical functions used to find data outliers. One exemplary embodiment uses a weighted combination of per-value deviations, e.g., Qs = Σj∈I(s) Bj·((Kj − λ1j)/λ2j)², where I(s) represents the set of indices j of the one or more second quality values related to service s, Kj represents the jth second quality value for service s, λ1j represents the first model parameter for the jth second quality value related to service s, λ2j represents the second model parameter for the jth second quality value related to service s, and Bj represents a predetermined weight for the jth second quality value related to service s. While not required, in some exemplary embodiments, the predetermined weights may be configured to satisfy the constraint Σj∈I(s) Bj = 1. The above equation has the benefits of being easy to compute, being highly scalable, and having only two model parameters per quality value, which makes it easy to update as desired. It will be appreciated that the above equation is exemplary, and that the solution presented herein is not limited to determining the consistency deviations using this equation.
The consistency score circuit 214 is configured to calculate the consistency score CS for the one or more services for the second time window responsive to the calculated consistency deviations. In some exemplary embodiments, the consistency score CS for each service provided to the subscriber over the second time window may be equivalent to the corresponding consistency deviation Qs output by the deviation circuit 212. In other exemplary embodiments, the consistency score CS for multiple ones of the services provided to the subscriber may be calculated as a function of the corresponding consistency deviations Qs, e.g., a sum of the corresponding consistency deviations, CS = Σs Qs, or a gamma function of the corresponding consistency deviations.
When the consistency score circuit 214 computes the consistency score as a function of the consistency deviations, e.g., sums the consistency deviation determined for each service, each consistency deviation used to calculate the consistency score equally impacts the consistency score. As such, a low consistency associated with a service provided infrequently to the subscriber and a high consistency associated with a service provided frequently to the subscriber will equally impact the overall consistency score. To more accurately reflect the consistency impact of each service on the subscriber, another exemplary embodiment may include a weight circuit 230 (Figure 3) that determines a service weight Ws for each service.
For this embodiment, the consistency circuit 210 includes the weights when determining the consistency score. For example, the consistency score circuit may determine the consistency score responsive to the consistency deviations and the weights, i.e.,
Figure imgf000009_0001
Figure imgf000009_0002
Weight circuit 230 may determine the weights based on any number of factors related to providing services to a subscriber. For example, the weight circuit 230 may determine a weight for each service that is proportional to how much of the second time window is spent providing the corresponding service to the subscriber. In this embodiment, for example, the weight circuit 230 may evaluate the second quality values obtained for the second time window to determine what percentage of the second time window was used by each service, and use the determined percentages as the weights. In other exemplary embodiments, the weight circuit 230 may additionally or alternatively determine the weights from other factors, e.g., subscriber preference, cost of service (e.g., higher cost services have a higher weight), etc. While not required, the service weights may be configured to satisfy the constraint $\sum_s W_s = 1$.
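The usage-proportional weighting described above can be sketched as follows. This is a minimal illustration assuming time-share-based weights; all names and numbers are hypothetical.

```python
# Illustrative sketch: weights W_s proportional to each service's share of
# the second time window, then CS = sum_s W_s * Q_s.

def usage_weights(seconds_per_service):
    """Return W_s = t_s / sum(t), so that sum_s W_s = 1."""
    total = sum(seconds_per_service.values())
    return {s: t / total for s, t in seconds_per_service.items()}

def weighted_consistency_score(deviations, weights):
    """Linear combination CS = sum_s W_s * Q_s."""
    return sum(weights[s] * q for s, q in deviations.items())

weights = usage_weights({"video": 5400, "voice": 600, "web": 3000})
deviations = {"video": 0.4, "voice": 2.1, "web": 0.6}  # Q_s per service
print(weighted_consistency_score(deviations, weights))  # approx. 0.58
```

Note how the rarely used voice service contributes little to the overall score despite its large deviation, matching the rationale above.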
The above-described solution is described in terms of a first time window (for the model parameters) and a second time window (for the consistency score). While not required, in some exemplary embodiments, the first time window overlaps some or all of the second time window. While it may be useful in some embodiments for the first time window to have the same duration as the second time window, the first time window will generally be longer than the second time window to better establish historical “norms” for the quality values. For example, the first time window may span a week, while the second time window may span a day, which may or may not be part of the week of the first time window. In some embodiments, the time frames for the first and/or second windows may be predefined for the consistency evaluation apparatus 200.
In other embodiments, the time frames for the first and/or second windows may be variable, and may be configured by the service provider and/or by the subscriber. For example, the subscriber may want to see a consistency report each day, but may want the consistency model to be based on multiple weeks of quality values, e.g., the previous two weeks, the previous year, etc. For this example, the subscriber may set the first time window to the previous two weeks and the second time window to the previous day. It will be appreciated that the subscriber and/or service provider may configure the time windows used to calculate the consistency score to cover any time period where quality values for the service(s) in question are available.
The above provides some general descriptions regarding the solution presented herein for multiple services provided to a single subscriber. The following provides more detailed examples, and covers scenarios where multiple services are provided to multiple subscribers. Those skilled in the art will appreciate that the examples provided below are exemplary, and not limiting. Figure 4 shows one exemplary embodiment of the solution presented herein in a wireless network, where the consistency evaluation apparatus 200 may be that shown in Figure 3, for example. The consistency evaluation apparatus, which may also be referred to as a score management node, receives Network Service Level Consistency (NSLC) data that is defined on technical network measurements, e.g., KPIs, which are collected by the Customer Experience Management (CEM) node for one or more time windows over the whole network or over specified geographical regions. The input data contains data from occasions when a given network service was delivered to the subscribers. Further steps of the calculation require source data for quality scoring and preference scoring. For example, quality scoring can be based on call session setup success ratio, call drop ratio, throttled session ratio, and web access time, while usage scoring can be based on, e.g., the summarized time of sessions for a specific service, call durations, and message traffic. Node calculations over the entire network may be referred to herein as “population statistics.”
An instance of the process described herein can be used to support consistent quality assurance of a service or network by projecting a greater population’s network statistics onto the subscribers for the services delivered during a time window. An instance of this model will be described in terms of functionality in a “score management node,” which comprises the deviation circuit 212, consistency score circuit 214, model parameter circuit 220, and weight circuit 230. Although the term score management node is used here, it could be substituted by “score management system” or “consistency evaluation apparatus” or a similar term throughout this disclosure.
In order for a network-level service to be consistent, it is arguably important for each service to be consistent as well. However, from a service provider perspective, this may require unnecessary extra effort to satisfy. Rather than strictly enforcing the consistency of each service, the solution presented herein considers each service’s consistency in proportion to its usage. If a service is only used by a minority of subscribers (but would require investments to become consistent), then it is not necessary for this service to be consistent for the network-level consistency to be in an acceptable range. For this reason, the solution presented herein uses a weight circuit 230 to determine a service weight for each service. In some embodiments, the weight circuit 230 calculates this weighting per subscriber because consistency is something the subscriber perceives, and as such, it should be relative to the subscriber’s experience. The weight for subscriber $i$ for service $s$ will be denoted by $W_{i,s}$. Although not required, in some embodiments $\sum_s W_{i,s} = 1$. In general, $W_{i,s} = H(G_1, \ldots, G_S)$, where $G_s$ is a function of the $s$th KPI usage and $H(\cdot)$ is a metric derived from these values.
Model parameter circuit 220 incorporates automatic parameter tuning into the solution presented herein. There may be several reasons for incorporating such automatic parameter tuning. For example, depending on the scoring model used to quantify service consistency, a number of model parameters may be present, possibly relying on the underlying distributions of the input data. Because these distributions may very well change over time, automatic tuning enables the final consistency score to remain meaningful and reliable. For each service $s$, the model parameter circuit 220 calculates the corresponding vector of model parameters $\lambda_s$ based on the matrix $K_s$ comprising the values of the KPIs related to service $s$ for a first time window, with the help of a function $T_s$, as formally represented by:

$$\lambda_s = T_s(K_s)$$
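A minimal sketch of this auto-tuning step follows, assuming the two parameters per KPI are the first-window mean and standard deviation, as described below; all names are illustrative and not from the original disclosure.

```python
import statistics

# Illustrative sketch of lambda_s = T_s(K_s): derive per-KPI means and
# standard deviations from the first-window KPI matrix for service s.

def tune_model_parameters(kpi_matrix):
    """kpi_matrix: rows are observations, columns are KPIs.
    Returns (lambda_1, lambda_2): per-KPI means and standard deviations."""
    columns = list(zip(*kpi_matrix))  # one tuple of values per KPI j
    lambda_1 = [statistics.mean(col) for col in columns]
    lambda_2 = [statistics.stdev(col) for col in columns]
    return lambda_1, lambda_2

# First-window observations of two KPIs for one service
first_window = [[49.0, 29.5], [52.0, 31.0], [50.0, 30.0], [47.0, 28.5]]
lambda_1, lambda_2 = tune_model_parameters(first_window)
# Re-running this periodically lets the parameters track drifting
# input distributions, keeping the consistency score meaningful.
```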
The deviation circuit 212 is configured to provide a consistency deviation that measures the amount of divergence/distortion in the service level of a subscriber from a consistent experience. The consistency deviation of a service $s$ for a subscriber $i$ may be given by $Q_{i,s} = F_s(K_{i,s}, \lambda_s)$, where $K_{i,s}$ represents the vector of KPI values obtained for that service for subscriber $i$ for the second time window, $\lambda_s$ represents the vector of auto-tuned parameters for service $s$, and $F_s$ represents a scoring function. There are several functions that may be used to achieve such goals (e.g., measuring the amount of divergence/distortion), e.g., standard procedures from statistics that are used to find outliers. The following function satisfies these requirements, is easy to compute, is highly scalable, and has only two parameters per KPI, which are also easy to update via the auto-tuning process. Let $Q_{i,s}$ denote the consistency deviation for subscriber $i$ and service $s$, let $K_{i,j}$ denote the value of the $j$th KPI related to service $s$ for subscriber $i$, and let $I(s)$ denote the set containing the indices of the KPIs related to service $s$. By using proper weights $B_j$ for the KPIs, the consistency deviation may be computed according to:

$$Q_{i,s} = \sum_{j \in I(s)} B_j \frac{\lvert K_{i,j} - \lambda_{1j} \rvert}{\lambda_{2j}}$$

where $\lambda_{1j}$ is the expected value and $\lambda_{2j}$ is the standard deviation of the $j$th KPI related to service $s$. It will be appreciated that the solution presented herein is not limited to this simplistic function. Although not required, it is preferable for the weights $B_j$ to satisfy the constraint $\sum_{j \in I(s)} B_j = 1$.
The consistency score circuit 214 calculates the consistency score based on the service weights $W_{i,s}$ and the consistency deviations $Q_{i,s}$. The calculation generally takes into account all weight-distortion pairs $(W_{i,s}, Q_{i,s})$, which measure (or differentiate) any distortion in a subscriber’s quality or usage within the population. For example, the consistency score circuit 214 may perform a simple linear combination of the service weights and consistency deviations according to $CS_i = \sum_s W_{i,s} Q_{i,s}$. Figure 5 shows one exemplary method of the above-described mathematical details.
The main output of the consistency evaluation apparatus is a consistency score, which may be referred to as the Customer Experience Consistency Score (CECS). As previously noted, a service is said to be consistent if it provides uniform quality regardless of time, place, and occasion. In some exemplary embodiments, the per-subscriber consistency scores may be used to determine/form a distribution. The narrower this distribution, the better the overall consistency. This can be measured for example with kurtosis, or an inter-percentile range, e.g., of the 95th and 5th percentiles of the distribution. The consistency scores may also be used to support Service Level Agreements (SLAs). To this end, service weights and consistency deviations may be determined for a particular service provided to a particular subscriber during multiple time periods. The consistency scores computed from these service weights and consistency deviations form a distribution, which can tell the service provider how consistent the corresponding service is for the given subscriber over time, which information the service provider may use to enforce SLA-related sanctions, e.g., by determining how this distribution differs from that of a corresponding general population statistic.
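As an illustration of measuring the narrowness of the per-subscriber score distribution, the following sketch computes the inter-percentile range of the 95th and 5th percentiles mentioned above; the helper name and sample scores are hypothetical.

```python
import statistics

# Illustrative sketch: a narrower per-subscriber score distribution
# indicates better overall consistency.

def inter_percentile_range(scores, low=5, high=95):
    """Return the high-low inter-percentile range of the scores."""
    # statistics.quantiles(..., n=100) yields the 1st..99th percentile
    # cut points, so index p-1 corresponds to the p-th percentile.
    cuts = statistics.quantiles(scores, n=100)
    return cuts[high - 1] - cuts[low - 1]

per_subscriber_scores = [0.40, 0.50, 0.45, 0.60, 0.42, 0.55, 1.90, 0.48]
print(inter_percentile_range(per_subscriber_scores))
# A single badly served subscriber (1.90) widens the range noticeably.
```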
As an output of the CS calculation, a Network Outlier score may be defined, e.g., based on predefined thresholds that highlight benchmark points in the adaptive calculation, to see how far a subscriber’s ID-score was from the average service level. These calculations may be based, e.g., on appending into the database a predefined subscriber with static KPI values that characterize bad KPI values for the services. The outlier score may be defined as the distance from the predefined subscriber’s score, where the bigger the gap, the greater the probability of being an outlier.
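The following sketch follows the literal definition above: a predefined subscriber is scored through the same pipeline, and each real subscriber is rated by the gap between their score and that reference score. Whether the reference characterizes bad or average service is a modeling choice; all names and values are hypothetical.

```python
# Illustrative sketch of the Network Outlier score: distance from the
# predefined subscriber's score, where (per the description above) the
# bigger the gap, the greater the probability of being an outlier.

def outlier_scores(subscriber_scores, reference_score):
    return {
        sub: abs(score - reference_score)
        for sub, score in subscriber_scores.items()
    }

scores = {"sub-1": 0.40, "sub-2": 1.80, "sub-3": 0.50}
print(outlier_scores(scores, reference_score=0.45))
# sub-2's large gap from the reference flags it as a likely outlier.
```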
Network Level Customer Experience Consistency Scores may also be evaluated as a Network Comparison Measurement (or Market Comparison Measurement). As discussed above, the inter-percentile range of the 95th and 5th percentiles of the distribution can be interpreted as a basic NL-CECS of the network; given two different networks (markets), the lower this score, the more consistent the service. This comparison methodology also yields a natural definition of a comparative score for provided services (e.g., VoLTE calls) to rank them between operators.
The solution presented herein may further be improved by taking into consideration not just objective network KPI/QoE measurements, but also other customer-related databases to enrich the input, e.g.:
• Customer care database (measures the observed problems of the subscriber);
• Billing database (helps to filter out noise when settings related to the end user or the associated plan can be blamed for network-level inconsistency, e.g., throttling);
• Pricing database (can be a useful input as well to filter out noise).
These additional features may significantly improve the customer care quality as well, due to their stronger relation to subjective features that do not (strictly) rely on objective measurements.
The consistency scores of the solution presented herein not only provide a good measurement of the network consistency, but also provide invaluable outputs in other important network measurements (e.g., outlier detection in subscriber experience). The solution presented herein may be generalized to any given dimension, e.g.:
• Site or eNodeB ID
  o Network Level Service Consistency: Supports Service Level Agreements (SLAs). This perspective gives a spatio-temporal measurement of the network by providing a benchmark of the network status for a given time period.
  o Creates Cell ID and Cell Service Level Consistency pairs.
  o Outlier detection: unusually outstanding aggregated KPI/QoE measurement values can reveal cell or eNodeB malfunctions in the core network system.
  o A network cell level benchmark index can be created that highlights spatial consistency in a given network. (One can even consider the density of cells, or the index can be normalized by distance across the entire network.)
• Device or terminal ID
  o Network Level Service Consistency: Supports SLAs. This perspective gives a time-device or even a device-software-version measurement of the network by providing a benchmark of the network status for a given time period.
  o Creates Device ID and Device Service Level Consistency pairs.
  o Outlier detection: unusually outstanding aggregated KPI/QoE measurement values can reveal device, terminal, or software malfunctions in the network.
  o A device level benchmark index can be created that provides the operator a metric of how well the given time period’s network configuration suits the devices.
Figures 6-10 present exemplary results provided by the solution presented herein. Figure 6 presents the basic report, which could be generated from the Network Level Customer Experience Consistency Score calculation methodology. Subscribers can be mapped by these values to categories, aka CECS “bins” (e.g., excellent, good, average, fair, and poor quality of service experience values), based on their network KPIs for the previous time period. Figure 7 presents a basic aggregation report that can be created for the categorization. It represents the ratio of poorly to excellently performing customers in the network. Figure 8 presents a report for given IDs with aggregated KPI and subscriber preference ratios, from which the NL-CECS was calculated. Figure 9 presents a possible drill-down report of a given subscriber in a time window for its previous sessions. It will be appreciated that such a report can also be created for a shorter period’s average usage and quality indicator KPIs within the time window. Figure 10 shows a possible CECS score distribution over a set of subscribers, where the dashed lines show thresholds derived based on underlying metrics to show different levels of service consistency.
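For illustration, the following sketch maps per-subscriber scores to the five CECS bins mentioned for Figure 6 via thresholds like the dashed lines in Figure 10; the threshold values themselves are made up for the example.

```python
import bisect

# Illustrative sketch: bin subscribers into CECS categories by score.
# Lower scores mean smaller deviation from the historical norm, i.e.,
# a more consistent (better) experience. Thresholds are hypothetical.

THRESHOLDS = [0.5, 1.0, 1.5, 2.0]
BINS = ["excellent", "good", "average", "fair", "poor"]

def cecs_bin(score):
    return BINS[bisect.bisect_right(THRESHOLDS, score)]

for sub, score in {"sub-1": 0.3, "sub-2": 1.2, "sub-3": 2.4}.items():
    print(sub, cecs_bin(score))  # excellent, average, poor
```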
The solution presented herein uses various customer experience information (e.g., service metrics) to analyze the consistency of service quality over multiple subscribers and the various services they use. The solution presented herein provides the following main functionalities:
- Considers and infers service preferences of the subscribers based on their service usage and/or explicitly configured service preferences.
- Considers multiple inputs for the preferences, like customer care and Customer Relationship Management (CRM) or billing data (e.g., Average Revenue Per User (ARPU)).
- Performs parameter adaptation and auto-tuning based on the variety of inputs provided.
- Applies a configurable scoring model.
- Determines the most impacting service, KPI, or other root cause associated with the most impacting input.
- Provides output reports based on the scores per subscriber, service, relevant aggregated views, and outliers, including:
  o Aggregated, network level consistency measures;
  o Consistency scores for individual subscribers;
  o Any outliers based on these scores;
  o The most impacting service, KPI, or other root cause associated with the most impacting input.
Further, the solution presented herein provides the following advantages:
- An overall consistency metric is not available in other solutions;
- Flexible, enables putting emphasis on technical metrics or user experience metrics;
- Includes calculation of service preferences;
- Includes methods for adapting and auto-tuning parameters;
- Network-level values can be used to compare or benchmark regions or complete networks (in case the internal calculation methods, scoring, etc. are configured the same way across similar nodes);
- Can be directly included in existing CEM solutions to show feedback for troubleshooting;
- Service providers can focus on proper investments by focusing on improving those services that have more effect on global service consistency;
- Highlights subscribers with outstandingly good or bad service to take further actions;
- Solution is flexible and can be personalized by updating internal calculations and models to better reflect a given service provider’s preferences;
- Supports root cause analysis;
- Highlights the worst service experience affecting the overall service consistency per subscriber; and
- Makes it possible to consider service consistency from different points of view: geographical, device-related, etc.
The apparatuses described above may perform the methods herein and any other processing by implementing any functional means, modules, units, or circuitry. In one embodiment, for example, the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures. The circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory. For instance, the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In embodiments that employ memory, the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
Further, various elements disclosed herein are described as some kind of circuit, e.g., a weight circuit, a deviation circuit, a consistency circuit, a model parameter circuit, a consistency score circuit, etc. Each of these circuits may be embodied in hardware and/or in software (including firmware, resident software, microcode, etc.) executed on a controller or processor, including an application specific integrated circuit (ASIC).
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium. Further, the computer-readable medium may comprise a non-transitory computer readable medium.
The present invention may, of course, be carried out in other ways than those specifically set forth herein without departing from essential characteristics of the invention. The present embodiments are to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.

CLAIMS

What is claimed is:
1. A method (100) of evaluating a consistency of a quality of service of one or more services provided to a subscriber, the method (100) comprising: deriving (110), in a model parameter unit (220), one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window; calculating (120) a consistency score, in a consistency unit (210), for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window, wherein each of the first and second quality values indicates a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively; and outputting (130) the consistency score to an output interface (240) of the subscriber and/or a service provider.
2. The method (100) of claim 1 wherein calculating the consistency score (120) comprises: calculating, for each of the one or more services, a consistency deviation using the corresponding one or more second quality values and the corresponding one or more model parameters; and calculating the consistency score for the one or more services for the second time window responsive to the calculated consistency deviations.
3. The method (100) of any one of claims 1-2: further comprising determining a service weight for each of the one or more services provided to the subscriber during the second time window responsive to the corresponding one or more second quality values; wherein the determining (120) the consistency score comprises determining the consistency score for the one or more services for the second time window responsive to the service weights and the consistency deviations.
4. The method (100) of claim 3 wherein the determining the consistency score (120) comprises: applying the determined service weight to the corresponding consistency deviation to determine a weighted deviation for each service; and combining the weighted deviations to determine the consistency score.
5. The method (100) of any one of claims 3-4 wherein a sum of the service weights is one.
6. The method (100) of any one of claims 1-5 wherein the first time window is longer than the second time window.
7. The method (100) of any one of claims 1-6 further comprising configuring the first time window and/or the second time window responsive to service provider input.
8. The method (100) of any one of claims 1-6 further comprising configuring the first time window and/or the second time window responsive to subscriber input.
9. The method (100) of any one of claims 1-8 wherein the deriving the one or more model parameters (110) comprises, for each of the one or more services, applying one or more statistical evaluations to the corresponding one or more first quality values to derive the one or more model parameters.
10. The method (100) of claim 9 wherein the one or more statistical evaluations comprise: a mode evaluation; and/or a standard deviation evaluation; and/or a mean evaluation; and/or a median evaluation.
11. The method (100) of claim 9 wherein: the deriving the one or more model parameters (110) comprises, for each service: deriving a first model parameter from a mean of the corresponding one or more first quality values; and deriving a second model parameter from a standard deviation of the corresponding one or more first quality values; and the calculating the consistency deviation comprises, for each service, calculating the consistency deviation responsive to the corresponding one or more second quality values, the corresponding first model parameter, and the corresponding second model parameter.
12. The method (100) of claim 11 wherein calculating the consistency deviation comprises calculating the consistency deviation $Q_s$ for each of the one or more services according to:

$$Q_s = \sum_{j \in I(s)} B_j \frac{\lvert K_j - \lambda_{1j} \rvert}{\lambda_{2j}}$$

where $I(s)$ represents the set of indices $j$ of the one or more second quality values related to service $s$, $K_j$ represents the $j$th second quality value related to service $s$, $\lambda_{1j}$ represents the first model parameter for the $j$th second quality value related to service $s$, $\lambda_{2j}$ represents the second model parameter for the $j$th second quality value related to service $s$, and $B_j$ represents a predetermined weight for the $j$th second quality value related to service $s$.
13. The method (100) of any one of claims 1-12 wherein the second time window is contained at least partially within the first time window.
14. The method (100) of any one of claims 1-13 wherein the one or more services comprise one or more wireless services provided to a subscriber in a wireless network.
15. A consistency evaluation apparatus (200) configured to evaluate a consistency of a quality of service of one or more services provided to a subscriber, the consistency evaluation apparatus comprising one or more processing circuits (210-240) configured to execute any of the steps of any of claims 1-14.
16. A computer program product for controlling a consistency evaluation apparatus (200), the computer program product comprising software instructions which, when run on at least one processing circuit (210-240) in the consistency evaluation apparatus (200), causes the consistency evaluation apparatus (200) to execute the method (100) according to any one of claims 1-14.
17. A computer-readable medium comprising the computer program product of claim 16.
18. The computer-readable medium of claim 17 wherein the computer-readable medium comprises a non-transitory computer readable medium.
19. A consistency evaluation apparatus (200) for evaluating a consistency of a quality of service of one or more services provided to a subscriber, the consistency evaluation apparatus (200) comprising: a model parameter unit (220) configured to derive one or more model parameters for a consistency model for each of the one or more services from one or more first quality values obtained for a first time window; and a consistency unit (210) configured to calculate a consistency score for the one or more services responsive to the one or more model parameters and one or more second quality values obtained for a second time window; wherein each of the first and second quality values indicates a quality of the corresponding service provided to the subscriber during the first and second time windows, respectively; and wherein the consistency unit is further configured to output the consistency score to an output unit (240) of the subscriber and/or a service provider.
20. The consistency evaluation apparatus (200) of claim 19 wherein the consistency unit (210) comprises: a deviation unit (212) configured to calculate, for each of the one or more services, a consistency deviation using the corresponding one or more second quality values and the corresponding one or more model parameters; and a consistency score unit (214) configured to calculate the consistency score for the one or more services for the second time window responsive to the calculated consistency deviations.
21. The consistency evaluation apparatus (200) of any one of claims 19-20: further comprising a weight unit (230) configured to determine a service weight for each of the one or more services provided to the subscriber during the second time window responsive to the corresponding one or more second quality values; wherein the consistency unit (210) determines the consistency score by determining the consistency score for the one or more services for the second time window responsive to the service weights and the consistency deviations.
22. The consistency evaluation apparatus (200) of claim 21 wherein the consistency score unit (214) determines the consistency score by: applying the determined service weight to the corresponding consistency deviation to determine a weighted deviation for each service; and combining the weighted deviations to determine the consistency score.
23. The consistency evaluation apparatus (200) of any one of claims 21-22 wherein a sum of the service weights is one.
24. The consistency evaluation apparatus (200) of any one of claims 19-23 wherein the first time window is longer than the second time window.
25. The consistency evaluation apparatus (200) of any one of claims 19-24 wherein: the model parameter unit (220) is further configured to configure the first time window responsive to service provider input; and/or the consistency unit (210) is further configured to configure the second time window responsive to service provider input.
26. The consistency evaluation apparatus (200) of any one of claims 19-24 wherein: the model parameter unit (220) is further configured to configure the first time window responsive to subscriber input; and/or the consistency unit (210) is further configured to configure the second time window responsive to subscriber input.
27. The consistency evaluation apparatus (200) of any one of claims 19-26 wherein the model parameter unit (220) derives the one or more model parameters by, for each of the one or more services, applying one or more statistical evaluations to the corresponding one or more first quality values to derive the one or more model parameters.
28. The consistency evaluation apparatus (200) of claim 27 wherein the one or more statistical evaluations comprise: a mode evaluation; and/or a standard deviation evaluation; and/or a mean evaluation; and/or a median evaluation.
29. The consistency evaluation apparatus (200) of claim 27 wherein: the model parameter unit (220) derives the one or more model parameters by, for each service: deriving a first model parameter from a mean of the corresponding one or more first quality values; and deriving a second model parameter from a standard deviation of the corresponding one or more first quality values; and the deviation unit (212) is configured to calculate the consistency deviation by, for each service, calculating the consistency deviation responsive to the corresponding one or more second quality values, the corresponding first model parameter, and the corresponding second model parameter.
30. The consistency evaluation apparatus (200) of claim 29 wherein the deviation unit (212) is configured to calculate the consistency deviation by calculating the consistency deviation $Q_s$ for each of the one or more services according to:

$$Q_s = \sum_{j \in I(s)} B_j \frac{\lvert K_j - \lambda_{1j} \rvert}{\lambda_{2j}}$$

where $I(s)$ represents the set of indices $j$ of the one or more second quality values related to service $s$, $K_j$ represents the $j$th second quality value related to service $s$, $\lambda_{1j}$ represents the first model parameter for the $j$th second quality value related to service $s$, $\lambda_{2j}$ represents the second model parameter for the $j$th second quality value related to service $s$, and $B_j$ represents a predetermined weight for the $j$th second quality value related to service $s$.
31. The consistency evaluation apparatus (200) of any one of claims 19-30 wherein the second time window is contained at least partially within the first time window.
32. The consistency evaluation apparatus (200) of any one of claims 19-31 wherein the one or more services comprise one or more wireless services provided to a subscriber in a wireless network.
33. The consistency evaluation apparatus (200) of any one of claims 19-32 wherein the consistency evaluation apparatus is disposed in a network management node.
PCT/IB2020/051727 2020-02-28 2020-02-28 Adaptive method for measuring service level consistency WO2021171063A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2020/051727 WO2021171063A1 (en) 2020-02-28 2020-02-28 Adaptive method for measuring service level consistency

Publications (1)

Publication Number Publication Date
WO2021171063A1 true WO2021171063A1 (en) 2021-09-02

Family ID: 69770985


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024018257A1 (en) 2022-07-19 2024-01-25 Telefonaktiebolaget Lm Ericsson (Publ) Early detection of irregular patterns in mobile networks

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3035597A1 (en) * 2013-08-14 2016-06-22 ZTE Corporation Method and device for evaluating quality of user experience, user terminal and network server
US20180337834A1 (en) * 2016-01-30 2018-11-22 Huawei Technologies Co., Ltd. Network service quality evaluation method and system, and network devic
US20200036605A1 (en) * 2017-04-05 2020-01-30 Huawei Technologies Co., Ltd. Network Element Health Status Detection Method and Device



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20709744; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20709744; Country of ref document: EP; Kind code of ref document: A1)