CA2371730A1 - Account fraud scoring - Google Patents
Account fraud scoring
- Publication number
- CA2371730A1
- Authority
- CA
- Canada
- Prior art keywords
- account
- alarms
- fraud
- alarm
- score
- Prior art date
- Legal status
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q99/00—Subject matter not provided for in other groups of this subclass
Abstract
A method and apparatus for prioritising alarms in an account fraud detection system. The method involves assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account, and computing a fraud score for that alarm responsive to those numeric weights.
Numeric bounds may be imposed on the score, and a term may be added dependent on the number of alarms raised on the account.
Description
ACCOUNT FRAUD SCORING
FIELD OF THE INVENTION
The present invention relates to a method and apparatus for account fraud scoring and a system incorporating the same.
BACKGROUND TO THE INVENTION
In recent years there has been a rapid increase in the number of commercially operated telecommunications networks in general and of wireless telecommunication networks in particular. Associated with this proliferation of networks is a rise in fraudulent use of such networks, the fraud typically taking the form of gaining illicit access to the network and then using the network in such a way that the fraudulent user hopes subsequently to avoid paying for the resources used. This may for example involve misuse of a third party's account on the network, so that the perpetrated fraud becomes apparent only when the third party is charged for resources which he did not use.
In response to this form of attack on the network, fraud detection tools have been developed to assist in the identification of such fraudulent use.
Such a fraud detection tool may, however, produce thousands of alarms in one day. In the past these alarms have been ordered either chronologically according to when they occurred, or in terms of their importance, or a combination of both. Alarm importance provided a rudimentary ordering based on the significance of the alarm raised, although it has many failings: such a system takes no account of how alarms interact.
Since fraudulent use of a single account can cost a network operator a large sum of money within a short space of time, it is important that the operator be able to identify and deal with the most costly forms of fraud at the earliest possible time. The existing methods of chronological ordering and alarm importance ordering are, however, inadequate in that regard.
SUBSTITUTE SHEET (RULE 26)

OBJECT OF THE INVENTION
The invention seeks to provide an improved method and apparatus for classifying and prioritising identified instances of potential account fraud.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric weights.
Advantageously, the score gives a meaningful representation of the seriousness of a potential fraud associated with the raised alarm.
Preferably, said step of computing comprises the step of: forming a product of a plurality of said numeric weights.
According to a further aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; computing a fraud score for each of said one or more alarms responsive to said numeric weights; computing an account fraud score responsive to said one or more fraud scores.
Preferably, said step of computing a fraud score comprises the step of:
forming a product of a plurality of said numeric weights.
Preferably, said step of computing an account fraud score comprises the step of: selecting a largest of said one or more fraud scores.
Preferably, said step of computing an account fraud score comprises the step of: imposing a numeric bound on the value of said account fraud score.
Preferably, said step of computing a fraud score for each of said one or more alarms comprises the step of: adding a term dependent on the number of alarms raised.
Preferably, said step of computing an account fraud score comprises the steps of: selecting a largest of said fraud scores; adding a term dependent on the number of alarms raised.
Advantageously, this prioritises accounts according to the seriousness of potential fraud associated with them.
According to a further aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: performing the method of claim 3 on a plurality of accounts whereby to compute an account fraud score for each of said accounts; providing a sorted list of accounts responsive to said account fraud scores.
The method may also comprise the step of: displaying said sorted list of accounts.
Advantageously, this allows an operator to rapidly identify high risk account usage and hence concentrate resources on those high risk, potentially high cost frauds.
Preferably, the step of displaying said sorted list of accounts comprises the step of: displaying with each account an indication of its associated account fraud score.
In a preferred embodiment, said characteristics include one or more characteristics drawn from the set consisting of: alarm capability, alarm sub-capability, velocity, bucket size, and account age.
The invention also provides for a system for the purposes of fraud detection which comprises one or more instances of apparatus embodying the present invention, together with other additional apparatus.
According to a further aspect of the present invention there is provided an apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; second apparatus arranged to compute a fraud score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided an apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; second apparatus arranged to compute a fraud score for each of said one or more alarms responsive to said numeric weights; third apparatus arranged to compute an account fraud score responsive to said one or more fraud scores.
According to a further aspect of the present invention there is provided software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; computing a fraud score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; computing a fraud score for each of said one or more alarms responsive to said numeric weights; computing an account fraud score responsive to said one or more fraud scores.
The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to show how the invention may be carried into effect, embodiments of the invention are now described below by way of example only and with reference to the accompanying figures in which:
Figure 1 shows a schematic diagram of an account fraud scoring apparatus in accordance with the present invention.
Figure 2 shows a schematic diagram of an account fraud prioritising apparatus in accordance with the present invention.
Figures 3(a)-(d) show successive columns of a table showing examples of account fraud score calculations in accordance with the present invention.
DETAILED DESCRIPTION OF INVENTION
Referring to Figure 1, there is shown a schematic diagram of a system arranged to perform account fraud scoring. In particular the system shown relates to telecommunications system account fraud scoring and comprises a source 100 of Call Detail Records (CDRs) arranged to provide CDRs to a plurality of fraud detectors 110, 120. In this specific embodiment, a first detector 110 is a neural network whilst a second detector 120 is arranged to apply thresholds (and/or rules) to the received CDRs.
The neural network fraud detector 110 is arranged to receive a succession of CDRs and to provide in response a series of outputs indicating either a Neural Network Fraudulent Alarm (NN(F)), a Neural Network Expected Alarm (NN(E)), or a third category not indicative of an alarm. (The third category may be implemented by the neural network not generating an output.) Each NN(E) alarm provided by the neural network 110 is then mapped 111 to an associated Alarm Capability Factor (ACF) which is a numeric value indicative of the importance or risk associated with the alarm.
Each NN(F) provided by the neural network 110 is mapped 112 to a confidence level indicative of the confidence with which the neural network predicts that the account behaviour which raised the alarm is fraudulent. This confidence level may then be normalised with respect to the Alarm Capability Factors arising from NN(E)s and Threshold alarms (described below) to provide an Alarm Capability Factor for each NN(F).
The threshold detector 120 is arranged to receive a succession of CDRs from the CDR source 100 and to provide in response a series of outputs indicative of whether the series of CDRs to date has exceeded any of one or more threshold values associated with different characteristics of the CDR series, any one of which might be indicative of fraudulent account usage.
Fraud score 140 is then calculated 130 from the Alarm Capability Factors (ACF), Velocity Factors (VF), and Bucket Factors (BF) which are described in detail below. In a preferred embodiment, the score is calculated as a product:
Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor (1)

In a preferred embodiment, a further factor, a sub-capability factor, is added to the equation to cater for variations of risk within a given broad category of alarms associated with the alarm capability factor.
Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor x Alarm Sub-Capability Factor (2)

Fraud scores are computed for each alarm type raised against a given account and the highest of these scores is taken as the base account fraud score.
An additional term is then added which takes into account the fact that multiple alarms on the same account may be more indicative of a potential fraud risk than a single alarm. In a most preferred embodiment a fixed multiple alarm factor is determined and then a multiple of this factor is added to the base account fraud score to give a final account fraud score. The multiple used is simply the number of alarms on the account.
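The per-alarm product of equation (2), the selection of the highest per-alarm score as the base account fraud score, and the multiple alarm term can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all factor values in the example are invented.

```python
# Illustrative sketch of the scoring described in the text. Per-alarm
# fraud score = product of factor weights (equation (2)); the account's
# base score is the maximum per-alarm score; a term proportional to the
# number of alarms raised is then added. Factor values are invented.

def alarm_fraud_score(capability, velocity, bucket, sub_capability=1.0):
    """Fraud score for a single alarm: the product of its factor weights."""
    return capability * velocity * bucket * sub_capability

def account_fraud_score(alarm_factors, multiple_alarm_factor):
    """Base score is the highest per-alarm score, plus a fixed amount
    per alarm raised against the account."""
    scores = [alarm_fraud_score(*factors) for factors in alarm_factors]
    base = max(scores)
    return base + len(alarm_factors) * multiple_alarm_factor

# Two hypothetical alarms: (capability, velocity, bucket, sub-capability)
alarms = [(60, 1.2, 1.1, 1.0), (40, 1.0, 1.0, 1.35)]
print(account_fraud_score(alarms, 0.65))  # base 79.2 plus 2 x 0.65
```

The per-alarm scores here are 79.2 and 54.0, so the base is 79.2 and the final score is 79.2 + 2 x 0.65 = 80.5, matching the structure of the calculation the text describes.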
Details of these specific factors and others are given in more detail below.
Turning now to Figure 2, the account fraud scoring system 1 of Figure 1 typically forms part of a fraud detection system.
The CDR data 100 provided to the scoring mechanism 210 described above is obtained from the telecommunications network 200.
The resulting account fraud scores calculated per account may then be sorted (220) so as to identify those accounts most suspected of being used fraudulently. This information may then be presented to an operator via, for example, a Graphical User Interface (GUI) 230, either simply by listing the accounts in order of fraud likelihood, or by also showing some indication of the associated account fraud score (for example by displaying the actual account fraud score), or by any other appropriate means.
Referring now to the table shown in Figures 3(a)-(d), an example is given of the numerical values assigned to the various account characteristics.
The first column simply assigns a number to each of the main alarm types listed in column 2. Rows having no explicitly named alarm type relate to the same alarm type as appears most closely above.
Column 4 similarly lists alarm sub-types where applicable whilst column 9 indicates bucket size for two applicable alarm types.
Columns 3, 5, 8, and 10 respectively list the alarm capability factors, sub-capability factors, velocity factors, and bucket factors associated with each alarm variant.
In the table shown no specific traffic values and threshold values are shown, since these are specific to a particular account at a particular time.
Instead, typical resulting velocity factor values (e.g. 1, 1.35) are shown in column 8 for illustrative purposes.
Column 11 shows the effect of applying the sub-capability factor, velocity factor and bucket factor to each basic alarm capability factor.
Column 12 is blank, indicating that all the accounts listed in columns 15-32 are considered in this example to be well-established accounts, with a default account age factor of 1.0. In the case of newly opened accounts a higher account age factor, for example 1.2, might be employed.
Column 13 shows the effect of applying the account age factor to the product of preceding factors shown in column 11.
Columns 15-32 show nine examples of account fraud score calculations for separate accounts. Each successive pair of columns shows how many of each kind of alarm have been raised against that account, alongside the fraud score associated with that alarm.
At the foot of each pair of columns, a base account fraud score is shown (being the maximum fraud score computed for any alarm raised against that account) along with the total number of alarms raised against that account.
These two figures, in conjunction with the fixed multiple alarm fraud factor, set in this example at 0.65, are used to compute the final account fraud score in each case by adding to the base account fraud score a term being the fixed multiple alarm fraud factor times the number of alarms raised.
In the example shown, the resulting account fraud scores range from 60.25 on account 7 to 90.65 on account 6.
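The final-score rule above can be checked with a few lines of arithmetic. Only the 0.65 factor comes from the example in the figures; the base score and alarm count below are invented for illustration.

```python
# Hypothetical worked example of: final account fraud score =
# base account fraud score + fixed multiple alarm factor x number of alarms.
# The 0.65 factor is from the text; base_score and num_alarms are invented.
fixed_multiple_alarm_factor = 0.65
base_score = 60.0   # hypothetical highest per-alarm fraud score
num_alarms = 4      # hypothetical total alarms raised on the account
final_score = base_score + fixed_multiple_alarm_factor * num_alarms
print(final_score)  # 60.0 + 0.65 x 4 = 62.6
```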
The selection of precise values for the various factors used in the calculation is a matter of experience and experiment and will vary according to the field of application. In the example shown, sub-capability factors, velocity factors, and bucket factors all fall approximately in the range 1-1.5, whilst the basic alarm capability factors range from 30 to 90.
To achieve the desired scoring, one associates with each alarm a level of risk that is factored by a number of related elements. With each increase in the number of such related elements, there is an increase in the level of granularity in the scoring mechanism and a consequent potential increase in precision and efficiency of the scoring mechanism.
Too many elements in the scoring equation, however, tend to make it very volatile, with a higher probability of algorithmic inaccuracies, and also increased risk of any such errors causing a ricochet effect through the fraud scoring engine. The margin for error in configuring the scoring mechanism, and indeed the parameters for the rules and thresholds themselves, is also reduced as the number of elements increases since they are the building blocks on which scoring is based.
In short, too few factors result in a robust but insufficiently accurate system whilst too many factors produce an initially more labour intensive set-up with the potential for being highly accurate, although if configured incorrectly, the opposite could be true. The solution is a compromise between the two extremes: the system needs to be durable yet accurate.
In the most preferred embodiment therefore, five significant factors are employed:
- Alarm Capability Factor
- Sub-Capability Factor
- Bucket Factor
- Velocity Factor
- Account Age Factor

The Alarm Capability Factor indicates the relative hierarchical position of the risk associated with a given alarm relative to risks associated with other alarms.
The Sub-Capability Factor gives a further refinement of the indication of the hierarchical position of the risk associated with a given alarm relative to risks associated with other alarms.
Bucket Factor is a measure of the volume of the potential fraud.
Velocity Factor is a measure of the rate at which the fraud is being perpetrated.
Account Age Factor is a measure of how old the account is: new accounts' behaviour may be less predictable than older established usage patterns, and more susceptible to fraud.
All neural network and threshold alarm capabilities are apportioned a figure upon which further calculations are made, increasing or decreasing the score commensurate with the risk present. The Account Fraud Score created should accurately reflect the level of risk associated with the course of events causing the production of an alarm. This calculation should primarily consider the speed with which money is and may be defrauded, and the volume of revenue defrauded, as these indicate loss to the telecommunications company concerned; questions of cost are always paramount. For example if a criminal has used $5,000 worth of traffic over 4 hours, this is more significant than if the same individual had done so over 8 hours.
The Sub-Capability Factor is added to increase or decrease the risk associated with specific types of alarm. Many alarm types have a finer level of granularity as appropriate to that specific alarm. Many alarm types are sub-divided, for example, into different sub-types of alarms for different call destinations as the inherent risk is different for different destinations. For example international calls are more often associated with fraud than calls to mobile telephones.
The longer that an account is in operation fraudulently, the greater the cost will be, so a good fraud management system will aim to detect fraud as early as possible. Thus the analyst wishes, ideally, to see all alarms after the shortest time period, in order that he may stop the illegal action at the earliest opportunity.
The problem is addressed by calculating a ratio between a) the quantity of traffic pertinent to the particular alarm type within a poll and b) a threshold value for the alarm. Trigger Value divided by Threshold Value accurately and expeditiously alarms any account where there is a large sudden increase in traffic for that customer. This is because, for example, the 1 hour bucket will always have the lowest threshold for a given capability and therefore any increase in traffic will proportionately increase the fraud score more in any 1 hour bucket than in a corresponding longer period. In the example in Table 1 below, a single extra unit of traffic represents a 2% rise for the 1 hour bucket but only a 1% rise for the 4 hour bucket:
Table 1: Example velocity calculation

| | 1 Hour Bucket | 4 Hour Bucket |
| --- | --- | --- |
| Threshold Value | 50 | 100 |
| Poll 1 Trigger Value | 10 | 10 |
| Velocity Factor | 10/50 = 0.2 | 10/100 = 0.1 |
| Poll 2 Trigger Value | 10+1 | 10+1 |
| Velocity Factor | 11/50 = 0.22 | 11/100 = 0.11 |
| Difference Relative to Threshold | 2% | 1% |

This then gives an additional factor, namely rate of change of traffic relative to given thresholds, whereby to allow the account fraud scoring system to prioritise alarms so that the high velocity frauds can be investigated earlier than slower, and hence potentially less costly, examples of fraud.
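The velocity factor of Table 1 (trigger value divided by threshold value) can be reproduced in a few lines. The thresholds and trigger values are those from the table; the helper function name is ours.

```python
# Velocity factor as defined in the text: trigger value / threshold value.
# Thresholds (50 for the 1 hour bucket, 100 for the 4 hour bucket) and
# trigger values (10 in poll 1, 11 in poll 2) are taken from Table 1.

def velocity_factor(trigger_value, threshold_value):
    return trigger_value / threshold_value

for label, threshold in (("1 hour", 50), ("4 hour", 100)):
    poll1 = velocity_factor(10, threshold)
    poll2 = velocity_factor(11, threshold)
    # The same one-unit rise in traffic moves the small bucket's factor
    # twice as far (0.02 vs 0.01), as the table shows.
    print(label, poll1, poll2)
```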
In addition to the above, an account age factor may be applied to increase the risk score associated with new accounts. Over time, the account operators' knowledge of each customer will improve as more data (such as payment information, bank details, and call patterns) is received about normal usage patterns and, as a consequence, it will become less likely that the customer will attempt to perpetrate a fraud.
For example, for new accounts, an account age factor of 1.2 might be applied, whilst an established account may have a factor of 1.
Furthermore, performance of certain confirmatory functions by the account owner may be required after certain time periods and if the account owner fails to perform these then the account will be suspended.

As well as considering the volume or momentum of the fraud, it is also relevant to consider the immediate volume of potential fraud present in any given situation. Therefore a factor indicative of increases in the bucket size associated with the alarm can be applied to ensure that a measure of the quantity of fraud is directly represented in the resulting fraud score, independent of a factor representative of the velocity. A bucket is a time duration over which an alarm has been raised.
In the normal course of events, the 1 hour bucket alarms will be raised first because they have the smallest thresholds assigned to them. In the unlikely event that a fraudster manages to perpetrate fraud over a longer period without triggering an alarm on such a small bucket, then it is desirable to generate an indication at the earliest opportunity should an alarm on a larger bucket be triggered.
Therefore if a 168 hour (1 week) alarm is raised, this is of considerable significance and should be weighted accordingly. Consequently, it is appropriate to increase the weighting applied to larger time buckets. The aim is to ensure that such a larger bucket alarm would be proportionately more prominent dependent upon the size of the time bucket and the associated risk.
Some alarms do not lend themselves directly to thresholds, but are concerned simply with whether a specific event has occurred.
For example in a telecommunications network account system, the Neural Network Fraudulent, Neural Network Expected, Hot A Numbers, Hot B Numbers, Overlapping Calls, Single IMEI/Multiple IMSI and Single IMSI/Multiple IMEI alarms, by their very nature, do not lend themselves to thresholds. In these cases the only significance is that a particular CDR has been involved in a particular kind of call or that the profile has exhibited a particular form of suspect behaviour.
The velocity factor (Trigger value / Threshold value) and Bucket factor are both superfluous in conjunction with the above alarm types (though they may for simplicity be assigned nominal values of 1 which when applied will have a null modifying effect) and the only true modifier is Account Age Factor. This is not a serious issue since Hot A & B Numbers, Single IMEI/Multiple IMSI, and Single IMSI/Multiple IMEI will typically be allocated a high basic Alarm Capability Factor since these kinds of alarm will certainly need to be examined as priorities by a reviewing fraud analyst.
This approach serves once again to achieve the overall aim that the risk associated with an alarm be accurately reflected in the final score allocated to that alarm.
In some cases it is possible that the score resulting directly from the combinations of factors listed above may exceed reasonable bounds, for example in cases where many factors each have a high value individually indicative of high fraud risk. This may give rise to fraud scores well outside the normal range. Whilst such scores may be left unamended, since their high value will clearly stand out relative to other scores, it is also reasonable to take the approach that score values beyond a given threshold all be treated equally since, with such high scores all indicative of high fraud risk, there is little benefit in differentiating between them:
at those score levels the difference in score is more likely to be an artefact of the scoring system than of actual differentiation of fraud risk. The same approach may be applied to very low scores. In such cases then, scores may be normalised to lie within fixed bounds: scores lying above or below those bounds being amended to the maximum or minimum bound as appropriate. In practice such a situation should not be common due to the accuracy of the various factor figures given.
For example, an Account Fraud Score may be normalised within the calculation to ensure that a normalised score between 0 and 100 is produced. All scores under or equal to 0 will be mapped to 0; all scores over or equal to 100 will be mapped to 100.
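This normalisation amounts to a clamp into the 0-100 bounds. A minimal sketch (the function name is ours, not from the patent):

```python
# Clamp an account fraud score into the normalised 0-100 range:
# scores at or below the lower bound map to it, scores at or above
# the upper bound map to it, and everything in between passes through.

def normalise_score(score, lower=0.0, upper=100.0):
    return max(lower, min(upper, score))

print(normalise_score(-5))    # 0.0
print(normalise_score(42.5))  # 42.5
print(normalise_score(130))   # 100.0
```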
A situation may occur where multiple alarms are raised for one account in one poll and it is desirable to cater for this in determining the Account Fraud Score. The decision on how to treat multiple alarm breaches is based on an assessment of whether there is a greater chance of fraud in an account with multiple threshold breaches or alarms.
It is inappropriate to aggregate the scores produced by multiple alarms since the increase in risk signalled by multiple alarms is not normally proportional to the increase in score that would be created by aggregation of the scores produced.
It is reasonable however to assume that there would be an increase in the risk associated with an account if another alarm were added to an already present alarm: that is, for example, the risk associated with a given alarm is less than the risk associated with two or more of those alarms.
It is also reasonable however to assume that two separate alarms of different types may or may not be as significant a concern as one other alarm. The level of concern must be translated to the Account Fraud Score and should not be influenced by the number of alarms arbitrarily.
That is, the risk associated with an alarm of type A and an alarm of type B together may be less than, equal to, or greater than the risk associated with one alarm of type C.
This means that the Account Fraud Score should be increased for multiple alarms but the risk associated with the highest risk alarm generated must first be considered. Accordingly, a fixed addition is made to the score dependent upon the number of alarms as described below:
Number of Alarms x Fixed Multiple Alarm Factor (3)

It is beneficial to be able to assign different factors for determination of fraud scores for each account type as the increase or decrease in the associated level of risk is not uniform for all account types. For example, a business account calling PRS might indicate a greater risk compared to a residential customer, whereas calls to the USA from a business account would be of less concern than from a residential account.
In isolation or if combined with Account Type, time slot will add an extra dimension to the calculation of Account Fraud Score. Different frauds may be perpetrated at different times of day, with certain traffic types representing a greater risk at night or at the weekend.
We now consider how to incorporate the neural network alarms in the Account Fraud Scoring mechanism since, with neural network alarms, a confidence is calculated as to the accuracy of each decision.
The percentage confidence calculated by the neural network is used as the alarm capability factor and processed as per other alarms. The confidence given by the neural network must be integral to the score given for that alarm, since the confidence is a statement as to the probability that an account is exhibiting fraudulent behaviour.
The confidence should be the basis for any calculation and accordingly is used as the prime factor in calculating the Account Fraud Score, the alarm capability factor. Furthermore, the alarm confidence for fraudulent neural network alarms must be unaffected in the calculation from alarm confidence to individual alarm capability factor except for a standardisation factor which converts the percentage into an alarm priority proportionate to the other alarm priorities and proportionate to its value in terms of assessing and quantifying risk. In short, the figure should be adjusted to ensure it is relative to other alarm capability factors. It is again true that it would be a detraction from the value of the neural network confidence calculation process if it were changed more than minimally.
The method for converting the confidence into an Alarm Capability Factor is as described below:
Alarm Capability Factor = AlarmConfidence(NN(F)) / X (4)

where AlarmConfidence(NN(F)) is the Neural Network Fraudulent Alarm Confidence and X is a standardisation factor for Neural Network Fraudulent Alarms.
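Equation (4) amounts to a single division. A hedged sketch, in which the value of X is invented purely for illustration (the text does not specify one):

```python
# Equation (4): Alarm Capability Factor = AlarmConfidence(NN(F)) / X.
# X is the standardisation divisor; the 1.25 below is an invented
# example value, chosen so that a 90% confidence lands within the
# 30-90 range the text gives for basic alarm capability factors.

def nn_alarm_capability_factor(confidence_percent, standardisation_x):
    return confidence_percent / standardisation_x

print(nn_alarm_capability_factor(90, 1.25))  # 72.0
```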
Neural Network Fraudulent alarms must be assessed with all other alarms generated, or persisting, for an account in order to ensure that the alarm, and the account, posing the most risk is prioritised above the remainder.
This proposed "clean" processing keeps the ordering by Account Fraud Scoring as pure as possible; the assigned confidence is not adjusted by other factors outside the neural network although it is integrated within the scoring process.

One has thought through the elements to be included within the Account Fraud Scoring mechanism, why they are to be included, how they represent risk and the appropriate method of dealing with each alarm type. The conclusion is that all alarms are processed through the scoring mechanism in the same fashion; only the prime figure, the Alarm Capability Factor, is a fixed figure for Neural Network Expected Alarms and Threshold alarms while for Neural Network Fraudulent Alarms, the confidence is standardised to associate a relational and reasonable level of significance.
For Neural Network Expected alarms, the confidence values will be 0-20% as opposed to a range of 0-100% for fraudulent neural network alarms.
These expected alarms tend to indicate behaviour which is suspicious or unusual although not immediately identifiable as fraud. By their very nature, they will alert the user to areas of uncertainty. There is no suggestion that the expected behavioural neural network alarms are not valid; quite the opposite, since it is important that this task be performed.
The idea that small deviations in the neural network's confidence can be interpreted is a little spurious because the neural network is judging how much it doesn't know the behaviour being presented to it.
3o Thus there is more to be lost, in terms of complication and processing, than would be gained by allowing the percentage confidence to affect the Alarm Capability factor. Indeed it might also prove misleading, reducing the accuracy of the alarm generation engine. Use of a fixed value for the Alarm Capability factor, as opposed to a variable level resolves this issue.
SUBSTITUTE SHEET (RULE 26) So for Neural Network Fraudulent alarms the percentage confidence is normalised and integrated into scoring mechanism; for Neural Network Expected alarms a fixed Alarm Capability factor is used as per threshold alarms.
s In summary then, the method takes different alarms or other types of information, homogenises them through scoring the risk embodied in each element of the mechanism, taking the highest scored alarm for each account on any one time and then adding an extra value to the score dependent upon the number of alarms raised. The resulting value is the io account fraud score.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person for an understanding of the teachings herein.
SUBSTITUTE SHEET (RULE 26)
FIELD OF THE INVENTION
The present invention relates to a method and apparatus for account fraud scoring and a system incorporating the same.
BACKGROUND TO THE INVENTION
In recent years there has been a rapid increase in the number of commercially operated telecommunications networks in general and wireless telecommunication networks in particular. Associated with this proliferation of networks is a rise in fraudulent use of such networks, the fraud typically taking the form of gaining illicit access to the network and then using the network in such a way that the fraudulent user hopes subsequently to avoid paying for the resources used. This may for example involve misuse of a third party's account on the network so that the perpetrated fraud becomes apparent only when the third party is charged for resources which he did not use.
In response to this form of attack on the network, fraud detection tools have been developed to assist in the identification of such fraudulent use.
Such a fraud detection tool may, however, produce thousands of alarms in one day. In the past these alarms have been ordered either chronologically according to when they have occurred, or in terms of their importance, or a combination of both. Alarm importance provided a rudimentary order based on the significance of the alarm raised, although it has many failings: such a system takes no account of how alarms interact.
Since fraudulent use of a single account can cost a network operator a large sum of money within a short space of time it is important that the operator be able to identify and deal with the most costly forms of fraud at the earliest possible time. The existing methods of chronological ordering and alarm importance ordering are, however, inadequate in that regard.
OBJECT OF THE INVENTION
The invention seeks to provide an improved method and apparatus for classifying and prioritising identified instances of potential account fraud.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric weights.
Advantageously, the score gives a meaningful representation of the seriousness of a potential fraud associated with the raised alarm.
Preferably, said step of computing comprises the step of: forming a product of a plurality of said numeric weights.
According to a further aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; computing a fraud score for each of said one or more alarms responsive to said numeric weights; computing an account fraud score responsive to said one or more fraud scores.
Preferably, said step of computing a fraud score comprises the step of:
forming a product of a plurality of said numeric weights.
Preferably, said step of computing an account fraud score comprises the step of: selecting a largest of said one or more fraud scores.
Preferably, said step of computing an account fraud score comprises the step of: imposing a numeric bound on the value of said account fraud score.
Preferably, said step of computing an account fraud score for each of said one or more alarms comprises the step of: adding a term dependent on the number of alarms raised.
Preferably, said step of computing an account fraud score comprises the steps of: selecting a largest of said fraud scores; adding a term dependent on the number of alarms raised.
Advantageously, this prioritises accounts according to the seriousness of potential fraud associated with them.
According to a further aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: performing the method of claim 3 on a plurality of accounts whereby to compute an account fraud score for each of said accounts; providing a sorted list of accounts responsive to said account fraud scores.
The method may also comprise the step of: displaying said sorted list of accounts.
Advantageously, this allows an operator to rapidly identify high risk account usage and hence concentrate resources on those high risk, potentially high cost frauds.
Preferably, the step of displaying said sorted list of accounts comprises the step of: displaying with each account an indication of its associated account fraud score.
In a preferred embodiment, said characteristics include one or more characteristics drawn from the set consisting of: alarm capability, alarm sub-capability, velocity, bucket size, and account age.
The invention also provides for a system for the purposes of fraud detection which comprises one or more instances of apparatus embodying the present invention, together with other additional apparatus.
According to a further aspect of the present invention there is provided an apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; second apparatus arranged to compute a fraud score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided an apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; second apparatus arranged to compute a fraud score for each of said one or more alarms responsive to said numeric weights; third apparatus arranged to compute an account fraud score responsive to said one or more fraud scores.
According to a further aspect of the present invention there is provided software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; computing a fraud score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; computing a fraud score for each of said one or more alarms responsive to said numeric weights; computing an account fraud score responsive to said one or more fraud scores.
The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to show how the invention may be carried into effect, embodiments of the invention are now described below by way of example only and with reference to the accompanying figures in which:
Figure 1 shows a schematic diagram of an account fraud scoring apparatus in accordance with the present invention.
Figure 2 shows a schematic diagram of an account fraud prioritising apparatus in accordance with the present invention.
Figures 3(a)-(d) show successive columns of a table showing examples of account fraud score calculations in accordance with the present invention.
DETAILED DESCRIPTION OF INVENTION
Referring to Figure 1, there is shown a schematic diagram of a system arranged to perform account fraud scoring. In particular the system shown relates to telecommunications system account fraud scoring and comprises a source 100 of Call Detail Records (CDRs) arranged to provide CDRs to a plurality of fraud detectors 110, 120. In this specific embodiment, a first detector 110 is a neural network whilst a second detector 120 is arranged to apply thresholds (and/or rules) to the received CDRs.
The neural network fraud detector 110 is arranged to receive a succession of CDRs and to provide in response a series of outputs indicating either a Neural Network Fraudulent Alarm (NN(F)), a Neural Network Expected Alarm (NN(E)), or a third category not indicative of an alarm. (The third category may be implemented by the neural network not generating an output.) Each NN(E) alarm provided by the neural network 110 is then mapped 111 to an associated Alarm Capability Factor (ACF) which is a numeric value indicative of the importance or risk associated with the alarm.
Each NN(F) provided by the neural network 110 is mapped 112 to a confidence level indicative of the confidence with which the neural network predicts that the account behaviour which raised the alarm is fraudulent. This confidence level may then be normalised with respect to the Alarm Capability Factors arising from NN(E)'s and Threshold alarms (described below) to provide an Alarm Capability Factor for each NN(F).
The threshold detector 120 is arranged to receive a succession of CDRs from the CDR source 100 and to provide in response a series of outputs indicative of whether the series of CDRs to date has exceeded any of one or more threshold values associated with different characteristics of the CDR series, any one of which might be indicative of fraudulent account usage.
Fraud score 140 is then calculated 130 from the Alarm Capability Factors (ACF), Velocity Factors (VF), and Bucket Factor (BF) which are described in detail below. In a preferred embodiment, the score is calculated as a product:

Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor (1)

In a preferred embodiment, a further factor, a sub-capability factor, is added to the equation to cater for variations of risk within a given broad category of alarms associated with the alarm capability factor.

Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor x Alarm Sub-Capability Factor (2)

Fraud scores are computed for each alarm type raised against a given account and the highest of these scores is taken as the base account fraud score.
An additional term is then added which takes into account the fact that multiple alarms on the same account may be more indicative of a potential fraud risk than a single alarm. In a most preferred embodiment a fixed, multiple alarm factor is determined and then a multiple of this factor is added to the base account fraud score to give a final account fraud score. The multiple used is simply the number of alarms on the account.
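The calculation described above can be sketched as follows. This is an illustrative reading of equations (1) to (3), not the patented implementation: the factor values and example alarms are hypothetical, and the 0.65 multiple alarm factor is the figure used in the worked example later in the description.

```python
# Illustrative sketch of the scoring scheme described above.
# All factor values below are hypothetical examples.

def fraud_score(capability, velocity=1.0, bucket=1.0, sub_capability=1.0):
    """Equation (2): the per-alarm score is a product of factors.

    Velocity, bucket and sub-capability default to 1 so that alarm
    types to which they do not apply receive a null modifying effect.
    """
    return capability * velocity * bucket * sub_capability

def account_fraud_score(alarm_scores, multiple_alarm_factor=0.65):
    """Base score is the highest per-alarm score; a fixed term per
    alarm raised is then added (equation (3))."""
    base = max(alarm_scores)
    return base + len(alarm_scores) * multiple_alarm_factor

# Two hypothetical alarms raised against one account:
scores = [
    fraud_score(60, velocity=1.2),  # a threshold alarm -> 72.0
    fraud_score(80),                # e.g. a Hot B Number alarm -> 80
]
print(account_fraud_score(scores))  # 80 + 2 x 0.65 = 81.3
```

Note that taking the maximum rather than summing the per-alarm scores reflects the point made later in the description: aggregating scores would overstate the risk of an account with many minor alarms.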
Details of these and other specific factors are given below.
Turning now to Figure 2, the account fraud scoring system 1 of Figure 1 typically forms part of a fraud detection system.
The CDR data 100 provided to the scoring mechanism 210 described above is obtained from the telecommunications network 200.
The resulting account fraud scores calculated per account may then be sorted (220) so as to identify those accounts most suspected of being used fraudulently. This information may then be presented to an operator via, for example, a Graphical User Interface (GUI) 230, either simply by listing the accounts in order of fraud likelihood, or by also showing some indication of the associated account fraud score (for example by displaying the actual account fraud score), or by any other appropriate means.
Referring now to the table shown in Figures 3(a)-(d), an example is given of the numerical values assigned to the various account characteristics.
The first column simply assigns a number to each of the main alarm types listed in column 2. Rows having no explicitly named alarm type relate to the same alarm type as appears most closely above.
Column 4 similarly lists alarm sub-types where applicable whilst column 9 indicates bucket size for two applicable alarm types.
Columns 3, 5, 8, and 10 respectively list the alarm capability factors, sub-capability factors, velocity factors, and bucket factors associated with each alarm variant.
In the table, no specific traffic values or threshold values are shown, since these are specific to a particular account at a particular time.
Instead, typical resulting velocity factor values (e.g. 1, 1.35) are shown in column 8 for illustrative purposes.
Column 11 shows the effect of applying the sub-capability factor, velocity factor and bucket factor to each basic alarm capability factor.
Column 12 is blank, indicating that all the accounts listed in columns 15-32 are considered in this example to be well-established accounts, with a default account age factor of 1.0. In the case of newly opened accounts a higher account age factor, for example 1.2, might be employed.
Column 13 shows the effect of applying the account age factor to the product of preceding factors shown in column 11.
Columns 15-32 show nine examples of account fraud score calculations for separate accounts. Each successive pair of columns shows how many of each kind of alarm have been raised against that account, alongside the fraud score associated with that alarm.
At the foot of each pair of columns, a base account fraud score is shown (being the maximum fraud score computed for any alarm raised against that account) along with the total number of alarms raised against that account.
These two figures, in conjunction with the fixed multiple alarm fraud factor, set in this example at 0.65, are used to compute the final account fraud score in each case by adding to the base account fraud score a term being the fixed multiple alarm fraud factor times the number of alarms raised.
In the example shown, the resulting account fraud scores range from 60.25 on account 7 to 90.65 on account 6.
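Each quoted figure is therefore a base score plus 0.65 per alarm raised. The decompositions below are hypothetical — the actual base scores and alarm counts come from the table of Figures 3(a)-(d), which is not reproduced here — but they show the arithmetic.

```python
# Final score = base account fraud score + (number of alarms x 0.65).
# The base/count pairs below are assumed for illustration only.
MULTIPLE_ALARM_FACTOR = 0.65

def final_score(base, n_alarms):
    return base + n_alarms * MULTIPLE_ALARM_FACTOR

print(round(final_score(90.0, 1), 2))  # one pairing consistent with 90.65
print(round(final_score(59.6, 1), 2))  # one pairing consistent with 60.25
```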
The selection of precise values for the various factors used in the calculation is a matter of experience and experiment and will vary according to the field of application. In the example shown, sub-capability factors, velocity factors, and bucket factors all fall approximately in the range 1-1.5, whilst the basic alarm capability factors range from 30 to 90.
To achieve the desired scoring, one associates with each alarm a level of risk that is factored by a number of related elements. With each increase in the number of such related elements, there is an increase in the level of granularity in the scoring mechanism and a consequent potential increase in precision and efficiency of the scoring mechanism.
Too many elements in the scoring equation, however, tend to make it very volatile, with a higher probability of algorithmic inaccuracies, and also an increased risk of any such errors causing a ricochet effect through the fraud scoring engine. The margin for error in configuring the scoring mechanism, and indeed the parameters for the rules and thresholds themselves, is also reduced as the number of elements increases since they are the building blocks on which scoring is based.
In short, too few factors result in a robust but insufficiently accurate system whilst too many factors produce an initially more labour intensive set-up with the potential for being highly accurate, although if configured incorrectly, the opposite could be true. The solution is a compromise between the two extremes: the system needs to be durable yet accurate.
In the most preferred embodiment therefore, five significant factors are employed:
- Alarm Capability Factor
- Sub-Capability Factor
- Bucket Factor
- Velocity Factor
- Account Age Factor

The Alarm Capability Factor indicates the relative hierarchical position of the risk associated with a given alarm relative to risks associated with other alarms.
The Sub-Capability Factor gives a further refinement of the indication of the hierarchical position of the risk associated with a given alarm relative to risks associated with other alarms.
Bucket Factor is a measure of the volume of the potential fraud.
Velocity Factor is a measure of the rate at which the fraud is being perpetrated.
Account Age Factor is a measure of how old the account is: new accounts' behaviour may be less predictable than older established usage patterns, and more susceptible to fraud.
All neural network and threshold alarm capabilities are apportioned a figure upon which further calculations are made, increasing or decreasing the score as commensurate with the risk present. The Account Fraud Score created should accurately reflect the level of risk associated with the course of events causing the production of an alarm. This calculation should primarily consider the speed with which money is and may be defrauded, and the volume of revenue defrauded, as these indicate loss to the telecommunications company concerned; questions of cost are always paramount. For example if a criminal has used $5,000 worth of traffic over 4 hours, this is more significant than if the same individual had done so over 8 hours.
The Sub-Capability Factor is added to increase or decrease the risk associated with specific types of alarm. Many alarm types have a finer level of granularity as appropriate to that specific alarm. Many alarm types are sub-divided, for example, into different sub-types of alarms for different call destinations as the inherent risk is different for different destinations. For example international calls are more often associated with fraud than calls to mobile telephones.
The longer that an account is in operation fraudulently, the greater the cost will be, so a good fraud management system will aim to detect fraud as early as possible. Thus the analyst wishes, ideally, to see all alarms after the shortest time period, in order that he may stop the illegal action at the earliest opportunity.
The problem is addressed by calculating a ratio between a) the quantity of traffic pertinent to the particular alarm type within a poll and b) a threshold value for the alarm. Trigger Value divided by Threshold Value accurately and expeditiously alarms any account where there is a large sudden increase in traffic for that customer. This is because, for example, the 1 hour bucket will always have the lowest threshold for a given capability and therefore any increase in traffic will proportionately increase the fraud score more in any 1 hour bucket than in a corresponding longer period. In the example in Table 1 below, a single extra unit of traffic represents a 2% rise to the 1 hour bucket but only a 1% rise for the 4 hour bucket:
Table 1: Example velocity calculation

                                      1 Hour Bucket    4 Hour Bucket
  Threshold Value                     50               100
  Poll 1   Trigger Value              10               10
           Velocity Factor            10/50 = 0.2      10/100 = 0.1
  Poll 2   Trigger Value              10+1             10+1
           Velocity Factor            11/50 = 0.22     11/100 = 0.11
  Difference Relative to Threshold    2%               1%

This then gives an additional factor, namely rate of change of traffic relative to given thresholds, whereby to allow the account fraud scoring system to prioritise alarms so that the high velocity frauds can be investigated earlier than slower, and hence potentially less costly, examples of fraud.
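The calculation in Table 1 can be reproduced directly; this sketch simply takes the trigger and threshold figures from the table.

```python
# Velocity factor = trigger value / threshold value, per the text above.
def velocity_factor(trigger, threshold):
    return trigger / threshold

# Thresholds from Table 1: 50 for the 1 hour bucket, 100 for the 4 hour bucket.
for label, threshold in (("1 hour", 50), ("4 hour", 100)):
    poll1 = velocity_factor(10, threshold)      # poll 1: 10 units of traffic
    poll2 = velocity_factor(10 + 1, threshold)  # poll 2: one extra unit
    print(f"{label}: {poll1} -> {poll2} (+{poll2 - poll1:.0%})")
```

A single extra unit moves the 1 hour bucket twice as far relative to its threshold as the 4 hour bucket, which is exactly the prioritisation effect the text describes.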
In addition to the above, an account age factor may be applied to increase the risk score associated with new accounts. Over time, the account operator's knowledge of each customer will improve as more data (such as payment information, bank details, and observed call patterns) is received about normal usage patterns and, as a consequence, it will become less likely that the customer will attempt to perpetrate a fraud.
For example, for new accounts, an account age factor of 1.2 might be applied, whilst an established account may have a factor of 1.
Furthermore, performance of certain confirmatory functions by the account owner may be required after certain time periods and if the account owner fails to perform these then the account will be suspended.

As well as considering the volume or momentum of the fraud, it is also relevant to consider the immediate volume of potential fraud present in any given situation. Therefore a factor indicative of increases in the bucket size associated with the alarm can be applied to ensure that a measure of the quantity of fraud is directly represented in the resulting fraud score, independent of a factor representative of the velocity. A
bucket is a time duration over which an alarm has been raised.
In the normal course of events, the 1 hour bucket alarms will be alarmed first because they have the smallest thresholds assigned to them. In the unlikely event that a fraudster manages to perpetrate fraud over a longer period without triggering such a small bucket alarm, then it is desirable to generate an indication at the earliest opportunity should an alarm on a larger bucket be triggered.
Therefore if a 168 hour (1 week) alarm is raised, this is of considerable significance and should be weighted accordingly. Consequently, it is appropriate to increase the weighting applied to larger time buckets. The aim is to ensure that such a larger bucket alarm would be proportionately more prominent dependent upon the size of the time bucket and the associated risk.
Some alarms do not lend themselves directly to thresholds, but are concerned simply with whether a specific event has occurred.
For example in a telecommunications network account system, the Neural Network Fraudulent, Neural Network Expected, Hot A Numbers, Hot B Numbers, Overlapping Calls, Single IMEI/Multiple IMSI and Single IMSI/Multiple IMEI alarms, by their very nature, do not lend themselves to thresholds. In these cases the only significance is that a particular CDR has been involved in a particular kind of call or whether the profile has exhibited a particular form of suspect behaviour.
The velocity factor (Trigger Value / Threshold Value) and Bucket factor are both superfluous in conjunction with the above alarm types (though they may for simplicity be assigned nominal values of 1 which when applied will have a null modifying effect) and the only true modifier is Account Age Factor. This is not a serious issue since Hot A & B Numbers, Single IMEI/Multiple IMSI, and Single IMSI/Multiple IMEI will typically be allocated a high basic Alarm Capability Factor since these kinds of alarm will certainly need to be examined as priorities by a reviewing fraud analyst.
This approach serves once again to achieve the overall aim that the risk associated with an alarm be accurately reflected in the final score allocated to that alarm.
In some cases it is possible that the score resulting directly from the combinations of factors listed above may exceed reasonable bounds, for example in cases where many factors each have a high value individually indicative of high fraud risk. This may give rise to fraud scores well outside the normal range. Whilst such scores may be left unamended, since their high value will clearly stand out relative to other scores, it is also reasonable to take the approach that score values beyond a given threshold all be treated equally since, with such high scores all indicative of high fraud risk, there is little benefit in differentiating between them: at those score levels the difference in score is more likely to be an artefact of the scoring system than an actual differentiation of fraud risk. The same approach may be applied to very low scores. In such cases then, scores may be normalised to lie within fixed bounds: scores lying above or below those bounds being amended to the maximum or minimum bound as appropriate. In practice such a situation should not be common due to the accuracy of the various factor figures given.
For example, an Account Fraud Score may be normalised within the calculation to ensure that a normalised score between 0 and 100 is produced. All scores under or equal to 0 will be mapped to 0; all scores over or equal to 100 will be mapped to 100.
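That bounding step amounts to a simple clamp; a minimal sketch:

```python
# Clamp an account fraud score into the 0-100 band described above.
def normalise(score, lower=0.0, upper=100.0):
    return max(lower, min(upper, score))

print(normalise(-5.2))   # 0.0
print(normalise(47.3))   # 47.3
print(normalise(112.8))  # 100.0
```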
A situation may occur where multiple alarms are raised for one account in one poll and it is desirable to cater for this in determining the Account Fraud Score. The decision on how to treat multiple alarm breaches is based on an assessment of whether there is a greater chance of fraud in an account with multiple threshold breaches or alarms.
It is inappropriate to aggregate the scores produced by multiple alarms since the increase in risk signaled by multiple alarms is not normally proportional to the increase in score that would be created by aggregation of the scores produced.
It is reasonable however to assume that there would be an increase in the risk associated with an account if another alarm were added to an already present alarm: that is, for example, the risk associated with a given alarm is less than the risk associated with two or more of those alarms.
It is also reasonable however to assume that two separate alarms of different types may or may not be as significant a concern as one other alarm. The level of concern must be translated to the Account Fraud Score and should not be influenced by the number of alarms arbitrarily.
That is, the risk associated with an alarm of type A and an alarm of type B together may be less than, equal to, or greater than the risk associated with one alarm of type C.
This means that the Account Fraud Score should be increased for multiple alarms but the risk associated with the highest risk alarm generated must first be considered. Accordingly, a fixed addition is made to the score dependent upon the number of alarms as described below:
Number of Alarms x Fixed Multiple Alarm Factor (3)

It is beneficial to be able to assign different factors for determination of fraud scores for each account type as the increase or decrease in the level of risk associated is not uniform for all account types. For example, a business account calling PRS might indicate a greater risk compared to a residential customer, whereas a business calling the USA would be of less concern than in a residential account.
In isolation or if combined with Account Type, time slot will add an extra dimension to the calculation of Account Fraud Score. Different frauds may be perpetrated at different times of day, with certain traffic types representing a greater risk at night or the weekend.
We now consider how to incorporate the neural network alarms in the Account Fraud Scoring mechanism since, with neural network alarms, a confidence is calculated as to the accuracy of each decision.
The percentage confidence calculated by the neural network is used as the alarm capability factor and processed as per other alarms. The confidence given by the neural network must be integral to the score given for that alarm, since the confidence is a statement as to the probability that an account is exhibiting fraudulent behaviour.
The confidence should be the basis for any calculation and accordingly is used as the prime factor in calculating the Account Fraud Score, the alarm capability factor. Furthermore, the alarm confidence for fraudulent neural network alarms must be unaffected in the calculation from alarm confidence to individual alarm capability factor except for a standardisation factor which converts the percentage into an alarm priority proportionate to the other alarm priorities and proportionate to its value in terms of assessing and quantifying risk. In short, the figure should be adjusted only to ensure it is relative to other alarm capability factors. It would detract from the value of the neural network confidence calculation process if it were changed more than minimally.
The method for converting the confidence into an Alarm Capability Factor is as described below:
Alarm Capability Factor = AlarmConfidence(NN(F)) / X (4)

where AlarmConfidence(NN(F)) is the Neural Network Fraudulent Alarm Confidence and X is a standardisation factor for Neural Network Fraudulent Alarms.
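Equation (4) can be sketched as below. The value of X is not specified in the text; the figure used here is an assumed example, chosen so that a 100% confidence lands near the top of the 30-90 range quoted for the other alarm capability factors.

```python
X = 1.11  # hypothetical standardisation factor for NN Fraudulent alarms

def alarm_capability_factor(confidence_pct):
    """Equation (4): ACF = AlarmConfidence(NN(F)) / X."""
    return confidence_pct / X

# A 95% confidence maps to an ACF comparable with other alarm types:
print(round(alarm_capability_factor(95.0), 1))  # 85.6
```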
Neural Network Fraudulent alarms must be assessed with all other alarms generated, or persisting, for an account in order to ensure that the alarm, and the account, posing the most risk is prioritised above the remainder.
This proposed "clean" processing keeps the ordering by Account Fraud Scoring as pure as possible; the assigned confidence is not adjusted by other factors outside the neural network although it is integrated within the scoring process. One has thought through the elements to be included within the Account Fraud Scoring mechanism, why they are to be included, how they represent risk and the appropriate method of dealing with each alarm type. The conclusion is that all alarms are processed through the scoring mechanism in the same fashion; only the prime figure, the Alarm Capability Factor, differs in derivation: it is a fixed figure for Neural Network Expected Alarms and Threshold alarms, while for Neural Network Fraudulent Alarms the confidence is standardised to associate a relational and reasonable level of significance.
For Neural Network Expected alarms, the confidence values will be 0-20% as opposed to a range of 0-100% for fraudulent neural network alarms.
These expected alarms tend to indicate behaviour which is suspicious or unusual although not immediately identifiable as fraud. By their very nature, they will alert the user to areas of uncertainty. There is no suggestion that the expected behavioural neural network alarms are not valid; quite the opposite, since it is important that this task be performed.
The idea that small deviations in the neural network's confidence can be interpreted is a little spurious, because the neural network is judging how unfamiliar the behaviour being presented to it is.
Thus there is more to be lost, in terms of complication and processing, than would be gained by allowing the percentage confidence to affect the Alarm Capability factor. Indeed it might also prove misleading, reducing the accuracy of the alarm generation engine. Use of a fixed value for the Alarm Capability factor, as opposed to a variable level, resolves this issue.
SUBSTITUTE SHEET (RULE 26)
So for Neural Network Fraudulent alarms the percentage confidence is normalised and integrated into the scoring mechanism; for Neural Network Expected alarms a fixed Alarm Capability factor is used, as per threshold alarms.
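The two treatments of the Alarm Capability factor described above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the alarm-type labels and the fixed factor value of 0.2 are assumptions chosen for the example.

```python
# Hypothetical sketch: fraudulent neural-network alarms contribute their
# normalised confidence; expected neural-network alarms and threshold
# alarms use a fixed Alarm Capability factor instead.
# All names and constants here are illustrative, not from the patent.

FIXED_CAPABILITY = 0.2  # assumed fixed factor for expected/threshold alarms


def alarm_capability_factor(alarm_type: str, confidence: float = 0.0) -> float:
    """Return the capability factor for one alarm.

    `confidence` is a percentage (0-100) and is only consulted for
    neural-network fraudulent alarms, where it is normalised to 0-1.
    """
    if alarm_type == "nn_fraudulent":
        # Clamp to the 0-100% range, then normalise.
        return min(max(confidence, 0.0), 100.0) / 100.0
    # Expected neural-network alarms and threshold alarms share a fixed value.
    return FIXED_CAPABILITY
```

Under this reading, a fraudulent alarm at 75% confidence yields a factor of 0.75, while every expected or threshold alarm yields the same fixed 0.2 regardless of any confidence value attached to it.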
In summary then, the method takes different alarms or other types of information, homogenises them by scoring the risk embodied in each element of the mechanism, takes the highest scored alarm for each account at any one time and then adds an extra value to the score dependent upon the number of alarms raised. The resulting value is the account fraud score.
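The summarised method, together with the claimed refinements of forming a product of weights, bounding the result, and adding a term for the alarm count, can be sketched as below. The weight values, the per-alarm bonus, and the bound of 1.0 are assumptions for illustration only.

```python
# Illustrative sketch of the overall scoring method: each alarm's fraud
# score is a product of weighted behavioural characteristics; the account
# fraud score is the highest alarm score plus a term dependent on the
# number of alarms raised, capped at a numeric bound.
# All parameter values are hypothetical, not taken from the patent.


def alarm_score(weights):
    """Fraud score for one alarm: the product of its characteristic weights
    (e.g. alarm capability, sub-capability, velocity, bucket size, account age)."""
    score = 1.0
    for w in weights:
        score *= w
    return score


def account_fraud_score(alarms, per_alarm_bonus=0.01, bound=1.0):
    """Highest alarm score on the account, plus a term dependent on the
    alarm count, with a numeric bound imposed on the result."""
    if not alarms:
        return 0.0
    top = max(alarm_score(w) for w in alarms)
    return min(top + per_alarm_bonus * len(alarms), bound)
```

For example, an account with two alarms weighted [0.75, 0.8] and [0.2, 0.5] would take the higher product (0.6), then add 0.01 per alarm, giving 0.62; accounts can then be sorted on this value to produce the prioritised list.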
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person with an understanding of the teachings herein.
Claims (15)
1. A method of prioritising alarms in an account fraud detection system comprising the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric weights.
2. A method according to claim 1 wherein said step of computing comprises the step of:
forming a product of a plurality of said numeric weights.
3. A method of prioritising alarms in an account fraud detection system comprising the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account;
computing a fraud score for each of said one or more alarms responsive to said numeric weights;
computing an account fraud score responsive to said one or more fraud scores.
4. A method according to claim 3 wherein said step of computing a fraud score for each of said one or more alarms comprises the step of:
forming a product of a plurality of said numeric weights.
5. A method according to claim 3 wherein said step of computing an account fraud score comprises the step of:
selecting a largest of said one or more fraud scores.
6. A method according to any one of claims 3 - 5 wherein said step of computing an account fraud score comprises the step of:
imposing a numeric bound on the value of said account fraud score.
7. A method according to any one of claims 3 - 6 wherein said step of computing an account fraud score comprises the step of:
adding a term dependent on the number of alarms raised.
8. A method of prioritising alarms in an account fraud detection system comprising the steps of:
performing the method of any one of claims 3 - 7 on a plurality of accounts whereby to compute an account fraud score for each of said accounts;
providing a sorted list of accounts responsive to said account fraud scores.
9. A method according to claim 8 additionally comprising the step of:
displaying said sorted list of accounts.
10. A method according to claim 9 wherein the step of displaying said sorted list of accounts comprises the step of:
displaying with each account an indication of its associated account fraud score.
11. A method according to any one of claims 3 - 10 wherein said characteristics include one or more characteristics drawn from the set consisting of: alarm capability, alarm sub-capability, velocity, bucket size, and account age.
12. Apparatus arranged for prioritising alarms in an account fraud detection system comprising:
first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
second apparatus arranged to compute a fraud score for said alarm responsive to said numeric weights.
13. Apparatus arranged for prioritising alarms in an account fraud detection system comprising:
first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account;
second apparatus arranged to compute a fraud score for each of said one or more alarms responsive to said numeric weights;
third apparatus arranged to compute an account fraud score responsive to said one or more fraud scores.
14. Software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric weights.
15. Software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account;
computing a fraud score for each of said one or more alarms responsive to said numeric weights;
computing an account fraud score responsive to said one or more fraud scores.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9910111.5 | 1999-04-30 | ||
GBGB9910111.5A GB9910111D0 (en) | 1999-04-30 | 1999-04-30 | Account fraud scoring |
PCT/GB2000/001669 WO2000067168A2 (en) | 1999-04-30 | 2000-04-28 | Account fraud scoring |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2371730A1 true CA2371730A1 (en) | 2000-11-09 |
Family
ID=10852648
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002371730A Abandoned CA2371730A1 (en) | 1999-04-30 | 2000-04-28 | Account fraud scoring |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1224585A2 (en) |
AU (1) | AU4422700A (en) |
CA (1) | CA2371730A1 (en) |
GB (1) | GB9910111D0 (en) |
IL (1) | IL146373A0 (en) |
WO (1) | WO2000067168A2 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7606721B1 (en) | 2003-01-31 | 2009-10-20 | CDR Associates, LLC | Patient credit balance account analysis, overpayment reporting and recovery tools |
US7817791B2 (en) | 2003-05-15 | 2010-10-19 | Verizon Business Global Llc | Method and apparatus for providing fraud detection using hot or cold originating attributes |
US7774842B2 (en) * | 2003-05-15 | 2010-08-10 | Verizon Business Global Llc | Method and system for prioritizing cases for fraud detection |
US7971237B2 (en) | 2003-05-15 | 2011-06-28 | Verizon Business Global Llc | Method and system for providing fraud detection for remote access services |
US7783019B2 (en) | 2003-05-15 | 2010-08-24 | Verizon Business Global Llc | Method and apparatus for providing fraud detection using geographically differentiated connection duration thresholds |
US9420448B2 (en) | 2007-03-16 | 2016-08-16 | Visa International Service Association | System and method for automated analysis comparing a wireless device location with another geographic location |
US8116731B2 (en) | 2007-11-01 | 2012-02-14 | Finsphere, Inc. | System and method for mobile identity protection of a user of multiple computer applications, networks or devices |
US9922323B2 (en) | 2007-03-16 | 2018-03-20 | Visa International Service Association | System and method for automated analysis comparing a wireless device location with another geographic location |
US9432845B2 (en) | 2007-03-16 | 2016-08-30 | Visa International Service Association | System and method for automated analysis comparing a wireless device location with another geographic location |
US9185123B2 (en) | 2008-02-12 | 2015-11-10 | Finsphere Corporation | System and method for mobile identity protection for online user authentication |
US8374634B2 (en) | 2007-03-16 | 2013-02-12 | Finsphere Corporation | System and method for automated analysis comparing a wireless device location with another geographic location |
US8280348B2 (en) | 2007-03-16 | 2012-10-02 | Finsphere Corporation | System and method for identity protection using mobile device signaling network derived location pattern recognition |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5819226A (en) * | 1992-09-08 | 1998-10-06 | Hnc Software Inc. | Fraud detection using predictive modeling |
GB2303275B (en) * | 1995-07-13 | 1997-06-25 | Northern Telecom Ltd | Detecting mobile telephone misuse |
EP0890255B1 (en) * | 1996-03-29 | 2004-08-04 | Azure Solutions Limited | Fraud monitoring in a telecommunications network |
GB2321364A (en) * | 1997-01-21 | 1998-07-22 | Northern Telecom Ltd | Retraining neural network |
-
1999
- 1999-04-30 GB GBGB9910111.5A patent/GB9910111D0/en not_active Ceased
-
2000
- 2000-04-28 EP EP00925506A patent/EP1224585A2/en not_active Withdrawn
- 2000-04-28 WO PCT/GB2000/001669 patent/WO2000067168A2/en active Application Filing
- 2000-04-28 IL IL14637300A patent/IL146373A0/en unknown
- 2000-04-28 AU AU44227/00A patent/AU4422700A/en not_active Abandoned
- 2000-04-28 CA CA002371730A patent/CA2371730A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
AU4422700A (en) | 2000-11-17 |
WO2000067168A3 (en) | 2002-04-25 |
WO2000067168A2 (en) | 2000-11-09 |
EP1224585A2 (en) | 2002-07-24 |
GB9910111D0 (en) | 1999-06-30 |
IL146373A0 (en) | 2002-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7457401B2 (en) | Self-learning real-time prioritization of fraud control actions | |
US6597775B2 (en) | Self-learning real-time prioritization of telecommunication fraud control actions | |
US6535728B1 (en) | Event manager for use in fraud detection | |
US7117191B2 (en) | System, method and computer program product for processing event records | |
US7783019B2 (en) | Method and apparatus for providing fraud detection using geographically differentiated connection duration thresholds | |
US7971237B2 (en) | Method and system for providing fraud detection for remote access services | |
US8340259B2 (en) | Method and apparatus for providing fraud detection using hot or cold originating attributes | |
EP0890256B1 (en) | Fraud prevention in a telecommunications network | |
JP2002510942A (en) | Automatic handling of fraudulent means in processing-based networks | |
CA2371730A1 (en) | Account fraud scoring | |
US20050222806A1 (en) | Detection of outliers in communication networks | |
EP0890255B1 (en) | Fraud monitoring in a telecommunications network | |
KR102200253B1 (en) | System and method for detecting fraud usage of message | |
WO2002096087A1 (en) | Variable length called number screening | |
US6466778B1 (en) | Monitoring a communication network | |
EP1427244A2 (en) | Event manager for use in fraud detection | |
CN116055196A (en) | Service detection method and device, electronic equipment and storage medium | |
MXPA98007770A (en) | Monitoring fraud in a telecommunication network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued | | Effective date: 20060428 |