MXPA00002500A - System and method for detecting and managing fraud - Google Patents

System and method for detecting and managing fraud

Info

Publication number
MXPA00002500A
Authority
MX
Mexico
Prior art keywords
fraud
layer
cases
rules
event
Prior art date
Application number
MXPA/A/2000/002500A
Other languages
Spanish (es)
Inventor
John Gavan
Kevin Paul
Jim Richards
Arkel Hans Van
Cheryl Herrington
Saralyn Mahone
Terril J Curtis
James J Wagner
Charles A Dallas
Original Assignee
Mci Communications Corporation
Priority date
Filing date
Publication date
Application filed by Mci Communications Corporation
Publication of MXPA00002500A

Abstract

A system, method and computer program product for processing event records. The present invention includes a detection layer (123), an analysis layer (133), an expert systems layer (139), and a presentation layer (143). The layered system includes a core infrastructure (1310) and a configurable, domain-specific implementation (1312). The detection layer (123) employs one or more detection engines, such as, for example, a rules-based thresholding engine (126) and a profiling engine (128). The detection layer can include an AI-based pattern recognition engine (132) for analyzing data records, for detecting new and interesting patterns and for updating the detection engines to ensure that the detection engines can detect the new patterns.

Description

SYSTEM AND METHOD FOR DETECTING AND MANAGING FRAUD
Reference to Related Applications: This patent application is related to the following commonly owned, co-pending United States patent application: "Information Network Hub", Serial Number 08/426,256, Attorney Docket Number 1643/0012, which is incorporated herein by reference.
Background of the Invention
Field of the Invention
The present invention relates to the processing of event records, such as, for example, telecommunications network event records.
Related Technology
As the telecommunications industry grows rapidly, telecommunications fraud grows with it. In the United States alone, telecommunications fraud was estimated to have cost $3 billion in 1995. Telecommunications service providers have had difficulty keeping up with new methods of fraud. As soon as providers implement new systems to detect current methods of fraud, criminals devise new methods. Current methods of fraud target all types of services. These services, and the corresponding fraud, include calling cards, credit cards, customer premises equipment (CPE), including private branch exchanges (PBXs), 1+ dialing, inbound 800 numbers, and cellular calls. In addition, international dialing is a frequent target of fraud because of its high service price. Subscription fraud, where a customer subscribes to a service, such as an 800 number or 1+ dialing, and then never pays, is also a frequent form of fraud. Existing methods for detecting fraud are based mainly on establishing predetermined thresholds and then monitoring service records to detect when a threshold has been exceeded. Parameters for these thresholds include the total number of calls in a day, the number of calls less than one minute in length, the number of calls over one hour in duration, calls to specific telephone numbers, calls to specific countries, calls originating from specific telephone numbers, and so on. Many parameters can be used to tailor a particular thresholding system to certain customers or services. These thresholds must be programmed manually, which is labor-intensive and time-consuming. In addition, these thresholds are generally subjective and are not based directly on empirical data. Furthermore, manually programmed thresholds are static and therefore do not adapt to changing patterns of fraud; it is therefore easy for criminals to detect and dupe them. Also, because these thresholds are set conservatively in order to detect most fraud, they are often exceeded by non-fraudulent calls, contributing to high rates of false alarms. When a threshold is exceeded, an alarm is generated and presented to an analyst, who must then analyze the alarm to determine whether it actually reflects fraud. The analyst must consult many data sources, such as the customer's payment history and service provisioning data, to assess the likelihood of fraud. The analyst must also evaluate many different alarms and correlate them to determine whether a fraud case is spreading across services. This manual process of analysis and correlation is time-consuming, labor-intensive, highly subjective, and error-prone.
When it has been determined that fraud has occurred, the analyst must then select an appropriate action and initiate it. These actions can include deactivating a calling card or blocking the ANI (Automatic Number Identification) from which calls originate. Because current fraud management systems are rigid and are generally not configurable for other service providers or industries, new rules, algorithms, routines, and thresholds must constantly be reprogrammed. What is needed is a configurable system, method, and computer program product to automatically detect and act upon new and evolving patterns, and that can be implemented in a variety of applications such as, for example, telecommunications fraud, credit card and debit card fraud, data mining, and so on.
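By way of illustration, the following minimal Python sketch shows the kind of static, manually programmed thresholding that the background above describes; the parameter names and limit values are hypothetical examples, not values taken from this disclosure.

```python
# Minimal sketch of the static thresholding approach described above.
# All parameter names and limit values are hypothetical illustrations.

STATIC_THRESHOLDS = {
    "calls_per_day": 200,        # total calls in a day
    "short_calls_per_day": 50,   # calls under one minute
    "long_calls_per_day": 10,    # calls over one hour
}

def check_static_thresholds(daily_counts: dict) -> list:
    """Return an alarm string for every manually set limit that was exceeded."""
    alarms = []
    for parameter, limit in STATIC_THRESHOLDS.items():
        if daily_counts.get(parameter, 0) > limit:
            alarms.append(f"{parameter} exceeded: {daily_counts[parameter]} > {limit}")
    return alarms

if __name__ == "__main__":
    # Example: a day with an unusually high number of very short calls.
    print(check_static_thresholds({"calls_per_day": 180, "short_calls_per_day": 75}))
```

Because such limits are fixed by hand, a sketch like this also illustrates the drawbacks noted above: the values do not adapt to changing fraud patterns and must be reprogrammed whenever the fraud methods change.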
SUMMARY OF THE INVENTION
The present invention is a system, method, and computer program product for processing event records. The present invention includes a detection layer for detecting certain types of activity, such as, for example, threshold violations and profile deviations, for generating alarms from them, and for analyzing event records for new patterns. The present invention also includes an analysis layer to consolidate the alarms into cases, an expert system layer to act automatically on certain cases, and a presentation layer to present the cases to human operators and to allow the human operators to initiate additional actions. The present invention combines a core infrastructure with configurable, user-specific or domain-specific implementation rules. The core infrastructure is used generically regardless of the type of network that is being monitored. The domain-specific implementation is provided with user-specific data and thus provides configurability to the system. The domain-specific implementation can include a user-configurable database to store the domain-specific data. The user-configurable database may include one or more databases, including, for example, flat file databases, object-oriented databases, relational databases, and so on. The user-configurable data can include conversion formats for normalizing the records and dispatch data that specify which fields of the normalized network event records should be sent to the different processing machines. In one embodiment, the present invention is implemented as a telecommunications fraud detection system in which the detection layer receives the network event records from the telecommunications network and detects possible fraudulent use of the telecommunications network. In another embodiment, the present invention is implemented as a credit card and/or debit card fraud detection system. In yet another embodiment, the present invention is implemented as a data mining system or a market analysis system. Regardless of the specific implementation, event records can come from a variety of sources. Event records are therefore preferably normalized before they are acted upon. The normalized event records are sent to one or more processing machines in the detection layer, depending on the specific embodiment being used. The normalization and dispatch functions include a core infrastructure and a configurable, domain-specific implementation. The detection layer may employ a plurality of detection machines, such as, for example, an initiation point machine, a profiling machine, and a pattern recognition machine. One or more of the detection machines can enhance event records before acting on them. Enhancement includes accessing external databases for additional information related to a network event record. For example, in a telecommunications fraud detection system, the enhancement data may include bill payment history data for a particular caller. An initiation point machine constantly monitors the normalized event records to determine when thresholds have been exceeded. When a threshold is exceeded, an alarm is generated. In a telecommunications fraud detection implementation, the initiation point can be based on call data generated prior to call termination, as well as conventional post-call data. The initiation point machine includes a core infrastructure and a configurable, domain-specific implementation.
The core infrastructure includes configurable detection algorithms. The domain-specific implementation includes user-specific initiation point rules. The rules can easily be tailored for specific uses and can be updated automatically, preferably with updates generated by the pattern recognition machine. In this way, the domain-specific implementation of the initiation point machine can employ complex initiation point rules that compare and aggregate different data and network event records. The underlying core infrastructure provides scalability to the domain-specific implementation. A profiling machine constantly monitors the normalized event records to determine when a deviation from a standard profile has occurred. When a deviation from a profile is detected, a corresponding alarm is generated. In a telecommunications fraud detection implementation, profiling can be based on call data generated prior to call termination, as well as conventional post-call data. The profiling machine includes a core infrastructure and a configurable, domain-specific implementation. The domain-specific implementation provides user-specific profiles. The profiles can easily be tailored for specific uses and can be updated automatically, preferably with updates generated by a pattern recognition machine. The core infrastructure provides scalability to the configurable, domain-specific implementation. A pattern recognition machine preferably employs artificial intelligence to monitor event records and to determine whether interesting or unusual patterns are developing. In a telecommunications fraud detection implementation, interesting or unusual patterns may indicate fraudulent or non-fraudulent use of the telecommunications network. The pattern recognition machine uses the new patterns to dynamically update both a rules database for the parametric initiation point and a profile database for the profile analysis. The pattern recognition machine includes a core infrastructure and a configurable, domain-specific implementation. The core infrastructure includes an AI pattern analysis processor to analyze the records and a call history database to store a history of previous records. The actual contents of the call history database are developed from actual use of the system and are thus part of the domain-specific implementation. By employing AI for pattern recognition, thresholds are dynamic and can be adjusted in accordance with changing patterns of fraud. The patterns and thresholds are based on real-time event data, as well as historical data derived from external sources. In addition, the pattern recognition data is fed into the profiling machine, which can then establish profiles representing normal and fraudulent calling patterns. Deviations from these profiles will trigger an alarm. In a telecommunications fraud detection implementation, a fraud probability is calculated for each alarm. The analysis layer receives the alarms from the detection layer and performs various analysis functions to generate cases. In a fraud detection implementation, the analysis layer correlates the alarms that are generated from common network incidents, builds suspected fraud cases from individual alarms, and prioritizes the cases according to their probability of fraud, so that there are likely to be fewer false positives at the top of the priority list than at the bottom.
The analysis layer includes a core infrastructure and a configurable, domain-specific implementation. The analysis layer uses a fraud case builder to correlate the multiple alarms generated by one or more detection layer machines. For example, a single event may violate one or more initiation point rules while simultaneously violating one or more profiling rules. The alarms can be consolidated into a single fraud case which lists each violation. The fraud case builder can also correlate over time. In this way, it can relate a subsequent event to previously received events. For example, a telephone call that is charged to a particular credit card may violate a threshold rule concerning the duration of a call. A subsequent call charged to the same credit card may violate the same rule or other initiation point rules or profiles. The fraud case builder can correlate all of these calls into a fraud case that indicates all of the violations associated with the credit card. Depending on the analysis rules of the implementation layer, the fraud case builder can also generate additional fraud cases based on the calling number, the number that was called, and so on. The domain-specific implementation of the analysis layer includes a configurable informant to retrieve data from external systems for use by an enhancer. A configuration database indicates the data necessary for enhancement. Preferably, the configuration database is a user-configurable database that includes one or more databases such as, for example, flat file databases, object-oriented databases, relational databases, and so on. The domain-specific implementation also includes rules for analyzing the alarms. The rules are user-specific and can be tailored as necessary. The expert system layer receives the cases from the analysis layer, carries out automatic analysis of the cases, and automates decision support functions. The expert system layer includes a prioritizer to prioritize cases, such as fraud cases, for example, and an informant to retrieve additional data from external systems. The informant interfaces with external systems using formats native to the external systems. The informant of the expert system layer is similar to the informants used by the detection and analysis layers. External systems provide data that can be used to determine whether a fraud case is so obvious that automatic action, such as termination of an account, is warranted. The expert system layer includes an enforcer to interface with external action systems. For example, in a fraud detection implementation, when the prioritizer determines that automatic action is required to stop a fraudulent activity, the enforcer sends the necessary commands to one or more external action systems, which implement the action. The enforcer includes a configurable, domain-specific implementation that includes user-specific interface protocols to interface with external action systems using formats native to those systems. The expert system layer includes a core infrastructure and a configurable, domain-specific implementation. The domain-specific implementation includes the prioritization rules that the prioritizer uses to prioritize cases. These rules are generally user-specific and are typically based on previous experience.
The domain-specific implementation also includes the action rules that the prioritizer uses to determine what action to take on fraud cases. The presentation layer receives cases for presentation to, and analysis by, human operators. Human operators can initiate actions independent of any action automatically initiated by the expert system layer. The presentation layer includes a core infrastructure and a configurable, domain-specific implementation. The present invention is scalable, configurable, distributed, and redundant, and can be implemented in software, firmware, hardware, or any combination thereof. The present invention employs Artificial Intelligence (AI) and Expert System technologies within a layered logical system architecture. In this way, the configurability of the detection criteria, the portability to multiple companies, and the ability to detect new fraud methods are improved. In addition, dynamic thresholds and automated analysis are provided. Additional features and advantages of the present invention, as well as the structure and operation of the various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.
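As a rough, hypothetical sketch of the four-layer flow summarized above (detection, analysis, expert system, presentation), the following Python fragment chains the layers together; all class names, fields, rules, and thresholds are illustrative assumptions rather than the claimed design.

```python
# Illustrative four-layer flow: detection -> analysis -> expert system -> presentation.
# Names and thresholds are hypothetical.

from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Alarm:
    key: str            # e.g. a calling card number or ANI
    rule: str           # which threshold or profile rule was violated
    probability: float  # estimated probability of fraud

@dataclass
class Case:
    key: str
    alarms: list = field(default_factory=list)

    @property
    def priority(self) -> float:
        # Simple illustrative prioritization: the highest alarm probability wins.
        return max(a.probability for a in self.alarms)

def detection_layer(events) -> list:
    alarms = []
    for e in events:
        if e["duration_minutes"] > 60:                   # threshold-style rule
            alarms.append(Alarm(e["card"], "long_call", 0.6))
        if e["country"] not in e["profile_countries"]:   # profile deviation
            alarms.append(Alarm(e["card"], "unusual_destination", 0.4))
    return alarms

def analysis_layer(alarms) -> list:
    cases = defaultdict(list)
    for a in alarms:                  # correlate alarms that share a common key
        cases[a.key].append(a)
    return [Case(k, v) for k, v in cases.items()]

def expert_system_layer(cases, auto_action_threshold=0.9):
    for case in sorted(cases, key=lambda c: c.priority, reverse=True):
        if case.priority >= auto_action_threshold:
            print(f"automatic action on {case.key}")
        yield case                    # every case still reaches the presentation layer

def presentation_layer(cases):
    for case in cases:
        print(f"case {case.key}: {len(case.alarms)} alarm(s), priority {case.priority:.2f}")

if __name__ == "__main__":
    events = [{"card": "CC-1", "duration_minutes": 75, "country": "ZZ",
               "profile_countries": {"US"}}]
    presentation_layer(expert_system_layer(analysis_layer(detection_layer(events))))
```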
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, serve further to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention. The present invention is described with reference to the accompanying Figures, wherein: Figure 1 is a block diagram of a multi-layered event record processing and detection system, including a detection layer, an analysis layer, an expert system layer, and a presentation layer, implemented as a telecommunications fraud detection system; Figure 2 is a high-level process flow diagram illustrating a method for detecting and acting upon fraud in a telecommunications system; Figure 3 is a block diagram of a distributed architecture for implementing the present invention; Figure 4 is a process flow diagram expanding upon step 214 of Figure 2, which illustrates a rule-based initiation point process, a profiling process, and a pattern recognition process; Figure 5A is a block diagram of a rule-based initiation point machine that can be used in the detection layer of the present invention; Figure 5B is a high-level block diagram of a function vector that can be used to represent the functions associated with data records; Figure 5C is a detailed block diagram of the function vector illustrated in Figure 5B; Figure 5D is a block diagram of an alternative embodiment of the rule-based initiation point machine described in Figure 5A; Figure 6 is a block diagram of a profiling machine that can be used in the detection layer of the present invention; Figure 7 is a block diagram of an artificial intelligence based pattern recognition machine that can be used in the present invention; Figure 8 is a process flow diagram illustrating a process for analyzing the alarms generated by the rule-based initiation point process and the profiling process of Figure 4 and for generating cases therefrom; Figure 9 is a block diagram of the analysis layer of Figure 1; Figure 10 is a process flow diagram illustrating a method for prioritizing fraud cases and for taking appropriate action on certain of these prioritized fraud cases; Figure 11 is a block diagram of the expert system layer of Figure 1; Figure 12 is a block diagram of the presentation layer of Figure 1; and Figure 13 is a block diagram illustrating a relationship between a core infrastructure and a user-specific, or domain-specific, implementation of the present invention. The present invention will now be described with reference to the accompanying drawings. In the drawings, like reference numbers typically indicate identical or functionally similar elements. Additionally, the leftmost digit(s) of a reference number typically identifies the drawing in which the reference number first appears.
Detailed Description of the Preferred Embodiments
Index of Topics
I. Overview
II. Example Environment
III. Processing of Event Records
A. Detection Layer
1. Normalization and Dispatch
2. Rule-Based Initiation Point
3. Profiling
4. Pattern Recognition
B. Analysis Layer
C. Expert System Layer
D. Presentation Layer
IV. Conclusions
I. Overview
The present invention is a configurable system, method, and computer program product for automatically detecting and acting on new and evolving patterns in, for example, telecommunications fraud detection, data mining, and market analysis (that is, segmenting potentially strong customers and detecting patterns). The present invention is a multilayer system that is scalable, configurable, distributed, and redundant, and that can be implemented in software, firmware, hardware, or any combination thereof. For example, the present invention can be implemented as a group of portable software programs. The multilayer architecture includes a detection layer for detecting thresholds, profiles, and patterns and for generating alarms, an analysis layer for analyzing the alarms and consolidating them into cases, an expert system layer for acting on the cases, and a presentation layer for presenting the cases to human users. With reference to Figure 13, the invention includes a core infrastructure 1310 that allows each layer to be implemented in a variety of applications without alteration. The invention also includes a configurable, rule-based, user-specific or domain-specific implementation 1312, which allows each layer to be tailored for specific applications. The domain-specific implementation 1312 allows the present invention to be configured for use in a variety of applications such as, for example, telecommunications fraud, credit card or debit card fraud, data mining, and so on. The core infrastructure 1310 and the domain-specific implementation 1312 can be implemented as software, firmware, hardware, or any combination thereof. The core infrastructure 1310 includes elements that are required regardless of the particular deployment environment. The domain-specific implementation 1312 includes user-specific data and functions such as initiation point rules, profiles, pattern recognition rules, alarm correlation and reduction rules, fraud case prioritization and action rules, presentation parameters, and external system interaction parameters. The core infrastructure 1310 can be used in a variety of applications without having to be redesigned. The domain-specific implementation 1312 allows substantial tailoring of the system to the user's specific situations. The domain-specific implementation 1312 includes configurable rules to provide flexibility. Configurable rules include event recognition rules, event normalization rules, enhancement rules, detection rules, analysis rules, prioritization rules, presentation rules, and dispatch rules, including provisioning rules. The event recognition rules specify how to recognize input events. Event normalization rules specify how to normalize events. Provisioning rules specify how to map detection rule sets to events, such as, for example, by customer or by product. Enhancement rules specify how to derive new information from existing information, such as how to derive a product identification from an input event. The detection rules specify how to generate alarms in reaction to suspected fraud events. The detection rules may include initiation point rules and usage profiles.
The detection rules also specify alarm priorities for the alarms generated in the detection layer, based on the type of rule that was violated. Analysis rules further prioritize alarms and specify how to correlate alarms into fraud cases. The prioritization rules specify how to prioritize cases for automatic actions. Presentation rules specify how to visually display information to users. The dispatch rules, which include the provisioning rules, are used to provide data to a rule set, to separate data, and to decide which machine the data will be sent to. The rules can be created or modified and take effect while the system is running. When a rule is created or modified, it applies to new events that arrive in the system. Generally, the rules do not apply to events that were previously received. When a rule is deleted, its removal does not affect the values or entities that were generated from the rule. For example, the removal of an alarm type definition rule does not affect existing alarms. The domain-specific implementation 1312 also includes configurable values. These may include, but are not limited to, one or more of the following values. A database time-out variable specifies a maximum amount of time to wait for a response from a database. For example, if a request for data is sent to an external database, the system will wait only for the time-out period. The configurable rules determine what action to take if a response is not received within the time-out period. An expected data volume from the data management layer variable specifies the number of messages expected to be received and the period of time over which to measure this expected number of messages. A data management layer time-out variable specifies the maximum time to wait for a message from the data management layer before sending a network alarm. A maximum age for arriving events variable specifies the maximum time between creation of an event and its arrival from the data management layer. This variable can be used to increment an old-event counter. A maximum number of old events variable specifies the allowable number of events older than the maximum age for arriving events. Typically, a network administration message is generated whenever this variable has been exceeded. A maximum number of invalid events variable specifies the maximum number of invalid events that can be received from the data management layer. Typically, a network administration message is generated whenever this variable has been exceeded. A high priority case threshold variable specifies a priority level above which cases are monitored if they remain unprocessed. A maximum unprocessed case time variable specifies the maximum time a case above the high priority case threshold may remain unprocessed before it is reported. A rule performance measurement period variable specifies a period of time over which the performance of the rules is measured. This variable is typically used for reporting purposes. A variety of purge time variables specify the time periods for which a variety of data items are stored. The data items may include invalid events, valid and normalized events, alarms, cases that were determined to be fraudulent, cases that were determined not to be fraudulent, and actions taken.
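A minimal sketch of how the configurable values listed above might be grouped, assuming hypothetical field names and default values that are not specified by this disclosure:

```python
# Illustrative grouping of the configurable values described above.
# Field names and defaults are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ConfigurableValues:
    database_timeout_seconds: int = 30        # max wait for an external database response
    expected_event_volume: int = 10_000       # expected messages per measurement period
    volume_measurement_period_s: int = 300    # period over which the volume is measured
    data_layer_timeout_seconds: int = 60      # max wait for a data management layer message
    max_event_age_seconds: int = 600          # max creation-to-arrival age before an event counts as old
    max_old_events: int = 100                 # old-event count that triggers a network administration message
    max_invalid_events: int = 50              # invalid-event count that triggers a network administration message
    high_priority_case_threshold: float = 0.8 # priority above which unprocessed cases are monitored
    max_unprocessed_case_seconds: int = 900   # max time a high-priority case may sit unprocessed
    rule_performance_period_s: int = 3600     # period over which rule performance is measured

# Values like these are meant to be editable while the system runs; a change
# applies to new events only, not to events already received.
config = ConfigurableValues()
print(config.database_timeout_seconds)
```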
II. Example Environment
The present invention can be configured for a variety of applications, such as, for example, detecting telecommunications fraud, detecting credit card and debit card fraud, data mining, market analysis, and so on. The present invention is described below as implemented as a telecommunications fraud detection system. The examples described herein are provided to assist in the description of the present invention, not to limit the present invention. Telecommunications systems can include any of a variety of types of telecommunications networks. A number of these telecommunications networks are depicted in the network layer 101 of Figure 1. The network layer 101 may include a Global/Inter-Exchange Carrier (IXC) public switched telephone network (PSTN) 102, which can include conventional IXC networks with domestic and global coverage, such as those of MCI Telecommunications and British Telecom (BT). A variety of services can be supported by these networks. The network layer 101 may also include cellular and wireless networks 104, which offer conventional analog and digital cellular services. Local exchange carrier (LEC) networks 106, such as those operated by the Regional Bell Operating Companies (RBOCs), independent local telephone companies, and Competitive Access Providers (CAPs), may also be included. A service control layer 107 offers and manages various telecommunications services and creates the service records, which contain the data representing each instance of a particular service offering. For example, the service control layer 107 can support the Global/Inter-Exchange Carrier PSTN 102 with a plurality of switches 108 that issue Call Detail Records (CDRs) for each voice and data call they process. In addition, a plurality of service control points (SCPs) 110 can be used to provide data and intelligence for enhanced services, such as virtual network services or the routing of an 800 call. SCPs issue records for each transaction they process. These records are referred to herein as Application Data Field (ADF) records. Intelligent networks (INs) 112 can be provided for enhanced service offerings, such as operator services. The components of the INs 112 can issue records for those services, commonly referred to as Billing Detail Records (BDRs), and bad billed number (BBN) records of the intelligent services. In addition, completed operator-assisted calls from the IN can be sent from a network information hub (NIC) as enhanced operator service records (EOSRs), which include an ISN BDR that is matched to a switch format (EOSR). Signal transfer points (STPs) 114 can be used for network signaling; the networks referred to as SS7 networks use these signal transfer points (STPs) 114 to process call signaling messages. The STPs 114 emit messages, such as, for example, Initial Address Messages (IAMs), which contain data relating to a call that the IXC network is carrying. The service control layer 107 may also employ cellular service control components 116 to issue standard AMA records for cellular calls handled by the cellular network 104. The service control layer 107 may include the LEC service control components 118 of a LEC network 106 to issue AMA records for local calls and the local exchange of long distance calls. A single call can traverse multiple networks and generate multiple call records.
ADFs, BDRs, and IAMs can be issued before the termination of a call. CDRs, EOSRs, and AMAs are issued after a call is terminated. A data management layer 119 collects the various service records from the service control components and processes them to produce network event records that can be used by a variety of systems. Processing the data to produce network event records can include separating the data among different distributed processors, reducing the data by eliminating redundancy and consolidating multiple records for the same call, and enhancing the data by augmenting the records with relevant data from external systems.
The data management layer 119 can be implemented in a variety of ways and can include a data separation, reduction, and enhancement component 120. Preferably, the component 120 is a Network Information Concentrator (NIC), as specified and claimed in the co-pending United States Patent Application Serial No. 08/426,256, which is hereby incorporated by reference in its entirety. The NIC 120 may use one or more reference databases 122 to provide the external data for enhancement. External reference data may include customer identification codes, service type codes, and network element codes. Typically, each of the telecommunications networks within the network layer 101 can handle or process any of a variety of call types, such as calling card calls, credit card calls, customer premises equipment (CPE) calls, 1+ dialed calls, toll-free 800 calls, and cellular calls. These can also include credit card and debit card transactions. Each of these call types is subject to fraudulent use. In this way, each of the telecommunications networks within the network layer 101 is affected by fraud.
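The following is a simplified, hypothetical sketch of the data management layer processing just described: consolidating multiple service records for a single call and enhancing the result with external reference data. The record fields and the reference table are illustrative assumptions.

```python
# Minimal sketch of service record consolidation and enhancement.
# Record fields and the reference table are hypothetical.

from collections import defaultdict

REFERENCE_DB = {"ANI-5551234": {"customer_id": "CUST-42", "service_type": "calling card"}}

def consolidate(service_records):
    """Group the records that describe the same call and merge them into one record."""
    by_call = defaultdict(dict)
    for record in service_records:
        by_call[record["call_id"]].update(record)   # merge fields from each record for the call
    return list(by_call.values())

def enhance(event_record):
    """Augment a consolidated record with external reference data, when available."""
    extra = REFERENCE_DB.get(event_record.get("ani"), {})
    return {**event_record, **extra}

if __name__ == "__main__":
    records = [
        {"call_id": "C1", "ani": "ANI-5551234", "record_type": "IAM"},
        {"call_id": "C1", "ani": "ANI-5551234", "record_type": "CDR", "duration": 125},
    ]
    for event in map(enhance, consolidate(records)):
        print(event)
```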
III. Processing of Event Records
The present invention provides a multilayer system, method, and computer program product for detecting and acting upon data patterns and thresholds. When implemented as a telecommunications fraud detection system to detect and act on fraud in one or more telecommunications networks, the present invention detects fraud by comparing network event records with initiation point rules and profiles. Violations result in the generation of alarms. Multiple alarms are correlated into fraud cases based on common aspects of the alarms, thus reducing the amount of analysis that must be performed on suspected fraud incidents. The system acts automatically on certain detected fraud cases to reduce the losses that derive from them. In addition, live analysts can initiate additional actions. In a parallel operation, call patterns are analyzed in the network event records to discern new methods or patterns of fraud. From these newly detected methods, new thresholds and profiles are automatically generated to protect the telecommunications system. With reference to Figure 1, the present invention is illustrated as implemented as a fraud detection system 169. The present invention includes a detection layer 123, an analysis layer 133, an expert system layer 139, and a presentation layer 143. In Figure 2, a process flow diagram illustrates a method for detecting and managing fraud in a telecommunications system, such as the one shown in Figure 1. The process can be performed with software, firmware, hardware, or any combination thereof. The process begins at step 210, where the service control layer 107 generates the service records for the calls handled by the telecommunications systems in the network layer 101. The formats of the service records and the data contained in them vary according to the type of call and the network equipment that handles a particular call, as described above. Because a single call can traverse multiple networks, a single call can generate multiple call records. In step 212, the service records are processed by the data management layer 119 to generate the network event records. This processing includes separating the data among the different distributed processors, reducing the data by eliminating redundancy and consolidating multiple records for the same call, and enhancing the data by augmenting the records with relevant data from external systems. In step 214, the network event records are analyzed by the detection layer 123 for possible fraud. Step 214 is further detailed in the flow diagram of Figure 4, as described below. The detection layer 123 specifies and executes the tests to detect fraudulent use of the services of the network layer 101. The detection layer 123 is part of the infrastructure and is scalable and distributed, with a configurable component to allow customization according to the user's requirements. Preferably, the detection layer 123 includes three classes of processing machines, which are three distinct but related software processes that operate on similar hardware components. Preferably, these three classes of machines include a rule-based initiation point machine 126, a profiling machine 128, and a pattern recognition machine 132. These scalable and distributed machines can operate together or separately and provide the system with unprecedented flexibility.
A normalization and dispatch component 124 can be used to normalize the network event records and to dispatch the normalized records to the different processing machines. Normalization is a process or set of processes for converting network event records that were formatted in various ways into normalized formats for processing within the detection layer 123. Preferably, the normalization process is dynamic, because the normalized formats can be varied according to the needs of the user. Dispatching is a process that uses separation rules to pass some subset of the normalized network event records to particular fraud detection and learning paths. In this way, where a particular processing machine requires only a subset of the available information, time and resources are conserved by sending only the necessary information. The rule-based initiation point machine 126 constantly reads the real-time network event records from the network information concentrator 120 and compares these records with the selected initiation point rules. If a record exceeds an initiation point rule, the event is assumed to be fraudulent and an alarm is generated. The initiation point alarms are sent to the analysis layer 133. The profiling machine 128 constantly reads the real-time network event records from the network information concentrator 120 and from other possible data sources, which can be specified in the implementation layer by each user architecture. The profiling machine 128 then compares the event data with the appropriate profiles from the profile database 130. If an event represents a deviation from an appropriate profile, a fraud probability is calculated based on the extent of the deviation, and an alarm is generated. The profiling alarm and the assigned fraud probability are sent to the analysis layer 133. Preferably, in step 214, the network event records are also analyzed in real time by the pattern recognition machine 132, which is based on artificial intelligence. This AI analysis detects new fraud profiles so that the threshold rules and profiles can be updated dynamically to correspond to the latest fraud methods. The pattern recognition machine 132 allows the detection layer 123 to detect new fraud methods and to update the fraud detection machines, including machines 126 and 128, with new rules and threshold profiles, respectively, as they are developed. In order to detect new fraud methods and to generate new thresholds and profiles, the pattern recognition machine 132 operates on all network event records, including the data from the network information concentrator 120 through all other levels of the system, to discern anomalous call patterns that may be indicative of fraud. The pattern recognition machine 132 collects and stores volumes of event records in order to analyze call histories. Using artificial intelligence (AI) technology, the pattern recognition machine 132 analyzes the call histories to learn the normal patterns and to determine whether interesting, abnormal patterns emerge. When one of these abnormal patterns is detected, the pattern recognition machine 132 determines whether the pattern should be considered fraudulent. The AI technology allows the pattern recognition machine 132 to identify, using historical data, the types of patterns to search for as fraudulent.
The pattern recognition machine 132 also uses external data from the billing and accounts receivable (AR) systems 136 as references for current accumulations and payment histories. These references can be applied to the pattern recognition analysis process as indicators of possible fraud patterns. Once the pattern recognition machine 132 has established the normal and fraudulent patterns, it uses these results to modify the initiation point rules within the initiation point machine 126. For example, the pattern recognition machine 132 may determine that credit card calls to a specific country which exceed 50 minutes in duration are fraudulent 80 percent of the time. The pattern recognition machine 132 may then modify an initiation point rule within the initiation point machine 126 so that an alarm will be generated if the event data that is received reflects that particular pattern. In this way, by dynamically modifying the threshold rules, the system can keep up with new and emerging fraud methods, thereby providing a key advantage over conventional parametric initiation point fraud detection systems. Similarly, once the normal and fraudulent patterns have been established by the pattern recognition machine 132, the pattern recognition machine 132 updates the profiles within the profile database 130. This allows the profiles to be modified dynamically to keep up with new and emerging fraud methods. In step 216, the alarms are filtered and correlated by the analysis layer 133. For example, suppose that a threshold rule generates an alarm if more than ten credit card calls charged to a single credit card are made within a time frame of one hour. Suppose also that another threshold rule generates an alarm if more than one call is charged to a particular credit card at the same time. If ten calls are placed within one hour using the same credit card, and the ninth and tenth calls were made simultaneously (two different callers using the same credit card number), then two alarms will be generated at the same time: an alarm for exceeding ten calls per hour and one for exceeding one call per card at the same time. A correlation scheme for step 216 may combine the two alarms into a single fraud case which indicates that a particular credit card has exceeded two different threshold rules. Furthermore, if a pattern recognition machine is used, a new threshold rule can be generated to cause an alarm to be generated upon any future attempted use of the identified credit card. The alarms generated by the detection layer 123 are sent to the analysis layer 133. The analysis layer 133 analyzes the alarm data, correlates the different alarms that are generated from the same events or related events, and consolidates these alarms into fraud cases. This reduces redundant and cumulative data and allows fraud cases to represent related fraud occurring across multiple services. For example, different alarms may be received for the possibly fraudulent use of calling cards and cellular phones. The correlation process within the analysis layer 133 may determine that fraudulent use of a credit card on a cellular phone is occurring. An alarm database 138 stores the alarms that are received from the detection layer for correlation. The analysis layer 133 prioritizes fraud cases according to their likelihood of fraud, so that there are likely to be fewer false positives at the top of the priority list than at the bottom.
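A hedged sketch of how a newly learned pattern, such as the 50-minute example above, might be fed back into the initiation point rule set; the rule representation and the figures are illustrative only and do not reflect a prescribed rule syntax.

```python
# Illustrative dynamic rule update of the kind described above.
# The list below stands in for an initiation point rules database.

threshold_rules = []

def add_rule(name, predicate, fraud_probability):
    """Dynamically install a rule; it applies only to events that arrive afterwards."""
    threshold_rules.append({"name": name, "predicate": predicate,
                            "probability": fraud_probability})

def evaluate(event):
    """Return an alarm dict for every installed rule that the event violates."""
    return [{"rule": r["name"], "key": event["card"], "probability": r["probability"]}
            for r in threshold_rules if r["predicate"](event)]

# Pattern recognition outcome: credit card calls to country "XX" longer than
# 50 minutes were found fraudulent 80 percent of the time (hypothetical country
# code echoing the example in the text).
add_rule("long_call_to_XX",
         lambda e: e["country"] == "XX" and e["duration_minutes"] > 50,
         fraud_probability=0.8)

print(evaluate({"card": "CC-7", "country": "XX", "duration_minutes": 62}))
```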
Fraud cases that are generated because an authorized user occasionally exceeds a threshold, or because of an abnormal calling pattern by an authorized user, such as calling from a new country while on a business trip, are therefore typically assigned a lower priority. Preferably, the analysis layer 133 employs artificial intelligence algorithms for prioritization. Alternatively, the rules of the detection layer 123 can be customized to avoid these alarms in the first place. In a preferred embodiment, the analysis layer 133 includes a software component 134 which performs the consolidation, correlation, and reduction functions. The software component 134 makes use of external data from the billing and AR systems 136 in the correlation and reduction processes. Preferably, the alarm database 138 resides on the same hardware as the software component 134. In step 218, the consolidated fraud cases are sent to the expert system layer 139 to automatically execute one or more tasks in response to certain types of fraud cases. Thus, in the previous example, the automatic action may include notifying the credit card company responsible for the suspected fraud, so that it can take preventive action against the fraud. In addition, any pending calls can be terminated if this functionality is supported by the network. Preferably, the expert system layer 139 includes an expert fraud analysis system 140, which applies expert rules to determine priorities and appropriate actions. An off-the-shelf expert system can be used. Preferably, however, a custom expert system is employed and programmed using a rule-based language appropriate for expert systems, such as, for example, CLIPS. Typically, the algorithms for step 218 are designed to identify certain suspected fraud cases which are so extraordinary that automatic responses are appropriate. However, these algorithms can also be designed to withhold automatic responses where, for example, the suspected fraud is not so flagrant or not as potentially expensive, or where certain mitigating circumstances exist, such as, for example, a history of similar activity for which the customer later responded or paid. The expert system 140 includes interfaces to several external systems for the purpose of performing different actions in response to the fraud that was detected. These may include a system 144 for issuing demand letters, a system 146 for issuing deactivation notifications, and a system 148 for instituting switch-based ANI blocks. The expert system 140 may include an interface to a service provisioning system 142 for retrieving data relating to the services provided to a customer and for initiating actions to be taken on a customer's service. The expert system 140 can employ artificial intelligence to control the execution of automatic or semi-automatic actions. Regardless of whether or not automatic responses are generated, it is important to provide all suspected fraud cases to live operators, so that they can take actions that the automatic system cannot take. Thus, in step 220, all fraud cases, both those that triggered automatic responses in step 218 and those that did not, are sent to a presentation layer 143 for presentation to human analysts.
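The following sketch illustrates, under purely hypothetical rules and field names, how expert-layer action rules of the kind described above might weigh a mitigating circumstance (such as a payment history retrieved through an informant) before deciding on an automatic action.

```python
# Illustrative expert-layer action decision. The payment history lookup stands
# in for an informant querying an external billing / AR system; all rule
# bodies, field names, and thresholds are hypothetical.

def payment_history(account: str) -> dict:
    # Stand-in for retrieving external data about the account.
    return {"CC-7": {"paid_similar_activity_before": True}}.get(account, {})

def decide_action(case: dict) -> str:
    history = payment_history(case["account"])
    if history.get("paid_similar_activity_before"):
        return "refer to analyst"          # mitigating circumstance: withhold automatic action
    if case["priority"] >= 0.9:
        return "deactivate account"        # flagrant case: automatic action warranted
    return "refer to analyst"

print(decide_action({"account": "CC-9", "priority": 0.95}))  # -> deactivate account
print(decide_action({"account": "CC-7", "priority": 0.95}))  # -> refer to analyst
```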
The presentation layer 143 preferably includes a plurality of workstations 152a-152n connected to each other and to the expert system 140 via a local area network (LAN) 150, a wide area network (WAN), or another suitable interconnection system. Throughout the rest of this document, where reference is made to the LAN 150, it should be understood that a WAN or any other suitable interconnection system can be substituted. Workstations 152a-152n can be conventional personal computers and can operate as clients running specific software which provides a graphical user interface (GUI) to the system. The fraud data that has been collected and processed by the detection, analysis, and expert system layers can then be presented to human analysts at workstations 152a-152n via the LAN 150. The presentation layer 143 also allows human analysts operating from workstations 152a-152n to initiate actions to be taken in response to the detected fraud. These actions are executed through interfaces with different external systems. The presentation layer 143 may include a flexible, customized scripting language which is part of the infrastructure component of the system. In the example above, if the automatic system does not trigger an automatic action, a human operator could nevertheless contact the credit card issuer or owner to inform them that the credit card is being used to make two simultaneous credit card calls and that the credit card has been used to make more than ten calls within one hour. If the issuer or credit card owner indicates that the calls are authorized and the bill will be paid, no further action need be taken. The live operator may even enter data into the system so that, in step 214, the threshold rules or profiles are altered for this particular credit card so that similar use of the card in the future will not generate alarms. Alternatively, where the issuer or credit card owner indicates that the calls are not authorized, the live operator can take action to immediately disconnect the calls, if they are still in progress and if the monitored network supports this functionality. In addition, the live operator may enter data into the system so that, in step 214, alarms are immediately generated if an attempt is made to bill even a single call to that credit card. The live operator can also enter data into the system so that, in step 218, any alarm generated by that credit card produces an immediate automatic response that includes termination of the attempted call. Alternatively, the operator can initiate deactivation of the card so that calls based on that card are blocked before any substantial analysis is performed. With reference to Figure 3, a preferred physical architecture of the fraud detection system 169 is a client/server based system which operates over a LAN, WAN, or other system, which may be the LAN 150. Preferably, the logical components forming the detection layer 123, the analysis layer 133, and the expert system layer 139 operate on one or more servers, such as the servers 310a-310n. These logical components preferably include an initiation point machine 126, a profiling machine 128, a profile database 130, a pattern recognition machine 132, an alarm consolidation, correlation, and reduction component 134, an alarm database 138, and an expert fraud analysis system 140.
The network event records 314 are provided to the servers 310a-310n from the data management layer 119, which preferably includes the network information concentrator 120 described above. External systems 316 can provide data for the enhancement of internal processes and data in the system. The external systems 316 may include the billing and accounts receivable component 136 and several other systems that the detection layer uses. The fraud analyst workstations 152a-152n provide an interface between the fraud detection system 169 and human analysts.
A. Detection Layer
With reference to Figure 4, a process flow diagram illustrates the different processes that can be performed by the detection layer 123. These processes include a rule-based initiation point process (steps 412-420), a profiling process (steps 422-428 and 420), and a pattern recognition process (steps 430-438). Systems for implementing the processes of Figure 4 are provided in Figures 5A, 5D, 6, and 7. In a preferred embodiment, the system is implemented in a distributed processing architecture. For example, the system can be implemented on a plurality of server components 310a-310n.
1. Normalization and Dispatch
Normalization is a process for converting network event records of different formats into normalized formats recognized by each of the multiple fraud detection and learning paths. Preferably, the normalization process is dynamic, because the normalized formats can be varied in accordance with the implementation. Dispatching is a process for forwarding the normalized network event records to the particular fraud detection and learning paths in the detection layer 123. Dispatching includes provisioning rules and separation rules. The provisioning rules determine which rule set or detection method within the detection layer 123 will receive the data. The separation rules balance the loads among the multiple processors that are assigned to a rule set or detection method. With reference to Figures 5A, 5D, 6, and 7, the normalizer and dispatcher 124 is provided to normalize and dispatch the network event records 501 that were sent from the data management layer 119. The network event records include those that are created when a call is terminated, such as a CDR, EOSR, or AMA, and those that are created during a call and received by the data management layer 119 prior to call termination, such as an IAM, BDR, BBN, or ADF. The operation of the normalizer and dispatcher 124 is described with reference to the flow diagram of Figure 4. In step 408, the normalizer and dispatcher 124 receives the network event records 501. The normalizer and dispatcher 124 preferably includes a core infrastructure 1310 and a user-specific or domain-specific implementation 1312. A normalizer 502 converts the network event records 501 from the different formats of the data management layer into the normalized formats required by the detection layer. In step 410, the normalizer 502 uses the configuration data 504 to convert the network event records 501 into normalized event records 506. The configuration data 504 is user-dependent and is therefore part of the domain-specific implementation 1312. The normalized network event records 506 are then sent to the dispatcher 508, which employs the user's specific dispatch rules to pass the normalized network event records 506 to the appropriate detection layer machines 126, 128, and 132. In one embodiment, the dispatcher 508 provides the normalized network event records 506a, which are sent to the rule-based initiation point machine 126, the normalized network event records 506b, which are sent to the profiling machine 128, and the normalized network event records 506c, which are sent to the pattern recognition machine 132. The normalizer 502 also stores the network event records in a database 125 for use by one or more machines within the detection layer 123. The events are preferably stored for a period of time that the user can designate.
A storage period can, for example, be set to twenty-four hours. The dispatcher 508 allows the fundamental process performed by the normalizer 502 to be configured for any company by programming data specific to the company's requirements into the dispatcher 508. The normalizer 502, the configuration data 504, and the dispatcher 508 also allow quick and simple updates to the normalization process as modifications are made to the downstream detection processes.
2. Rule-Based Initiation Point
Initiation point processing is a process by which network event records are compared with threshold rules. In a telecommunications fraud detection system, where the network event records represent telephone calls, the network event records are compared with the threshold rules to determine whether the network event records represent possible fraudulent use of a telecommunications network. With reference to the process flow diagram of Figure 4, steps 412-420 represent a rule-based initiation point process. In Figure 5A, details of the rule-based initiation point machine 126 that implements the initiation point process of steps 412-420 are provided. In describing the elements and processes of the initiation point machine 126, the following terms are used. An event, which is represented by a normalized event record 506a, generates a function in the initiation point machine 126; functions are defined below. In a telecommunications fraud detection implementation, an event is typically a telephone call. A generating event is an event that caused a function to be generated. A generating event is typically the most recent event in a series of events that are measured to calculate a function value. A contributing event is an event that contributed to a function but did not cause the function to be generated. While a function can consist of a single event, it typically consists of a generating event and several contributing events. Each event that is received is first a generating event, because when it is received it generates the calculation of a function. The event can then become a contributing event for the calculation of other functions. A key is an event field, such as the ANI, calling card number, call destination, and so on. A key is used to identify a type of event. A function is the information used by the initiation point detector 520 to determine whether or not there is evidence of fraud. A function can be, for example, the number of calls made with a certain calling card number in a two-hour period. A function can also be, for example, the NPA of a call. The functions are calculated by the enhancer 510 in accordance with the enhancement rules 512 and with the data from one or more events. There are generally two types of functions: single-event functions and multi-event functions. For example, a single-event function can be a call made from an insecure ANI. Multi-event functions are derived from a generating event and zero or more contributing events. Multi-event functions are the result of measurements made on a key over a period of time, such as, for example, a measurement of the number of calls made with a certain calling card number within a period of time. With reference to Figure 5B, functions can be represented by function vectors, such as, for example, function vector 518.
With reference to Figure 5B, functions can be represented by function vectors such as, for example, function vector 518. A function vector, such as function vector 518, includes one or more function vector segments 532, 534, 536, 537, etc., identifying a collection of functions. In a preferred embodiment, the enhancer 510 generates function vectors 518 and passes them to the threshold detector 520. Referring to Figure 5C, a preferred embodiment of the function vector 518 is provided wherein each function vector segment 532-536 preferably includes a function name field, or key function field, 538-542, respectively, to identify a particular key function. For example, a key function field may indicate a particular calling card number, an ANI, a credit card account number, a called number, and so on. Each function vector segment 532-536 includes a function value field 544-548, respectively, to provide a value for the associated function name field 538-542. For example, if the function name field 538 identifies a particular calling card number as the key function represented by function vector segment 532, the function value field 544 can provide the number of calls made during the last two hours with that calling card. Similarly, if the function name field 540 identifies a particular ANI, the function value field 546 may provide the number of calls during the past two hours from that ANI. Generating event fields 550-554 identify a generating event for each function vector segment 532-536. Recall that a generating event is an event that caused a function to be generated. A generating event is typically the most recent event counted in a value field 544-548. For example, where the value field 544 specifies that four calls were made in the last two hours with a particular calling card, the most recent of those four calls is the generating event for function 532. Contributing event fields 556-560 and 562-566 represent the events that contributed to function vector segments 532 and 536, respectively. Using the previous example, where the value field 544 specifies four calls during the past two hours and the generating event field 550 represents the fourth of those calls, the contributing event fields 556-560 represent the three previous calls. Note that function vector segment 534 does not include contributing events. Function vector segment 534 therefore represents a single-event function such as, for example, a call from a hot ANI. Although the generating event fields 550-554 can identify different generating events, the function vector segments 532-536 are nevertheless related by one or more common aspects of their generating events and/or contributing events. For example, function vector segment 532 may represent calling card calls made during the last two hours with a particular calling card, so that the generating event 550 and the contributing events 556-560 represent the calls made during, say, the past two hours with that calling card. In addition, function vector segment 534 may represent a hot ANI, and the generating event 552 may identify a single instance of a call made from the hot ANI. Function vector segments 532 and 534 are said to be related if one or more of the generating event 550 and the contributing events 556-560 identify a call made with the calling card represented by function 532 from the hot ANI represented by the generating event 552.
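The function vector of Figures 5B and 5C can be pictured with the following minimal data-structure sketch. The class and field names are hypothetical and are chosen only to mirror the key function name, function value, generating event and contributing event fields described above; they are not the actual implementation.

```python
# Hypothetical sketch of a function vector and its segments (cf. Figure 5C).
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionSegment:
    key_name: str                   # e.g., "calling_card_number"
    key_value: str                  # e.g., "123456789"
    value: float                    # measured value, e.g., calls in the last two hours
    generating_event: str           # identifier of the event that triggered the measurement
    contributing_events: List[str] = field(default_factory=list)  # earlier counted events

@dataclass
class FunctionVector:
    segments: List[FunctionSegment] = field(default_factory=list)

# A multiple-event function (four calling-card calls in the last two hours) together
# with a related single-event function (a call from a hot ANI) in one vector.
vector = FunctionVector(segments=[
    FunctionSegment("calling_card_number", "123456789", 4.0,
                    generating_event="evt-104",
                    contributing_events=["evt-101", "evt-102", "evt-103"]),
    FunctionSegment("hot_ani", "202-555-1234", 1.0, generating_event="evt-104"),
])
print(len(vector.segments), "related segments")
```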
Referring again to Figure 5A, the initiation point machine 126 preferably employs a core infrastructure 1310 and a domain-specific implementation 1312. The core infrastructure 1310 includes an enhancer and threshold detection component 509 for applying configurable rules to the network event records 506a and for generating alarms 526 in the event that one or more network event records 506a violate one or more of the configurable rules. The enhancer and threshold detection component 509 includes at least one enhancer component 510 and one or more threshold detection components 520. The enhancer component 510 receives the network event records 506a and generates a function vector identifying one or more key functions associated with the event records 506a. Key functions can include, for example, the ANI, the credit card number, the calling number, and so on. A network event record that triggers the generation of a function vector is referred to as a generating event. The domain-specific implementation 1312 includes an improvement rules and configuration database 512 and a threshold detection rules database 522. The databases 512 and 522 contain rules that can be created, deleted or modified in accordance with the evolving needs of a user. Changes to the initiation point rules 522 can even be made while the system is running; when an initiation point rule is created or modified, it is applied to new events arriving in the system. With reference to Figure 4, in step 412 the enhancer component 510 receives the normalized event records 506a within the initiation point machine 126. Each event, when received, is treated as a generating event, because it generates the calculation of a function vector. The threshold enhancer component 510 enhances the normalized event records 506a to produce the function vectors 518. The improvement rules specify the fields to be saved, omitted, formatted and augmented. The enhancement may provide additional data enabling the threshold detector 520 to apply the appropriate threshold detection rules. The improvement rules and configuration data 512 specify the data required for enhancement, where to find this data, and how to add it to the event records. The initiation point enhancer's configuration data 512, analogous to the normalizer configuration data 504, provides modularity to the system both for ease of configurability and for portability to fraud detection in other businesses. The threshold enhancer component 510, based on instructions from the improvement rules and configuration data 512, can request enhancement data from an informant 514. The informant 514 provides a communication interface to one or more external systems 516 from which additional data needed by the enhancer 510 can be retrieved. External systems 516 can include customer order entry systems and network design systems. Based on the request the informant 514 receives from the threshold enhancer component 510, the informant 514 sends a query to an appropriate external system 516 using that external system's communication protocol. The informant 514 receives the required data and supplies it to the enhancer 510. The informant 514 thus provides modularity that allows different external systems 516 to be added and removed by simply modifying the interfaces within the informant 514.
The initiation point machine 126 can thus be interfaced to a variety of external systems 516 and can be reconfigured simply for the fraud detection systems of other companies. When the enhancer 510 receives an event record 506a, the enhancer 510 determines the event type based on a key. For example, if the event is a calling card call, a key could be the calling card number field of the event record. The enhancer 510 looks up a rule set for that event type, based on the provisioning rules and the improvement rules and configuration data 512. A rule set includes one or more rules specifying how to calculate functions for an event type. A generating event can trigger the calculation of one or more functions. A rule defines a function and requests that the function be calculated using a certain type of measurement. The resulting function value is placed in a function vector 518. Multiple types of measurements can be performed by the enhancer 510, according to the specifications in the improvement rules and configuration data 512. Each measurement type includes an algorithm used to calculate the value of a function. For example, the measurement types may include, but are not limited to, any of the following: 1) simple count: counts the events in a given period of time (i.e., the number of calls in the past two hours); 2) field count: counts the events that meet a criterion for a certain value of an event field (i.e., the number of calls with ANI = 202-555-1234); the enhancer 510 looks up the field in the event and, if the field value equals the specified value, adds the event to the list to be counted; 3) set count: counts the events that meet a criterion for a set of values of a field, so that if a field in an event has a value that is a member of a set (as defined by an improvement rule), the enhancer 510 counts the event (i.e., the number of calls originating in Texas, New Mexico, Arizona, or California); 4) sum: sums a certain field over one or more events in a given period of time (i.e., sums the duration of all calls made in the last 2 hours); 5) simultaneous: counts the calls (meeting certain criteria) that were made at the same time, defined by a minimum overlap of call duration or a maximum time separation (i.e., counts all calls made with calling card number = nnn that overlap by at least 2 seconds, or that were made within 10 seconds of another call made with the same calling card number); and 6) geographic velocity: simultaneous calls over a distance. The rule provides a minimum necessary time between calls, based on the physical distance between the points of origin of the calls. For example, if a call made with a certain calling card in a first city is placed less than 4 hours after another call made with the same calling card in a second city, and the second city is more than 4 hours of travel time from the first city, then the two events are added to the list to be counted. To count the events using any of these measurement types, the enhancer 510 places each event in a list, the events being sequenced in time. Each rule specifies a period of time within which to include events for a measurement. To perform a count, the enhancer 510 starts with the most recent event (the generating event) in the list and counts the events in the list backward in time until the time period is covered.
As a new generating event is received, the enhancer 510 starts its count later in the list. This represents a sliding time window. For example, suppose a rule specifies a field count, such as "count all events in the past two hours in which the ANI equals a certain value". When a first event meeting the ANI criterion is received, it is a generating event and causes the enhancer 510 to retrieve this particular rule from the improvement rules and configuration data 512, using the ANI as a key. The enhancer 510 places this event in a list of events meeting the same criterion and then counts all the events in this list, going back two hours from the generating event. The other events are the contributing events. If another event meeting the same criterion is received with a time stamp 5 minutes later than the first event, this second event becomes a generating event. The enhancer 510 counts all events on the list, going back two hours from this second event. The first event becomes a contributing event for the second event, and the two-hour time window slides forward by 5 minutes. The six measurement types described above have the following characteristics in common: 1) each makes a measurement for a specific key (i.e., calling card number 202-555-1234); 2) each analyzes all events having the specific key and applies an algorithm to each event within the given time period (the sliding time window); 3) each returns a function value for the given period of time, which represents a time window (the time window is anchored at the most recent event and goes back in time from there); and 4) all of them are persistent.
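As a concrete illustration of the sliding-window measurements just described, the following sketch computes a simple count and a field count over a two-hour window ending at the generating event. The record layout and helper names are assumptions made for the example and do not correspond to the actual implementation.

```python
# Illustrative sliding-window measurements (simple count and field count).
from datetime import datetime, timedelta

WINDOW = timedelta(hours=2)

def window_events(events, generating_event):
    """Return the events that fall in the window ending at the generating event."""
    end = generating_event["time"]
    start = end - WINDOW
    return [e for e in events if start <= e["time"] <= end]

def simple_count(events, generating_event):
    return len(window_events(events, generating_event))

def field_count(events, generating_event, fld, value):
    return sum(1 for e in window_events(events, generating_event) if e.get(fld) == value)

t0 = datetime(1999, 1, 1, 10, 0)
history = [  # all events for one calling card, oldest first
    {"time": t0,                          "ani": "202-555-1234"},
    {"time": t0 + timedelta(minutes=30),  "ani": "212-555-9999"},
    {"time": t0 + timedelta(minutes=90),  "ani": "202-555-1234"},
]
generating = {"time": t0 + timedelta(minutes=95), "ani": "202-555-1234"}
history.append(generating)

print(simple_count(history, generating))                        # 4 calls in the window
print(field_count(history, generating, "ani", "202-555-1234"))  # 3 calls from that ANI
```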
Continuing with the process of Figure 4, the enhancer 510 performs a measurement on a generating event in accordance with a rule that is part of a rule set read from the improvement rules and configuration data 512. Each generating event causes the enhancer 510 to apply a rule set, a rule set including one or more rules. The result of a measurement is a function. Recall that a function includes a measured function value, a generating event, and zero or more contributing events. For performance reasons, each event in a function is represented by an event identifier, which points to an actual event record in the event database 125. A single generating event can result in one or more functions. The enhancer 510 creates a function vector and places each function in the function vector. The following example is provided for a rule, which may be, for example: if the calling card number = 123456789, then create a function vector; calculate a simple count: the number of calls made in the past 2 hours; calculate a set count: the number of calls from (the public telephone ANI list) made in the past 2 hours; and calculate a simultaneous count: the number of calls made within 10 seconds of another call in the past 2 hours. Suppose the enhancer 510 receives an event 506a representing a call made with calling card number = 123456789. This is a generating event. The enhancer 510 uses the calling card number as a key and retrieves the above rule from the improvement rules and configuration data 512. The enhancer 510 then executes the rule by creating a function vector 518. The enhancer 510 then reads a list of all the relevant events from the event database 125. Starting with the generating event, the enhancer 510 goes back two hours and counts all the events representing calls made with that calling card number; this is the simple count. The enhancer 510 goes back two hours and counts all the calls made with that calling card number from a public telephone ANI; this is the set count. Similarly, the enhancer 510 performs the simultaneous count. The result of each count, along with the identifiers of each event counted, is added as a function to the function vector 518. The enhancer 510 can also include in the function vector a threshold for a function. Thresholds are included if they are called for by an improvement rule, are provided by the improvement rules and configuration data 512, and are placed in the function vector as functions. For example, function 534 may represent a threshold for function 532. A threshold may be a value for a measurement (i.e., "5" calls) or a truth statement (i.e., if ANI = 202-555-1234). A truth statement is equivalent to a default value of 1 (i.e., if 1 call with ANI = 202-555-1234). A function does not include a threshold for itself; it is simply a measured value. A threshold for a function in the function vector can, however, be included as a function in its own right. The enhancer 510 makes no comparisons; it simply performs the measurements and creates the function vector. The comparison of function values with thresholds, to determine whether a threshold has been exceeded, is done by the threshold detector 520. Thresholds can also be obtained from the threshold detection rules 522 by the threshold detector 520 as part of the process of determining whether a threshold has been exceeded. This is described with reference to step 414, below. In step 414, the threshold rules are applied to events to determine whether or not fraud exists. In the preferred embodiment, the threshold detector 520 receives the function vectors 518 for application of the threshold rules. The threshold detector 520 is responsible for determining whether there is evidence of fraud in a function vector. The threshold detector 520 employs the threshold detection rules 522 to compare the function values against the thresholds. The threshold detection rules 522 specify how the comparisons should be made. A threshold for a function can be included as another function in the function vector, or it can be obtained from the threshold detection rules 522. A threshold is usually a value for a measurement, the default threshold being unity. A unity threshold is useful for true/false statements of a function. For example, if an ANI has been designated as a source of fraudulent calls, any call from that ANI is considered evidence of fraud; the threshold comparison is made simply by identifying a single event containing that ANI.
Each of the function vector segments 532, 534, 536 and 537 is a function, and each contains a function value. The threshold detection rules specify to the threshold detector 520 how to perform the comparisons to determine whether there is evidence of fraud. Threshold detection rules may include, for example, the following types of rules: 1) if A > "5", create evidence (the threshold for A is a value obtained from the threshold detection rules 522); 2) if A > B, create evidence (the threshold for A is another function in the function vector); 3) if A > B and B > C, create evidence (rules may be complex statements); and 4) if D, create evidence (the threshold is unity, useful for items such as a hot ANI or a stolen calling card number). If an explicit value for a threshold is not given, it is assumed to be unity. With reference to Figure 5D, in a preferred embodiment the enhancer and threshold detection component 509 includes two enhancer and threshold detector pairs 570 and 572. One pair, which may be pair 570, can be devoted to analyzing single-event functions, while pair 572 can be dedicated to analyzing multiple-event functions. Generally, the enhancers 510 perform complex calculations, as necessary for the different types of measurements, while the threshold detectors 520 perform simple comparisons. Thus, for added performance, as illustrated in the enhancer and threshold detector pair 570, one threshold detector 520 may be provided with two or more enhancers 510. This configuration provides higher and more uniform data throughput. The threshold detection rules 522 can be created and modified automatically, in real time, while the initiation point machine 126 is executing. Preferably, the rules can be modified in two ways, corresponding to two general rule formats. In a first general format, a rule can be a general statement that refers to specific values in a table. For example, a rule can read "if the number of calls from a public telephone ANI > nnn, create evidence", in which nnn is a pointer to a specific value in a table. Rules in this format can be dynamically modified or created by modifying or creating the specific values in the table. In a second general format, a rule can be hard-coded with specific values. For example, a rule can read "if the number of calls from a public telephone ANI > 10, create evidence". Rules in this format can be dynamically modified or created by modifying or creating the rule itself. The threshold detection rules 522 may vary according to the company using the machine. Preferably, the threshold rules stored in the database 522 can be modified dynamically without taking the machine off line. The threshold detection rules 522 can be created and modified both automatically, by the external pattern recognition machine 132, and manually, by a human analyst.
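The comparison step performed by the threshold detector 520 can be sketched as follows. The rule encoding shown here (simple predicates over named functions, with an explicit threshold, a threshold taken from another function in the vector, or a default threshold of unity) is a hypothetical illustration of the rule formats described above, not the actual rule language of the threshold detection rules 522.

```python
# Hypothetical threshold detection rules applied to the values in a function vector.
functions = {"card_calls_2h": 7, "payphone_calls_2h": 3, "hot_ani_call": 1}

rules = [
    # The threshold may be an explicit value, another function in the vector,
    # or omitted entirely (a truth statement, equivalent to a default of 1).
    {"name": "excessive card use", "function": "card_calls_2h", "threshold": 5},
    {"name": "payphone calls exceed total", "function": "payphone_calls_2h",
     "threshold_function": "card_calls_2h"},
    {"name": "hot ANI", "function": "hot_ani_call"},
]

def evaluate(rules, functions):
    """Return the names of rules for which evidence of fraud should be created."""
    evidence = []
    for rule in rules:
        value = functions.get(rule["function"], 0)
        if "threshold" in rule:
            threshold = rule["threshold"]
        elif "threshold_function" in rule:
            threshold = functions.get(rule["threshold_function"], 0)
        else:
            threshold = 1                      # default threshold of unity
        if value >= threshold:                 # evidence when the value meets or exceeds it
            evidence.append(rule["name"])
    return evidence

print(evaluate(rules, functions))              # ['excessive card use', 'hot ANI']
```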
In one embodiment, where the rules are generated automatically by the pattern recognition machine 132, the threshold detection rules database 522 is updated automatically by the pattern recognition machine 132. Alternatively, the threshold detection rules database 522 is updated manually by human analysts. In this alternative embodiment, the pattern recognition machine 132 is used to detect new fraud patterns, but instead of automatically creating a fraud detection rule, it notifies a human analyst and suggests a new rule. The human analyst can then enter the suggested rule or create a new rule. A variety of rule types can be implemented in the threshold detection rules 522. For example, the threshold detection rules 522 may include, but are not limited to, rules for generating one or more of the following types of alarms. A long duration (LD) alarm is generated if the duration of a single completed call exceeds a duration threshold X. LD thresholds can be set per product for call type categories such as targeted international, non-targeted international, and domestic. The call type category is determined from the international indicator in the normalized event. An origination/termination combination duration (OTCD) alarm is generated if a completed call originating from X and terminating at Y has a duration exceeding Z. The product can set the duration threshold Z for a combination of X and Y. The origination X and the termination Y can be specified as NPA-NXX, NPA or country code. No hierarchy is required for applying the most specific threshold; users could, however, implement a hierarchy if they wish. An event can generate more than one OTCD alarm.
An origination/termination combination attempt (ACTO) alarm is generated for a single attempt originating from X and terminating at Y. The product can set the ACTO alarms. The origination X and the termination Y can be specified as NPA-NXX, NPA or country code. There is no hierarchy for applying the most specific origination and termination combination. An event can generate more than one ACTO alarm. A hot originating attempt (HOA) alarm is generated when a call attempt originates from X, where the origination X is contained in a list of previously defined numbers. A hot terminating attempt (HTA) alarm is generated when a call attempt terminates to X, where the termination X is contained in a list of previously defined numbers. A hot originating completion (HOC) alarm is generated when a completed call originates from X, where the origination X is contained in a list of previously defined numbers. A hot terminating completion (HTC) alarm is generated when a completed call terminates to X, where the termination X is contained in a list of previously defined numbers. A deactivation hot originating attempt (DHOA) alarm is generated when a call attempt originates from a number from which a recently deactivated card originated; the measure of how recently the card must have been deactivated is a time parameter T. A deactivation hot terminating attempt (DHTA) alarm is generated when a call attempt terminates to a number to which a recently deactivated card terminated; the measure of how recently the card must have been deactivated is a time parameter T. A deactivation hot originating completion (DHOC) alarm is generated when a completed call originates from a number from which a recently deactivated card originated; the measure of how recently the card must have been deactivated is a time parameter T. A deactivation hot terminating completion (DHTC) alarm is generated when a completed call terminates to a number to which a recently deactivated card terminated; the measure of how recently the card must have been deactivated is a time parameter T. A PIN hacking origination (PHO) alarm is generated when a number X of attempts from the same origination fail PIN validation within time T. The number of attempts X accumulates across all PIN-validated products, based on the information digits. Call attempts with information digits indicating public telephones are tracked outside the count. A PIN hacking billed number (PHBN) alarm is generated when a number X of attempts on the same billed number fail PIN validation within time T. The billed number is calculated by excluding the last four digits of the BTN, that is, by excluding the 4-digit PIN. The number of attempts X accumulates across all PIN-validated products, based on the information digits. Call attempts with information digits indicating public telephones are tracked outside the count. A simultaneous international (SI) alarm is generated when a number X of completed international calls using the same auth code/BTN overlap in time by at least 2 minutes within a sliding time window. The product specifies the number X of international calls. The sliding time window T within which simultaneity is checked cannot exceed the purge time for normalized events. An international call is determined from the international indicator in the normalized event.
A simultaneous domestic (SD) alarm is generated when a number X of completed domestic calls using the same auth code/BTN overlap in time by at least 2 minutes within a duration T of the generating event. The product specifies the number X of domestic calls. The duration T, within which simultaneity is checked, cannot exceed the purge time for normalized events. A domestic call is determined from the international indicator in the normalized event. A simultaneous all-calls (SA) alarm is generated when a number X of completed calls using the same auth code/BTN overlap in time by at least two minutes within a duration T of the generating event. The product specifies the number X of calls. The duration T, within which simultaneity is checked, cannot exceed the purge time for normalized events. This alarm includes both international and domestic calls. A geographic velocity check is a check for calls using the same auth code plus PIN/BTN that originate from locations between which it would be impossible for a caller to travel during the interval between the calls. Geographic velocity check alarms can be calculated either by specifying the time for combinations of originations and terminations, or by specifying a latitude/longitude for each country or NPA together with a maximum travel speed and performing a time calculation. An international geographic velocity check (GVCI) alarm is generated when, for a number X of international call completion pairs using the same auth code plus PIN/BTN, each pair occurs within a time interval T1, each pair is non-simultaneous, and each pair occurs within a sliding time window T2. The product specifies the number X of call pairs. The interval T1 for a pair of calls is determined by the combination of the NPAs and/or country codes of the originating ANIs of the pair. The determination of whether the calls were made within a given interval is calculated from the difference between the termination time of the first call and the origination time of the second call. The sliding time window T2, within which the geographic velocity check is performed, cannot exceed the purge time for normalized events. An international call is determined from the international indicator in the normalized event. Similarly, a domestic geographic velocity check (GVCD) alarm is generated when a number X of domestic call completion pairs using the same auth code plus PIN/BTN would have been impossible for a single caller to make. An all-calls geographic velocity check (GVCA) alarm is generated when a number X of call completion pairs, regardless of domestic or international classification, using the same auth code plus PIN/BTN, would have been impossible for a single caller to make. For certain types of calls, such as toll-free or Dial 1 calls, for example, switch blocks can be configured to block calls to a set of countries. There are cases, however, where a country block on the switch fails. Accordingly, a failed country block (FCB) alarm can be generated if a call to a blocked country is made from a switch with a block in place. This type of alarm makes use of data indicating, by switch identification, the blocked country codes.
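A simplified geographic velocity computation of the kind used for the GVCI, GVCD and GVCA alarms might look like the following. The latitude/longitude coordinates, the assumed maximum travel speed and the function names are illustrative assumptions only and are not taken from the described system.

```python
# Hypothetical geographic velocity check between two completed calls.
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_TRAVEL_KMH = 800.0   # assumed maximum travel speed used to derive the minimum interval

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def velocity_violation(call_a, call_b):
    """True if a single caller could not have made both (non-simultaneous) calls."""
    gap_h = abs((call_b["origination"] - call_a["termination"]).total_seconds()) / 3600.0
    distance_km = haversine_km(call_a["lat"], call_a["lon"], call_b["lat"], call_b["lon"])
    return gap_h < distance_km / MAX_TRAVEL_KMH

first = {"lat": 40.71, "lon": -74.00,    # New York
         "termination": datetime(1999, 1, 1, 10, 0)}
second = {"lat": 34.05, "lon": -118.24,  # Los Angeles
          "origination": datetime(1999, 1, 1, 11, 0)}
print(velocity_violation(first, second))  # True: one hour is too short for ~3,900 km
```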
A completed call interval (CCI) alarm is generated when one or more completed calls on an auth code plus PIN/BTN exceed one or more thresholds. Thresholds can include cumulative call minutes, completed call counts and cumulative dollars. The thresholds can be based on the categories of targeted international, non-targeted international and domestic calls. The call type category is determined from an international indicator in the normalized event. CCI alarms can include detection rules comparable to current FMS limit and interval alarms. In step 416, when the threshold detector 520 receives a function vector, it reads through the threshold detection rules 522 and applies any rules that have been designated for any of the functions in the function vector. For example, in Figure 5C, if a function vector 518 includes functions such as, for example, functions 538, 540 and 542, the threshold detector 520 will apply any rule that includes comparisons for those functions. If the network event record does not exceed a threshold, no further action is necessary and the process stops at step 418. If, however, the network event record exceeds or violates a threshold rule, evidence, or an indication of possible fraud, is generated. An indication, or evidence, of fraud preferably includes at least one record containing a priority indicator, the account name, and a set of suspect events. The actual content of the evidence is defined by the implementation of the infrastructure. The set of suspect events includes the union of all events from the function event sets that resulted in the evidence of fraud. In step 420, if an indication, or evidence, of fraud was generated, the threshold detector 520 generates an alarm 526, which is passed to the analysis layer 133, preferably together with the evidence. Recall that all events received by the normalizer 502 are preferably stored in the event database 125 for a period of time that the user can designate; a storage period can, for example, be set to twenty-four hours. In addition, events identified in a set of suspect events can be kept longer for analysis purposes. Suspect events can be stored in the event database 125 with an indication of suspicion, or can be stored in a separate database, so that they are not purged after the typical storage period.
3. Profiling
Profiling is a process by which the dispatched normalized network event records are compared with profiles that represent normal and fraudulent use of one or more telecommunications networks. Profiling may require historical network event records to help determine whether a current network event record matches a profile of fraudulent use of a telecommunications network. With reference to Figure 4, steps 422-428 and 420 represent a preferred profiling process. Figure 6 provides details of the profiling machine 128 for implementing the profiling process of steps 422-428 and 420. The process begins at step 422, wherein the network event records 506b are sent to the profiling machine 128 and received by the profiling enhancer 624. The profiling enhancer 624 provides additional data to the profiling processor 634 that will enable it to apply the appropriate profile detection rules 636. The profiling enhancer 624 operates in a manner similar to the initiation point enhancer 510, except that the profiling enhancer 624 may use a different configuration data component 626, because different types of data may be needed to create an enhanced event record 632 for profiling.
Preferably, the enhancer components 510 and 624 have similar fundamental structures but operate differently through the use of the respective configuration data 512 and 626. The configuration data 626 specifies what data is required, where to find it and how to add it to the event records. The enhancer 624, based on the instructions of the configuration data 626, then requests this data from an informant 628. The informant 628 provides a communications interface to each of several external systems 630 from which the required data is retrieved. The informant 628, similar to the threshold informant 514, is used to retrieve the required data from the external systems 630. Again, the use of the modular configuration data 626 and informant 628 components provides the present invention with simplicity of configurability and portability. Based on the request received from the enhancer 624, the informant 628 sends a query to the appropriate external system 630, using that external system's communication protocol. The informant 628 receives the required data and supplies it to the enhancer 624. The enhancer 624 augments the normalized event records 506b using the data received from the informant 628, and thus creates an enhanced event record 632. In step 424, a profiling processor 634 receives the enhanced event records 632 to apply them against one or more profiles. Using certain parameters of the enhanced event record 632, the profiling processor 634 selects an appropriate profile detection rule 636, several of which are maintained in a database. The rules 636 determine which profile in the profile database 130 the event should be matched against. Profiles can be created in any number of ways, for example as user, product, geographic or global profiles. The profile database 130 may be an object database, a set of tables, a set of real numbers representing weighting factors for a neural network (as used in AI technology), or take various other forms. Preferably, profiles representing both normal and fraudulent patterns are stored. In a preferred embodiment, profile development and profile matching employ Artificial Intelligence (AI) technology. Although there are different AI systems for this purpose, the preferred embodiment uses statistics-based (rather than rule-based) algorithms to process the volumes of known normal and fraudulent patterns. Preferably, an AI-based profiling processor 634 also trains itself to formulate profile rules 636 that allow it to match events with profiles and to detect deviations from normal profiles. In step 426, the profiling processor 634 retrieves an appropriate profile from the profile database 130 and compares the event with the profile. If an event falls within the selected profile or profiles, the test stops at step 428. If, however, a deviation from the selected profile is detected, the profiling processor 634 generates an alarm 638 in step 420. Preferably, a probability of fraud based on the significance and degree of the deviation is calculated and expressed as a percentage or weighting factor. Preferably, at least step 426 is performed with the aid of Artificial Intelligence (AI) technology. The alarm 638 is then sent to the analysis layer 133.
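The profile comparison of steps 424-426 can be illustrated with a simple statistical stand-in. A real implementation would use the AI techniques described above; the z-score heuristic, the per-account profile fields and the mapping of the deviation to a fraud probability below are assumptions made purely for illustration.

```python
# Illustrative profile deviation check (a stand-in for the AI-based matcher).
from statistics import mean, pstdev

profile = {  # hypothetical per-account profile built from historical usage
    "daily_minutes": [34, 41, 28, 39, 36, 44, 31],
    "daily_intl_calls": [0, 1, 0, 0, 1, 0, 0],
}

def fraud_probability(profile_values, observed):
    """Map the deviation of an observed value from its profile onto a 0-1 weight."""
    mu, sigma = mean(profile_values), pstdev(profile_values) or 1.0
    z = abs(observed - mu) / sigma
    return min(z / 5.0, 1.0)          # crude squashing of the z-score into [0, 1]

event_summary = {"daily_minutes": 410, "daily_intl_calls": 9}
scores = {k: fraud_probability(profile[k], v) for k, v in event_summary.items()}
print(scores)   # a large deviation would accompany alarm 638 as a probability of fraud
```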
4. Pattern Recognition
Pattern recognition is a process by which event records are analyzed to learn and to identify normal and potentially fraudulent patterns of use in a telecommunications network. With reference to Figure 4, steps 430-438 represent a preferred process for pattern recognition and for updating threshold rules and profiles. Figure 7 provides details of the pattern recognition machine 132 for implementing the pattern recognition process of Figure 4. The pattern recognition machine 132 can receive feedback from other layers and can employ components that teach themselves fraudulent and non-fraudulent patterns based on that feedback. In step 430, the normalized event records 506c are sent to the pattern recognition machine 132, where they are received by the pattern recognition enhancer 740. The pattern recognition enhancer 740 operates much like the initiation point and profiling enhancers 510 and 624, respectively, except that the enhancer 740 employs a different configuration data component 742. Also, like the initiation point and profiling processes, the enhancer 740 uses an informant 744 to retrieve data from external systems 746. This data is used to enhance the normalized event records 506c to create the enhanced event records 748. In step 432, the enhanced event records 748 are sent to an update and storage component 750, which maintains a call history database 752. The update and storage component 750 enters each record 748 into the call history database 752. The call history database 752 contains the volumes of call data that can be analyzed by a pattern analysis processor 754. In step 434, the pattern analysis processor 754 analyzes the call histories from the call history database 752 to determine whether or not any interesting patterns emerge. Interesting patterns include patterns that could be fraudulent and patterns that could be non-fraudulent. Recognition of non-fraudulent patterns is important for minimizing the processing of non-fraudulent information. If an interesting pattern is detected, the pattern analysis processor 754 determines whether it is a fraudulent or non-fraudulent pattern. To achieve this, the pattern analysis processor 754 uses artificial intelligence technology to train itself in the identification of fraudulent patterns. By analyzing the volumes of events from the call history database 752, an AI-based pattern analysis processor 754 first determines the normal patterns and then looks for deviations that can be identified as fraudulent. The processor 754 then detects the patterns emerging from these deviations and identifies them as fraudulent patterns. There are several AI systems available for this purpose. Examples include tree-based algorithms that produce discrete outputs, neural networks, and statistics-based algorithms that use iterative numerical processing to calculate parameters. These systems are widely used for pattern recognition. By using an AI system for the pattern analysis processor 754, both normal and fraudulent patterns can be identified from the volumes of data stored in the call history database 752.
In step 436, the pattern analysis processor 754 uses the results of step 434 to modify the threshold detection rules 522 by means of the initiation point interface 756. By recognizing fraudulent patterns, certain initiation point rules can be updated to reflect the latest fraud patterns. For example, the pattern analysis processor 754 can detect an emerging fraudulent pattern of calling card calls made on weekends to a certain country from certain states in the United States of America. An initiation point rule can then be updated to generate an alarm 526 whenever such a call is made. In step 438, the pattern analysis processor 754 uses the results of step 434 to modify the profiles in the profile database 130 by means of the profiling interface 758. The pattern analysis processor 754 feeds the known fraudulent and normal patterns to the profiling processor 634. Using AI technology, the profiling processor 634 processes these patterns to construct the profiles identified as fraudulent or normal. In this way, through the use of AI-based pattern recognition, the invention allows fraud detection to keep pace with the most current fraud schemes. The threshold detection, profiling and pattern recognition processes are described as being performed substantially in parallel, mainly to reduce processing time. The processes can, however, be performed one after the other, or as some combination of parallel and non-parallel processing.
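The feedback path of step 436, in which discovered patterns update the initiation point rules, can be sketched as follows. The discovered pattern, the candidate rule format and the update function are hypothetical placeholders and do not represent the actual initiation point interface 756.

```python
# Hypothetical sketch of pattern-analysis feedback into the threshold rules.
threshold_rules = [
    {"name": "excessive card use", "function": "card_calls_2h", "threshold": 5},
]

def suggest_rule_from_pattern(pattern: dict) -> dict:
    """Turn a discovered fraudulent pattern into a candidate initiation point rule."""
    return {
        "name": f"pattern: {pattern['description']}",
        "function": pattern["function"],
        "threshold": pattern["observed_threshold"],
    }

discovered = {  # e.g., weekend calling-card calls to one country from certain states
    "description": "weekend card calls to country X from certain states",
    "function": "weekend_calls_to_country_X",
    "observed_threshold": 1,
}
candidate = suggest_rule_from_pattern(discovered)
threshold_rules.append(candidate)   # or, alternatively, queue the rule for analyst review
print(threshold_rules[-1]["name"])
```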
B. Analysis Layer
With reference to Figure 8, a process flow diagram for a preferred alarm analysis process is provided, in which the alarms generated in step 420 are analyzed by the analysis layer 133 to consolidate and correlate alarms into fraud cases. After the creation of a case, or the addition of a new alarm to a case, a case priority is calculated or recalculated. The case priority is calculated from configurable prioritization rules that can make use of any field of the case. The prioritization rules sort the cases so that there are likely to be fewer false positives at the top of the priority list. Prioritized cases are presented to an expert system and to a human analyst. With reference to Figure 9, details for implementing the process of Figure 8 are provided. The analysis layer 133 consolidates the alarms by examining the different functions of each alarm and correlating those that are possibly related to the same fraudulent activity. The analysis layer 133 consolidates the correlated alarms to construct fraud "cases", thus reducing the total amount of data that must be examined further. A fraud case consists of alarms that may cover several types of services but that are possibly related to the same events or callers. Preferably, the analysis layer 133 includes a core infrastructure portion and a user-specific implementation portion. Alarms are consolidated into cases according to analysis rules, or case types. Cases include alarms that have some aspect in common. Alarms can be consolidated into one or more cases, depending on the analysis rules used. Each alarm must belong to at least one case. If an alarm is created that matches an existing, non-closed case, the alarm is added to that case; otherwise, a new case is created for the alarm. In step 810, an alarm enhancer 902 receives alarms 526 from the initiation point machine 126 and alarms 638 from the profiling machine 128. Alarms 526 and 638 represent instances of possible fraud and designate the fraud service types, such as cellular, credit card, etcetera. The alarms 638 from the profiling machine 128 are preferably accompanied by a probability of fraud. In step 812, these alarms are enhanced. The alarm enhancer 902 is similar to the detection layer enhancers 510, 624 and 740. The enhancer 902 augments the alarms 526 and 638 to produce the enhanced alarms 910. The configuration data 904 specifies the additional information that may be needed and how that information should be added, based on the type of alarm received. Additional information can be retrieved from billing, accounts receivable, order entry, or various other external systems 908. For example, the informant 906 can access an accounts receivable system to determine that "this ANI has a past-due balance of $1,000". Similarly, the informant 906 can access an order entry system to determine that "this calling card number was deactivated two months ago". The informant 906 communicates with the different external systems 908 to retrieve the requested information. The alarm enhancer 902 then adds this information to the alarms 526 and 638 and produces the enhanced alarms 910. In step 814, the enhanced alarms 910 are sent to a fraud case builder 912 to correlate and consolidate related alarms into fraud cases. This correlation is based on common aspects of the alarms. An example of such a common aspect is "alarms that have the same calling card number".
The correlation is governed by the analysis rules 914, which can be programmed and maintained in a rules database. The rules 914 may use the fraud probability assigned by the profiling processor 634 as a parameter. For example, a rule can state "build cases only for alarms that have more than a 50 percent probability of fraud and that are generated for the same account". In operation, the fraud case builder 912 receives an enhanced alarm 910 and determines whether there is an existing case or cases in which to place the alarm. The fraud case builder 912 looks for functions that the alarm may have in common with existing cases, using the analysis rules 914. If no existing case is appropriate, the fraud case builder 912 creates a new case for the alarm. The fraud case builder 912 tries to avoid duplication of cases. The fraud case builder 912 also tries to avoid corruption of cases, which could otherwise occur because of the distributed platform of the invention. For example, in this parallel processing environment, multiple instances of the same process, such as updating a case, could occur. If two analysts are trying to update the same case with either identical or different data, the case builder 912 tries to ensure that the case reflects both sets of data if the data are different, and tries to ensure that duplicate cases are not created if the two sets of data are identical. The fraud case builder 912 can employ a case locking mechanism to achieve this goal. A main objective of the analysis layer is to reduce the amount of data that an analyst must examine. Thus, although an alarm can go into more than one case, an overall reduction of the data can still be achieved. For example, an alarm for a calling card call can designate both the card number and the ANI. The alarm can then be placed in a first case of alarms consolidated on a common card number, and in a second case of alarms consolidated on a common ANI. An overall reduction of alarms will generally be achieved, however, because the number of alarms that are consolidated exceeds the number of alarms that are placed in more than one case. In step 816, the fraud case builder 912 issues fraud cases 916 to the expert system layer 139 for further analysis and action.
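The consolidation of enhanced alarms into cases can be pictured as a grouping on shared key fields, as in the following sketch. The keys, the analysis rule and the in-memory grouping are simplifications assumed for the example; the described system runs on a distributed platform and uses a case locking mechanism that is not shown here.

```python
# Illustrative consolidation of alarms into fraud cases on shared keys.
from collections import defaultdict

analysis_keys = ("card_number", "ani")     # aspects the alarms may have in common

alarms = [
    {"id": "a1", "card_number": "123456789", "ani": "202-555-1234", "prob": 0.8},
    {"id": "a2", "card_number": "123456789", "ani": "415-555-7777", "prob": 0.6},
    {"id": "a3", "card_number": "987654321", "ani": "202-555-1234", "prob": 0.9},
]

cases = defaultdict(list)                  # (key name, key value) -> alarm identifiers
for alarm in alarms:
    for key in analysis_keys:
        if alarm.get(key) and alarm["prob"] > 0.5:   # an example analysis rule
            cases[(key, alarm[key])].append(alarm["id"])

for case_key, members in cases.items():
    print(case_key, members)
# Alarm a1 appears in both a card-number case and an ANI case, as described above,
# yet the total number of items to review is still reduced overall.
```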
C. Expert System Layer
With reference to Figure 10, a process flow diagram is provided for analyzing fraud cases and for acting automatically on certain cases.
The process analyzes the cases by prioritizing them, adding additional information relevant to each case, and performing automatic actions for some cases. Because some actions are performed automatically, the action rules triggering the automatic actions apply only to cases where there is a high probability of fraud or where the potential cost of the fraud is significant. Figure 11 provides details of the expert system layer 139 for implementing the process of Figure 10. In step 1010, the fraud cases 916 are sent from the analysis layer 133 and received by a prioritizer 1102. The prioritizer 1102 enhances the fraud cases 916, assigns them a priority, and determines whether any action should be executed automatically. In step 1012, the fraud cases 916 are enhanced. To enhance cases, the prioritizer 1102 uses the configuration data 1104, an informant 1106, and the external systems 1108. The configuration data 1104 specifies any additional information that may be needed for the fraud cases 916, where to find the data, and how to add the data to the fraud case. The informant 1106 serves as a communications interface to the external systems 1108. The informant 1106 sends a query to an appropriate external system 1108 using that external system's communication protocol. The informant 1106 receives the required data and supplies it to the prioritizer 1102. The prioritizer 1102 then adds the information to a fraud case, creating an enhanced fraud case 1114. In step 1014, the enhanced fraud cases are prioritized. To prioritize cases, the prioritizer 1102 uses prioritization rules that are maintained as part of the configuration data 1104. The prioritization rules are based on the rules experienced analysts use to assess and prioritize fraud cases. These rules are programmed into the configuration data 1104 as logical algorithms. Prioritization rules can also employ parameters retrieved from external systems, such as what the customer's recent revenue has been. Such parameters are useful for determining the potential cost of a fraud case, which influences the determination of priority. In step 1016, the prioritized fraud cases are analyzed and appropriate actions are specified based on the action rules. In determining whether or not to initiate an action on a case, the prioritizer 1102 uses the action rules held as part of the configuration data 1104. An action is a response to suspected fraud and involves an external system. Examples of actions include deactivating or activating cards, or modifying usage range privileges. Actions fall into categories such as automatic or semi-automatic actions, user-initiated actions, e-mail-initiated actions, or manual actions. Automatic or semi-automatic actions are initiated by an expert system following previously defined rules. The other actions are typically initiated by human analysts. Semi-automatic actions are initiated by an expert system under previously specified conditions. For example, under a previously specified condition of an excessive case backlog, the expert system can automatically execute deactivations for high-priority fraud cases. User-initiated actions are performed upon specific requests. Actions can include activations and deactivations of accounts such as telephone accounts, credit card accounts and debit card accounts.
E-mail-initiated actions are performed upon requests received from external groups, such as a customer service group, through an e-mail script. Actions can include activations and deactivations of accounts such as telephone accounts, credit card accounts and debit card accounts.
Manual actions are initiated by users external to the system and are executed independently of the system. The external user can request that the system record that the action was performed. The action rules are based on the rules experienced analysts use to specify the actions that should be taken on a fraud case. Actions can include deactivating a calling card number, placing a switch-based block on an ANI, or sending a notice to a customer. Action rules can be programmed as logical algorithms and can consider parameters such as priority (i.e., "for any case at priority level N, deactivate the account") and the type of service (i.e., cellular, calling card, Dial 1). Action rules can include data retrieved from external systems. The action rules are part of the implementation layer. The action rules for manual actions can refer to special handling instructions for acting on suspected fraud in certain customer accounts. For example, special instructions may indicate contacting a customer's fraud investigation unit rather than contacting the cardholder whose card is suspected of fraudulent activity.
The action rules are programmed into the configuration data 1104 as logical algorithms to be applied to the enhanced cases. These actions can be based on priority (i.e., "for any case at priority level N, deactivate the account"), the type of service (i.e., cellular, calling card, Dial 1), or the enhancement data retrieved from the external systems 1108. In step 1018, an enforcer 1110 executes the actions specified in step 1016 by interfacing with different external action systems 1112. The enforcer 1110 preferably resides on the servers 310a...310n. In operation, the enforcer 1110 receives an action request from the prioritizer 1102 and interfaces with an appropriate external action system 1112 to execute the action. The external action systems 1112 may include switch interface systems for switch-based ANI blocks, order entry systems for account deactivations, network control systems for calling card deactivations, an electronic mail system for e-mail notices to customers or internal staff, customer service centers, printing centers for mailing collection letters, and several other systems. Because these actions are carried out automatically, the action rules that trigger them preferably apply only to cases where there is a high probability of fraud or where the potential cost of fraud is significant. The prioritizer 1102, together with the prioritization rules and the action rules maintained as configuration data 1104, serves as an expert system, applying expert rules to data records to determine what actions to take. An off-the-shelf expert system can be used; it is preferable, however, to program a custom-built system using a logic-based language appropriate for expert systems, such as CLIPS. Cases for which an automatic action is not warranted are sent to the presentation layer 143 for further examination and potential action by a human analyst. Also sent to the presentation layer 143, as part of an enhanced case 1114, are data that are not useful to the automated expert system layer 139, such as text notes that a human analyst working at the workstations 152a-152n adds to a case.
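A compressed sketch of the prioritization and automatic-action step is shown below. The priority formula, the rule thresholds and the action names are assumptions introduced for illustration and do not correspond to the configured prioritization and action rules 1104.

```python
# Hypothetical prioritization and automatic action selection for enhanced cases.
def priority(case: dict) -> float:
    """Combine probability of fraud with potential cost (e.g., revenue at risk)."""
    return case["fraud_probability"] * case["exposure_dollars"]

def automatic_actions(case: dict):
    """Apply action rules only where probability or potential cost is high."""
    actions = []
    if case["fraud_probability"] > 0.9 and case["service"] == "calling_card":
        actions.append(("deactivate_card", case["card_number"]))
    if case["exposure_dollars"] > 10_000:
        actions.append(("notify_analyst", case["case_id"]))
    return actions

cases = [
    {"case_id": "c1", "service": "calling_card", "card_number": "123456789",
     "fraud_probability": 0.95, "exposure_dollars": 4_000},
    {"case_id": "c2", "service": "dial_1", "card_number": None,
     "fraud_probability": 0.55, "exposure_dollars": 20_000},
]
for case in sorted(cases, key=priority, reverse=True):
    print(case["case_id"], round(priority(case)), automatic_actions(case))
```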
D. Presentation Layer
Fraud cases 1114, including both those that warrant automatic action in the expert system layer 139 and those that do not, are sent to the presentation layer 143 for examination and potential action by a human analyst at the workstations 152a-152n. With reference to Figure 12, details of a presentation layer 143 are provided for interfacing the detection layer 123 and the analysis layer 133 with the human analysts working at the workstations 152a...152n, which are connected to the LAN 150. Preferably, the fraud cases 1114 include data generated in the preceding layers, such as the probability of fraud, information retrieved from external systems, and any actions that have already been taken. The presentation layer 143 preferably allows human analysts to retrieve and add further data and to perform actions similar to those performed by the expert system layer 139. The case enhancer 1202 receives the enhanced fraud cases 1114 from the expert system layer 139. Like the data enhancers in the preceding layers, the case enhancer 1202 uses configuration data 1204, an informant 1206, and external systems 1208 to augment the enhanced fraud cases 1114 with additional information relevant to presentation to an analyst. A presentation interface 1210 serves as an interface to the workstations 152a...152n, providing data for graphical presentation to the analyst. Fraud cases are presented in accordance with presentation rules 1212, which are programmed as logical algorithms within a database and are configurable. The presentation interface 1210 employs an informant 1214 and the external systems 1216 to retrieve additional information. This, however, is not automatic as in the preceding layers; rather, the informant 1214 retrieves data from the external systems 1216 based on commands from the analysts at the workstations 152a...152n. For example, an analyst can view a case and decide that a customer's payment history is needed before taking any action. The analyst, by means of a workstation 152a...152n, sends a command to the presentation interface 1210 requesting this data. The presentation interface 1210 then instructs the informant 1214 to retrieve this data from an external accounts receivable system 1216.
The presentation interface 1210 uses an enforcer 1218 to perform actions. This is not automatic, as it is in the expert system layer 139; instead, the enforcer 1218 performs actions based on the analysts' commands. For example, an analyst may decide that a switch-based block is needed on an ANI. The analyst, via a workstation 152a...152n, sends a command to the presentation interface 1210 requesting a switch-based ANI block. The presentation interface 1210 then instructs the enforcer 1218 to execute the command. The enforcer 1218 interfaces with an external switch interface action system 1220 to implement the switch-based ANI block. Other external action systems 1220 may be similar to those employed by the expert system layer 139.
IV. Conclusions
Although various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Therefore, the breadth and scope of the present invention should not be limited by any of the exemplary embodiments described above, but should be defined only in accordance with the following claims and their equivalents.

Claims (53)

1. A method for detecting fraud in a telecommunications system, comprising the steps of: (1) performing a plurality of types of fraud detection tests on network event records; (2) generating fraud alarms upon detection of suspected fraud by any of the fraud detection tests; (3) correlating the fraud alarms into fraud cases, based on common aspects of the fraud alarms; and (4) automatically responding to certain of the fraud cases.
2. The method according to claim 1, wherein step (1) comprises the steps of: (a) normalizing the network event records from any of a variety of formats into a standardized format; (b) sending the standardized network event records to at least one fraud detection machine; and (c) processing the sent portions in parallel in the plurality of fraud detection machines.
3. The method according to claim 2, wherein step (c) comprises the step of enhancing the sent standardized network event records before testing them for fraud.
4. The method according to claim 1, wherein step (1) comprises the steps of: (a) selecting a threshold rule from a plurality of threshold rules stored in a threshold rule database; and (b) determining whether or not a network event record violates the selected threshold rule.
5. The method according to claim 4, characterized in that it additionally comprises the step of: (c) updating the threshold rule database during run time.
6. The method according to claim 5, wherein step (c) comprises the steps of: (i) analyzing the network event records to identify new fraud methods; (ii) generating new threshold rules to detect the new fraud methods; and (iii) updating the threshold rule database with the new threshold rules.
7. The method according to claim 5, wherein step (c) comprises the steps of: (i) analyzing the network event records to identify new fraud methods using artificial intelligence; (ii) generating new threshold rules to detect the new fraud methods using artificial intelligence; and (iii) updating the threshold rule database with the new threshold rules.
8. The method according to claim 1, wherein step (1) comprises the steps of: (a) selecting a profile from a plurality of profiles stored in a profile database; and (b) determining whether or not a network event record violates the profile; wherein step (2) comprises the step of generating an alarm and a probability of fraud based on the degree of deviation, if a network event record violates the profile.
9. The method according to claim 8, characterized in that it additionally comprises the step of: (d) updating the profile database during run time.
10. The method according to claim 9, wherein step (d) comprises the steps of: (i) analyzing the network event records to identify new fraud methods; (ii) generating new profiles representative of the new fraud methods; and (iii) updating the profile database with the new profiles.
The method according to claim 10, wherein steps (ii) and (iii) are performed by means of artificial intelligence.
The method according to claim 1, wherein step (3) comprises the step of: (a) prioritizing fraud cases to indicate a probability of fraud associated with each of the fraud cases.
13. The method according to claim 12, wherein step (a) comprises the step of enhancing the fraud alarms with data before correlating.
14. The method according to claim 12, wherein step (a) comprises the steps of: (i) retrieving data from an external system; and (ii) enhancing the fraud alarms with the retrieved data before correlating.
15. The method according to claim 1, characterized in that it additionally comprises the steps of: (5) presenting fraud cases to live operators; and (6) responding manually to certain of these fraud cases.
16. A multi-layer fraud detection system for a telecommunications system, the telecommunications system comprising a network layer having at least one telecommunications network, a service control layer for manipulating the network layer and for generating service records containing data representing telecommunication events at the network layer, and a data management layer for receiving the service records from different components and processes of the service control layer and for reducing data by eliminating redundancy and consolidating multiple records into network event records, the multi-layer fraud detection system comprising: a detection layer for receiving the network event records from the data management layer, for testing the network event records for possible fraud and for generating alarms indicating incidences of suspected fraud; an analysis layer for receiving the alarms generated by the detection layer and for consolidating the alarms into fraud cases; and an expert system layer for receiving the fraud cases from the analysis layer and for acting on certain of these fraud cases.
17. The multilayer fraud detection system according to claim 16, wherein the detection layer comprises a core infrastructure and a configurable, domain-specific implementation.
18. The multi-layer fraud detection system according to claim 17, wherein the detection layer further comprises at least one fraud detection machine having a core infrastructure and a configurable, domain-specific implementation.
19. The multi-layer fraud detection system according to claim 18, wherein the detection layer further comprises: a network event normalizer for converting the network event records from any one of a plurality of formats into a standardized format, to be processed by the at least one fraud detection machine; and a dispatcher for sending portions of the standardized network event records to the at least one fraud detection machine.
20. The multi-layer fraud detection system according to claim 18, wherein the at least one fraud detection machine comprises a rules-based thresholding machine.
21. The multi-layer fraud detection system according to claim 18, wherein the at least one fraud detection machine comprises: a configurable enhancer that augments the event records with additional data; and a configurable informant for interconnecting the enhancer with an external system and for retrieving the additional data from the external system.
22. The multi-layer fraud detection system according to claim 21, characterized in that it further comprises: elements for interconnecting the informant with the external system in a format native to the external system; and a rules database comprising instructions for processing the enhanced event records to detect fraud.
23. The multi-layer fraud detection system according to claim 22, wherein: the at least one fraud detection machine includes a rules-based thresholding machine; and the rules database comprises threshold rules for use by the rules-based thresholding machine.
24. The multi-layer fraud detection system according to claim 22, wherein: the at least one fraud detection machine includes a profiling machine; and the rules database comprises profiles for use by the profiling machine.
25. The multi-layer fraud detection system according to claim 18, wherein the detection layer further comprises a pattern recognition machine that learns new fraud patterns and generates updates for the at least one fraud detection machine.
26. The multi-layer fraud detection system according to claim 16, wherein the analysis layer comprises a core infrastructure and a configurable, domain-specific implementation.
27. The multi-layer fraud detection system according to claim 26, wherein the analysis layer further comprises: a configurable alarm enhancer for augmenting fraud alarms with data; a configurable informant for interconnecting the alarm enhancer with an external system and for retrieving the additional data from the external system; and a configurable fraud case builder for consolidating the fraud alarms generated by the detection layer.
28. The multi-layer fraud detection system according to claim 27, wherein the domain-specific implementation of the analysis layer further comprises: elements for interconnecting the informant with the external system in a format native to the external system; and a database of analysis rules comprising instructions for the fraud case builder to filter and correlate fraud alarms into fraud cases in accordance with at least one common attribute.
29. The multi-layer fraud detection system according to claim 28, wherein the at least one common attribute is one of the following attributes: ANI; origin switch; credit card number; DNIS; destination country; destination geographical area; source area code; and type of calling equipment.
30. The multi-layer fraud detection system according to claim 16, wherein the expert system layer comprises a core infrastructure and a configurable, domain-specific implementation.
31. The multi-layer fraud detection system according to claim 30, wherein the domain-specific implementation of the expert system layer comprises: a configurable prioritizer that generates enhanced fraud cases, prioritizes the enhanced fraud cases, and directs actions on external action systems for certain of the enhanced, prioritized fraud cases; a configurable informant that interconnects the alarm enhancer with an external system and retrieves the additional data from the external system; and a configurable reinforcer that interconnects the prioritizer with an external action system and directs the execution of actions through the external action system based on commands generated by the prioritizer.
32. The multi-layer fraud detection system according to claim 31, wherein the domain-specific implementation of the expert system layer includes a configuration database, and wherein the configuration database comprises: elements for interconnecting the informant with the external system in a format native to the external system; and prioritization rules for use by the prioritizer.
33. The multi-layer fraud detection system according to claim 16, characterized in that it additionally comprises: a presentation layer that receives the prioritized fraud cases from the expert system layer and presents the prioritized fraud cases to live operators, wherein the presentation layer includes a core infrastructure and a configurable, domain-specific implementation.
34. The multi-layer fraud detection system according to claim 33, wherein the domain-specific implementation of the presentation layer comprises: a configurable case enhancer that enhances the prioritized fraud cases with additional data; a configurable presentation interface that distributes the enhanced, prioritized fraud cases to one or more work stations and sends action commands generated at the work stations to an external action system; a first configurable informant that interconnects the case enhancer with a first external system and retrieves data from the first external system; a second configurable informant that interconnects the presentation interface with a second external system and retrieves data from the second external system based on the commands generated at the work stations; and a configurable reinforcer that interconnects the work stations, through the presentation interface, with the external action system and directs the execution of actions through the external action system based on the commands generated at the work stations.
35. The multilayer fraud detection system according to claim 34, wherein the first and second external systems are each a part of the same external system.
36. The multi-layer fraud detection system according to claim 34, wherein the domain-specific implementation of the presentation layer further comprises: elements for interconnecting the informant with the first external system in an interconnection format that is native to the first external system; and configurable presentation rules for directing the presentation of the enhanced, prioritized fraud cases at the work stations.
37. A method for detecting fraud in a telecommunications system, comprising the steps of: (1) determining whether or not a network event record violates a selected threshold rule; (2) generating an alarm when the network event record violates the selected threshold rule; (3) determining whether or not the network event record deviates from a selected profile; and (4) generating an alarm when the network event record deviates from the selected profile.
38. The method according to claim 37, characterized in that it further comprises the step of: (5) performing steps (1) and (3) in parallel.
39. The method according to claim 37, characterized in that it also comprises the steps of: (5) analyzing a history of the network event records to identify normal patterns of use in a telecommunications system; and (6) analyzing a history of the network event records to identify fraudulent patterns of use in a telecommunications system.
40. The method according to claim 37, characterized in that it also comprises the steps of: (5) analyzing a history of the network event records to identify normal patterns of use in a telecommunications system, using artificial intelligence; and (6) analyzing a history of the network event records to identify fraudulent patterns of use in a telecommunications system, using artificial intelligence.
41. The method according to claim 40, characterized in that it also comprises the step of: (7) generating at least one of the following types of updates when a fraudulent pattern of use is identified: updates for a threshold rule database; and updates for a profile database.
42. A system for processing event records, comprising: a scalable core infrastructure that can be implemented in more than one application; and a configurable domain-specific implementation that includes configurable rules.
43. The system according to claim 42, wherein the core infrastructure is implemented as part of a telecommunications fraud detection system and the configurable, domain-specific implementation comprises: threshold rules for testing telecommunications network event records; and profiles for comparison with the telecommunications network event records.
44. The system according to claim 42, wherein the core infrastructure is implemented as part of a credit card fraud detection system and the configurable, domain-specific implementation comprises: threshold rules for testing credit card event records; and profiles for comparison with the credit card event records.
45. The system according to claim 42, wherein the core infrastructure is implemented as part of a data mining system and the configurable, domain-specific implementation comprises: threshold rules for testing data mining event records; and profiles for comparison with the data mining event records.
46. The system according to claim 42, wherein the core infrastructure is implemented as part of a consumer purchasing pattern analysis system and the configurable, domain-specific implementation comprises: threshold rules for testing consumer purchase event records; and profiles for comparison with the consumer purchase event records.
47. The system according to claim 42, characterized in that it also comprises: a detection layer that detects and normalizes event records, sends the event records to one or more detection machines, and generates alarms when an event record meets a condition; an analysis layer that receives the alarms from the detection layer and consolidates the received alarms into cases based on common features of the alarms; and an expert system layer that receives cases from the analysis layer and acts on certain cases.
48. The system according to claim 47, characterized in that the detection layer further comprises: elements for generating feature vectors to represent multiple occurrences of an event feature.
49. The system according to claim 42, characterized in that it also comprises: a presentation layer that receives the cases from the detection layer, presents the received cases to human analysts, receives commands from the human analysts, and sends instructions to external action systems to take actions based on the commands of the human analysts.
50. A computer program product comprising a computer usable medium having computer program logic stored therein, the computer program logic for enabling a computer system to process event records, wherein the computer program logic comprises: elements to enable the computer to test event records; elements to enable the computer to generate alarms upon certain tested conditions; elements to enable the computer to correlate the alarms into cases based on common aspects of the alarms; and elements to enable the computer to respond to certain cases.
51. The computer program product according to claim 50, characterized in that it also comprises: elements to enable the computer to present the cases to live operators; and elements to enable the computer to allow live operators to manually initiate responses to certain cases.
52. The computer program product according to claim 50, characterized in that it further comprises: configurable user elements to enable the computer to process user-specific types of event records; and core infrastructure elements to enable the computer to process a variety of types of event records.
53. A method for processing event records, comprising the steps of: (1) testing event records; (2) generating alarms upon certain tested conditions; (3) correlating the alarms into cases based on common aspects of the alarms; and (4) generating responses to certain cases.
54. The method according to claim 53, characterized in that it also comprises the steps of: (5) presenting the cases to live operators; and (6) allowing live operators to manually initiate responses to certain cases.
MXPA/A/2000/002500A 1997-09-12 2000-03-10 System and method for detecting and managing fraud MXPA00002500A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08928851 1997-09-12

Publications (1)

Publication Number Publication Date
MXPA00002500A true MXPA00002500A (en) 2001-06-26

Family


Similar Documents

Publication Publication Date Title
US6601048B1 (en) System and method for detecting and managing fraud
US6208720B1 (en) System, method and computer program product for a dynamic rules-based threshold engine
CA2215361C (en) Detecting possible fraudulent communications usage
EP1889461B1 (en) Network assurance analytic system
US5805686A (en) Telephone fraud detection system
US5596632A (en) Message-based interface for phone fraud system
US6570968B1 (en) Alert suppression in a telecommunications fraud control system
US6636592B2 (en) Method and system for using bad billed number records to prevent fraud in a telecommunication system
MXPA00002500A (en) System and method for detecting and managing fraud
US6590967B1 (en) Variable length called number screening
Wiens et al. A new unsupervised user profiling approach for detecting toll fraud in VoIP networks
WO2003009575A1 (en) Method and system for preventing fraud in a telecommunications system
AU690441C (en) Detecting possible fraudulent communications usage
MXPA97006554A (en) Detect possible fraudulent use in communication