US20210037137A1 - Method and system for fraud clustering by content and biometrics analysis


Info

Publication number
US20210037137A1
US20210037137A1 (application US16/740,518)
Authority
US
United States
Prior art keywords
clusters
database
interactions
analysis
call
Prior art date
Legal status
Granted
Application number
US16/740,518
Other versions
US10911600B1
Inventor
Roman FRENKEL
Matan Keret
Roman Shternharts
Itay KALMAN HAREL
Galya Julya Bar
Yaara Bar
Michal Daisey Momika
Current Assignee
Nice Ltd
Original Assignee
Nice Ltd
Priority date
Filing date
Publication date
Priority claimed from US16/525,606 (US11039012B2)
Application filed by Nice Ltd
Priority to US16/740,518 (US10911600B1)
Assigned to NICE LTD. Assignors: KERET, MATAN; BAR, GALYA JULYA; BAR, YAARA; DAISEY MOMIKA, MICHAL; FRENKEL, ROMAN; KALMAN HAREL, ITAY; SHTERNHARTS, ROMAN
Priority to US17/129,986 (US11108910B2)
Application granted
Publication of US10911600B1
Publication of US20210037137A1
Priority to US17/376,209 (US11252279B2)
Priority to US17/565,464 (US11503158B2)
Legal status: Active


Classifications

    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F21/552: Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • G06F16/35: Information retrieval of unstructured textual data; clustering; classification
    • G06F40/279: Natural language analysis; recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/35: Semantic analysis; discourse or dialogue representation
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26: Speech to text systems
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/04: Training, enrolment or model building
    • G10L17/06: Decision making techniques; pattern matching strategies
    • G10L17/08: Use of distortion metrics or a particular distance between probe pattern and reference templates
    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors
    • H04M3/42221: Conversation recording systems
    • H04M3/5166: Call or contact centers in combination with interactive voice response systems or voice portals
    • H04M3/5175: Call or contact centers supervision arrangements
    • H04M3/5183: Call or contact centers with computer-telephony arrangements
    • H04M3/5191: Call or contact centers with computer-telephony arrangements interacting with the Internet
    • H04M2201/38: Displays
    • H04M2203/6027: Fraud preventions
    • H04M2203/6045: Identity confirmation
    • H04M2203/6054: Biometric subscriber identification

Definitions

  • the present disclosure relates to the field of voice biometric security and analytics, content analysis and real-time authentication. More specifically, the present disclosure relates to a method and system for proactive fraudster exposure in a customer service channel by clustering fraudsters and displaying to a user a ranked list of potential fraudsters to add to a watchlist database.
  • Call centers are increasingly becoming a target for fraudsters via their customer service channels.
  • Call center frauds are one of the leading threats that organizations such as financial institutions face.
  • Fraudsters commonly attempt to retrieve information or change information of other legitimate customers by exploiting call center agents by social engineering. For example, fraudsters may conduct an attack on a financial institution by manipulating the call center agents to provide them with confidential information of legitimate customers and then use the extracted information to commit another fraud e.g., identity theft.
  • fraudsters may use information from social networks or public information to correctly answer knowledge-based questions during a call with an agent.
  • Fraudulent activity may take many shapes and forms. It may be performed via multiple frequent attacks or attempts on a singular legitimate customer account or on multiple customer accounts. The attacks may be via different channels such as mobile application, call-center calls or internet on different lines of business e.g., VIP handling agents. Another type of attack is a “targeted attack” in which the attack is targeted to a specific individual i.e., customer. Yet, another type of attack is “spread out attack” in which the attack is on various customers in the call center.
  • the computer-implemented method comprising: (a) collecting call interactions from a database of recorded calls in a customer service channel, the collecting performed by a calls collection engine based on predefined rules; (b) storing the collected call interactions in an interactions database; (c) analyzing the call interactions in the interactions database by a Proactive Fraud Exposure (PFE) engine, said analyzing comprising: (i) generating a voiceprint for each call interaction in the interactions database to be stored in a voiceprints database; (ii) using a machine learning technique to group the call interactions in the interactions database into one or more clusters based on respective voiceprints in the voiceprints database.
  • Each one of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints; (iii) storing the one or more clusters in a clusters database; and (iv) ranking and classifying the one or more clusters stored in the clusters database to yield a list of potential fraudsters, and (d) transmitting the list of potential fraudsters to an application to display to a user the list of potential fraudsters via a display unit, thus enabling said user to review said list of potential fraudsters and to add fraudsters from the list to a watchlist database.
  • the generating of voiceprints is performed by extracting i-vectors which represent a speaker effect and a channel effect.
  • the method further comprising detecting, in real-time, fraudsters who are stored in the watchlist database, in new call interactions to the customer service center via one of the multiple service channels.
  • the ranking is performed by at least one of: (i) inter-cluster statistics; and (ii) probability of representing a fraudster or any combination thereof.
  • the probability of representing a fraudster is calculated based on at least one of the following factors: (i) same voice on same claimed customer; (ii) same voice on different claimed customers; (iii) fraudulent behavioral characteristics of the call interaction, manifested in the voice; (iv) metadata representing details of a predefined line of business.
  • the method further comprising attributing a predefined weight value to each of the factors, wherein the ranking is further based on a weighted average of the factors.
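By way of a non-limiting sketch, the weighted-average ranking described above might be realized as follows. The factor names and the weight values here are illustrative assumptions; the claims leave the concrete weights to configuration:

```python
# Ranking clusters by a weighted average of fraud-likelihood factors.
# Factor names and weight values are illustrative, not specified by the patent.

FACTOR_WEIGHTS = {
    "same_voice_same_claimed_customer": 0.20,
    "same_voice_different_claimed_customers": 0.40,
    "fraudulent_behavioral_characteristics": 0.25,
    "risky_line_of_business": 0.15,
}

def cluster_rank(factor_scores):
    """Weighted average of per-factor scores, each assumed to be in [0, 1]."""
    total = sum(FACTOR_WEIGHTS.values())
    return sum(w * factor_scores.get(f, 0.0)
               for f, w in FACTOR_WEIGHTS.items()) / total

def rank_clusters(clusters):
    """Sort cluster descriptors (dicts of factor scores) from most to least suspicious."""
    return sorted(clusters, key=cluster_rank, reverse=True)
```

Under the weights chosen above, a cluster whose voice repeats across different claimed customers would outrank one that only repeats on a single account.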
  • the predefined rules are at least one of: (i) mismatch during customer authentication; (ii) business data; (iii) agents that are associated with a risk group or line of business; (iv) behavioral flows of the speaker; (v) call content analysis; and (vi) frequency of the call interactions or any combination thereof.
  • the analyzing is performed on audio or textual content.
  • the collecting is further based on automated machine-learning algorithms, such as phonetic speech and voice analysis.
  • the ranking further includes: (i) comparing each call interaction in the interactions database to all other call interactions in the interactions database to yield a matrix of comparisons; (ii) scoring each pair of call interactions based on the extracted i-vectors; (iii) retrieving from each row in the matrix of comparisons the pair of call interactions (i,j) with the highest score; and (iv) for each retrieved pair of call interactions (i,j), performing clustering.
  • the clustering is performed according to the following conditions: when the score of the pair of call interactions (i,j) is higher than a predefined threshold: a. when neither call interaction (i) nor (j) is assigned to a cluster, assign both interactions to a new cluster; b. when only one of the call interactions (i,j) is assigned to a cluster, add the unassigned call interaction to that cluster; c. when both call interactions are already assigned to clusters, merge those clusters into one. When the score of the pair of call interactions (i,j) is not higher than the predefined threshold, call interaction (i), i.e., the interaction whose row was examined, is assigned to a new cluster of its own.
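The clustering conditions above can be sketched as a single pass over the score matrix. The matrix layout (a dict of rows) and the example threshold are illustrative assumptions:

```python
def best_partner(matrix, i):
    """Pick the pair (i, j) with the highest score in row i."""
    return max((j for j in matrix[i] if j != i), key=lambda j: matrix[i][j])

def cluster_interactions(matrix, threshold):
    """Group call interactions by the patent's pairwise rules:
    above threshold -> new cluster / join existing / merge clusters;
    below threshold -> singleton cluster for the row's interaction."""
    cluster_of, clusters, next_id = {}, {}, 0
    for i in matrix:
        j = best_partner(matrix, i)
        if matrix[i][j] > threshold:
            ci, cj = cluster_of.get(i), cluster_of.get(j)
            if ci is None and cj is None:            # rule a: both unassigned
                clusters[next_id] = {i, j}
                cluster_of[i] = cluster_of[j] = next_id
                next_id += 1
            elif ci is None:                          # rule b: i joins j's cluster
                clusters[cj].add(i)
                cluster_of[i] = cj
            elif cj is None:                          # rule b: j joins i's cluster
                clusters[ci].add(j)
                cluster_of[j] = ci
            elif ci != cj:                            # rule c: merge clusters
                clusters[ci] |= clusters.pop(cj)
                for k in clusters[ci]:
                    cluster_of[k] = ci
        elif i not in cluster_of:                     # below threshold: singleton
            clusters[next_id] = {i}
            cluster_of[i] = next_id
            next_id += 1
    return clusters
```

For example, with a high mutual score between interactions A and B and low scores elsewhere, A and B end up in one cluster while C and D each form singletons.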
  • the classifying comprises calculating a confidence value for each cluster based on the inner ties between the call interactions in the cluster.
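One plausible reading of a confidence value based on "inner ties" is the mean pairwise similarity score between the interactions grouped in a cluster; this concrete formula is an assumption, not mandated by the claims:

```python
from itertools import combinations

def cluster_confidence(cluster, matrix):
    """Mean pairwise similarity score between the call interactions in a
    cluster (one plausible realization of 'inner ties'); a cluster with a
    single interaction gets confidence 0."""
    pairs = list(combinations(sorted(cluster), 2))
    if not pairs:
        return 0.0
    return sum(matrix[i][j] for i, j in pairs) / len(pairs)
```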
  • a computerized system for proactive fraudster exposure in a customer service center having multiple service channels may be configured to: (i) collect call interactions for analysis from a database of recorded calls in a customer service channel, the collecting performed by a calls collection engine based on predefined rules; (ii) store the collected call interactions in an interactions database; (iii) analyze the call interactions in the interactions database by a Proactive Fraud Exposure (PFE) engine, said analysis comprising: a. generating a voiceprint for each interaction in the interactions database to be stored in a voiceprints database; b.
  • Each one of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints; c.
  • a computer-implemented method for proactive fraudster exposure in a customer service center may comprise collecting call interactions from a database of recorded calls in a customer service channel. The collecting may be based on predefined rules by a calls collection engine.
  • the computer-implemented method may further comprise, after the collecting, storing the collected call interactions in an interactions database.
  • the computer-implemented method may further comprise, after the storing, performing a first type analysis to cluster the call interactions in the interactions database into ranked clusters and storing the ranked clusters in a clusters database.
  • the computer-implemented method may further comprise performing a second type analysis on a predefined number of the highest-ranked clusters to yield ranked clusters, and storing the ranked clusters in the clusters database.
  • the first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa.
  • the computer-implemented method may further comprise enabling a user to repeat, in a cyclic manner: performing the first type analysis to cluster the call interactions in the interactions database into ranked clusters and storing the ranked clusters in the clusters database; and performing the second type analysis on a predefined number of the highest-ranked clusters and storing the resulting ranked clusters in the clusters database.
  • the computer-implemented method may further comprise retrieving a list of potential fraudsters from the ranked clusters in the clusters database; and transmitting the list of potential fraudsters to an application to display the list to a user via a display unit, thus enabling the user to review the list of potential fraudsters and to add fraudsters from the list to a watchlist database.
  • the content analysis may include performing Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof.
  • the content analysis may further include operating a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics.
  • the content analysis may further include identifying clusters with topics having predefined suspicious phrases and ranking the identified clusters.
  • the identifying may be performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistic, deep learning, hybrid, Term Frequency—Inverse Document Frequency (TF-IDF) and parallel word importance estimator.
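Of the listed algorithms, TF-IDF is the most self-contained to sketch. The toy scorer below works on single whitespace tokens rather than multi-word phrases, an intentional simplification of the content analysis described above:

```python
import math
from collections import Counter

def tfidf_scores(transcripts):
    """Score each term by its maximum TF-IDF across call transcripts.
    Tokenization is plain whitespace splitting; a real system would score
    phrases after stemming and stop-word removal."""
    docs = [t.lower().split() for t in transcripts]
    n = len(docs)
    # document frequency: in how many transcripts does each term appear
    df = Counter(term for doc in docs for term in set(doc))
    scores = {}
    for doc in docs:
        tf = Counter(doc)
        for term, count in tf.items():
            score = (count / len(doc)) * math.log(n / df[term])
            scores[term] = max(scores.get(term, 0.0), score)
    return scores
```

Terms appearing in every transcript get an IDF of log(1) = 0, so routine words sink while a rare, potentially suspicious term ranks high.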
  • the content analysis may further include representing a predefined number of the highest ranked phrases as nodes in a connected graph.
  • the content analysis may further include calculating phrase statistics to suggest to a user a connection between the phrases according to the graph.
  • the content analysis may further include enabling a user to select a specific phrase in the graph for biometrics analysis of related call interactions.
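A minimal stand-in for the phrase graph and its statistics is a co-occurrence count: phrases are nodes, and an edge is weighted by the number of transcripts in which both phrases appear. The substring matching used here is an illustrative simplification:

```python
from collections import Counter
from itertools import combinations

def phrase_graph(transcripts, phrases):
    """Build weighted edges between suspicious phrases: an edge's weight
    counts the transcripts in which both phrases co-occur, one simple
    stand-in for the 'phrase statistics' suggesting connections to a user."""
    edges = Counter()
    for t in transcripts:
        present = sorted(p for p in phrases if p in t.lower())
        for a, b in combinations(present, 2):
            edges[(a, b)] += 1
    return edges
```

A user inspecting the resulting graph could then select a heavily connected phrase and send its related call interactions to biometrics analysis.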
  • the voice biometrics analysis is performed by a Proactive Fraud Exposure (PFE) engine, wherein the PFE is performing clustering.
  • the PFE engine performs the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), or Agglomerative Hierarchical Clustering (AHC) algorithms.
  • the content analysis further includes stemming to transform phrases into a predefined form.
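A stemming step that maps surface variants of a phrase to one predefined form might be sketched as naive suffix stripping; a production system would more likely use a Porter-style or dictionary-backed stemmer:

```python
import re

# Illustrative suffixes only; a real stemmer handles many more cases.
SUFFIXES = ("ing", "ed", "es", "s")

def stem(word):
    """Strip the first matching suffix, keeping at least a 3-letter stem."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def normalize_phrase(phrase):
    """Lowercase, tokenize and stem, so surface variants of a phrase map
    to one predefined form."""
    return " ".join(stem(w) for w in re.findall(r"[a-z]+", phrase.lower()))
```

With this normalization, "Resets Passwords" and "reset password" collapse to the same key before phrase clustering.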
  • a computerized system for proactive fraudster exposure in a customer service center according to content analysis and voice biometrics analysis.
  • the computerized system may include: a database of recorded calls; an interactions database; a clusters database; a watchlist database; a memory to store the database of recorded calls, the interactions database, the clusters database and the watchlist database; a display unit; and a processor.
  • the processor may be configured to: (a) collect call interactions from the database of recorded calls in a customer service channel, the collecting performed by a calls collection engine based on predefined rules. The processor may be further configured to (b) store the collected call interactions in the interactions database.
  • the processor may be further configured to (c) perform a first type analysis to cluster the call interactions in the interactions database into ranked clusters and to store the ranked clusters in the clusters database.
  • the processor may be further configured to (d) perform a second type analysis on a predefined number of the highest-ranked clusters to yield ranked clusters, and to store the ranked clusters in the clusters database.
  • the first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa.
  • the processor may be further configured to enable a user to repeat steps (c) and (d).
  • the processor may be further configured to retrieve a list of potential fraudsters from the ranked clusters in the clusters database and transmit the list of potential fraudsters to an application to display said list to a user via a display unit, thus enabling said user to review said list of potential fraudsters and to add fraudsters from the list to the watchlist database.
  • the content analysis may include: performing Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof; operating a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics; and identifying clusters with topics having predefined suspicious phrases and ranking the identified clusters.
  • the identifying is performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistic, deep learning, hybrid, Term Frequency—Inverse Document Frequency (TF-IDF) and parallel word importance estimator.
  • the content analysis may further include: representing a predefined number of the highest ranked phrases as nodes in a connected graph; calculating phrase statistics to suggest to a user a connection between the phrases according to the graph; and enabling a user to select a specific phrase in the graph for biometrics analysis of related call interactions.
  • the voice biometrics analysis is performed by a Proactive Fraud Exposure (PFE) engine, wherein the PFE engine performs clustering.
  • the PFE engine performs the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC) algorithms.
  • the content analysis further includes stemming to transform phrases into a predefined form.
  • FIG. 1 schematically illustrates a calls collection engine, in accordance with some embodiments of the present disclosure.
  • FIG. 2 schematically illustrates a proactive fraud exposure engine, in accordance with some embodiments of the present disclosure.
  • FIG. 3A is a high-level diagram of the system, in accordance with some embodiments of the present disclosure.
  • FIG. 3B schematically illustrates a system for proactive fraudster exposure in a customer service center having multiple channels, in accordance with some embodiments of the present disclosure.
  • FIG. 4 is a high-level flow diagram depicting a clustering algorithm, in accordance with some embodiments of the present disclosure.
  • FIGS. 5A-5B schematically illustrate a score matrix with speakers marked after cluster detection and the grouping of the interactions into one or more clusters, respectively, in accordance with some embodiments of the present disclosure.
  • FIG. 6 is a high-level flow diagram depicting a ranking algorithm, in accordance with some embodiments of the present disclosure.
  • FIG. 7 is a high-level flow diagram depicting a method for proactive fraudster exposure, in accordance with some embodiments of the present disclosure.
  • FIG. 8 is a high-level flow diagram depicting a method for analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine, in accordance with some embodiments of the present disclosure.
  • FIG. 9 is a high-level flow diagram depicting a content analysis, in accordance with some embodiments of the present disclosure.
  • FIG. 10 schematically illustrates a calls cluster phase including a content analysis engine and a Proactive Fraud Exposure (PFE) engine, in accordance with some embodiments of the present disclosure.
  • FIGS. 11A-11B illustrate a schematic flowchart of a method for proactive fraudster exposure in a customer service center according to content analysis and voice biometrics analysis, in accordance with some embodiments of the present disclosure.
  • FIG. 12 illustrates ranking of phrases, in accordance with some embodiments of the present disclosure.
  • FIG. 13 illustrates the highest ranked phrases as nodes in a connected graph, in accordance with some embodiments of the present disclosure.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently. Unless otherwise indicated, use of the conjunction “or” as used herein is to be understood as inclusive (any or all of the stated options).
  • voiceprint refers to a stored sample of a voice of a user which is used to identify and authenticate the user via speaker recognition based on characteristics of voice.
  • the characteristics of the voice may be selected from the group consisting of: volume, pace, pitch, resonance, articulation, enunciation, respiration, pauses, timbre, stress, rhythm, diction, dialect and the like.
  • cluster refers to a set of call interactions.
  • social engineering refers to manipulating agents to provide confidential information to a speaker that pretends to be a legitimate customer.
  • i-vector refers to an intermediate vector or identity vector, an enhancement of a previously used approach in speaker verification technology called Joint Factor Analysis (JFA). JFA divides a human voice into two factors: a speaker factor and a channel factor.
  • the data structure of the i-vectors may be an array, and each element in the data structure represents a characteristic of the speech of a speaker.
  • the i-vectors are generated as part of voiceprint generation for later comparison.
  • similarity score refers to a comparison of two voice samples based on extracted i-vectors.
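As a concrete sketch, the similarity score between two i-vectors can be computed as a cosine similarity. Cosine scoring is one common choice for i-vector comparison, though deployed systems often use PLDA scoring instead, and the patent does not fix the metric:

```python
import math

def similarity_score(u, v):
    """Cosine similarity between two i-vectors represented as equal-length
    arrays of floats; 1.0 means identical direction, 0.0 orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```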
  • watchlist refers to a list of known fraudsters which is commonly saved in a database.
  • customer service channels refers to one or more types of channel through which a customer service center of an organization offers service to its customers.
  • a customer may complete an action with the organization via one of the following customer service channels: Interactive Voice Response (IVR), mobile application or speaking with an agent.
  • threshold refers to a scalar value against which a similarity score is compared: a pair of call interactions whose similarity score is higher than the threshold is considered to originate from the same speaker.
  • claimed customer refers to the speaker's claimed identity, i.e., the details of a legitimate customer, which are provided by a fraudster in a call interaction between the fraudster and an agent.
  • confidence level refers to a value representing how likely it is for a given cluster to include several calls made by one speaker.
  • Biometrics technology automatically verifies the speaker's claimed identity, commonly, within the first few seconds of a call through natural conversation with an agent in the customer service channel.
  • the biometric technology verifies the identity of the speaker by comparing a sample of an ongoing call interaction of the speaker with a voiceprint.
  • the construction of the watchlists may still require manual checks and may be time consuming. Therefore, there is a need for a system and method that eliminates the expense and time of manual checks by analyzing the big data of call interactions and extracting information related to fraudsters, to be later presented to security officers who, upon review, will add the fraudsters to the watchlist.
  • the embodiments taught herein solve the technical problem of checking and analyzing varied high-volume call interactions which are occurring in high velocity, to detect and identify fraudsters.
  • the embodiments taught herein relating to call interactions in a customer call center with call interactions between a customer and an agent i.e., a call center representative is merely shown by way of example and technical clarity, and not by way of limitation of the embodiments of the present disclosure.
  • the embodiments herein for proactive fraudster exposure in a customer service channel may be applied on any customer service channel such as IVR or mobile application.
  • the embodiments herein are not limited to a call center but may be applied to any suitable platform providing customer service channels.
  • FIG. 1 schematically illustrates a calls collection engine, in accordance with some embodiments of the present disclosure.
  • all call interactions are recorded and stored in a database of recorded calls.
  • a calls collection engine 100 receives call interactions from a database of recorded calls where some of the calls may be ongoing calls.
  • a user e.g., a security officer may define a set of rules which are applied on all call interactions and determine which call interactions should be further analyzed.
  • the set of rules may include various types of rules, for example: (i) the speaker got a mismatch result during the authentication procedure; (ii) the speaker asked to perform a high-risk transaction; (iii) the agent that handled the call is associated with a special group that should always be monitored, e.g., agents handling VIP customers.
  • the calls collection engine 100 may apply predefined rules on the call interactions to extract call interactions for further analysis i.e., pending interactions to be stored in an interactions database 110 , thus lowering the high volume of call interactions that must be checked by the security officers.
  • the predefined rules may be at least one of: (i) mismatch during customer authentication; (ii) business data; (iii) agents that are associated with a risk group or line of business; (iv) behavioral flows of the speaker; (v) call content analysis; (vi) frequency of the call interactions or any combination thereof.
  • a call content analysis may be related to searching for keywords and phrases.
  • frequency of the call interactions relates to the number of call interactions from the same speaker in a predefined time interval.
  • when a call ends, its information is sent to a Calls Collection Engine 100 to check whether the interaction matches one or more of the predefined rules of the system. If the call interaction matches one or more of the rules, it is stored in the interactions database 110 to be analyzed later on by the PFE engine, which is shown in detail in FIG. 2 .
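The rule-based collection described above can be sketched as follows. This is a minimal illustrative sketch: the `CallInteraction` fields, the rule predicates, and the group name are hypothetical stand-ins, since the disclosure does not specify a concrete data model.

```python
# Illustrative sketch of rule-based call collection: keep only the
# interactions that match at least one predefined rule, so security
# officers review a reduced set. All field and rule names are assumptions.
from dataclasses import dataclass, field

@dataclass
class CallInteraction:
    call_id: str
    auth_mismatch: bool = False          # mismatch during authentication
    high_risk_transaction: bool = False  # high-risk transaction requested
    agent_group: str = "default"         # agent's line of business / group
    keywords: set = field(default_factory=set)  # content-analysis hits

# Each rule returns True when the interaction needs further analysis.
RULES = [
    lambda c: c.auth_mismatch,
    lambda c: c.high_risk_transaction,
    lambda c: c.agent_group == "vip",        # always-monitored group
    lambda c: "password" in c.keywords,      # keyword/phrase rule
]

def collect(calls):
    """Return the pending interactions to store in the interactions database."""
    return [c for c in calls if any(rule(c) for rule in RULES)]
```

In use, every recorded (or ongoing) call would be passed through `collect`, and only the matches would be written to the interactions database for PFE analysis.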
  • FIG. 2 schematically illustrates a proactive fraud exposure engine, in accordance with some embodiments of the present disclosure.
  • the PFE engine 200 may retrieve and read the information of the call interaction from the interactions database 210 to analyze it.
  • Calls Collection Engine 100 in FIG. 1 and PFE engine 200 may include a processor, a memory, an output device, an input device and communication circuitry and interface module for wired and/or wireless communication with any other computerized device over a communication network, as illustrated in FIG. 3B , described hereinbelow.
  • the user may be a security officer and the data may be details of fraudsters to be added to a watchlist database 240 and the instructions may be the rules, which are applied on all call interactions and determine which call interactions should be further analyzed.
  • the PFE Engine 200 may use the processor and memory to generate a voiceprint for each call interaction in the interactions database 210 to be stored in a voiceprints database 220 .
  • the PFE Engine 200 may use a machine learning technique to group the call interactions in the interactions database 210 , based on the voiceprints database 220 , into one or more clusters, which may be stored in a clusters database 230 . Each of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints.
  • the one or more clusters in the clusters database 230 may be ranked and classified to yield a list of potential fraudsters.
  • the list of potential fraudsters may be transmitted to an application 260 over a communication network, to be later on displayed to a user via a display unit 250 .
  • the user may be a security officer that may review the list of potential fraudsters and listen to the calls in the respective cluster. Upon review, when the security officer suspects that the call has been made by an actual fraudster, the security officer may add the call and the respective fraudster's information via the application 260 to a watchlist database 240 .
  • the application 260 may be a web application or a desktop application.
  • the fraudster may be detected in real-time.
  • An alert may be sent to the users, i.e., the agents and/or the security officers, upon detection for further monitoring and analysis; alternatively, the call may be blocked.
  • FIG. 3A is a high-level diagram of the system, in accordance with some embodiments of the present disclosure.
  • Real Time Authentication (RTA) flows 305 may be sent to Real Time Voice Buffering (RTVB) 310 , which may buffer the call's audio for a Fluent Engine 315 .
  • the Fluent Engine 315 is a voice biometric engine that performs authentication and fraud detection.
  • An authentication center 320 holds the fraudsters watchlists and may forward the watchlists to the Fluent Engine 315 .
  • RTA results are transmitted to a call server 325 which manages all the calls and controls the call recording by initiating the call recording in the system and the buffering which is performed by RTVB 310 .
  • the call server 325 also saves all the call-related metadata to the DB server 335 ; i.e., once a call ends, call-related metadata is saved, such as whether the call was indeed recorded and archived, certain business data, or whether there was an authentication mismatch.
  • Proactive Fraud Engine (PFE) Rule Manager 330 which is a sub-component of the call server 325 may tag the relevant PFE calls according to predefined PFE rules. Once a call ends, the tagged PFE calls may be transmitted to a DB Server 335 .
  • the DB server 335 manages all the call interactions with all the databases, which are the rule database 355 and the voiceprints database, such as database 340 .
  • PFE call interactions are forwarded to database 340 which holds the pending PFE interactions and the PFE voiceprints.
  • PFE Engine 345 creates the voiceprints from the tagged calls and performs the clustering algorithms.
  • Storage Center 350 may hold the archived calls as Media Files (MF) and may forward MF to PFE Engine 345 .
  • PFE Engine 345 may forward clustering results to Rule database 355 , which holds the PFE application data.
  • PFE application Backend 360 serves the PFE application frontend requests.
  • PFE Frontend 365 is the application where a user can define rules, review the clustering results, manage them and add new fraudsters to the watchlist database 240 in FIG. 2 .
  • FIG. 3B schematically illustrates a system for proactive fraudster exposure in a customer service center having multiple channels, in accordance with some embodiments of the present disclosure.
  • Calls Collection Engine 100 in FIG. 1 and PFE engine 200 may include a processor 3010 , a memory 3040 , an input device 3025 , an output device 3030 , and a communication circuitry and interface module 3005 for wired and/or wireless communication with any other computerized device over a communication network.
  • the processor 3010 may be configured to operate in accordance with programmed instructions stored in memory 3040 and may include one or more processing units, e.g., of one or more computers.
  • the processor 3010 may be further capable of executing an engine such as PFE engine 3020 (also shown in FIG. 2 as 200 ), for generating a voiceprint of a speaker out of an audio sample.
  • the voiceprint is stored in a voiceprints database such as voiceprints database 3035.
  • the processor 3010 via PFE 3020 may communicate with an output device such as output device 3030 via application 3060 .
  • the output device 3030 may include a computer monitor or screen and the processor 3010 may communicate with a screen of the output device 3030 .
  • the output device 3030 may include a printer, display panel, speaker, or another device capable of producing visible, audible, or tactile output.
  • the processor 3010 via PFE 3020 may further communicate with an input device such as input device 3025 via application 3060 .
  • the input device 3025 may include one or more of a keyboard, keypad or pointing device for enabling a user to input data or instructions for operation of the processor 3010 .
  • the user may be a security officer and the data may be details of fraudsters to be added to a watchlist database 240 in FIG. 2 and the instructions may be the rules, which are applied on all call interactions and determine which call interactions in the recorded calls database 3050 should be stored in interactions database 3045 to be further analyzed by the PFE engine 3020 (also shown in FIG. 2 as 200 ).
  • a calls collection engine such as call collection engine 3015 (also shown in FIG. 1 as 100 ) may receive call interactions from a database of recorded calls such as recorded calls database 3050 , where some of the calls may be ongoing calls.
  • the processor 3010 may further communicate with memory 3040 .
  • the memory 3040 may include one or more volatile or nonvolatile memory devices.
  • the memory 3040 may be utilized to store, for example, programmed instructions for operation of the processor 3010 , data or parameters for use by the processor 3010 during operation, or results of the operation of the processor 3010 .
  • the memory 3040 may store: recorded calls database 3050 , call interactions in interactions database 3045 (also shown in FIG. 2 as 210 ), voiceprints in voiceprints database 3035 (also shown in FIG. 2 as 220 ) and clusters in a clusters database 3055 (also shown in FIG. 2 as 230 ).
  • the processor 3010 may use PFE engine 3020 (also shown in FIG. 2 as 200 ) to implement a machine learning technique to group the call interactions in the interactions database 3045 into one or more clusters and store the clusters in the clusters database 3055 .
  • Each one of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints stored in the voiceprints database 3035 .
  • the processor 3010 may further use the PFE engine 3020 to rank and classify the one or more clusters stored in the clusters database 3055 to yield a list of potential fraudsters.
  • FIG. 4 is a high-level flow diagram depicting a clustering algorithm, in accordance with some embodiments of the present disclosure. The steps described herein below may be performed by a processor.
  • operation 410 may comprise taking a collection of call interactions.
  • Operation 420 may comprise, for each call interaction, finding the call interactions that are most similar to it and creating a cluster out of them.
  • clustering algorithm 400 may further comprise operation 430 , which may comprise, if there is no call interaction that is similar to it, creating a cluster of size ‘1’ that represents it.
  • clustering algorithm 400 may comprise ranking the clusters and determining which clusters have the highest confidence level.
  • clustering algorithm 400 may be illustrated by the following pseudo code:
  • Given N interactions and a threshold (T), init N empty groups (G).
  • Build an N×N similarity matrix M; diagonal values should be (-infinity).
  • For i from 0 to N: find the maximum value for row i, let's say it's in index j. If maximum > T: if G[i] and G[j] are both empty, assign them to a new cluster; if G[i] is empty and G[j] is not, assign G[i] to G[j] (and vice versa); if G[i] and G[j] are both assigned, merge them. If not: G[i] is assigned to a new cluster.
  • T is determined in the following way: take all the pairwise scores, calculate their mean and variance, and set T = mean + Z*variance.
  • Z is empirically tested to be from 1 to 2 (commonly 2).
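The pseudocode above can be sketched as runnable Python. This is a minimal sketch under the stated description: the similarity matrix is assumed precomputed, the threshold rule follows the mean-plus-Z-times-variance formula above, and the bookkeeping for merging clusters (relabeling group ids) is our own choice.

```python
# Minimal runnable sketch of the matrix-based clustering pseudocode above.
# m is a precomputed N x N pairwise similarity matrix.
import math

def threshold(scores, z=2.0):
    """T = mean + Z * variance over all pairwise scores (Z commonly 2)."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean + z * var

def cluster(m, t):
    n = len(m)
    groups = [None] * n   # G: cluster id per interaction, None = empty
    next_id = 0
    for i in range(n):
        # Diagonal set to -infinity so an interaction never matches itself.
        row = [(-math.inf if j == i else m[i][j]) for j in range(n)]
        j = row.index(max(row))
        if row[j] > t:
            if groups[i] is None and groups[j] is None:
                groups[i] = groups[j] = next_id      # both empty: new cluster
                next_id += 1
            elif groups[i] is None:
                groups[i] = groups[j]                # join j's cluster
            elif groups[j] is None:
                groups[j] = groups[i]                # join i's cluster
            elif groups[i] != groups[j]:
                old = groups[j]                      # both assigned: merge
                groups = [groups[i] if g == old else g for g in groups]
        elif groups[i] is None:
            groups[i] = next_id                      # singleton cluster of size 1
            next_id += 1
    return groups
```

For example, with three interactions where calls 0 and 1 share a voice (high similarity) and call 2 does not, calls 0 and 1 end up in one cluster and call 2 in a singleton.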
  • FIGS. 5A-5B schematically illustrate score matrix with speakers marked after cluster detection and the grouping of the interactions into one or more clusters, respectively, in accordance with some embodiments of the present disclosure.
  • a score matrix with speakers marked after cluster detection 510 is shown.
  • in the matrix, given a set of call interactions, a pairwise all-to-all comparison is performed, yielding similarity scores.
  • the similarity scores are calculated based on i-vectors of each speaker according to a similarity algorithm.
  • in a non-limiting example, the threshold value may be ‘25’. All call interactions are clustered together in a set of interactions as shown in 520 (in FIG. 5B ): if the similarity score of call ‘1’ and call ‘2’ is the highest in a row, and it is higher than the predefined threshold, then that call interaction is clustered into the set of interactions 520 .
  • the set of interactions 520 is later divided into clusters according to the clustering algorithm 400 described in FIG. 4 .
  • the result of the clustering algorithm is shown in 530 .
  • FIG. 6 is a high-level flow diagram depicting a ranking algorithm 600 , in accordance with some embodiments of the present disclosure.
  • the ranking algorithm 600 may take all the clusters shown in element 530 in FIG. 5B .
  • operation 620 may comprise, for each cluster, calculating the confidence of the inner ties, and then in operation 630 normalizing the calculated confidence to yield a score.
  • the normalization is needed because the matrix includes both the speaker effect and the channel effect, and this is also manifested in the i-vectors themselves; therefore, the channel effect needs to be normalized out later.
  • operation 640 may comprise checking if it is the last cluster, and operation 650 may comprise storing the cluster ID and the score in a data structure. In operation 660 , this score is used to rank the clusters in the data structure and output them in a sorted manner. In a non-limiting example, the sorted clusters may be output in descending order, from high to low.
  • the ranking is performed by at least one of the following approaches: (i) inter-cluster statistics; (ii) probability of representing a fraudster; or any combination thereof.
  • the inter-cluster statistics represent the level of “confidence” that the cluster includes call interactions that share the same voice.
  • the probability of representing a fraudster may be performed using one or more of the following factors: (i) same voice on same claimed customer also known as “targeted attack”; (ii) same voice on different claimed customer, also known as “spread out attack”; (iii) fraudulent behavioral characteristics of the call interaction, manifested in the voice such as deception acoustic features: stutter, jitter, shimmer and the like, and (iv) metadata representing details of a predefined line of business that is more prone to fraud attacks than others.
  • each factor may be attributed with a predefined weight value and the ranking algorithm 600 may be further based on a weighted average of the factors.
  • the weights may be predefined in collaboration with the employees in the call center.
  • ranking algorithm 600 may be illustrated by the following pseudo code, given N clusters:
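The pseudocode itself is not reproduced in this extraction. A minimal sketch consistent with the description above (confidence of inner ties per cluster, normalization, then sorting from high to low) might look as follows; the specific confidence measure (mean of inner pairwise scores) and the min-max normalization are assumptions, not the disclosure's formulas.

```python
# Hypothetical sketch of ranking N clusters by the confidence of their
# inner ties, as described for ranking algorithm 600. The confidence
# measure and normalization choices here are assumptions.
def cluster_confidence(pair_scores):
    """Mean of the pairwise similarity scores inside one cluster."""
    return sum(pair_scores) / len(pair_scores)

def rank_clusters(clusters):
    """clusters: dict mapping cluster_id -> list of inner pairwise scores.
    Returns (cluster_id, normalized_score) pairs sorted from high to low."""
    raw = {cid: cluster_confidence(s) for cid, s in clusters.items()}
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0                      # avoid division by zero
    scored = {cid: (v - lo) / span for cid, v in raw.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

A weighted average of the fraud-probability factors listed above (targeted attack, spread-out attack, deception acoustic features, line-of-business metadata) could then be folded into each cluster's score before sorting.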
  • FIG. 7 is a high-level flow diagram depicting a method for proactive fraudster exposure 700 , in accordance with some embodiments of the present disclosure.
  • proactive fraudster exposure 700 may comprise operation 710 for collecting call interactions from a database of recorded calls (not shown) in a customer service center having multiple service channels, whereby the collecting is based on predefined rules by a calls collection engine, e.g., calls collection engine 100 in FIG. 1 .
  • operation 720 may comprise storing the collected call interactions in an interactions database, e.g., interactions database 110 in FIG. 1 .
  • operation 730 may comprise analyzing the call interactions in the interactions database 110 in FIG. 1 by a Proactive Fraud Exposure (PFE) engine, e.g., PFE engine 200 in FIG. 2 .
  • operation 740 may comprise transmitting the list of potential fraudsters to an application, e.g., application 260 in FIG. 2 to display to a user, the list of potential fraudsters via a display unit, e.g., display unit 250 in FIG. 2 , thus enabling the user, e.g., a security officer to review the list of potential fraudsters and to add fraudsters from the list to a watchlist database, e.g., watchlist database 240 in FIG. 2 .
  • FIG. 8 is a high-level flow diagram depicting a method for analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine, in accordance with some embodiments of the present disclosure.
  • operation 730 in FIG. 7 may comprise analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine.
  • Such operation 730 may comprise operations 800 depicting a method for analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine.
  • operation 810 may comprise generating a voiceprint for each call interaction in the interactions database 210 in FIG. 2 to be stored in a voiceprints database 220 in FIG. 2 .
  • operation 820 may comprise using a machine learning technique to group the call interactions in the interactions database, e.g., interactions database 210 in FIG. 2 , into one or more clusters, whereby each of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints.
  • operation 830 may comprise storing the one or more clusters in a clusters database, e.g., clusters database 230 in FIG. 2 .
  • operation 840 may comprise ranking and classifying the one or more clusters stored in a clusters database, e.g., clusters database 230 in FIG. 2 to yield a list of potential fraudsters.
  • the similarity algorithm may use a log likelihood ratio, where this ratio is calculated as follows: given two i-vectors, V1 and V2, assuming V1 and V2 are normally distributed with mean 0 and variance 1, the ratio may be calculated according to the following calculation:
  • n may be the length of the i-vector, in a non-limiting example n may be equal to 400.
  • the method may include calculating the predefined threshold from a decision boundary of a distribution of the similarity scores for voiceprints generated from speech data chunks.
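The similarity calculation itself is omitted from this extraction. As a hedged stand-in: with i-vectors assumed standard-normally distributed, a common simplification scores a pair by the inner product of the length-normalized vectors (a cosine-style score). This is an assumption for illustration, not the formula from the specification.

```python
# Hedged stand-in for the omitted i-vector similarity calculation:
# cosine-style scoring on length-normalized vectors (e.g., length n = 400).
# This is an illustrative assumption, not the disclosed formula.
import math

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def similarity(v1, v2):
    """Cosine similarity between two i-vectors V1 and V2."""
    a, b = normalize(v1), normalize(v2)
    return sum(x * y for x, y in zip(a, b))
```

The predefined threshold would then be derived from the distribution of such scores over voiceprints generated from speech data chunks, as described above.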
  • an additional analysis of the big data may be performed before or after the voice biometric analysis, thus overcoming the issues related to processing a huge amount of data.
  • a financial organization may have millions of customer interactions every day.
  • a more sophisticated clustering method is required.
  • by combining voice biometrics with different types of content analysis, the interactions clustering becomes more accurate and produces high-quality clusters with a special focus on detecting fraudulent activity and behavior.
  • content analysis may be performed before or after a biometric analysis to provide the ability to scale and automate the process of fraudster detection and shorten the time needed to find fraud attacks.
  • the additional content analysis may provide more accurate results i.e., more accurate fraudster detection and a short list of fraudsters having ranked clusters for review, by a user such as the security officer.
  • the amount of call interactions which are being processed by the PFE engine such as PFE engine 200 in FIG. 2 may be reduced by focusing on higher risk call interactions due to the call interaction content.
  • Call interactions containing security-related content phrases may be at higher risk of being fraudulent.
  • the content analysis may enable a user to select a specific phrase in a generated graph for later voiceprint biometric analysis of related call interactions.
  • FIG. 9 is a high-level flow diagram depicting a content analysis 900 , in accordance with some embodiments of the present disclosure.
  • a content analysis 900 may be performed after collecting call interactions from a database of recorded calls (not shown) in a customer service channel and storing the collected call interactions in an interactions database 910 .
  • the content analysis 900 may perform Automatic Speech Recognition (ASR) for each call interaction in an interactions database 910 to yield a textual transcript 930 thereof.
  • in a contact center, where a fraudster gains access to customer data, the data could be used to reset the customer's password to the customer's account, thus leaving the customer exposed to identity theft.
  • the call interactions where a fraudster may try to reset the customer's password may include the following sentences: “Hello, I am Anna and I forgot my password, can I reset it?” or “Hi, I'm Danny and I would like to change my password” or “Hi, My name is Jason and I would like to change my password” or “Good day, I am David and I would like to update my password please”. In all these sentences the phrase “password” is used.
  • the identification of the clusters may be performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistics, deep learning, hybrid approaches, Term Frequency-Inverse Document Frequency (TF-IDF) and a parallel word importance estimator.
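Of the techniques listed above, TF-IDF lends itself to a compact sketch: it ranks a word highly in a transcript when the word is frequent there but rare across the other transcripts. The tokenization and weighting choices below are our own, for illustration only.

```python
# Minimal TF-IDF sketch for ranking words across call transcripts, one of
# the identification techniques listed above. Tokenization (whitespace,
# lowercased) and the unsmoothed idf are illustrative choices.
import math
from collections import Counter

def tf_idf(transcripts):
    """transcripts: list of strings. Returns {(doc_index, word): score}."""
    docs = [t.lower().split() for t in transcripts]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))   # document frequency
    scores = {}
    for i, d in enumerate(docs):
        tf = Counter(d)
        for w, c in tf.items():
            # term frequency times inverse document frequency
            scores[(i, w)] = (c / len(d)) * math.log(n / df[w])
    return scores
```

A word like "password" that appears in only some transcripts scores above zero there, while a word appearing in every transcript scores zero, which is exactly why TF-IDF surfaces suspicious phrases rather than filler words.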
  • the PFE engine may perform the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC) algorithms.
  • the content analysis 900 may represent the detected most important words, which may be a predefined amount of the highest ranked phrases or words that have a higher confidence level, as nodes in a graph with edges 950 .
  • the content analysis 900 may calculate phrase statistics to suggest to a user a connection between the phrases according to the graph and it may be represented on a display unit.
  • the content analysis 900 may enable a user to select a word 960 ; then a voiceprint biometric clustering 970 of all the call interactions where this word has been used may be performed.
  • the result of the voiceprint biometric clustering 970 may detect a fraud cluster 980 .
  • the voice biometrics analysis may be performed by a Proactive Fraud Exposure (PFE) engine such as PFE 200 in FIG. 2 , and the PFE may perform clustering.
  • the PFE engine may perform the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC) algorithms.
  • the content analysis may further include stemming to transform phrases into a predefined form.
  • Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form. For example, "looked", "looking", "will look" and "looking at" will all be transformed into "look".
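The stemming step above can be illustrated with a toy suffix stripper. This is not a production stemmer (a real system would use something like the Porter algorithm); it only handles a few English suffixes, enough to show the transformation on the example words.

```python
# Naive suffix-stripping stemmer illustrating the transformation described
# above. Only a few English suffixes are handled; a real stemmer (e.g.,
# Porter) covers far more cases.
SUFFIXES = ("ing", "ed", "s")

def stem(word):
    for suf in SUFFIXES:
        # Strip the suffix only if a reasonable stem (>= 3 chars) remains.
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def stem_phrase(phrase):
    """Stem each word of a phrase, e.g. for normalizing transcripts."""
    return " ".join(stem(w) for w in phrase.lower().split())
```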
  • the voiceprint biometric clustering, i.e., voiceprint biometric analysis, may be initiated.
  • the voiceprint biometric clustering may be performed on the call interactions in the call interactions database 910 and then the content analysis may begin.
  • FIG. 10 schematically illustrates a calls cluster phase 1000 including a content analysis engine and a Proactive Fraud Exposure (PFE) engine such as PFE engine 200 in FIG. 2 , in accordance with some embodiments of the present disclosure.
  • either the PFE engine 1070 or the content analysis engine 1080 may retrieve and read the information of the interaction from the interactions database 1010 to analyze it.
  • PFE engine 1070 and content analysis engine 1080 may include a processor, a memory, an output device, an input device and communication circuitry and interface module for wired and/or wireless communication with any other computerized device over a communication network, as illustrated in FIG. 3B , described hereinabove.
  • the content analysis engine 1080 may use the processor and memory to perform Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof and to operate a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics.
  • the content analysis engine 1080 may further use the processor and memory to identify clusters with topics having suspicious phrases and rank them.
  • the content analysis may forward related call interactions of the selected suspicious phrases for further voiceprint biometrics analysis.
  • the voiceprint biometric analysis may cluster the related call interactions into ranked clusters and then store the ranked clusters in a clusters database 1030 .
  • the PFE engine 1070 may perform a voiceprint biometric analysis on interactions from interactions database 1010 to yield ranked clusters and then forward related interactions for further content analysis by the content analysis engine 1080 .
  • the content analysis may cluster the related call interactions into ranked clusters and then store the ranked clusters in the clusters database 1030 .
  • the content analysis by content analysis engine 1080 and the voiceprint biometric analysis by the PFE engine 1070 may be performed in a cyclic manner.
  • the calls cluster phase 1000 may retrieve from the ranked clusters in the clusters database 1030 (also shown in FIG. 2 as 230 ), a list of fraudsters and may transmit the list of potential fraudsters to an application such as application 1060 (also shown in FIG. 2 as 260 ), to display to a user the list of potential fraudsters via a display unit 1050 , thus enabling the user to review the list of potential fraudsters and to add fraudsters from the list to a watchlist database 1040 .
  • the list of potential fraudsters may be transmitted to the application 1060 over a communication network, to be later on displayed to a user via a display unit 1050 (also shown in FIG. 2 as 250 ).
  • the user may be a security officer that may review the list of potential fraudsters and listen to the calls in the respective cluster. Upon review, when the security officer suspects that the call has been made by an actual fraudster, the security officer may add the call and the respective fraudster's information via the application 1060 to a watchlist database 1040 .
  • the application 1060 may be a web application or a desktop application.
  • when the details of the fraudster are stored in the watchlist database 1040 and the fraudster calls the customer service center, he or she may be detected in real time. An alert may be sent to the users, i.e., the agents and/or the security officers, upon detection for further monitoring and analysis; alternatively, the inbound call may be blocked.
  • FIGS. 11A-11B illustrate a schematic flowchart of a method for proactive fraudster exposure, in a customer service center according to content analysis and voice biometrics analysis, in accordance with some embodiments of the present disclosure.
  • proactive fraudster exposure 1100 may comprise operation 1110 for collecting call interactions from a database of recorded calls (not shown) in a customer service center channel, whereby the collecting is based on predefined rules by a calls collection engine, e.g., calls collection engine 100 in FIG. 1 .
  • operation 1120 may comprise storing the collected call interactions in an interactions database, such as interactions database 1010 in FIG. 10 .
  • operation 1130 may comprise performing a first type analysis to cluster the call interactions in the interactions database such as interactions database 1010 in FIG. 10 into ranked clusters and storing the ranked clusters in a clusters database e.g., clusters database 1030 in FIG. 10 .
  • operation 1140 may comprise performing a second type analysis on a predefined amount of the highest ranked clusters to yield ranked clusters, and storing the ranked clusters in a clusters database, e.g., clusters database 1030 in FIG. 10 .
  • the first type analysis may be a content analysis and the second type analysis may be a voice biometrics analysis, or vice versa.
  • operation 1150 may comprise enabling a user to repeat operations 1130 through 1140 .
  • operation 1160 may comprise retrieving from the ranked clusters in the clusters database, e.g., clusters database 1030 in FIG. 10 , a list of potential fraudsters.
  • operation 1170 may comprise transmitting the list of potential fraudsters to an application e.g., application 1060 in FIG. 10 to display to a user the list of potential fraudsters via a display unit e.g., display unit 1050 in FIG. 10 , thus enabling the user to review said list of potential fraudsters and to add fraudsters from the list to a watchlist database, e.g., watchlist database 1040 in FIG. 10 .
  • FIG. 12 illustrates ranking of phrases 1200 , in accordance with some embodiments of the present disclosure.
  • phrases may be retrieved from textual transcripts e.g., textual transcripts 930 in FIG. 9 to detect the most important words.
  • the most important words are the phrases that have been most frequently repeated, or words that have a higher confidence level.
  • a predefined amount of the highest ranked phrases, e.g., the phrases that have been most frequently repeated, such as phrases "account" 1210 , "password" 1220 , "username" 1230 and "package" 1240 , may be represented as nodes in a connected graph, as shown in detail in FIG. 13 , on the display unit, e.g., display unit 1050 in FIG. 10 .
  • FIG. 13 illustrates the highest ranked phrases as nodes in a graph such as connected graph 1300 , in accordance with some embodiments of the present disclosure.
  • phrases such as “account” 1310 , “password” 1320 , “username” 1330 and “package” 1340 which were denoted as phrases “account” 1210 , “password” 1220 , “username” 1230 and “package” 1240 in FIG. 12 , may be related to a topic and connected based on a calculation of phrase statistics to suggest to a user as a graph such as a connected graph or any other type of graph.
  • Another type of graph may be a hypergraph which is a generalization of a graph in which an edge may join any number of vertices.
  • the connections between nodes in the connected graph 1300 represent high probability e.g., a probability above a predefined threshold, that if a word or a phrase has been used during an interaction e.g., “password” 1320 then other words or phrases e.g., “username” 1330 or “account” 1310 which are connected to it have been used as well.
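The connected-graph construction described above can be sketched as follows: nodes are top-ranked phrases, and an edge joins two phrases when the estimated probability of seeing one in an interaction, given the other, exceeds a threshold. The probability estimate (a simple co-occurrence ratio over transcripts) and the threshold value are assumptions for illustration.

```python
# Sketch of the phrase co-occurrence graph: connect two phrases when the
# conditional probability of seeing one given the other (estimated as a
# co-occurrence ratio over transcripts) exceeds a threshold. The estimator
# and threshold are illustrative assumptions.
from itertools import combinations

def phrase_graph(transcripts, phrases, p_threshold=0.5):
    docs = [set(t.lower().split()) for t in transcripts]

    def count(*words):
        # number of transcripts containing all the given words
        return sum(1 for d in docs if all(w in d for w in words))

    edges = set()
    for a, b in combinations(phrases, 2):
        both = count(a, b)
        # require high conditional probability in both directions
        if both and both / count(a) > p_threshold and both / count(b) > p_threshold:
            edges.add((a, b))
    return edges
```

With transcripts where "password" and "username" nearly always co-occur but "package" appears separately, only the password-username edge is drawn, matching the intuition behind the connected graph 1300.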
  • the phrase “password” 1320 as represented in the connected graph 1300 may be selected by the user e.g., a security officer, for additional analysis via an application such as application 1060 in FIG. 10 or 260 in FIG. 2 .
  • the additional analysis may be a voiceprint biometric analysis where all the call interactions that relate to the phrase e.g., “password” 1320 will be processed instead of processing all the interactions in the interactions database e.g., interactions database 1010 in FIG. 10 .
  • thus elevating the accuracy of the fraudster detection process, which is performed on a huge amount of data.


Abstract

A computer-implemented method for proactive fraudster exposure in a customer service center according to content analysis and voice biometrics analysis, is provided herein. The computer-implemented method includes: (i) collecting call interactions; (ii) storing the collected call interactions; (iii) performing a first type analysis to cluster the call interactions into ranked clusters and storing the ranked clusters in a clusters database; (iv) performing a second type analysis on a predefined amount of the highest ranked clusters to yield ranked clusters and storing the ranked clusters; the first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa; (v) enabling a user to repeat steps (iii) and (iv); (vi) retrieving from the ranked clusters a list of fraudsters; and transmitting the list of potential fraudsters to an application to display to a user said list of potential fraudsters via a display unit.

Description

    RELATED APPLICATIONS
  • This application claims priority as a continuation-in-part of application Ser. No. 16/525,606, filed Jul. 30, 2019, the disclosure of which is incorporated herein by reference.
  • TECHNICAL FIELD
  • The present disclosure relates to the field of voice biometric security and analytics, content analysis and real-time authentication. More specifically, the present disclosure relates to a method and system for proactive fraudster exposure in a customer service channel by clustering fraudsters and displaying to a user a ranked list of potential fraudsters to add to a watchlist database.
  • BACKGROUND
  • Call centers are increasingly becoming a target for fraudsters via their customer service channels. Call center fraud is one of the leading threats that organizations such as financial institutions face. Fraudsters commonly attempt to retrieve or change the information of legitimate customers by exploiting call center agents through social engineering. For example, fraudsters may attack a financial institution by manipulating call center agents into providing confidential information of legitimate customers and then use the extracted information to commit further fraud, e.g., identity theft. Instead of social engineering, fraudsters may use information from social networks or public sources to correctly answer knowledge-based questions during a call with an agent.
  • Fraudulent activity may take many shapes and forms. It may be performed via multiple frequent attacks or attempts on a single legitimate customer account or on multiple customer accounts. The attacks may arrive via different channels, such as a mobile application, call-center calls or the internet, and on different lines of business, e.g., VIP handling agents. Another type of attack is a "targeted attack," in which the attack is aimed at a specific individual, i.e., a customer. Yet another type of attack is a "spread-out attack," in which the attack targets various customers of the call center.
  • Currently, one practice to mitigate the threats to the call center is having a fraud team including a few security officers. These few security officers are responsible for ensuring that customers' data is protected by investigating fraudulent behavior with their existing tools or by following up on customer complaints and handling those scenarios. However, listening to a large number of call interactions handled by thousands of agents, each responding to an abundance of calls per day, may be inefficient or even infeasible. Also, these security officers struggle to detect most fraudulent activities and fraudsters; they add the fraudsters they do detect to a known-fraudsters list, but this practice provides no coverage for unknown fraudsters who are not on that list.
  • Furthermore, current practices leave call centers exposed to fraudsters. The sample of random calls, out of the plethora of calls, that the few security officers check may overlook some fraudsters. Therefore, there is a need for a proactive fraudster exposure system and method that analyzes the big data of call interactions and extracts information related to fraudsters, to be later presented to security officers so that they can add the fraudsters to a watchlist and the fraudsters can be blocked in real-time in the future.
  • Currently, there is no solution that provides the ability to automatically detect new fraudsters by analyzing the varied, high-volume call interactions which occur at high velocity with both biometric analysis and content analysis. Furthermore, there is no solution that automatically detects new fraudsters in real-time with a biometric authentication technique such as a voice signature, i.e., biometric analysis, combined with content analysis. Furthermore, there is currently no solution for automatic detection of new fraudsters that does not require manual pre-setup or pre-sorting of audio calls.
  • SUMMARY
  • There is thus provided, in accordance with some embodiments of the present disclosure, a method for proactive fraudster exposure in a customer service center having multiple service channels.
  • In accordance with some embodiments of the present disclosure, the computer-implemented method comprises: (a) collecting call interactions from a database of recorded calls in a customer service channel, the collecting being based on predefined rules applied by a calls collection engine; (b) storing the collected call interactions in an interactions database; (c) analyzing the call interactions in the interactions database by a Proactive Fraud Exposure (PFE) engine, said analyzing comprising: (i) generating a voiceprint for each call interaction in the interactions database, to be stored in a voiceprints database; (ii) using a machine learning technique to group the call interactions in the interactions database into one or more clusters based on the respective voiceprints in the voiceprints database, each of the one or more clusters being associated with a repeating speaker's voice based on the generated voiceprints; (iii) storing the one or more clusters in a clusters database; and (iv) ranking and classifying the one or more clusters stored in the clusters database to yield a list of potential fraudsters; and (d) transmitting the list of potential fraudsters to an application to display said list of potential fraudsters to a user via a display unit, thus enabling said user to review said list of potential fraudsters and to add fraudsters from the list to a watchlist database.
  • Furthermore, in accordance with some embodiments of the present disclosure, the generating of voiceprints is performed by extracting i-vectors which represent a speaker effect and a channel effect.
  • Furthermore, in accordance with some embodiments of the present disclosure, the method further comprising detecting fraudsters which are stored on the watchlist database in new call interactions to the customer service center via one of the multiple service channels, in real-time.
  • Furthermore, in accordance with some embodiments of the present disclosure, the ranking is performed by at least one of: (i) inter-cluster statistics; and (ii) probability of representing a fraudster, or any combination thereof.
  • Furthermore, in accordance with some embodiments of the present disclosure, the probability of representing a fraudster is calculated based on at least one of the following factors: (i) same voice on same claimed customer; (ii) same voice on different claimed customers; (iii) fraudulent behavioral characteristics of the call interaction, manifested in the voice; (iv) metadata representing details of a predefined line of business.
  • Furthermore, in accordance with some embodiments of the present disclosure, the method further comprises attributing a predefined weight value to each of the factors, wherein the ranking is further based on a weighted average of the factors.
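  • The weighted-average ranking described above can be sketched as follows. This is a minimal illustration; the factor names, weight values and per-cluster scores are assumptions made for the example, not values taken from the disclosure.

```python
# Illustrative weights for the four fraud-probability factors listed
# above. The names and values are assumed for this sketch.
FACTOR_WEIGHTS = {
    "same_voice_same_customer": 0.2,
    "same_voice_different_customers": 0.4,
    "fraudulent_behavior": 0.3,
    "risky_line_of_business": 0.1,
}

def fraud_probability(factor_scores: dict) -> float:
    """Weighted average of per-factor scores, each assumed in [0, 1]."""
    total_weight = sum(FACTOR_WEIGHTS.values())
    weighted = sum(FACTOR_WEIGHTS[name] * factor_scores.get(name, 0.0)
                   for name in FACTOR_WEIGHTS)
    return weighted / total_weight

# Example scores for one cluster.
cluster_scores = {
    "same_voice_same_customer": 0.9,
    "same_voice_different_customers": 1.0,
    "fraudulent_behavior": 0.5,
    "risky_line_of_business": 0.0,
}
probability = fraud_probability(cluster_scores)
```

Clusters can then be ranked by sorting on this probability in descending order.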
  • Furthermore, in accordance with some embodiments of the present disclosure, the predefined rules are at least one of: (i) mismatch during customer authentication; (ii) business data; (iii) agents that are associated with a risk group or line of business; (iv) behavioral flows of the speaker; (v) call content analysis; and (vi) frequency of the call interactions or any combination thereof.
  • Furthermore, in accordance with some embodiments of the present disclosure, the analyzing is performed on audio or textual content.
  • Furthermore, in accordance with some embodiments of the present disclosure, the collecting is further based on automated machine-learning algorithms, such as phonetic speech and voice analysis.
  • Furthermore, in accordance with some embodiments of the present disclosure, the ranking further includes: (i) comparing each call interaction in the interactions database to all other call interactions in the interactions database to yield a matrix of comparisons; (ii) scoring each pair of call interactions based on the extracted i-vectors; (iii) retrieving from each row in the matrix of comparisons the pair of call interactions (i,j) with the highest score; and (iv) for each retrieved pair of call interactions (i,j), performing clustering.
  • Furthermore, in accordance with some embodiments of the present disclosure, the clustering is performed according to the following conditions. When the score of the pair of call interactions (i,j) is higher than a predefined threshold: a. when neither of the call interactions (i,j) has been assigned to a cluster, assign both interactions to a new cluster; b. when only one of the call interactions (i,j) is assigned to a cluster, add the unassigned call interaction to that cluster; c. when both call interactions are assigned to different clusters, merge them into one cluster. When the score of the pair of call interactions (i,j) is not higher than the predefined threshold, call interaction (i), which has the highest score in its row, is assigned to a new cluster.
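  • The clustering conditions above can be sketched as follows, assuming a precomputed symmetric matrix of pairwise similarity scores between call interactions (a higher score meaning the two calls are more likely the same speaker). The matrix values and threshold are illustrative.

```python
def cluster_interactions(scores, threshold):
    """scores[i][j]: similarity of interactions i and j.
    Returns a mapping from interaction index to cluster id."""
    n = len(scores)
    assignment = {}   # interaction index -> cluster id
    clusters = {}     # cluster id -> set of interaction indices
    next_id = 0
    for i in range(n):
        # Pick the best-matching partner j for interaction i (row maximum).
        j = max((k for k in range(n) if k != i), key=lambda k: scores[i][k])
        if scores[i][j] > threshold:
            in_i, in_j = assignment.get(i), assignment.get(j)
            if in_i is None and in_j is None:      # (a) both unassigned
                clusters[next_id] = {i, j}
                assignment[i] = assignment[j] = next_id
                next_id += 1
            elif in_i is None or in_j is None:     # (b) join existing cluster
                cid = in_i if in_i is not None else in_j
                clusters[cid].update({i, j})
                assignment[i] = assignment[j] = cid
            elif in_i != in_j:                     # (c) merge two clusters
                clusters[in_i] |= clusters.pop(in_j)
                for k in clusters[in_i]:
                    assignment[k] = in_i
        elif i not in assignment:                  # below threshold: singleton
            clusters[next_id] = {i}
            assignment[i] = next_id
            next_id += 1
    return assignment

# Four interactions: 0 and 1 share a speaker, as do 2 and 3.
scores = [
    [0.0, 0.9, 0.1, 0.1],
    [0.9, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 0.8],
    [0.1, 0.1, 0.8, 0.0],
]
assignment = cluster_interactions(scores, threshold=0.5)
```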
  • Furthermore, in accordance with some embodiments of the present disclosure, the classifying comprises calculating a confidence value for each cluster based on the inner ties between the call interactions in the cluster.
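  • One plausible reading of such a confidence value is the mean pairwise similarity between a cluster's interactions; the use of the mean here is an assumption for the sketch, since the disclosure does not fix a specific formula for the inner ties.

```python
from itertools import combinations

def cluster_confidence(members, scores):
    """Mean pairwise similarity ("inner ties") of a cluster's interactions."""
    pairs = list(combinations(members, 2))
    if not pairs:
        return 1.0   # a single-interaction cluster is trivially consistent
    return sum(scores[i][j] for i, j in pairs) / len(pairs)

# Pairwise similarity scores between three interactions in one cluster.
scores = [
    [0.0, 0.9, 0.8],
    [0.9, 0.0, 0.7],
    [0.8, 0.7, 0.0],
]
confidence = cluster_confidence([0, 1, 2], scores)   # mean of 0.9, 0.8, 0.7
```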
  • There is further provided, in accordance with some embodiments of the present disclosure, a computerized system for proactive fraudster exposure in a customer service center having multiple service channels. The system's processor may be configured to: (i) collect call interactions for analysis from a database of recorded calls in a customer service channel, the collecting being based on predefined rules applied by a calls collection engine; (ii) store the collected call interactions in an interactions database; (iii) analyze the call interactions in the interactions database by a Proactive Fraud Exposure (PFE) engine, said analyzing comprising: a. generating a voiceprint for each interaction in the interactions database, to be stored in a voiceprints database; b. using a machine learning technique to group the call interactions in the interactions database into one or more clusters based on the respective voiceprints in the voiceprints database, each of the one or more clusters being associated with a repeating speaker's voice based on the generated voiceprints; c. storing the one or more clusters in a clusters database; and d. ranking and classifying the one or more clusters stored in the clusters database to yield a list of potential fraudsters; and (iv) transmit the list of potential fraudsters to an application to display said list of potential fraudsters to a user via a display unit, thus enabling said user, upon review of said list of potential fraudsters, to add fraudsters from said list to a watchlist database so that when a fraudster calls the customer service center, he or she may be detected in real-time.
  • There is further provided, in accordance with some embodiments of the present disclosure, a computer-implemented method for proactive fraudster exposure in a customer service center, according to content analysis and voice biometrics analysis. The computer-implemented method may comprise collecting call interactions from a database of recorded calls in a customer service channel. The collecting may be based on predefined rules by a calls collection engine.
  • According to some embodiments, the computer-implemented method may further comprise after the collecting, storing the collected call interactions in an interactions database.
  • According to some embodiments, the computer-implemented method may further comprise, after the storing, performing a first type analysis to cluster the call interactions in the interactions database into ranked clusters and storing the ranked clusters in a clusters database.
  • According to some embodiments, the computer-implemented method may further comprise performing a second type analysis on a predefined number of the highest ranked clusters to cluster them into ranked clusters and storing the ranked clusters in the clusters database.
  • According to some embodiments, the first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa.
  • According to some embodiments, the computer-implemented method may further comprise enabling a user to repeat, in a cyclic manner, the performing of the first type analysis to cluster the call interactions in the interactions database into ranked clusters and the storing of the ranked clusters in the clusters database, and the performing of the second type analysis on a predefined number of the highest ranked clusters and the storing of the resulting ranked clusters in the clusters database.
  • According to some embodiments, the computer-implemented method may further comprise retrieving from the ranked clusters in the clusters database, a list of fraudsters; transmitting the list of potential fraudsters to an application to display to a user the list of potential fraudsters via a display unit, thus enabling the user to review the list of potential fraudsters and to add fraudsters from the list to a watchlist database.
  • According to some embodiments, the content analysis may include performing Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof.
  • According to some embodiments, the content analysis may further include operating a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics.
  • According to some embodiments, the content analysis may further include identifying clusters with topics having predefined suspicious phrases and ranking the identified clusters.
  • According to some embodiments, the identifying may be performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistic, deep learning, hybrid, Term Frequency—Inverse Document Frequency (TF-IDF) and parallel word importance estimator.
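  • Of the listed techniques, TF-IDF is the most straightforward to illustrate. The sketch below scores single-word phrases across toy transcripts; the transcripts, the word-level granularity and the smoothed IDF variant are assumptions for the example.

```python
import math

# Minimal TF-IDF scoring, one of the identification techniques listed
# above. Each transcript is a list of words.
def tfidf(phrase, doc, docs):
    tf = doc.count(phrase) / max(len(doc), 1)        # term frequency
    df = sum(1 for d in docs if phrase in d)          # document frequency
    idf = math.log(len(docs) / (1 + df)) + 1          # smoothed inverse DF
    return tf * idf

transcripts = [
    "please transfer the money to my new account".split(),
    "i forgot my password please reset it".split(),
    "transfer the remaining balance to this account".split(),
]
vocab = sorted({w for d in transcripts for w in d})
# Rank each word by its best score over all transcripts.
ranked = sorted(vocab,
                key=lambda w: max(tfidf(w, d, transcripts) for d in transcripts),
                reverse=True)
```

In practice, the same scoring extends to multi-word phrases extracted from the ASR transcripts.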
  • According to some embodiments, the content analysis may further include representing a predefined amount of the highest ranked phrases as nodes in a connected graph.
  • According to some embodiments, the content analysis may further include calculating phrase statistics to suggest to a user a connection between the phrases according to the graph.
  • According to some embodiments, the content analysis may further include enabling a user to select a specific phrase in the graph for biometrics analysis of related call interactions.
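  • Representing the top-ranked phrases as graph nodes connected when they co-occur can be sketched as follows; using the co-occurrence count as the edge weight is an assumed stand-in for the phrase statistics that suggest connections to the user.

```python
from collections import defaultdict
from itertools import combinations

def build_phrase_graph(top_phrases, transcripts):
    """Edges between phrases that appear in the same call transcript,
    weighted by the number of co-occurrences."""
    graph = defaultdict(int)   # (phrase_a, phrase_b) -> co-occurrence count
    for transcript in transcripts:
        present = [p for p in top_phrases if p in transcript]
        for a, b in combinations(sorted(present), 2):
            graph[(a, b)] += 1
    return dict(graph)

calls = [
    "please transfer money to my new account",
    "reset my password and transfer money",
    "what is my account balance",
]
edges = build_phrase_graph(["transfer money", "new account", "password"], calls)
```

A user could then select any node, e.g., "transfer money", and send the call interactions behind its edges to biometrics analysis.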
  • According to some embodiments, the voice biometrics analysis is performed by a Proactive Fraud Exposure (PFE) engine, wherein the PFE engine performs the clustering.
  • According to some embodiments, the PFE engine performs the clustering according to at least one of the following algorithms: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC).
  • According to some embodiments, the content analysis further includes stemming to transform phrases into a predefined form.
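  • A toy suffix-stripping stemmer illustrates transforming phrases into a predefined (stemmed) form; a production system would likely use a full stemmer such as Porter's rather than this simplified rule set, and the suffix list here is an assumption for the sketch.

```python
# Simplified suffix stripping: remove a common English suffix when the
# remaining stem keeps at least three characters.
SUFFIXES = ("ing", "ed", "es", "s")

def stem(word: str) -> str:
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def stem_phrase(phrase: str) -> str:
    """Normalize a phrase so variants cluster to the same form."""
    return " ".join(stem(w) for w in phrase.lower().split())
```

With this normalization, "transfers pending" and "transfer pended" reduce toward a common form before phrase clustering.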
  • There is further provided, in accordance with some embodiments of the present disclosure, a computerized-system for proactive fraudster exposure in a customer service center according to content analysis and voice biometrics analysis.
  • According to some embodiments, the computerized-system may include: a database of recorded calls; an interactions database; a clusters database; a watchlist database; a memory to store the database of recorded calls, the interactions database, the clusters database and the watchlist database; a display unit; and a processor.
  • According to some embodiments, the processor may be configured to: (a) collect call interactions from the database of recorded calls in a customer service channel. The collecting may be based on predefined rules by a calls collection engine. The processor may be further configured to (b) store the collected call interactions in the interactions database.
  • According to some embodiments, the processor may be further configured to (c) perform a first type analysis to cluster the call interactions in the interactions database into ranked clusters and to store the ranked clusters in the clusters database.
  • According to some embodiments, the processor may be further configured to (d) perform a second type analysis on a predefined number of the highest ranked clusters to cluster them into ranked clusters and to store the ranked clusters in the clusters database. The first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa.
  • According to some embodiments, the processor may be further configured to enable a user to repeat steps (c) and (d).
  • According to some embodiments, the processor may be further configured to retrieve from the ranked clusters in the clusters database a list of fraudsters and transmit the list of potential fraudsters to an application to display said list of potential fraudsters to a user via a display unit, thus enabling said user to review said list of potential fraudsters and to add fraudsters from the list to the watchlist database.
  • According to some embodiments, the content analysis may include: performing Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof; operating a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics; and identifying clusters with topics having predefined suspicious phrases and ranking the identified clusters.
  • According to some embodiments, the identifying is performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistic, deep learning, hybrid, Term Frequency—Inverse Document Frequency (TF-IDF) and parallel word importance estimator.
  • According to some embodiments, the content analysis may further include: representing a predefined amount of the highest ranked phrases as nodes in a connected graph; calculating phrase statistics to suggest to a user a connection between the phrases according to the graph; and enabling a user to select a specific phrase in the graph for biometrics analysis of related call interactions.
  • According to some embodiments, the voice biometrics analysis is performed by a Proactive Fraud Exposure (PFE) engine, wherein the PFE engine performs the clustering.
  • According to some embodiments, the PFE engine performs the clustering according to at least one of the following algorithms: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC).
  • According to some embodiments, the content analysis further includes stemming to transform phrases into a predefined form.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order for the present disclosure to be better understood and for its practical applications to be appreciated, the following Figures are provided and referenced hereafter. It should be noted that the Figures are given as examples only and in no way limit the scope of the disclosure. Like components are denoted by like reference numerals.
  • FIG. 1 schematically illustrates a calls collection engine, in accordance with some embodiments of the present disclosure;
  • FIG. 2 schematically illustrates a proactive fraud exposure engine, in accordance with some embodiments of the present disclosure;
  • FIG. 3A is a high-level diagram of the system, in accordance with some embodiments of the present disclosure;
  • FIG. 3B schematically illustrates a system for proactive fraudster exposure in a customer service center having multiple channels, in accordance with some embodiments of the present disclosure;
  • FIG. 4 is a high-level flow diagram depicting clustering algorithm, in accordance with some embodiments of the present disclosure;
  • FIGS. 5A-5B schematically illustrate a score matrix with speakers marked after cluster detection and the grouping of the interactions into one or more clusters, respectively, in accordance with some embodiments of the present disclosure;
  • FIG. 6 is a high-level flow diagram depicting a ranking algorithm, in accordance with some embodiments of the present disclosure;
  • FIG. 7 is a high-level flow diagram depicting a method for proactive fraudster exposure, in accordance with some embodiments of the present disclosure;
  • FIG. 8 is a high-level flow diagram depicting a method for analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine, in accordance with some embodiments of the present disclosure;
  • FIG. 9 is a high-level flow diagram depicting a content analysis, in accordance with some embodiments of the present disclosure;
  • FIG. 10 schematically illustrates a calls cluster phase including a content analysis engine and a Proactive Fraud Exposure (PFE) engine, in accordance with some embodiments of the present disclosure;
  • FIGS. 11A-11B illustrate a schematic flowchart of a method for proactive fraudster exposure, in a customer service center according to content analysis and voice biometrics analysis, in accordance with some embodiments of the present disclosure;
  • FIG. 12 illustrates ranking of phrases, in accordance with some embodiments of the present disclosure; and
  • FIG. 13 illustrates the highest ranked phrases as nodes in a connected graph, in accordance with some embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. However, it will be understood by those of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, modules, units and/or circuits have not been described in detail so as not to obscure the disclosure.
  • Although embodiments of the disclosure are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing”, “analyzing”, “checking”, or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium (e.g., a memory) that may store instructions to perform operations and/or processes. Although embodiments of the disclosure are not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”. The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently. Unless otherwise indicated, use of the conjunction “or” as used herein is to be understood as inclusive (any or all of the stated options).
  • The term "voiceprint" as used herein refers to a stored sample of a user's voice which is used to identify and authenticate the user via speaker recognition based on characteristics of the voice. The characteristics of the voice may be selected from the group consisting of: volume, pace, pitch, resonance, articulation, enunciation, respiration, pauses, timbre, stress, rhythm, diction, dialect and the like.
  • The term “cluster” as used herein refers to a set of call interactions.
  • The term “social engineering” as used herein refers to manipulating agents to provide confidential information to a speaker that pretends to be a legitimate customer.
  • The term "i-vector" as used herein refers to intermediate vectors or identity vectors, which are an enhancement of a previously used approach in speaker verification technology called Joint Factor Analysis (JFA). JFA divides a human voice into two factors: a speaker factor and a channel factor. The data structure of an i-vector may be an array, in which each element represents a characteristic of a speaker's speech. The i-vectors are generated as part of voiceprint generation for later comparison.
  • The term “similarity score” as used herein refers to a comparison of two voice samples based on extracted i-vectors.
  • The term “watchlist” as used herein refers to a list of known fraudsters which is commonly saved in a database.
  • The term "customer service channels" as used herein refers to one or more types of channels through which a customer service center of an organization offers service to its customers. E.g., a customer may complete an action with the organization via one of the following customer service channels: Interactive Voice Response (IVR), a mobile application or speaking with an agent.
  • The term "threshold" as used herein refers to a scalar such that:
  • interactions a and b are a mismatch if score(a, b) ≤ threshold, and a match if score(a, b) > threshold.
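  • The threshold decision can be sketched with cosine similarity between two i-vectors as the score function; cosine scoring is an assumption here, as real systems may use PLDA or another scoring backend, and the threshold value is illustrative.

```python
import math

def cosine_score(a, b):
    """Cosine similarity between two i-vectors (assumed scoring method)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def decide(a, b, threshold=0.7):
    """Match/mismatch decision per the threshold definition above."""
    return "match" if cosine_score(a, b) > threshold else "mismatch"
```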
  • The term “claimed customer” as used herein refers to the speaker's claimed identity i.e., the details of a legitimate customer, which is provided by a fraudster in a call interaction between a fraudster and an agent.
  • The term “topic” as used herein refers to a set of key phrases that have been said during related call interactions.
  • The term "confidence level" as used herein refers to a value representing how likely it is that a given cluster includes several calls made by one speaker.
  • Commonly, fraud attacks are carried out by people who call customer service centers multiple times in attempts to perform money transactions, identity theft and the like.
  • Nowadays, organizations must verify customers' identities to protect them and their data from fraud, especially with the rise in identity theft and account takeover, which incur high costs. For that purpose, and also to increase the level of security, there are systems and methods for authentication and fraud prevention in customer service channels which are based on voice biometrics technology and other factors. Biometrics technology automatically verifies the speaker's claimed identity, commonly within the first few seconds of a call, through natural conversation with an agent in the customer service channel. The biometric technology verifies the identity of the speaker by comparing a sample of an ongoing call interaction of the speaker with a voiceprint.
  • These systems and methods, which are based on biometric technology, scan pre-created watchlists against the speaker's voice and call characteristics at the beginning of each call to identify suspected fraud. When a suspected speaker is identified, the systems and methods can send an alert to the security officers, block the caller from committing fraud and even block future calls, thus lowering the organization's overall spending on authentication.
  • However, the construction of the watchlists may still require manual checks and may be time-consuming. Therefore, there is a need for a system and method that eliminates the expense and time of manual checks by analyzing the big data of call interactions and extracting information related to fraudsters, to be later presented to security officers who, upon review, will add the fraudsters to the watchlist.
  • The embodiments taught herein solve the technical problem of checking and analyzing varied high-volume call interactions which are occurring in high velocity, to detect and identify fraudsters.
  • The description herein of call interactions in a customer call center between a customer and an agent, i.e., a call center representative, is merely by way of example and for technical clarity, and not by way of limitation of the embodiments of the present disclosure. The embodiments herein for proactive fraudster exposure in a customer service channel may be applied to any customer service channel, such as IVR or a mobile application. Furthermore, the embodiments herein are not limited to a call center but may be applied to any suitable platform providing customer service channels.
  • FIG. 1 schematically illustrates a calls collection engine, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, in the customer service center, all call interactions are recorded and stored in a database of recorded calls. A calls collection engine 100 receives call interactions from the database of recorded calls, where some of the calls may be ongoing calls.
  • According to some embodiments, a user, e.g., a security officer, may define a set of rules which are applied to all call interactions and determine which call interactions should be further analyzed. The set of rules may include various types of rules, for example: (i) the speaker received a mismatch result during the authentication procedure; (ii) the speaker asked to perform a high-risk transaction; (iii) the agent that handled the call is associated with a special group that should always be monitored, e.g., VIP customers. The calls collection engine 100 may apply the predefined rules to the call interactions to extract call interactions for further analysis, i.e., pending interactions to be stored in an interactions database 110, thus lowering the high volume of call interactions that must be checked by the security officers. The predefined rules may be at least one of: (i) mismatch during customer authentication; (ii) business data; (iii) agents that are associated with a risk group or line of business; (iv) behavioral flows of the speaker; (v) call content analysis; and (vi) frequency of the call interactions, or any combination thereof.
  • In a non-limiting example, a mismatch during customer authentication may occur when the data that the user provides during the authentication procedure does not match the authentication data that is saved in the organization's database. Further, in a non-limiting example, business data may include high-risk transactions, such as a money transfer when the organization is a financial institution. Furthermore, in a non-limiting example, agents that are associated with a risk group or line of business may be agents which provide service to VIP customers. Furthermore, in a non-limiting example, behavioral flows of the speaker may include a suspicious sequence of requests during the call interaction.
  • In a non-limiting example, a call content analysis may be related to search for keywords and phrases. In another non-limiting example, frequency of the call interactions relates to the number of call interactions from the same speaker in a predefined time interval.
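  • The rule-based collection described above can be sketched as a set of predicates over call metadata, where a call is kept for further analysis when at least one rule matches. The metadata field names and the rule thresholds are illustrative assumptions for this sketch.

```python
# Each rule is a predicate over a call-metadata dictionary; the field
# names ("auth_result", "agent_group", ...) are assumed for illustration.
RULES = [
    lambda call: call.get("auth_result") == "mismatch",        # auth mismatch
    lambda call: call.get("transaction_risk") == "high",       # business data
    lambda call: call.get("agent_group") in {"vip", "wire_transfers"},
    lambda call: call.get("calls_last_24h", 0) > 3,            # call frequency
]

def collect(calls):
    """Keep any call that matches at least one predefined rule."""
    return [c for c in calls if any(rule(c) for rule in RULES)]

calls = [
    {"id": 1, "auth_result": "match", "calls_last_24h": 1},
    {"id": 2, "auth_result": "mismatch"},
    {"id": 3, "agent_group": "vip"},
]
pending = collect(calls)   # pending interactions for the interactions database
```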
  • According to some embodiments, when a call ends, its information is sent to the Calls Collection Engine 100 to check whether the interaction matches one or more of the pre-defined rules of the system. If the call interaction matches one or more of the rules, it is stored in the interactions database 110 to be later analyzed by the PFE engine, which is shown in detail in FIG. 2.
  • FIG. 2 schematically illustrates a proactive fraud exposure engine, in accordance with some embodiments of the present disclosure.
  • Once a call interaction is stored in interactions database 210 (i.e., 110 in FIG. 1) by the Calls Collection Engine 100 in FIG. 1, the PFE engine 200 may retrieve and read the information of the call interaction from the interactions database 210 to analyze it.
  • According to some embodiments, Calls Collection Engine 100 in FIG. 1 and PFE engine 200 may include a processor, a memory, an output device, an input device and communication circuitry and interface module for wired and/or wireless communication with any other computerized device over a communication network, as illustrated in FIG. 3B, described hereinbelow.
  • According to some embodiments, in a non-limiting example, the user may be a security officer and the data may be details of fraudsters to be added to a watchlist database 240 and the instructions may be the rules, which are applied on all call interactions and determine which call interactions should be further analyzed.
  • According to some embodiments, the PFE Engine 200 may use the processor and memory to generate a voiceprint for each call interaction in the interactions database 210 to be stored in a voiceprints database 220.
  • Next, according to some embodiments, the PFE Engine 200 may use a machine learning technique to group the call interactions in the interactions database 210, based on the voiceprints database 220, into one or more clusters, which may be stored in a clusters database 230. Each of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints.
  • According to some embodiments, the one or more clusters in the clusters database 230 may be ranked and classified to yield a list of potential fraudsters.
  • According to some embodiments, the list of potential fraudsters may be transmitted to an application 260 over a communication network, to be displayed later to a user via a display unit 250. The user may be a security officer who reviews the list of potential fraudsters and listens to the calls in the respective cluster. Upon review, when the security officer suspects that a call has been made by an actual fraudster, the security officer may add the call and the respective fraudster's information via the application 260 to a watchlist database 240. The application 260 may be a web application or a desktop application.
  • According to some embodiments, after the details of the fraudster are stored in the watchlist database 240, when the fraudster calls the customer service center again, the call may be detected in real time. An alert may be sent to the users, i.e., the agents and/or the security officers, upon the detection for further monitoring and analysis; alternatively, the call may be blocked.
  • FIG. 3A is a high-level diagram of the system, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, Real Time Authentication (RTA) flows 305 may be sent to Real Time Voice Buffering (RTVB) 310, which may buffer the call's audio to a Fluent Engine 315. The Fluent Engine 315 is a voice biometric engine that performs authentication and fraud detection. An authentication center 320 holds the fraudster watchlists and may forward the watchlists to the Fluent Engine 315. RTA results are transmitted to a call server 325, which manages all the calls and controls the call recording by initiating the call recording in the system and the buffering performed by RTVB 310. The call server 325 also saves all the call-related metadata to the DB server 335; i.e., once a call ends, call-related metadata is saved, such as whether the call was indeed recorded and archived, certain business data, or whether an authentication mismatch occurred.
  • According to some embodiments, a Proactive Fraud Engine (PFE) Rule Manager 330, which is a sub-component of the call server 325, may tag the relevant PFE calls according to predefined PFE rules. Once a call ends, the tagged PFE calls may be transmitted to a DB Server 335. The DB server 335 manages all the call interactions with all the databases, such as the rule database 355 and the voiceprints database 340.
  • According to some embodiments, PFE call interactions are forwarded to database 340 which holds the pending PFE interactions and the PFE voiceprints. PFE Engine 345 creates the voiceprints from the tagged calls and performs the clustering algorithms.
  • According to some embodiments, Storage Center 350 may hold the archived calls as Media Files (MF) and may forward MF to PFE Engine 345. PFE Engine 345 may forward clustering result to Rule database 355, which holds the PFE application data.
  • PFE application Backend 360 serves the PFE application frontend requests. PFE Frontend 365 is the application where a user can define rules, review the clustering results, manage them and add new fraudsters to the watchlist database 240 in FIG. 2.
  • FIG. 3B schematically illustrates a system for proactive fraudster exposure in a customer service center having multiple channels, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, Calls Collection Engine 100 in FIG. 1 and PFE engine 200 may include a processor 3010, a memory 3040, an input device 3025, an output device 3030, and a communication circuitry and interface module 3005 for wired and/or wireless communication with any other computerized device over a communication network.
  • According to some embodiments, the processor 3010 may be configured to operate in accordance with programmed instructions stored in memory 3040 and may include one or more processing units, e.g., of one or more computers. The processor 3010 may be further capable of executing an engine such as PFE engine 3020 (also shown in FIG. 2 as 200), for generating a voiceprint of a speaker out of an audio sample. The voiceprint is stored in a voiceprints database such as voiceprints database 3035.
  • According to some embodiments, the processor 3010 via PFE 3020 may communicate with an output device such as output device 3030 via application 3060. For example, the output device 3030 may include a computer monitor or screen and the processor 3010 may communicate with a screen of the output device 3030. In another example, the output device 3030 may include a printer, display panel, speaker, or another device capable of producing visible, audible, or tactile output.
  • According to some embodiments, the processor 3010 via PFE 3020 may further communicate with an input device such as input device 3025 via application 3060. For example, the input device 3025 may include one or more of a keyboard, keypad or pointing device for enabling a user to input data or instructions for operation of the processor 3010. In a non-limiting example, the user may be a security officer and the data may be details of fraudsters to be added to a watchlist database 240 in FIG. 2 and the instructions may be the rules, which are applied on all call interactions and determine which call interactions in the recorded calls database 3050 should be stored in interactions database 3045 to be further analyzed by the PFE engine 3020 (also shown in FIG. 2 as 200).
  • According to some embodiments, a user may insert the rules according to which call interactions in the recorded calls database 3050 should be stored in interactions database 3045, via application 3060. In some embodiments, a user may receive a list of potential fraudsters and update the watchlist database 240 (FIG. 2) via application 3060 (also shown as application 260 in FIG. 2).
  • According to some embodiments, a calls collection engine such as call collection engine 3015 (also shown in FIG. 1 as 100) may receive call interactions from a database of recorded calls such as recorded calls database 3050, where some of the calls may be ongoing calls.
  • According to some embodiments, the processor 3010 may further communicate with memory 3040. The memory 3040 may include one or more volatile or nonvolatile memory devices. The memory 3040 may be utilized to store, for example, programmed instructions for operation of the processor 3010, data or parameters for use by the processor 3010 during operation, or results of the operation of the processor 3010. For example, the memory 3040 may store: recorded calls database 3050, call interactions in interactions database 3045 (also shown in FIG. 2 as 210), voiceprints in voiceprints database 3035 (also shown in FIG. 2 as 220) and clusters in a clusters database 3055 (also shown in FIG. 2 as 230).
  • According to some embodiments, the processor 3010 may use PFE engine 3020 (also shown in FIG. 2 as 200) to implement a machine learning technique to group the call interactions in the interactions database 3045 into one or more clusters and store the clusters in the clusters database 3055. Each of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints stored in the voiceprints database 3035.
  • According to some embodiments, the processor 3010 may further use the PFE engine 3020 to rank and classify the one or more clusters stored in the clusters database 3055 to yield a list of potential fraudsters.
  • FIG. 4 is a high-level flow diagram depicting clustering algorithm, in accordance with some embodiments of the present disclosure. The steps described herein below may be performed by a processor.
  • According to some embodiments, operation 410 may comprise taking a collection of call interactions. Operation 420 may comprise, for each call interaction, finding the call interactions that are most similar to it and creating a cluster out of them. In some embodiments, clustering algorithm 400 may further comprise operation 430, which may comprise, if there is no call interaction that is similar to it, creating a cluster of size ‘1’ that represents it. Next, clustering algorithm 400 may comprise ranking the clusters and determining which clusters have the highest confidence level.
  • In some embodiments, clustering algorithm 400 may be illustrated by the following pseudo code:
  • Given N interactions and a threshold (T), init N empty groups (G).
    Create an NxN matrix (M) containing the compare scores of all pairwise comparisons.
    Diagonal values should be (-infinity).
    For i from 0 to N:
        Find the maximum value for row i; say it is at index j.
        If maximum > T:
            If G[i] and G[j] are both empty, assign them to a new cluster.
            If G[i] is empty and G[j] is not, assign G[i] to G[j] (and vice versa).
            If G[i] and G[j] are both assigned, merge them.
        If not:
            G[i] is assigned to a new cluster.
    T is determined in the following way:
        Take all the pairwise scores, calculate their mean and variance, then
        T = mean - Z*variance.
  • Z is empirically tested to be between 1 and 2 (commonly 2). Optionally, when extremely large clusters are detected, for example more than 100 calls in one cluster, all the above may be repeated for each large cluster, creating sub-clusters.
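  • The clustering pseudo code above can be sketched in plain Python as follows (a simplified, non-authoritative illustration: the function name, the list-of-lists score matrix, and the merge handling are assumptions for demonstration, not the disclosed implementation):

```python
import statistics

def cluster_interactions(scores, z=2.0):
    """Group N interactions given an NxN pairwise similarity matrix
    (list of lists). The threshold T is mean - Z * variance of all
    pairwise scores; each interaction links to its best match when
    that score exceeds T, mirroring the pseudo code above."""
    n = len(scores)
    # All off-diagonal pairwise scores (the diagonal is a self-compare).
    pairwise = [scores[i][j] for i in range(n) for j in range(n) if i != j]
    t = statistics.mean(pairwise) - z * statistics.pvariance(pairwise)

    group = [None] * n  # group[i] = cluster id assigned to interaction i
    next_id = 0
    for i in range(n):
        # Best match for row i, ignoring the diagonal.
        j = max((k for k in range(n) if k != i), key=lambda k: scores[i][k])
        if scores[i][j] > t:
            if group[i] is None and group[j] is None:
                group[i] = group[j] = next_id  # new cluster for both
                next_id += 1
            elif group[i] is None:
                group[i] = group[j]
            elif group[j] is None:
                group[j] = group[i]
            elif group[i] != group[j]:
                old = group[j]  # merge cluster of j into cluster of i
                group = [group[i] if g == old else g for g in group]
        elif group[i] is None:
            group[i] = next_id  # singleton cluster of size 1
            next_id += 1
    return group
```

  • For a matrix containing two clearly similar pairs, the sketch yields two clusters of size two, matching the grouping behavior described for FIG. 5B.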
  • FIGS. 5A-5B schematically illustrate score matrix with speakers marked after cluster detection and the grouping of the interactions into one or more clusters, respectively, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, in a non-limiting example a score matrix with speakers marked after cluster detection 510 is shown. In the matrix, given a set of call interactions, there is a pairwise comparison of all to all, and similarity scores. The similarity scores are calculated based on i-vectors of each speaker according to a similarity algorithm.
  • According to some embodiments, given a threshold, which in a non-limiting example may have the value '25', all call interactions are clustered together into a set of interactions, as shown in 520 (in FIG. 5B). If the similarity score of call '1' and call '2' is the highest in its row and is higher than the predefined threshold, then that call interaction is clustered into the set of interactions 520.
  • According to some embodiments, the set of interactions 520 is later divided into clusters according to the clustering algorithm 400 described in FIG. 4. The result of the clustering algorithm is shown in 530.
  • FIG. 6 is a high-level flow diagram depicting a ranking algorithm 600, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, in operation 610 the ranking algorithm 600 may take all the clusters shown in element 530 in FIG. 5B.
  • According to some embodiments, operation 620 may comprise, for each cluster, calculating the confidence of the inner ties, and then in operation 630 normalizing the calculated confidence to yield a score.
  • According to some embodiments, the normalization is needed because the matrix includes the speaker effect and the channel effect, and this is also manifested in the i-vectors themselves, therefore there is a need to later normalize the channel effect.
  • In some embodiments, operation 640 may comprise checking whether it is the last cluster, and operation 650 may comprise storing the cluster ID and the score in a data structure. In operation 660, this score is used to rank the clusters in the data structure and output them in a sorted manner. In a non-limiting example, the sorted clusters may be output in descending order, from high to low.
  • According to some embodiments, the ranking is performed by at least one of the following approaches: (i) inter-cluster statistics; (ii) probability of representing a fraudster rather than one of the customers; or any combination thereof.
  • According to some embodiments, the inter-cluster statistics represent the level of “confidence” that the cluster includes call interactions that share the same voice.
  • According to some embodiments, the probability of representing a fraudster may be performed using one or more of the following factors: (i) same voice on same claimed customer also known as “targeted attack”; (ii) same voice on different claimed customer, also known as “spread out attack”; (iii) fraudulent behavioral characteristics of the call interaction, manifested in the voice such as deception acoustic features: stutter, jitter, shimmer and the like, and (iv) metadata representing details of a predefined line of business that is more prone to fraud attacks than others.
  • According to some embodiments, each factor may be attributed with a predefined weight value and the ranking algorithm 600 may be further based on a weighted average of the factors. The weights may be predefined in collaboration with the employees in the call center.
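  • As a minimal sketch of such a weighted average (the factor names and weight values below are hypothetical, chosen only for illustration):

```python
def weighted_fraud_score(factors, weights):
    """Weighted average of fraud-probability factors, as described
    above. The factor keys (e.g. 'targeted_attack') are illustrative
    placeholders, not names taken from the disclosure."""
    total_weight = sum(weights[name] for name in factors)
    return sum(factors[name] * weights[name] for name in factors) / total_weight
```

  • For example, a cluster exhibiting only a targeted attack, with that factor weighted three times as heavily as a spread-out attack, would score 0.75.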
  • In some embodiments, ranking algorithm 600 may be illustrated by the following pseudo code, given N clusters:
  • Init an empty array A.
    For i from 1 to N:
        TmpSum = Sum(all pairwise compares in cluster i)
        clusterMean = TmpSum / numberOfCompares
        clusterVariance = variance(all pairwise compares in cluster i)
        clusterScore = clusterMean / (clusterVariance + 1)
        A.append(clusterScore, i)
    A = A.sort  # based on clusterScore
    Display to the user the 'y' highest scored clusters.
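  • The ranking pseudo code above can likewise be sketched in Python (a hedged illustration; the dict-of-score-lists input format is an assumption):

```python
import statistics

def rank_clusters(clusters):
    """Rank clusters by confidence following the pseudo code above.
    `clusters` maps a cluster id to the list of pairwise compare
    scores within that cluster; clusterScore = mean / (variance + 1),
    so tight, highly similar clusters rank first."""
    ranked = []
    for cluster_id, scores in clusters.items():
        cluster_mean = statistics.mean(scores)
        cluster_variance = statistics.pvariance(scores)
        ranked.append((cluster_mean / (cluster_variance + 1), cluster_id))
    ranked.sort(reverse=True)  # highest scored clusters first
    return ranked
```

  • Dividing by (variance + 1) penalizes clusters whose internal similarity scores are inconsistent, so two clusters with the same mean are ranked by how tight their inner ties are.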
  • FIG. 7 is a high-level flow diagram depicting a method for proactive fraudster exposure 700, in accordance with some embodiments of the present disclosure.
  • In some embodiments, proactive fraudster exposure 700 may comprise operation 710 for collecting call interactions from a database of recorded calls (not shown) in a customer service center having multiple service channels, whereby the collecting is based on predefined rules by a calls collection engine, e.g., calls collection engine 100 in FIG. 1.
  • In some embodiments, operation 720 may comprise storing the collected call interactions in an interactions database, e.g., interactions database 110 in FIG. 1.
  • In some embodiments, operation 730 may comprise analyzing the call interactions in the interactions database 110 in FIG. 1 by a Proactive Fraud Exposure (PFE) engine, e.g., PFE engine 200 in FIG. 2.
  • In some embodiments, operation 740 may comprise transmitting the list of potential fraudsters to an application, e.g., application 260 in FIG. 2 to display to a user, the list of potential fraudsters via a display unit, e.g., display unit 250 in FIG. 2, thus enabling the user, e.g., a security officer to review the list of potential fraudsters and to add fraudsters from the list to a watchlist database, e.g., watchlist database 240 in FIG. 2.
  • FIG. 8 is a high-level flow diagram depicting a method for analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, operation 730 in FIG. 7 may comprise analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine. Such operation 730 may comprise operations 800 depicting a method for analyzing the call interactions by a Proactive Fraud Exposure (PFE) engine. According to some embodiments, operation 810 may comprise generating a voiceprint for each call interaction in the interactions database 210 in FIG. 2 to be stored in a voiceprints database 220 in FIG. 2.
  • According to some embodiments, operation 820 may comprise using machine learning technique to group the call interactions in the interaction database, e.g., interaction database 210 in FIG. 2 into one or more clusters, whereby each one of the one or more clusters is associated with a repeating speaker's voice based on the generated voiceprints.
  • According to some embodiments, operation 830 may comprise storing the one or more clusters in a clusters database, e.g., clusters database 230 in FIG. 2. In some embodiments, operation 840 may comprise ranking and classifying the one or more clusters stored in a clusters database, e.g., clusters database 230 in FIG. 2 to yield a list of potential fraudsters.
  • According to some embodiments of the present disclosure, the similarity algorithm may use a log likelihood ratio. Given two i-vectors, V1 and V2, assumed to be normally distributed with mean 0 and variance 1, the ratio may be calculated as follows:
  • ratio(V1, V2) = Σ_{i=1}^{n} V1[i]² - Σ_{i=1}^{n} V2[i]²
  • where n is the length of the i-vector; in a non-limiting example, n may be equal to 400.
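  • Taken literally, the ratio above can be computed as follows (a minimal sketch; the function name is illustrative):

```python
def similarity_ratio(v1, v2):
    """Log likelihood ratio of two i-vectors per the formula above:
    the difference between the sums of their squared components.
    Both vectors are assumed to have the same length n (e.g. 400)."""
    if len(v1) != len(v2):
        raise ValueError("i-vectors must have the same length")
    return sum(x * x for x in v1) - sum(x * x for x in v2)
```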
  • In some embodiments of the present disclosure, the method may include calculating the predefined threshold from a decision boundary of a distribution of the similarity scores for voiceprints generated from speech data chunks.
  • According to some embodiments, the Proactive Fraudster Exposure (PFE) engine, such as PFE engine 200 in FIG. 2, performs voiceprint biometric analysis and identifies new fraudsters by grouping call interactions with a similar voiceprint into clusters. Then, the PFE engine ranks the clusters and displays them to a user, e.g., a security officer, who reviews the list of ranked clusters and determines whether the call interactions were made by a fraudster.
  • However, due to the increasing amount of call interactions in each customer service center, issues related to big data analysis might arise, such as inaccurate fraudster detection or too many clusters for the security officer to review. According to some embodiments, to resolve such issues, an additional analysis of the big data may be performed before or after the voice biometric analysis, thus overcoming the issues related to processing huge amounts of data.
  • For example, a financial organization may have millions of customer interactions every day. In such cases, a more sophisticated clustering method is required. By combining voice biometrics with different types of content analysis, the interactions clustering becomes more accurate and produces high quality clusters with a special focus on fraudulent activity and behavior detection.
  • According to some embodiments, content analysis may be performed before or after a biometric analysis to provide the ability to scale and automate the process of fraudster detection and to shorten the time needed to find fraud attacks. Essentially, the additional content analysis may provide more accurate results, i.e., more accurate fraudster detection and a short list of fraudsters with ranked clusters for review by a user such as the security officer.
  • According to some embodiments, the amount of call interactions processed by the PFE engine, such as PFE engine 200 in FIG. 2, may be reduced by focusing on higher-risk call interactions based on the call interaction content. Call interactions with security content phrases may be at higher risk of being fraud. The content analysis may enable a user to select a specific phrase in a generated graph for later voiceprint biometric analysis of the related call interactions.
  • FIG. 9 is a high-level flow diagram depicting a content analysis 900, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, a content analysis 900 may be performed after collecting call interactions from a database of recorded calls (not shown) in a customer service channel and storing the collected call interactions in an interactions database 910.
  • According to some embodiments, the content analysis 900 may perform Automatic Speech Recognition (ASR) for each call interaction in an interactions database 910 to yield a textual transcript 930 thereof.
  • According to some embodiments, the content analysis 900 may operate a clustering algorithm over each textual transcript 930 to detect the most important words 940, i.e., words that have a higher confidence level, by clustering phrases in the textual transcript 930 into clusters according to topics. The content analysis 900 may identify clusters with topics having one or more predefined suspicious phrases and may rank the identified clusters.
  • In a non-limiting example, in a contact center where a fraudster gains access to customer data, the data could be used to reset the password of the customer's account, thus leaving the customer exposed to identity theft. The call interactions where a fraudster may try to reset the customer's password may include sentences such as: "Hello, I am Anna and I forgot my password, can I reset it?" or "Hi, I'm Danny and I would like to change my password" or "Hi, my name is Jason and I would like to change my password" or "Good day, I am David and I would like to update my password please". In all these sentences the phrase "password" is used.
  • According to some embodiments, the identification of the clusters may be performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistics, deep learning, hybrid approaches, Term Frequency-Inverse Document Frequency (TF-IDF), and a parallel word importance estimator.
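  • As an illustration of one of the listed techniques, a minimal TF-IDF word-importance scorer might look as follows (a sketch under simplifying assumptions: whitespace tokenization, no stemming or stop-word removal):

```python
import math
from collections import Counter

def tf_idf_scores(transcripts):
    """Score each word of each transcript by TF-IDF. Returns one
    {word: score} dict per transcript; words that are frequent in a
    transcript but rare across transcripts score highest."""
    docs = [t.lower().split() for t in transcripts]
    n = len(docs)
    # Document frequency: number of transcripts containing each word.
    df = Counter(word for doc in docs for word in set(doc))
    scored = []
    for doc in docs:
        tf = Counter(doc)
        scored.append({
            word: (count / len(doc)) * math.log(n / df[word])
            for word, count in tf.items()
        })
    return scored
```

  • In a password-reset transcript, a distinctive phrase like "password" would thus outscore words shared by many unrelated transcripts.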
  • According to some embodiments, the PFE engine may perform the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchal Clustering (AHC) algorithms.
  • According to some embodiments, the content analysis 900 may represent the detected most important words, which may be a predefined amount of the highest-ranked phrases or words that have a higher confidence level, as nodes in a graph with edges 950. The content analysis 900 may calculate phrase statistics to suggest to a user a connection between the phrases according to the graph, which may be represented on a display unit.
  • According to some embodiments, the content analysis 900 may enable a user to select a word 960 and then a voiceprint biometric clustering of all the call interactions where this word has been used 970, may be performed. The result of the voiceprint biometric clustering 970 may detect a fraud cluster 980.
  • According to some embodiments, the voice biometrics analysis may be performed by a Proactive Fraud Exposure (PFE) engine such as PFE 200 in FIG. 2, and the PFE may perform clustering. The PFE engine may perform the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), Agglomerative Hierarchal Clustering (AHC) algorithms.
  • According to some embodiments, the content analysis may further include stemming to transform phrases into a predefined form. Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base, or root form. For example, "looked", "looking", "will look" and "looking at" will all be transformed into "look".
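  • A toy suffix-stripping stemmer illustrating the idea (a real system would likely use an established stemmer such as the Porter algorithm; this sketch does not handle auxiliary words like "will look"):

```python
def naive_stem(phrase):
    """Reduce each word of a phrase to a crude base form by stripping
    common suffixes. Purely illustrative; it handles only a few
    English suffixes and keeps auxiliary words intact."""
    suffixes = ("ing", "ed", "s")
    stems = []
    for word in phrase.lower().split():
        for suffix in suffixes:
            # Strip the first matching suffix, keeping a stem of >= 3 letters.
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                word = word[: -len(suffix)]
                break
        stems.append(word)
    return " ".join(stems)
```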
  • According to some embodiments, before the content analysis is performed the voiceprint biometric clustering i.e., voiceprint biometric analysis may be initiated. The voiceprint biometric clustering may be performed on the call interactions in the call interactions database 910 and then the content analysis may begin.
  • FIG. 10 schematically illustrates a calls cluster phase 1000 including a content analysis engine and a Proactive Fraud Exposure (PFE) engine such as PFE engine 200 in FIG. 2, in accordance with some embodiments of the present disclosure.
  • Once a call interaction is stored in interactions database 1010 (i.e., 910 in FIG. 9) either the PFE engine 1070 or the content analysis engine 1080 may retrieve and read the information of the interaction from the interactions database 1010 to analyze it.
  • According to some embodiments, PFE engine 1070 and content analysis engine 1080 may include a processor, a memory, an output device, an input device and communication circuitry and interface module for wired and/or wireless communication with any other computerized device over a communication network, as illustrated in FIG. 3B, described hereinbelow.
  • According to some embodiments, the content analysis engine 1080 may use the processor and memory to perform Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof and to operate a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics. The content analysis engine 1080 may further use the processor and memory to identify clusters with topics having suspicious phrases and rank them.
  • According to some embodiments, upon user's selection of a specific phrase in a graph of most suspicious phrases as related to fraud, as shown in detail in FIG. 13, the content analysis may forward related call interactions of the selected suspicious phrases for further voiceprint biometrics analysis. The voiceprint biometric analysis may cluster the related call interactions into ranked clusters and then store the ranked clusters in a clusters database 1030.
  • According to some embodiments, the PFE engine 1070 may perform a voiceprint biometric analysis on interactions from interactions database 1010 to yield ranked clusters and then forward related interactions for further content analysis by the content analysis engine 1080. The content analysis may cluster the related call interactions into ranked clusters and then store the ranked clusters in the clusters database 1030.
  • According to some embodiments, the content analysis by content analysis engine 1080 and the voiceprint biometric analysis by the PFE engine 1070 may be performed in a cyclic manner.
  • According to some embodiments, the calls cluster phase 1000 may retrieve from the ranked clusters in the clusters database 1030 (also shown in FIG. 2 as 230), a list of fraudsters and may transmit the list of potential fraudsters to an application such as application 1060 (also shown in FIG. 2 as 260), to display to a user the list of potential fraudsters via a display unit 1050, thus enabling the user to review the list of potential fraudsters and to add fraudsters from the list to a watchlist database 1040.
  • According to some embodiments, the list of potential fraudsters may be transmitted to the application 1060 over a communication network, to be displayed later to a user via a display unit 1050 (also shown in FIG. 2 as 250). The user may be a security officer who reviews the list of potential fraudsters and listens to the calls in the respective cluster. Upon review, when the security officer suspects that a call has been made by an actual fraudster, the security officer may add the call and the respective fraudster's information via the application 1060 to a watchlist database 1040. The application 1060 may be a web application or a desktop application.
  • According to some embodiments, after the details of the fraudster are stored in the watchlist database 1040, when the fraudster calls the customer service center again, the call may be detected in real time. An alert may be sent to the users, i.e., the agents and/or the security officers, upon the detection for further monitoring and analysis; alternatively, the inbound call may be blocked.
  • FIGS. 11A-11B illustrate a schematic flowchart of a method for proactive fraudster exposure, in a customer service center according to content analysis and voice biometrics analysis, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, proactive fraudster exposure 1100 may comprise operation 1110 for collecting call interactions from a database of recorded calls (not shown) in a customer service center channel, whereby the collecting is based on predefined rules by a calls collection engine, e.g., calls collection engine 100 in FIG. 1.
  • In some embodiments, operation 1120 may comprise storing the collected call interactions in an interactions database, such as interactions database 1010 in FIG. 10.
  • In some embodiments, operation 1130 may comprise performing a first type analysis to cluster the call interactions in the interactions database such as interactions database 1010 in FIG. 10 into ranked clusters and storing the ranked clusters in a clusters database e.g., clusters database 1030 in FIG. 10.
  • In some embodiments, operation 1140 may comprise performing a second type analysis on a predefined amount of the highest ranked clusters, into ranked clusters and storing the ranked clusters in a clusters database e.g., clusters database 1030 in FIG. 10.
  • In some embodiments, the first type analysis may be a content analysis and the second type analysis may be a voice biometrics analysis, or vice versa.
  • In some embodiments, operation 1150 may comprise enabling a user to repeat operations 1130 through 1140.
  • In some embodiments, operation 1160 may comprise retrieving from the ranked clusters in the clusters database, e.g., clusters database 1030 in FIG. 10, a list of potential fraudsters.
  • In some embodiments, operation 1170 may comprise transmitting the list of potential fraudsters to an application e.g., application 1060 in FIG. 10 to display to a user the list of potential fraudsters via a display unit e.g., display unit 1050 in FIG. 10, thus enabling the user to review said list of potential fraudsters and to add fraudsters from the list to a watchlist database, e.g., watchlist database 1040 in FIG. 10.
  • FIG. 12 illustrates ranking of phrases 1200, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, phrases may be retrieved from textual transcripts, e.g., textual transcripts 930 in FIG. 9, to detect the most important words. The most important words are the phrases that have been most repeated or words that have a higher confidence level.
  • According to some embodiments, a predefined amount of the highest ranked phrases e.g., the phrases that have been mostly repeated, such as phrases “account” 1210, “password” 1220, “username” 1230 and “package” 1240, may be represented as nodes in a connected graph, as shown in detail in FIG. 13 on the display unit e.g., display unit 1050 in FIG. 10.
  • FIG. 13 illustrates the highest ranked phrases as nodes in a graph such as connected graph 1300, in accordance with some embodiments of the present disclosure.
  • According to some embodiments, phrases such as “account” 1310, “password” 1320, “username” 1330 and “package” 1340 which were denoted as phrases “account” 1210, “password” 1220, “username” 1230 and “package” 1240 in FIG. 12, may be related to a topic and connected based on a calculation of phrase statistics to suggest to a user as a graph such as a connected graph or any other type of graph. Another type of graph may be a hypergraph which is a generalization of a graph in which an edge may join any number of vertices.
  • According to some embodiments, the connections between nodes in the connected graph 1300 represent high probability e.g., a probability above a predefined threshold, that if a word or a phrase has been used during an interaction e.g., “password” 1320 then other words or phrases e.g., “username” 1330 or “account” 1310 which are connected to it have been used as well.
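  • One way such edge probabilities could be estimated is from phrase co-occurrence across transcripts (a hedged sketch; the disclosure does not specify the exact statistic, and the naive substring matching here is an assumption):

```python
from itertools import combinations

def cooccurrence_edges(transcripts, phrases, threshold=0.5):
    """Connect phrase a to phrase b with an edge when the estimated
    probability P(b appears | a appears) across the transcripts
    exceeds `threshold`. Matching is naive substring containment."""
    edges = []
    for a, b in combinations(phrases, 2):
        with_a = [t for t in transcripts if a in t]
        if not with_a:
            continue
        # Conditional probability that b occurs given a occurs.
        p = sum(1 for t in with_a if b in t) / len(with_a)
        if p > threshold:
            edges.append((a, b, p))
    return edges
```

  • Phrases such as "password" and "username" that routinely appear in the same interactions would then be joined by an edge, while unrelated phrases such as "package" would remain unconnected.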
  • According to some embodiments, in a non-limiting example, the phrase “password” 1320, as represented in the connected graph 1300, may be selected by the user, e.g., a security officer, for additional analysis via an application such as application 1060 in FIG. 10 or 260 in FIG. 2. The additional analysis may be a voiceprint biometric analysis in which only the call interactions that relate to the phrase, e.g., “password” 1320, are processed, instead of all the interactions in the interactions database, e.g., interactions database 1010 in FIG. 10. This elevates the accuracy of the fraudster detection process, which is performed on a huge amount of data.
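  • A minimal sketch of this narrowing step, assuming each interaction is a record carrying its transcript; the record layout, field names, and function name are hypothetical, not taken from the patent:

```python
def interactions_for_phrase(interactions, phrase):
    """Narrow the set of call interactions to those whose transcript
    mentions the selected phrase, so that voiceprint biometric analysis
    runs on a focused subset rather than the whole interactions database."""
    return [i for i in interactions if phrase in i["transcript"].lower()]

# toy interactions database
interactions = [
    {"call_id": 1, "transcript": "I need to reset my password"},
    {"call_id": 2, "transcript": "Where is my package"},
    {"call_id": 3, "transcript": "My password was stolen"},
]
subset = interactions_for_phrase(interactions, "password")
print([i["call_id"] for i in subset])   # -> [1, 3]
```

  • Only calls 1 and 3 would be routed to the voiceprint analysis, illustrating how selecting “password” 1320 reduces the volume of data the biometric engine must process.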
  • It should be understood with respect to any flowchart referenced herein that the division of the illustrated method into discrete operations represented by blocks of the flowchart has been selected for convenience and clarity only. Alternative division of the illustrated method into discrete operations is possible with equivalent results. Such alternative division of the illustrated method into discrete operations should be understood as representing other embodiments of the illustrated method.
  • Similarly, it should be understood that, unless indicated otherwise, the illustrated order of execution of the operations represented by blocks of any flowchart referenced herein has been selected for convenience and clarity only. Operations of the illustrated method may be executed in an alternative order, or concurrently, with equivalent results. Such reordering of operations of the illustrated method should be understood as representing other embodiments of the illustrated method.
  • Different embodiments are disclosed herein. Features of certain embodiments may be combined with features of other embodiments; thus certain embodiments may be combinations of features of multiple embodiments. The foregoing description of the embodiments of the disclosure has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. It should be appreciated by persons skilled in the art that many modifications, variations, substitutions, changes, and equivalents are possible in light of the above teaching. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure.
  • While certain features of the disclosure have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure.

Claims (14)

What is claimed:
1. A computer-implemented method for proactive fraudster exposure in a customer service center according to content analysis and voice biometrics analysis, the computer-implemented method comprising:
(a) collecting call interactions from a database of recorded calls in a customer service channel,
wherein the collecting is based on predefined rules by a calls collection engine,
(b) storing the collected call interactions in an interactions database;
(c) performing a first type analysis to cluster the call interactions in the interactions database into ranked clusters and storing the ranked clusters in a clusters database;
(d) performing a second type analysis on a predefined amount of the highest ranked clusters to cluster them into ranked clusters and storing the ranked clusters in the clusters database;
wherein the first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa,
enabling a user to repeat steps (c) and (d);
retrieving from the ranked clusters in the clusters database, a list of potential fraudsters;
transmitting the list of potential fraudsters to an application to display to a user said list of potential fraudsters via a display unit, thus enabling said user to review said list of potential fraudsters and to add fraudsters from the list to a watchlist database.
2. The computer-implemented method of claim 1, wherein the content analysis comprises:
performing Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof;
operating a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics; and
identifying clusters with topics having predefined suspicious phrases and ranking the identified clusters.
3. The computer-implemented method of claim 2, wherein the identifying is performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistics, deep learning, hybrid, Term Frequency-Inverse Document Frequency (TF-IDF) and parallel word importance estimator.
4. The computer-implemented method of claim 2, wherein the content analysis further comprises:
representing a predefined amount of the highest ranked phrases as nodes in a connected graph;
calculating phrase statistics to suggest to a user a connection between the phrases according to the graph; and
enabling a user to select a specific phrase in the graph for biometrics analysis of related call interactions.
5. The computer-implemented method of claim 1, wherein the voice biometrics analysis is performed by a Proactive Fraud Exposure (PFE) engine, wherein the PFE engine performs the clustering.
6. The computer-implemented method of claim 5, wherein the PFE engine performs the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC) algorithms.
7. The computer-implemented method of claim 2, wherein the content analysis further includes stemming to transform phrases into a predefined form.
8. A computerized-system for proactive fraudster exposure in a customer service center according to content analysis and voice biometrics analysis, the computerized-system comprising:
a database of recorded calls;
an interactions database;
a clusters database;
a watchlist database;
a memory to store the database of recorded calls, the interactions database, the clusters database and the watchlist database;
a display unit;
and
a processor, said processor is configured to:
(a) collect call interactions from the database of recorded calls in a customer service channel,
wherein the collecting is based on predefined rules by a calls collection engine,
(b) store the collected call interactions in the interactions database;
(c) perform a first type analysis to cluster the call interactions in the interactions database into ranked clusters and to store the ranked clusters in the clusters database;
(d) perform a second type analysis on a predefined amount of the highest ranked clusters to cluster them into ranked clusters and to store the ranked clusters in the clusters database;
wherein the first type analysis is a content analysis and the second type analysis is a voice biometrics analysis, or vice versa,
enable a user to repeat steps (c) and (d);
retrieve from the ranked clusters in the clusters database, a list of potential fraudsters;
transmit the list of potential fraudsters to an application to display to a user said list of potential fraudsters via a display unit, thus to enable said user to review said list of potential fraudsters and to add fraudsters from the list to the watchlist database.
9. The computerized-system of claim 8, wherein the content analysis comprises:
performing Automatic Speech Recognition (ASR) for each call interaction to yield a textual transcript thereof;
operating a clustering algorithm over each textual transcript to cluster phrases in the textual transcript into clusters according to topics; and
identifying clusters with topics having predefined suspicious phrases and ranking the identified clusters.
10. The computerized-system of claim 9, wherein the identifying is performed according to at least one of the following algorithms: graph theory, machine learning, statistics, linguistics, deep learning, hybrid, Term Frequency-Inverse Document Frequency (TF-IDF) and parallel word importance estimator.
11. The computerized-system of claim 9, wherein the content analysis further comprises:
representing a predefined amount of the highest ranked phrases as nodes in a connected graph;
calculating phrase statistics to suggest to a user a connection between the phrases according to the graph; and
enabling a user to select a specific phrase in the graph for biometrics analysis of related call interactions.
12. The computerized-system of claim 8, wherein the voice biometrics analysis is performed by a Proactive Fraud Exposure (PFE) engine, wherein the PFE engine performs the clustering.
13. The computerized-system of claim 12, wherein the PFE engine performs the clustering according to at least one of: K-means, K-centroids, Clustering Using REpresentatives (CURE), and Agglomerative Hierarchical Clustering (AHC) algorithms.
14. The computerized-system of claim 9, wherein the content analysis further includes stemming to transform phrases into a predefined form.
US16/740,518 2019-07-30 2020-01-13 Method and system for fraud clustering by content and biometrics analysis Active US10911600B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/740,518 US10911600B1 (en) 2019-07-30 2020-01-13 Method and system for fraud clustering by content and biometrics analysis
US17/129,986 US11108910B2 (en) 2019-07-30 2020-12-22 Method and system for fraud clustering by content and biometrics analysis
US17/376,209 US11252279B2 (en) 2019-07-30 2021-07-15 Method and system for fraud clustering by content and biometrics analysis
US17/565,464 US11503158B2 (en) 2019-07-30 2021-12-30 Method and system for fraud clustering by content and biometrics analysis

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/525,606 US11039012B2 (en) 2019-07-30 2019-07-30 Method and system for proactive fraudster exposure in a customer service channel
US16/740,518 US10911600B1 (en) 2019-07-30 2020-01-13 Method and system for fraud clustering by content and biometrics analysis

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/525,606 Continuation-In-Part US11039012B2 (en) 2019-07-30 2019-07-30 Method and system for proactive fraudster exposure in a customer service channel

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/129,986 Continuation US11108910B2 (en) 2019-07-30 2020-12-22 Method and system for fraud clustering by content and biometrics analysis

Publications (2)

Publication Number Publication Date
US10911600B1 US10911600B1 (en) 2021-02-02
US20210037137A1 true US20210037137A1 (en) 2021-02-04

Family

ID=74258655

Family Applications (4)

Application Number Title Priority Date Filing Date
US16/740,518 Active US10911600B1 (en) 2019-07-30 2020-01-13 Method and system for fraud clustering by content and biometrics analysis
US17/129,986 Active US11108910B2 (en) 2019-07-30 2020-12-22 Method and system for fraud clustering by content and biometrics analysis
US17/376,209 Active US11252279B2 (en) 2019-07-30 2021-07-15 Method and system for fraud clustering by content and biometrics analysis
US17/565,464 Active US11503158B2 (en) 2019-07-30 2021-12-30 Method and system for fraud clustering by content and biometrics analysis


Country Status (1)

Country Link
US (4) US10911600B1 (en)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3079987A1 (en) * 2018-04-06 2019-10-11 Orange METHOD OF PROCESSING A TRANSACTION BETWEEN A SOURCE TERMINAL AND A DESTINATION TERMINAL, BANKING SERVICE SYSTEM, TERMINAL AND CORRESPONDING COMPUTER PROGRAM.
US11039012B2 (en) * 2019-07-30 2021-06-15 Nice Ltd Method and system for proactive fraudster exposure in a customer service channel
US11508377B2 (en) 2020-05-14 2022-11-22 Nice Ltd. System and method for detecting fraud rings
WO2021247987A1 (en) * 2020-06-04 2021-12-09 Nuance Communications, Inc. Fraud detection system and method
US11664033B2 (en) * 2020-06-15 2023-05-30 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method thereof
US11606461B2 (en) * 2020-12-04 2023-03-14 Nice Ltd. Method for training a spoofing detection model using biometric clustering
US12020711B2 (en) * 2021-02-03 2024-06-25 Nice Ltd. System and method for detecting fraudsters
US11451658B1 (en) 2021-05-14 2022-09-20 Nice Ltd. Systems and methods for proactive fraudster exposure in a customer service channel
US11582336B1 (en) 2021-08-04 2023-02-14 Nice Ltd. System and method for gender based authentication of a caller

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8639757B1 (en) * 2011-08-12 2014-01-28 Sprint Communications Company L.P. User localization using friend location information
EP2622832B1 (en) * 2010-09-30 2019-03-13 British Telecommunications public limited company Speech comparison
US20150142556A1 (en) * 2013-11-18 2015-05-21 iList AB Distribution service system
US20150142446A1 (en) * 2013-11-21 2015-05-21 Global Analytics, Inc. Credit Risk Decision Management System And Method Using Voice Analytics
US9472194B2 (en) * 2014-03-21 2016-10-18 Wells Fargo Bank, N.A. Enhanced fraud detection
US10325601B2 (en) * 2016-09-19 2019-06-18 Pindrop Security, Inc. Speaker recognition in the call center
US10623581B2 (en) * 2017-07-25 2020-04-14 Vail Systems, Inc. Adaptive, multi-modal fraud detection system
US10623557B2 (en) * 2018-05-30 2020-04-14 International Business Machines Corporation Cognitive telephone fraud detection
US10911600B1 (en) * 2019-07-30 2021-02-02 Nice Ltd. Method and system for fraud clustering by content and biometrics analysis
US11039012B2 (en) * 2019-07-30 2021-06-15 Nice Ltd Method and system for proactive fraudster exposure in a customer service channel
US12020711B2 (en) * 2021-02-03 2024-06-25 Nice Ltd. System and method for detecting fraudsters
US11451658B1 (en) * 2021-05-14 2022-09-20 Nice Ltd. Systems and methods for proactive fraudster exposure in a customer service channel

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11108910B2 (en) * 2019-07-30 2021-08-31 Nice Ltd Method and system for fraud clustering by content and biometrics analysis
US11252279B2 (en) * 2019-07-30 2022-02-15 Nice Ltd. Method and system for fraud clustering by content and biometrics analysis
US20220124198A1 (en) * 2019-07-30 2022-04-21 Nice Ltd. Method and system for fraud clustering by content and biometrics analysis
US11503158B2 (en) * 2019-07-30 2022-11-15 Nice Ltd. Method and system for fraud clustering by content and biometrics analysis

Also Published As

Publication number Publication date
US20210344801A1 (en) 2021-11-04
US11252279B2 (en) 2022-02-15
US11108910B2 (en) 2021-08-31
US20210112163A1 (en) 2021-04-15
US10911600B1 (en) 2021-02-02
US11503158B2 (en) 2022-11-15
US20220124198A1 (en) 2022-04-21

Similar Documents

Publication Publication Date Title
US11800014B2 (en) Method and system for proactive fraudster exposure in a customer service channel
US11108910B2 (en) Method and system for fraud clustering by content and biometrics analysis
US10685657B2 (en) Biometrics platform
US10958779B1 (en) Machine learning dataset generation using a natural language processing technique
US11704728B1 (en) Systems and methods for detecting fraudulent requests on client accounts
US8793131B2 (en) Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US8145562B2 (en) Apparatus and method for fraud prevention
US20160142534A1 (en) Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US8548818B2 (en) Method and system for authenticating customer identities
US8311826B2 (en) Method and system for screening using voice data and metadata
US11115521B2 (en) Systems and methods for authentication and fraud detection
US20160133256A1 (en) Script compliance in spoken documents
US11120199B1 (en) Systems for transcribing, anonymizing and scoring audio content
US10440187B1 (en) Bootstrapped predicative routing in CRM
US11735188B2 (en) System and method for detecting fraud rings
US20210319085A1 (en) System and method for updating biometric evaluation systems
US11451658B1 (en) Systems and methods for proactive fraudster exposure in a customer service channel
US12020711B2 (en) System and method for detecting fraudsters
US20240203424A1 (en) Computerized-method and computerized-system for training and applying a machine learning (ml) textual behavioral identification model to authenticate an agent, in a digital multi-channel environment
Gunson et al. Effective speaker spotting for watch‐list detection of fraudsters in telephone banking
US20230206925A1 (en) System and method for spoofing detection
Moura et al. Enhancing speaker identification in criminal investigations through clusterization and rank-based scoring
Faruquie et al. Protecting sensitive customer information in call center recordings

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: NICE LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FRENKEL, ROMAN;KERET, MATAN;SHTERNHARTS, ROMAN;AND OTHERS;SIGNING DATES FROM 20200114 TO 20200117;REEL/FRAME:051561/0371

STCF Information on status: patent grant

Free format text: PATENTED CASE