WO2021053615A2 - A federated learning system and method for detecting financial crime behavior across participating entities - Google Patents

A federated learning system and method for detecting financial crime behavior across participating entities

Info

Publication number
WO2021053615A2
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
nodes
data
computer
input
Prior art date
Application number
PCT/IB2020/058732
Other languages
French (fr)
Other versions
WO2021053615A3 (en)
Inventor
Justin BERCICH
Theresa BERCICH
Gudmundur Runar KRISTJANSSON
Anush VASUDEVAN
Original Assignee
Lucinity ehf
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/020,453 external-priority patent/US11227067B2/en
Priority claimed from US17/020,496 external-priority patent/US20210089899A1/en
Application filed by Lucinity ehf filed Critical Lucinity ehf
Publication of WO2021053615A2 publication Critical patent/WO2021053615A2/en
Publication of WO2021053615A3 publication Critical patent/WO2021053615A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Definitions

  • One aspect of the invention relates to the field of "federated learning" and its use in conjunction with machine learning models to detect illicit financial crime behaviors, including but not limited to money laundering.
  • This aspect of the invention relates to the use of "federated learning" in the process of model training and inference, and to the use of machine learning more generally.
  • Another aspect of the invention relates to an autoencoder-based data anonymization method and apparatus for maintaining the integrity of entities and performing analysis after the anonymization method has been performed on the data.
  • This aspect of the invention may be used with machine-learning, data security, and in various domains that utilize sensitive information.
  • Neural networks map an input vector x to an output y through complex mathematical operations optimized by a loss function. Neural networks can process vast amounts of data and detect patterns in a multidimensional manifold that are unrecognizable by humans. This capability is a product of the multitude of calculations within a neural network and of its large number of parameters, which are defined during model training, architecture selection, and hyper-parameter optimization. It also means that even if two neural networks appear identical from an architectural and hyper-parameter perspective, their outputs can differ, because during training each model self-optimizes each neuron's weight, thereby ever so slightly changing the mathematical combination of inputs.
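To make the point concrete, here is a minimal sketch (PyTorch is our choice; the patent names no framework) showing that two networks with identical architecture and hyper-parameters, differing only in their random initialization, already map the same input to different outputs; training on different data widens this divergence further.

```python
# Two networks with identical architecture and hyper-parameters but
# different random seeds: the same input already yields different outputs.
import torch
import torch.nn as nn

def make_net(seed: int) -> nn.Sequential:
    torch.manual_seed(seed)  # different seed -> different initial weights
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

net_a, net_b = make_net(0), make_net(1)
x = torch.randn(1, 4)
print(net_a(x))  # differs from the line below despite identical architecture
print(net_b(x))
```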
  • PII: personally identifiable information.
  • Hashing, the most common way of encrypting data, does not suffice for the purposes of further elaborate and more complex analysis because the information content within the data is lost.
  • One of the main attributes of hashing is that two similar inputs to a hashing algorithm produce, wherever possible, very different output hashes, in order to maximize the security of the encrypted data.
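This property is easy to demonstrate with a standard library; the sketch below hashes two nearly identical names with SHA-256 and shows the digests share no recognizable structure, which is exactly why hashed PII cannot support similarity analysis.

```python
# SHA-256 digests of two nearly identical names share no recognizable
# structure: the relational content between the inputs is destroyed.
import hashlib

for name in ("John Smith", "Jon Smith"):
    print(f"{name!r} -> {hashlib.sha256(name.encode()).hexdigest()}")
```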
  • the ability to learn insights from one bank and then apply that knowledge to detect money laundering or other financial crimes in another bank would increase the accuracy of overall illicit activity detection while increasing efficiency and saving time for analysts assessing a potential case.
  • the cumulative gain from participating in this system lies in the synergies harnessed by accruing knowledge learned from customers’ behavior in each entity, with the improved tuning of detection models benefiting all participating entities significantly and equally, without the need to share underlying customer data.
  • a method of updating a first neural network provides a computer system with a computer-readable memory storing specific computer-executable instructions for the first neural network and a second neural network separate from the first neural network.
  • the method also provides one or more processors in communication with the computer-readable memory.
  • the one or more processors are programmed by the computer-executable instructions to at least process a first data with the first neural network and process a second data with the second neural network.
  • the one or more processors are further programmed by the computer-executable instructions to at least update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network and update a weight in a node of the first neural network as a function of the delta amount.
  • a computer system for updating a first neural network includes a computer memory storing specific computer-executable instructions for the first neural network and a separate second neural network.
  • the computer system also includes one or more processors in communication with the computer-readable memory.
  • the one or more processors are programmed by the computer-executable instructions to at least process a first data with the first neural network and process a second data with the second neural network.
  • the one or more processors are further programmed by the computer-executable instructions to at least update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network and update a weight in a node of the first neural network as a function of the delta amount.
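A hedged sketch of this claimed update follows, in PyTorch. The specific update function is left open by the description, so the damped addition (factor eta) and the network shapes below are illustrative assumptions only.

```python
# Sketch of the claimed update. The second network trains on its own
# data; the per-weight change (the "delta amount") is recorded; a
# function of that delta alone (here a damped addition, an assumption,
# since the description leaves the function open) updates the first
# network. No raw data crosses between the two networks.
import copy
import torch
import torch.nn as nn

def make_net() -> nn.Sequential:
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

first_net, second_net = make_net(), make_net()

# Process "second data" with the second network and take one training step.
before = copy.deepcopy(second_net.state_dict())
x2, y2 = torch.randn(16, 4), torch.randn(16, 1)
opt = torch.optim.SGD(second_net.parameters(), lr=0.1)
loss = nn.functional.mse_loss(second_net(x2), y2)
opt.zero_grad()
loss.backward()
opt.step()

# Delta amount for every weight of the second network.
delta = {k: second_net.state_dict()[k] - before[k] for k in before}

# Update the first network's weights as a function of the delta amounts.
eta = 0.5  # damping factor (assumed) so the deltas nudge, not overwrite
with torch.no_grad():
    for k, w in first_net.state_dict().items():
        w.add_(eta * delta[k])
```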
  • a method provides an auto-encoder for anonymizing data associated with a population of entities.
  • the method includes providing a computer system with a memory storing specific computer-executable instructions for a neural network.
  • the neural network includes input nodes; a first layer of nodes for receiving an output from the input nodes; a second layer of nodes positioned on an output side of the first layer of nodes; one or more additional layers of nodes positioned on an output side of the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector.
  • An inner layer of nodes includes a number of nodes that is greater than a number of nodes in a layer of nodes on the input side of such inner layer and is also greater than a number of nodes in a layer of nodes on the output side of such layer.
  • the method includes identifying a plurality of characteristics associated with at least a subset of the entities in the population and preparing a plurality of input vectors that include at least one of the characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from human recognizable text.
  • the method includes training the neural network with the plurality of input vectors.
  • the training includes a plurality of training cycles wherein the training cycle comprises: inputting one of the input vectors at the input nodes; processing said input vector with the neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
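The following sketch renders the training cycle above in PyTorch. The layer widths (wider inner layers than the input/output, per the described architecture) and the choice of mean-squared error as the reconstruction-error function are illustrative assumptions.

```python
# One training cycle of the described over-complete autoencoder: the
# inner layers are wider than the input/output layers, the encoded
# output matches the input dimensionality, and the reconstruction error
# is back-propagated to recalibrate the weights.
import torch
import torch.nn as nn

dim = 8  # number of input (and output) nodes, "a"
autoencoder = nn.Sequential(
    nn.Linear(dim, 32), nn.ReLU(),   # first layer: dimensionality increase
    nn.Linear(32, 64), nn.ReLU(),    # widest inner layer (b > a)
    nn.Linear(64, 32), nn.ReLU(),    # dimensionality reduction begins
    nn.Linear(32, dim),              # output nodes: encoded output vector
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def training_cycle(input_vector: torch.Tensor) -> float:
    encoded = autoencoder(input_vector)                    # forward pass
    error = nn.functional.mse_loss(encoded, input_vector)  # reconstruction error
    opt.zero_grad()
    error.backward()  # chained derivatives, output nodes back to input nodes
    opt.step()        # recalibrate weights to minimize the error
    return error.item()
```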
  • the method may include programming the computer system with a second neural network and with a third neural network and combining the encoded output vector of the neural network, the second neural network and the third neural network. Additional neural networks may also be used and their respective encoded output vectors may also be combined with the encoded output vectors of the neural network, the second neural network, and the third neural network. Such additional neural networks would be used so that there is one neural network for each of the data fields that have to be encrypted. And since there can be 50, 100, 200 or more data fields, an equal number of neural networks will be used within the scope of the invention.
  • the method may also include preparing an input vector for the entities in the population and processing said input vector with the neural network to provide an encoded output vector at the output node for such entity.
  • the method may include storing the encoded output vectors for subsequent use in identifying a common characteristic between two or more of the entities.
  • the method may include comparing the encoded output vectors to identify the two or more entities with the common characteristic.
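A minimal sketch of this comparison step, assuming cosine similarity as the metric and a similarity threshold (neither is specified in the source):

```python
# Compare stored encoded output vectors pairwise and flag entity pairs
# whose encodings are close enough to suggest a common characteristic.
import torch
import torch.nn.functional as F

def similar_entities(encoded: dict, threshold: float = 0.95):
    names = list(encoded)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sim = F.cosine_similarity(encoded[a], encoded[b], dim=0).item()
            if sim >= threshold:
                yield a, b, sim  # candidate common characteristic

vectors = {"entity1": torch.randn(8), "entity2": torch.randn(8)}
for a, b, sim in similar_entities(vectors):
    print(a, b, round(sim, 3))
```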
  • An auto-encoder system anonymizes data associated with a population of entities and includes a computer memory storing specific computer-executable instructions for a neural network.
  • the neural network includes input nodes; a first layer of nodes for receiving an output from the input nodes; a second layer of nodes positioned on an output side of the first layer of nodes; one or more additional layers of nodes positioned on an output side of the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector.
  • An inner layer of nodes includes a number of nodes that is greater than a number of nodes in a layer of nodes on the input side of such inner layer and is also greater than a number of nodes in a layer of nodes on the output side of such inner layer.
  • the system further includes one or more processors in communication with the computer-readable memory. The one or more processors are programmed by the computer-executable instructions to at least obtain data identifying a plurality of characteristics associated with at least a subset of the entities in the population; prepare a plurality of input vectors that include at least one of the plurality of characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from human recognizable text; and train the neural network with the plurality of input vectors.
  • the training includes a plurality of training cycles wherein the training cycles comprise: inputting one of the input vectors at the input nodes; processing said input vector with the neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
  • In practice, it is contemplated that up to 10 processors, up to 50 processors, up to 100 processors, up to 500 processors, or even up to 1000 processors may be used.
  • the preferred embodiments can be made scalable such that any number of processors may be used based on the number of entities and the number of characteristics to be encoded or tracked.
  • the autoencoder system may include a computer memory that stores specific computer-executable instructions for a second neural network and a third neural network. Additional neural networks may also be used and their respective encoded output vectors may also be combined with the encoded output vectors of the neural network, the second neural network, and the third neural network.
  • Such neural networks include: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes for receiving an output from the first layer of nodes; one or more additional layers of nodes for receiving an output from the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector.
  • An inner layer of nodes includes a number of nodes that is greater than a number of nodes on the input side of such inner layer and is also greater than a number of nodes on the output side of such inner layer.
  • the one or more processors are programmed by the computer-executable instructions to train the second and third neural networks with the plurality of input vectors.
  • the training includes a plurality of training cycles wherein the training cycles comprise, for the respective second, third, and such additional neural networks: inputting one of the input vectors at the input node; processing said input vector with the respective neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the respective neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; recalibrating a weight in one or more of the nodes in the respective neural network to minimize the output vector reconstruction error.
  • the one or more processors are programmed by the computer-executable instructions to combine the encoded output vector of the neural network, the second neural network and the third neural network to provide a combined encoded output vector.
  • the autoencoder system may include one or more processors that are programmed by the computer-executable instructions to prepare an input vector for the entities in the population; process said input vector with the neural network to provide an encoded output vector at the output node for the entities; and store the encoded output vectors for subsequent use in identifying a common characteristic between two or more of the entities.
  • the autoencoder system may include one or more processors that are programmed by the computer-executable instructions to compare the encoded output vectors to identify the two or more entities with the common characteristic. In practice, it is contemplated that up to 10 processors, up to 50 processors, up to 100 processors, up to 500 processors, or even up to 1000 processors may be used.
  • the preferred embodiments can be made scalable such that any number of processors may be used based on the number of entities and the number of characteristics to be encoded or tracked.
  • FIG. 1A shows a computer system for anonymizing data.
  • FIG. 1B is an expansion of the memory 104 in FIG. 1A to show a non-exclusive list of the additional types of data that may be stored concerning characteristics of entities.
  • FIG. 2 shows a single autoencoder for anonymizing data that amalgamates all of the relevant PII data fields.
  • FIG. 3 shows multiple autoencoders for anonymizing data where the autoencoders are assigned and trained on a specific PII data field and their respective outputs are combined.
  • FIG. 4 shows a routine for training a neural network to anonymize data.
  • FIG. 5 shows an embodiment where multiple entities are able to share changes in the weights of the nodes in their neural networks to assist other entities in updating their own neural networks.
  • The present system provides a cloud-based solution that uses federated learning to achieve a unified, holistic and accurate detection and analysis of money laundering (or other financial crime) behavior for financial entities, without the need to cross-share client data between the entities themselves.
  • This aggregation of their differential scores is then combined with the weights of a single entity's neural network model, and then inputted into a supra neural network.
  • This supra neural network is specifically trained offline to extract information from these differential scores, which is then used to update the entity's neural weights. This essentially shifts the entity's weights in a way that integrates both the feedback learnt from the entity's individual clients and the information in the other entities' partial derivative scores, which implicitly impound information about those entities' feedback and contextual situation, whilst still preserving each entity's model specificity and without sharing any raw client data.
  • This approach elegantly handles several issues that used to exist in this domain. Firstly, it completely maintains the integrity and safety of each entity's data as the data itself is never shared in any form. Thus, data does not leave the entity's own firewalls set up within a secure cloud or other systems such as systems on-premise at the client. Secondly, it maintains specificity in the models such that models are optimized based on the individual circumstances of entities. Thirdly, it learns from partial derivative information derived from other participating entities in a way that improves accuracy and detection.
  • Δ: Feature Importance Delta scores
  • an autoencoder system can maintain anonymity and preserve the relational content between and among PII data while still encoding it in a safe manner. Therefore, the data can still be used for network analysis, deduplication efforts and can generally serve as an input into machine-learning models to detect complex patterns whose accuracy and veracity is enhanced by the inclusion of this encoded PII data in the analysis.
  • Business and research areas alike should be able to utilize this encoded data for analysis, without having to have access to the original data. This is especially applicable in (but not restricted to) the financial sector for the purposes of fraud detection and anti-money laundering efforts, and in the healthcare sectors, allowing third party providers and researchers to work with a more complete dataset than ever before without revealing any actual PII data.
  • the autoencoder system such as that generally shown in FIG. 1A, takes PII data as input, increases its dimensionality in a latent space, performs mathematical operations including a form of dimensionality reduction, and then arrives at an encoded output of data which can be used for further analysis.
  • the novelty of this approach is two-fold: Firstly, the usage of deep learning algorithms as a system for encryption; and secondly, the usability of PII data after being unidentifiably encoded while maintaining the relational position of the PII data to each other.
  • the mathematical theory of pattern recognition and the near impossible exact replicability of a model are harnessed as main strengths in the autoencoder system to encode personal identifiable information (PII) for the purpose of further analysis.
  • the first system uses a 'single' autoencoder that amalgamates all relevant PII data fields and trains a unique autoencoder model with attached neuron weights.
  • The second system contains 'multiple' autoencoders, where each autoencoder is assigned and trained on a specific PII data field, mapping each input to its own autoencoder; e.g. first names and last names each have their own autoencoder. This maximizes security because all parameters, hyper-parameters, architectural properties and the training dataset have to be present to even attempt decryption of the output. Neither of these systems has previously been used to provide useful, anonymized data.
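A sketch of the AE-M arrangement described above, with one hypothetical autoencoder per PII field and the per-field encoded outputs concatenated into a combined vector; the field names and dimensions are assumptions:

```python
# AE-M sketch: one autoencoder per PII field; the per-field encoded
# outputs are concatenated into the combined encoded output vector.
import torch
import torch.nn as nn

def make_field_encoder(dim: int) -> nn.Sequential:
    # Over-complete inner layer (4 * dim > dim), per the AE-S design.
    return nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                         nn.Linear(4 * dim, dim))

field_dims = {"first_name": 8, "last_name": 8, "address": 16}
encoders = {field: make_field_encoder(d) for field, d in field_dims.items()}

def encode_record(record: dict) -> torch.Tensor:
    parts = [encoders[field](record[field]) for field in field_dims]
    return torch.cat(parts)  # combined encoded output vector

record = {f: torch.randn(d) for f, d in field_dims.items()}
print(encode_record(record).shape)  # torch.Size([32])
```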
  • FIG. 2 shows a graphic that depicts the PII data schematic 210 which indicates the directional flow of data through the Singular Autoencoder (AE-S) system 200.
  • the PII Data 210 is transformed into a feature vector format and serves as an input into the input nodes AE-S 212.
  • the autoencoder 200 is represented by its neurons and their connections.
  • a neuron is a mathematical entity in which an activation function is applied to a calculated value to arrive at an interim transitional output value, which through a series of directional connections informs the mathematical transformations applied to the data as it flows through the AE-S system, analogous to a computational graph, visually from left to right.
  • the PII data 210 which is split into a feature vector, is fed into the autoencoder AE-S system as a single data vector at 212.
  • The solid lines (214, 216, 218, 220), which connect the input 212 through each of the layers of neurons (222, 224, 226) to the output layer 228, represent a complex mathematical transformation in which a myriad of combinatorial compositions of the input is analyzed.
  • Output layer 228 has the same dimensionality as the input node 212.
  • An additional layer of abstraction is provided by the architecture of the autoencoder itself, as the dimensionality of the data is significantly increased, as shown by arrow 230, from a input neurons to b neurons in the deeper layers of the network, where b > a. Dimensionality reduction, as shown by arrow 232, thereafter occurs to transform the larger layers, e.g. layer 224, to an output layer 228 having the same dimensionality as the input node 212.
  • the output of the system provided at a schematic box 234 is a deep abstraction of the original PII input data 210 and thus is not replicable without the exact same autoencoder system 200 in place and, even then, replication is a very complex undertaking.
  • the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may preferably contain the same number of nodes in the first layer of nodes as in the third layer of nodes.
  • the first, second and third layers of nodes in the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may contain three nodes, five nodes, up to 25 nodes, up to 50 nodes, or up to 500 nodes.
  • the input node and the output node in the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may be single nodes.
  • the input vector and the output vector of the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may have the same length. The features of these preferred embodiments may also be combined together.
  • the AE-S outputs provide a transformed representation of the original PII vector data 210, resulting in an output vector at 234 that has both pseudonymized the data, while also being trained to create a 'DNA' or representation of the data that is analyzable and comparable with other output vectors.
  • This is achieved by the training process of the system (explained more fully in FIG. 4, below) before the output vectors at 234 are used for analysis.
  • The aforementioned trainable weight vector w is optimized through backpropagation, during which the model is exposed to synthetic data to learn the optimal abstract representation of it, thereby preserving the inherent information content in the data.
  • Natural language processing distances are calculated from various base features to transform the PII data 210 into numerical data, which is provided as input into AE-S at node 212.
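The patent does not name the distance measure, so the sketch below illustrates the idea with Levenshtein edit distances to a small set of hypothetical anchor strings, turning a PII string into the numeric vector that feeds node 212:

```python
# Turn a PII string into numbers via edit distances to fixed anchor
# strings. The anchors and the use of Levenshtein distance are
# assumptions; the source only says NLP distances are calculated from
# various base features.
def levenshtein(s: str, t: str) -> int:
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

ANCHORS = ["aaaa", "zzzz", "smith", "main street"]  # hypothetical bases

def pii_to_features(text: str) -> list:
    return [float(levenshtein(text.lower(), a)) for a in ANCHORS]

print(pii_to_features("John Smith"))  # numeric input for node 212
```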
  • Autoencoders aim to find deep abstractions of the data as originally input while minimizing the reconstruction error, which describes the distortions and shifts of the underlying distributions of the recreated abstract data compared to the original input data.
  • An output vector reconstruction error is determined by calculating a function of the encoded output vector and the input vector. The objective of minimizing the reconstruction error through backpropagation is attained by back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes. This results in the weights iteratively being recalibrated to minimize the reconstruction error in each training step.
  • these models undergo thousands, if not more, training steps to arrive at the optimal setting.
  • the graphic in FIG. 3 depicts the schematic of the PII data 310 flowing through the developed Multiple Autoencoder (AE-M) system 300.
  • The PII data 310 is split into its respective parts (310a, 310b, 310c ... 310x) and a natural language processing distance is calculated from various base features to turn the data into numerical values.
  • the PII data categories are then used as an input vector into the first node (312a, 312b, 312c ... 312x) of their own respective autoencoder (334a, 334b, 334c ... 334x) to arrive at a partial output (336a, 336b, 336c ... 336x).
  • FIG. 1B is an expansion of the memory 104 in FIG. 1A to show a non-exclusive list in memory 104a of the additional types of data that may be stored in memories 104 and 104a concerning characteristics of entities.
  • FIGS. 1A, 1B, 2 & 3 show an auto-encoder system 100 for anonymizing data associated with a population of entities.
  • a computer memory 104 stores specific computer-executable instructions for a neural network, wherein the neural network comprises: input nodes; a first layer of nodes for receiving an output from the input nodes; a second layer of nodes for receiving an output from the first layer of nodes; one or more additional layers of nodes for receiving an output from the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector.
  • An inner layer of nodes includes a number of nodes that is greater than a number of nodes in a layer of nodes on the input side of such inner layer and is also greater than a number of nodes in a layer of nodes on the output side of such inner layer.
  • One or more processors 102 are in communication with the computer-readable memory 104 and are programmed by the computer-executable instructions to at least obtain data identifying a plurality of characteristics associated with at least a subset of the entities in the population and prepare a plurality of input vectors that include at least one of the plurality of characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from a human recognizable text.
  • the one or more processors 102 also train the neural network with the plurality of input vectors, wherein the training comprises a plurality of training cycles.
  • up to 10 processors 102, up to 50 processors 102, up to 100 processors 102, up to 500 processors 102, or even up to 1000 processors 102 may be used.
  • the preferred embodiments can be made scalable such that any number of processors may be used based on the number of entities and the number of characteristics to be encoded or tracked.
  • the neural network can have 7 inner layers of nodes, 11 inner layers of nodes, 21 inner layers of nodes, or even 51 inner layers of nodes - so long as the inner layers of nodes between the input nodes and a central layer of nodes provide increasing dimensionality and so long as the inner layers of nodes between such central layer of nodes and the output node provide decreasing dimensionality.
  • FIG. 1A also includes input devices 106 such as a keypad, mouse, touchscreen, graphic user interface and such other commonly known input devices to those of ordinary skill in the art.
  • Input devices 106 as well as an internet connection 108 and a display 110 are provided for use in storing computer executable instructions in memory 104 and retrieving same, operating the processors in system 102, providing inputs needed to train the various neural networks disclosed herein, storing and retrieving data needed for such training in memory 104, storing and retrieving encoded data in memory 104, reviewing the results of the operation of the preferred embodiments, and such other uses as required for the functioning of the preferred embodiments.
  • a training cycle begins at the START 400.
  • a training cycle comprises: the step 402 of inputting one of the input vectors at the input node; the step 403 of processing said input vector with the neural network to provide an encoded output vector at the output node; the step 404 of determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; the step 406 of back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
  • the one or more processors 102 can also be programmed to set a threshold for a total number of training cycles and to stop the training of the neural network at step 408 in response to the number of training cycles exceeding the threshold.
  • the one or more processors 102 can also be programmed to set a threshold as a function of a loss plane of the output vector reconstruction error and stop the training of the neural network at step 410 in response to the output vector reconstruction error being less than the threshold.
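A compact sketch combining both stopping rules, steps 408 and 410; the `train_until` helper, its threshold defaults, and the `training_cycle` callable are assumptions for illustration:

```python
# Both stopping rules in one loop: a cap on the number of training
# cycles (step 408) and a reconstruction-error threshold (step 410).
def train_until(training_cycle, input_vectors,
                max_cycles: int = 5000, loss_threshold: float = 1e-4) -> str:
    """training_cycle(vec) runs one cycle and returns the reconstruction error."""
    for count, vec in enumerate(input_vectors, start=1):
        error = training_cycle(vec)
        if count >= max_cycles:      # step 408: cycle-count threshold reached
            return "stopped at max cycles"
        if error < loss_threshold:   # step 410: error below the loss threshold
            return "converged"
    return "input vectors exhausted"
```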
  • The one or more processors can also be programmed to determine whether one of the characteristics in a plurality of selected input vectors is not also found in a human recognizable form in the respective encoded output vectors. This detection method may be based on use of additional input vectors having a same length as the additional encoded output vectors, and on detecting that the output vector is not equal to the input vector, or on detecting that more than 10% of the output vector differs from the input vector.
  • the one or more processors may fix the weights and biases in one or more of the nodes in the neural network.
  • the one or more processors 102 may be programmed by the computer-executable instructions to fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node.
  • a plurality of respective additional encoded output vectors will contain a plurality of characteristics, but said plurality of respective additional encoded output vectors will not contain said plurality of characteristics in a human recognizable form using any of the detection methods described above.
  • the one or more processors 102 may be programmed by the computer-executable instructions to fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node.
  • the majority of the respective additional encoded output vectors will contain a plurality of characteristics, but said majority of respective additional encoded output vectors will not contain said plurality of characteristics in a human recognizable form using any of the detection methods described above.
  • the one or more processors 102 may be programmed by the computer-executable instructions to fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node.
  • More than 90% of the respective additional encoded output vectors will contain a plurality of characteristics, but more than 90% of the respective additional encoded output vectors will not contain said plurality of characteristics in a human recognizable form using any of the detection methods described above.
  • the one or more processors 102 are also programmed to determine whether one of the plurality of characteristics in one of the input vectors is also found in a human recognizable form in the respective encoded output vector; and perform a plurality of additional training cycles in response to the respective encoded output vector containing said one of the plurality of characteristics in the human recognizable form using any of the detection methods described above.
  • the one or more processors 102 may be programmed to perform more than 100 training cycles, more than 1,000 training cycles, or more than 5,000 training cycles.
  • the plurality of characteristics may comprise data stored in the memory 104 which data is associated with any three or more of the following: a piece of personally identifiable information, a name, an age, a residential address, a business address, an address of a family relative, an address of a business associate, an educational history, an employment history, an address of any associate, a data from a social media site, a bank account number, a plurality of data providing banking information, a banking location, a purchase history, a purchase location, an invoice, a transaction date, a financial history, a credit history, a criminal record, a criminal history, a drug use history, a medical history, a hospital record, a police report, or a tracking history.
  • the computer memory 104 may store specific computer-executable instructions for a second neural network and a third neural network, wherein the second and third neural networks each comprise: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes for receiving an output from the first layer of nodes; a third layer of nodes for receiving an output from the second layer of nodes; and an output node for receiving an output from the third layer of nodes to provide an encoded output vector; wherein the second layer of nodes includes a number of nodes that is greater than a number of nodes in the first layer of nodes and is greater than a number of nodes in the third layer of nodes.
  • the one or more processors are also programmed by the computer-executable instructions to train the second and third neural networks with the plurality of input vectors, wherein the training comprises a plurality of training cycles wherein the training cycles comprise, for each of the respective second and third neural networks: inputting one of the input vectors at the input node; processing said input vector with the respective neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the respective neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the respective neural network to minimize the output vector reconstruction error.
  • the one or more processors are programmed by the computer-executable instructions to combine the encoded output vector of the neural network, the second neural network and the third neural network to provide a combined encoded output vector. These three outputs may also be concatenated to provide a concatenated combined encoded output vector.
  • Additional neural networks may also be used and their respective encoded output vectors may also be combined with the encoded output vectors of the neural network, the second neural network, and the third neural network. Such additional neural networks would be used so that there is one neural network for each of the data fields that have to be encrypted. And since there can be 50, 100, 200 or more data fields, an equal number of neural networks will be used within the scope of the invention.
  • the one or more processors 102 may also be programmed by the computer- executable instructions to prepare an input vector for the entities in the population; process said input vector with the neural network to provide an encoded output vector at the output node for each of the entities; and store the encoded output vectors in the memory 104 for subsequent use in identifying a common characteristic between two or more of the entities.
  • the one or more processors 102 may also be programmed by the computer-executable instructions to compare the encoded output vectors to identify the two or more entities with the common characteristic.
  • FIG. 5 shows a federated learning system 500 for use by, for example, four independent entities A, B, C, and D, which are also indicated, respectively, by reference numbers 502, 504, 506 and 508.
  • the vertically aligned elements that bear an "A" in the left-most vertical position show the elements of the deep learning computer system used exclusively by Entity A.
  • These include the data silo 512, neural network MA0 indicated by reference number 520, updated neural network MA1 indicated by reference number 552, the delta score in the weights for the neural network as it updates from MA0 to MA1, indicated with the nomenclature ΔMA01 and reference number 560, and the updated neural network MA2 indicated by reference number 580.
  • Immediately to the right of the system used by Entity A is the deep learning computer system used for Entity B.
  • the vertically aligned elements that bear a "B" show the elements of the deep learning computer system used exclusively by Entity B. These include the data silo 514, neural network MBo indicated by reference number 522, updated neural network MBi indicated by reference number 554, the delta change in the weights for the neural network as it updates from MBo to MBi indicated with the nomenclature DMB01 and reference number 562, and the updated neural network MB2 indicated by reference number 582.
  • Immediately to the right of the system used by Entity B is the deep learning computer system used for Entity C.
  • the vertically aligned elements that bear a "C" show the elements of the deep learning computer system used exclusively by Entity C. These include the data silo 516, neural network MCo indicated by reference number 524, updated neural network MCi indicated by reference number 556, the delta change in the weights for the neural network as it updates from MCo to MCi indicated with the nomenclature DMOii and reference number 564, and the updated neural network MC2 indicated by reference number 584.
  • Immediately to the right of the system used by Entity C is the deep learning computer system used for Entity D.
  • the vertically aligned elements that bear a "D" show the elements of the deep learning computer system used exclusively by Entity D. These include the data silo 518, neural network MDo indicated by reference number 526, updated neural network MDi indicated by reference number 558, the delta change in the weights for the neural network as it updates from MDo to MDi indicated with the nomenclature AMDoi and reference number 566, and the updated neural network MD2 indicated by reference number 586.
  • Entity A stores its data in a very secure location indicated by data silo 512.
  • Entity A may use the autoencoder disclosed above in Figures 1A, 1B, 2, 3 and 4 to encrypt its data, thus rendering the data anonymous while simultaneously keeping defining characteristics of the data available for analysis even in the encoded form. Either way, Entity A never shares its raw data or encoded data with any other third-party Entity.
  • Similar to Entity A, the other Entities B, C and D maintain their own respective data very securely in their own data silos 514, 516 and 518. Again, none of these Entities share their raw data or encoded data with any other Entity.
  • Neural network MA0 is trained by Entity A (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 512 where Entity A stores its data.
  • the particular behavior may indicate money laundering, financial criminality, or any other condition that Entity A may wish to detect.
  • The output of the network is graded by an analyst at the user interface, UI, indicated by reference number 528. Once the outputs of the neural networks are shown to analysts via the user interface, the interface collects feedback data in various forms on features, outputs, their relevance, etc. Based on this feedback, the neural networks are retrained to become even more accurate in their decision making.
  • the grade may be an "X" (not productive) or an "O" (productive).
  • Entity A further investigates the underlying actors to determine whether a report should be made or any further action taken.
  • the grade is also used to update the neural network as indicated by the curved arrow at reference number 544.
  • The delta scores of the neural network are shown by ΔMA01 and are stored in a memory 568.
  • Neural network MB0 is trained by Entity B (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 514 where Entity B stores its data.
  • the particular behavior may indicate money laundering, financial criminality, or any other condition that Entity B may wish to detect.
  • The output of the network is graded by a decision maker at the UI indicated by reference number 530.
  • the grade may be an "X" (not productive) or an "O" (productive).
  • the grade is also used to update the neural network as indicated by the curved arrow at reference number 546.
  • The change in the weights for the nodes of the neural network is shown by ΔMB01 at reference number 562 and is also stored in a memory 568.
  • Neural network MC0 is trained by Entity C (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 516 where Entity C stores its data.
  • the particular behavior may indicate money laundering, financial criminality, or any other condition that Entity C may wish to detect.
  • The output of the network is graded by a decision maker at the UI indicated by reference number 532.
  • the grade may be an "X" (not productive) or an "O" (productive).
  • Entity C further investigates the underlying actors to determine whether a report should be made or any further action taken.
  • the grade is also used to update the neural network as indicated by the curved arrow at reference number 548.
  • The change in the weights for the nodes of the neural network is shown by ΔMC01 at reference number 564 and is also stored in a memory 568.
  • Neural network MD0 is trained by Entity D (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 518 where Entity D stores its data.
  • the particular behavior may indicate money laundering, financial criminality, or any other condition that Entity D may wish to detect.
  • The output of the network is graded by a decision maker at the UI indicated by reference number 534.
  • the grade may be an "X" (not productive) or an "O" (productive).
  • Entity D further investigates the underlying actors to determine whether a report should be made or any further action taken.
  • the grade is also used to update the neural network as indicated by the curved arrow at reference number 550.
  • The change in the weights for the nodes of the neural network is shown by ΔMD01 at reference number 566 and is also stored in a memory 568.
  • Each of Entities A, B, C and D would use the same or similar architecture in neural networks 520, 522, 524 and 526, and each such network would be separately trained to detect the presence of the same or similar behavior. If the Entities chose to use autoencoded anonymous data per the disclosure above concerning Figures 1A to 4, then that autoencoder would be set up using the same parameters and the same or similar architecture across each of the Entities. Most importantly, however, no raw data and no encoded data ever needs to be shared, and the Entities are still able to assist each other with updating their respective neural networks.
  • This updating of neural networks between Entities occurs using Learning Neural Network 576 which has access to the changes in the weights stored in memory 568.
  • A processor (not shown) forms a vector 570 by concatenating the then-current weights for Entity A's neural network MA1 with the changes in the weights ΔMB01 that occurred during the updating, shown by arrow 546, of Entity B's neural network.
  • Network 576 is trained to thereby provide new weights at reference number 578 for Entity A's neural network at reference number 580. If Entity A wishes to obtain additional updates from the neural networks of Entities C and D, then network 576 repeats the updating process but using the change in weights for Entity C (564) and then Entity D (566).
  • Network 576 is equally available to the networks of the other Entities so each can update their own respective networks in a similar fashion as explained above for Entity A by using the change in weights experienced by the other networks. In updating the weights of one neural network using the changes in weights from another neural network, it is important that such updates not be too great or else the update might overwhelm the original weights.
  • the neural networks of Entities A, B, C and D can be trained to detect many different behaviors in a data set. For each different behavior, Entities A, B, C and D set up a discrete neural network having the same architecture for the network and data files. In this manner, the Entities may share the changes in the weights for each node in the neural networks (but not any data) in order to assist the other in updating their respective neural networks.
  • Examples of behaviors that may be detected as indicative of money laundering activity include, but are not limited to, frequent changes of financial advisers or institutions; selection of financial advisers or institutions that are geographically distant from the entity or the location of the transaction; requests for increased speed in processing a transaction or making funds available; failure to disclose a real party to a transaction; a prior conviction for an acquisitive crime; a significant amount of private funding from a person who is associated with, or an entity that is, a cash-intensive business; a third party private funder without an apparent connection to the entity's business; a disproportionate amount of private funding or cash which is inconsistent with the socio-economic profile of the persons involved; finance provided by a lender, other than a financial institution, with no logical explanation or economic justification; business transactions in countries where there is a high risk of money laundering and/or terrorism funding; false documentation in support of transactions; an activity level that is inconsistent with the client's business or legitimate income level; and/or an overly complicated ownership structure for the entity.
  • Model Inference refers to the post-training process where, for example, Entity A's weights and Entity B's delta scores are input into the trained supra-neural network 576 (i.e. the model recalculation network), and network 576 outputs a 'new' weight vector that then replaces Entity A's original weights. Inference is thus the 'prediction' of these new weights by the supra-neural network 576.
  • Model Training refers to the process of training the weight recalculation network.
  • The inference of the model to determine the new weights for Entity A's network 552, based on the changes in the weights 562 for Entity B's updated network 554, would look like this: Take the weights of Entity A after its network 552 has learned from its own data. Then concatenate these weights from network 552 with the delta feature importance inference scores 562 and flatten these two matrices into a vector 570. Vector 570 is then the input vector into the "supra" or learning neural network 576. Within the network 576, we calculate the new weights for neural network 552 that incorporate the learned feedback from neural network 554.
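Rendered as code, the inference step might look like the hedged PyTorch sketch below; the layer sizes, the single-layer supra network, and the random stand-in for Entity B's delta scores are illustrative assumptions:

```python
# Inference sketch: flatten Entity A's weights, concatenate with Entity
# B's delta scores (vector 570), run the supra network 576 (input 2x,
# output x), and write the predicted weights back into network 552.
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def make_detector() -> nn.Sequential:
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

net_a = make_detector()  # stands in for Entity A's network 552
x = parameters_to_vector(net_a.parameters()).numel()  # weight-vector length

supra = nn.Sequential(nn.Linear(2 * x, x))  # supra network 576 (assumed shape)
delta_b = torch.randn(x)  # stands in for Entity B's delta scores 562

with torch.no_grad():
    weights_a = parameters_to_vector(net_a.parameters())
    v570 = torch.cat([weights_a, delta_b])     # concatenated input vector 570
    new_weights = supra(v570)                  # 'predicted' new weights 578
    vector_to_parameters(new_weights, net_a.parameters())  # updated network 580
```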
  • This is preferably conducted recursively to first update network 552 with feedback from network 554 to arrive at the weight vector for network 580.
  • the updated vector weights for network 552 are then updated again by concatenating them with the delta feature importance scores 564 to arrive at a new model weight vector for network 552.
  • the process repeats until the weights for network 552 have been updated with all of the other relevant customers’ feedback.
  • the process next repeats in order to update network 554, and so on.
  • A separate task is training the learning neural network 576. This is completely separate from training the Entities' networks, which have already been trained and their weights and delta scores calculated. Once the network 576 is trained, inference is conducted, and then the Entities' networks are updated using the process described above. The training of network 576 is based on the principle that there should not be significant changes in the weights for the Entities, given the delta scores of the other networks. Rather, the changes in the weights should just nudge them in the right direction.
  • a first simple method of training is to include the delta scores of network 554 as bias terms/vectors into network 552, and then retrain network 552 given the addition of these biases.
  • Another basic method is to apply to the weights an operation of some non-linear activation function of the delta scores.
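A minimal sketch of this second method follows; tanh and the scaling factor are assumed, and the first method would instead add the delta vector to the receiving layer's bias terms before retraining:

```python
# Second basic method: nudge the weights by a non-linear activation of
# the other network's delta scores. tanh bounds each adjustment so
# foreign deltas nudge rather than overwhelm the original weights.
import torch

def nudge_weights(weights: torch.Tensor, delta_scores: torch.Tensor,
                  scale: float = 0.1) -> torch.Tensor:
    return weights + scale * torch.tanh(delta_scores)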
  • The second process trains the supra-neural network 576 using a cost function based on the Entities' underlying networks.
  • the supra-neural network architecture is preferably a deep neural network that has input dimensions of 2x and output dimension of x.
  • the input dimension could, for example, be composed of Entity A weights and Entity B's delta scores.
  • the output dimension is thus equal to the length of the weight vector.
  • The supra-network 576 is trained by feeding in examples of concatenated Entity A weights and Entity B delta scores, and then outputting 'new' Entity A weights, which are supplanted onto Entity A's network. The accuracy of Entity A's updated neural network is calculated.
  • Such training is conducted by feeding a lot of samples of input vectors into the network 576, calculating the cost function, and then updating the weights accordingly.
  • the training process consists of two parts: the first part is the neural network 576, which takes in the current model weights of network 552 and the delta feature importance scores 562 of network 554. It then runs the concatenated vector through the neural network (as explained above), which computes a set of new weights (which reduces the dimension from the input to the output vector since only one set of weights needs to be calculated for one network). The output of the neural network 576 is then provided to the second part of training.
  • the second part of training consists of pre-trained networks, which detect certain money laundering behaviors for a specific “entity”, i.e. they simulate network 552, network 554, network 556, etc. These networks could be trained on synthetic data, for example.
  • Network 552 and network 554 would detect a behavior on two separate sets of data, Dataset A and Dataset B.
  • a “sleeper” actor would be added into the dataset, which is more specific to either network 552 or network 554, that the networks at that current moment would not detect.
  • a network 554 specific actor is then inserted into Dataset A.
  • If the output from part 1 was a new weight vector for network 552, based on the "fake network 554" from the training environment, then the current weights of network 552 would be replaced with the new ones and inference run on the new Dataset A.
  • This provides an accuracy score (because the number of actors in the dataset that conduct this specific illicit behavior is known).
  • This accuracy score is fed back into part 1 of training the supra neural network, which learns from the given accuracy score and adapts its own weights according to this metric, which governs the cost function. This is quite an intensive training process. However, since it must train a network's architecture, the system must learn how the accuracy impacts the result in order to determine the best adaptation operations.
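The two-part loop might be sketched as follows. Everything concrete here (shapes, synthetic Dataset A, the sleeper label, and the use of torch.func.functional_call so the predicted weights stay differentiable, sidestepping the non-differentiable accuracy with a cross-entropy surrogate) is an assumption layered on the patent's high-level description:

```python
# Two-part training sketch for the supra network 576. Part 1 predicts
# new weights from (current weights, delta scores); part 2 applies them
# to a simulated detector and scores it on Dataset A containing a known
# "sleeper" actor, and that score drives the supra network's cost.
import torch
import torch.nn as nn
from torch.func import functional_call
from torch.nn.utils import parameters_to_vector

detector = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x = parameters_to_vector(detector.parameters()).numel()
supra = nn.Sequential(nn.Linear(2 * x, x))          # network 576
opt = torch.optim.Adam(supra.parameters(), lr=1e-3)

# Synthetic Dataset A; index 0 is the inserted sleeper actor.
data_a = torch.randn(64, 4)
labels_a = torch.zeros(64, 1)
labels_a[0] = 1.0

def split_params(flat: torch.Tensor) -> dict:
    """Reshape the predicted flat weight vector into named parameters."""
    out, i = {}, 0
    for name, p in detector.named_parameters():
        out[name] = flat[i:i + p.numel()].view_as(p)
        i += p.numel()
    return out

for step in range(1000):
    weights = parameters_to_vector(detector.parameters()).detach()
    delta = torch.randn(x)             # simulated network-554 delta scores
    new_flat = supra(torch.cat([weights, delta]))   # part 1: new weights
    # Part 2: run the simulated detector with the predicted weights.
    logits = functional_call(detector, split_params(new_flat), (data_a,))
    cost = nn.functional.binary_cross_entropy_with_logits(logits, labels_a)
    opt.zero_grad()
    cost.backward()    # detection quality drives the supra network's cost
    opt.step()
```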
  • second data has “the same or similar” predetermined data format as compared to first data in a predetermined data format when at least one of the following is true: (1) the data formats contain the same data fields; (2) the data formats contain the same data fields concatenated in the same order; (3) the data formats each contain a plurality of data fields and 95% of those data fields are the same; (4) the data formats each contain a plurality of data fields and 90% of those data fields are the same; (5) the data formats each contain a plurality of data fields and 80% of those data fields are the same; (6) the data formats each contain a plurality of data fields and 95% of those data fields have the same length; (7) the data formats each contain a plurality of data fields and 90% of those data fields have the same length; or (8) the data formats each contain a plurality of data fields and 80% of those data fields have the same length.
  • a second neural network has “the same or similar” predetermined network architecture as a first neural network with a predetermined network architecture when at least one of the following is true: (1) each neural network contains the same number of nodes as the other neural network; (2) each neural network contains the same number of nodes within 95% as the other neural network; (3) each neural network contains the same number of nodes within 90% as the other neural network; (4) each neural network contains the same number of nodes within 80% as the other neural network; (5) each neural network has the same number of layers and contains the same number of nodes in each layer as the other neural network; (6) each neural network has the same number of layers and contains the same number of nodes within 95% in each layer as the other neural network; (7) each neural network has the same number of layers and contains the same number of nodes within 90% in each layer as the other neural network; (8) each neural network has the same number of layers and contains the same number of nodes within 80% in each layer as the other neural network; (9) each neural network has the same or similar
  • An "entity” as used herein means a person, a company, a business, an organization, an institution, an establishment, a governing body, a corporation, a partnership, a unit of a government, a department, a team, a cooperative, or other group with whom it is possible to transact (e.g., to conduct business, or to communicate with, for example, on the internet or social media).
  • the data utilized in the methods of the invention include, but are not limited to, data regarding identity (e.g., height, weight, physical attributes, age, and/or sex); health- related data (e.g., blood pressure, pulse, genetic data, respiratory data, blood analysis, medical test results, personal disease history, and/or family disease history); personal data (e.g., relationship status, marital status, relatives, co-workers, place of work, previous workplaces, residence, neighbors, living address, previous living addresses, identity of household members, number of household members, usual modes of transportation, vehicles owned or leased, educational history, institutions of higher learning attended, degrees or certifications obtained, grades received, government or private grants, funding or support received, email addresses, criminal record, prior convictions, political contributions, and/or charitable contributions); personal information available from electronic devices used (e.g., phone records, text messages, voice messages, contact information, and app information); social media data (e.g., likes, comments, tags, mentions, photos, videos, ad interactions, and/or click
  • the methods of the invention are useful in analyzing data of entities in various sectors including, but not limited to, compliance for banks or other financial institutions, securities investigations, investigations of counterfeiting, illicit trade, or contraband, compliance regarding technology payments, regulatory investigations, healthcare, life sciences, pharmaceuticals, social networking, online or social media marketing, marketing analytics and agencies, urban planning, political campaigns, insurance analytics, real estate analytics, education, tax compliance and government analytics.
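The sleeper-actor evaluation loop described in the bullets above can be summarized in a short Python sketch. All helper names here (supra_net, entity_net, insert_sleeper_actor, run_inference) are hypothetical placeholders standing in for the components the text describes; this is a sketch of the feedback cycle, not the disclosed implementation:

```python
import numpy as np

def supra_training_step(supra_net, entity_net, dataset_a, delta_scores_b,
                        insert_sleeper_actor, run_inference):
    """One accuracy-feedback cycle for training the supra neural network.

    All arguments are hypothetical stand-ins: `entity_net` plays the role
    of network 552 and `delta_scores_b` the weight deltas from the
    'fake network 554' in the training environment.
    """
    # Part 1: the supra network proposes a new weight vector for network 552,
    # conditioned on its current weights and the other network's delta scores.
    current_weights = entity_net.get_flat_weights()
    proposal_input = np.concatenate([current_weights, delta_scores_b])
    new_weights = supra_net.predict(proposal_input)

    # Part 2: insert a known "sleeper" actor specific to network 554 into
    # Dataset A, so the ground-truth number of illicit actors is known.
    dataset_eval, n_true_actors = insert_sleeper_actor(dataset_a)

    # Replace network 552's weights with the proposed ones and run inference.
    entity_net.set_flat_weights(new_weights)
    n_detected = run_inference(entity_net, dataset_eval)

    # The resulting accuracy score feeds back into the supra network's
    # cost function, governing its own weight adaptation.
    accuracy = n_detected / n_true_actors
    supra_net.update_from_accuracy(accuracy)
    return accuracy
```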

Abstract

A method of updating a first neural network is disclosed. The method includes providing a computer system with a computer-readable memory that stores specific computer-executable instructions for the first neural network and a second neural network separate from the first neural network. The method also includes providing one or more processors in communication with the computer-readable memory. The one or more processors are programmed by the computer-executable instructions to at least process a first data with the first neural network, process a second data with the second neural network, update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network, and update a weight in a node of the first neural network as a function of the delta amount.

Description

A FEDERATED LEARNING SYSTEM AND METHOD FOR DETECTING FINANCIAL CRIME BEHAVIOR ACROSS PARTICIPATING ENTITIES
FIELD OF THE INVENTION
[0001] One aspect of the invention relates to the field of "federated learning" and its use in conjunction with machine learning models to detect illicit financial crime behaviors including but not limited to money laundering. In particular, this aspect of the invention relates to the use of "federated learning" in the process of model training and inference and the use of machine learning more generally. Another aspect of the invention relates to an autoencoder-based data anonymization method and apparatus for maintaining the integrity of entities and performing analysis after the anonymization method has been performed on the data. This aspect of the invention may be used with machine-learning, data security, and in various domains that utilize sensitive information.
BACKGROUND OF THE INVENTION
[0002] In the past few years, there have been advancements in the capabilities of machine-learning, especially in the sub-discipline of neural networks and deep learning. Neural networks map an input vector x to an output y through complex mathematical operations optimized by a loss function. Neural networks can process vast amounts of data and detect patterns in a multidimensional manifold that are unrecognizable by humans. This achievement is a product of the multitude of calculations within a neural network and its large number of parameters, which are defined during model training, architecture selection, and the hyper-parameter optimization process. This also means that, even if two neural networks appear identical from an architectural and hyper-parameter perspective, their outputs can differ, because during training each model self-optimizes each neuron's weight, thereby ever so slightly changing the mathematical combination of inputs.
[0003] For a wide field of domains, the analysis of personally identifiable information (PII) data, such as addresses, names, and age, or any other sensitive customer data, is an important and crucial task in arriving at valuable insights. Hashing, the most common way of encrypting data, does not suffice for the purposes of further elaborate and more complex analysis, as the information content within the data is lost. One of the main attributes of hashing is that two similar inputs into a hashing algorithm produce, whenever possible, very different output hashes so as to maximize the security of the encrypted data. However, this means that slightly misspelled names, or zip codes that are nearly identical, produce very different hashes, and it is mathematically near impossible to ascertain which data has a relational connection, be it geo-spatial proximity or the detection of related entities. In order to analyze PII data, it is thus normally decrypted, leaving it vulnerable.
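To make this concrete, the short, runnable Python example below uses SHA-256 from the standard library to show how two nearly identical names yield entirely unrelated digests, which is precisely the property that defeats relational analysis:

```python
import hashlib

for name in ("Jonathan Smith", "Jonathon Smith"):   # one-letter difference
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    print(f"{name!r} -> {digest[:32]}...")

# The two digests share no structure, so the relational connection
# between the nearly identical inputs cannot be recovered from them.
```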
[0004] Additionally, there may be reasons to encode forms of data other than data typically considered to be PII. For example, there may be a need to encode financial, engineering, testing, or other data in order to ensure that the data itself is not easily digested by unauthorized sources. Regardless of the content of the data processed, conventional hashing functions may be less than ideal for the same reasons discussed immediately above. Further, the inventions disclosed herein may provide data that can be analyzed in such situations without access to the original, unencoded data.
[0005] The financial industry has traditionally maintained a highly decentralized system in which entities utilize highly encrypted and/or secure data 'silos': transaction, customer, and other information is kept on premise (or in a dedicated cloud vault) and accessible only to that legal entity. This system has largely precluded the sharing of information between entities, other than with regulatory competent authorities, including entities interlinked through a common international or domestic parent entity. The primary drivers for maintaining decentralization are concerns regarding data leakage, regulatory and commercial requirements for data security, and the risk to an entity that competitors could gain insights from its data. While maintaining this state of isolation, financial entities have, from a data perspective, disregarded the fact that financial markets and their actors are highly interlinked networks that operate connectedly rather than in isolation. Thus, the ability to learn insights from one bank and then apply that knowledge to detect money laundering or other financial crimes in another bank would increase the accuracy of overall illicit-activity detection while increasing efficiency and saving time for analysts assessing a potential case. The cumulative gain from participating in this system lies in the synergies harnessed by accruing knowledge learned from customers' behavior in each entity, with the improved tuning of detection models benefiting all participating entities significantly and equally, without the need to share underlying customer data.
SUMMARY OF THE INVENTION
[0006] A method of updating a first neural network provides a computer system with a computer-readable memory storing specific computer-executable instructions for the first neural network and a second neural network separate from the first neural network. The method also provides one or more processors in communication with the computer-readable memory. The one or more processors are programmed by the computer-executable instructions to at least process a first data with the first neural network and process a second data with the second neural network. The one or more processors are further programmed by the computer-executable instructions to at least update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network and update a weight in a node of the first neural network as a function of the delta amount.
[0007] A computer system for updating a first neural network includes a computer memory storing specific computer-executable instructions for the first neural network and a separate second neural network. The computer system also includes one or more processors in communication with the computer-readable memory. The one or more processors are programmed by the computer-executable instructions to at least process a first data with the first neural network and process a second data with the second neural network. The one or more processors are further programmed by the computer-executable instructions to at least update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network and update a weight in a node of the first neural network as a function of the delta amount.
[0008] A method provides an auto-encoder for anonymizing data associated with a population of entities. The method includes providing a computer system with a memory storing specific computer-executable instructions for a neural network. The neural network includes input nodes; a first layer of nodes for receiving an output from the input nodes; a second layer of nodes positioned on an output side of the first layer of nodes; one or more additional layers of nodes positioned on an output side of the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector. An inner layer of nodes includes a number of nodes that is greater than a number of nodes in a layer of nodes on the input side of such inner layer and is also greater than a number of nodes in a layer of nodes on the output side of such layer. The method includes identifying a plurality of characteristics associated with at least a subset of the entities in the population and preparing a plurality of input vectors that include at least one of the characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from human recognizable text. The method includes training the neural network with the plurality of input vectors. The training includes a plurality of training cycles wherein the training cycle comprises: inputting one of the input vectors at the input nodes; processing said input vector with the neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
[0009] The method may include programming the computer system with a second neural network and with a third neural network and combining the encoded output vector of the neural network, the second neural network and the third neural network. Additional neural networks may also be used and their respective encoded output vectors may also be combined with the encoded output vectors of the neural network, the second neural network, and the third neural network. Such additional neural networks would be used so that there is one neural network for each of the data fields that have to be encrypted. And since there can be 50, 100, 200 or more data fields, an equal number of neural networks will be used within the scope of the invention. The method may also include preparing an input vector for the entities in the population and processing said input vector with the neural network to provide an encoded output vector at the output node for such entity. The method may include storing the encoded output vectors for subsequent use in identifying a common characteristic between two or more of the entities. The method may include comparing the encoded output vectors to identify the two or more entities with the common characteristic.
[0010] An auto-encoder system anonymizes data associated with a population of entities and includes a computer memory storing specific computer-executable instructions for a neural network. The neural network includes input nodes; a first layer of nodes for receiving an output from the input nodes; a second layer of nodes positioned on an output side of the first layer of nodes; one or more additional layers of nodes positioned on an output side of the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector. An inner layer of nodes includes a number of nodes that is greater than a number of nodes in a layer of nodes on the input side of such inner layer and is also greater than a number of nodes in a layer of nodes on the output side of such inner layer. The system further includes one or more processors in communication with the computer-readable memory. The one or more processors are programmed by the computer-executable instructions to at least obtain data identifying a plurality of characteristics associated with at least a subset of the entities in the population; prepare a plurality of input vectors that include at least one of the plurality of characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from human recognizable text; and train the neural network with the plurality of input vectors. The training includes a plurality of training cycles wherein the training cycles comprise: inputting one of the input vectors at the input nodes; processing said input vector with the neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error. In practice, it is contemplated that up to 10 processors, up to 50 processors, up to 100 processors, up to 500 processors, or even up to 1000 processors may be used. The preferred embodiments can be made scalable such that any number of processors may be used based on the number of entities and the number of characteristics to be encoded or tracked.
[0011] The autoencoder system may include a computer memory that stores specific computer-executable instructions for a second neural network and a third neural network. Additional neural networks may also be used and their respective encoded output vectors may also be combined with the encoded output vectors of the neural network, the second neural network, and the third neural network. Such neural networks include: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes for receiving an output from the first layer of nodes; one or more additional layers of nodes for receiving an output from the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector. An inner layer of nodes includes a number of nodes that is greater than a number of nodes on the input side of such inner layer and is also greater than a number of nodes on the output side of such inner layer. The one or more processors are programmed by the computer-executable instructions to train the second and third neural networks with the plurality of input vectors. The training includes a plurality of training cycles wherein the training cycle comprise, for the respective second, third, and such additional neural networks: inputting one of the input vectors at the input node; processing said input vector with the respective neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the respective neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; recalibrating a weight in one or more of the nodes in the respective neural network to minimize the output vector reconstruction error. The one or more processors are programmed by the computer-executable instructions to combine the encoded output vector of the neural network, the second neural network and the third neural network to provide a combined encoded output vector.
[0012] The autoencoder system may include one or more processors that are programmed by the computer-executable instructions to prepare an input vector for the entities in the population; process said input vector with the neural network to provide an encoded output vector at the output node for the entities; and store the encoded output vectors for subsequent use in identifying a common characteristic between two or more of the entities. The autoencoder system may include one or more processors that are programmed by the computer-executable instructions to compare the encoded output vectors to identify the two or more entities with the common characteristic. In practice, it is contemplated that up to 10 processors, up to 50 processors, up to 100 processors, up to 500 processors, or even up to 1000 processors may be used. The preferred embodiments can be made scalable such that any number of processors may be used based on the number of entities and the number of characteristics to be encoded or tracked.
[0013] Other objects and features will be in part apparent and in part pointed out hereinafter.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
[0014] FIG. 1A shows a computer system for anonymizing data.
[0015] FIG. 1B is an expansion of the memory 104 in FIG. 1A to show a non-exclusive list of the additional types of data that may be stored concerning characteristics of entities.
[0016] FIG. 2 shows a single autoencoder for anonymizing data that amalgamates all of the relevant PII data fields.
[0017] FIG. 3 shows multiple autoencoders for anonymizing data where each autoencoder is assigned and trained on a specific PII data field and their respective outputs are combined.
[0018] FIG. 4 shows a routine for training a neural network to anonymize data.
[0019] FIG. 5 shows an embodiment where multiple entities are able to share changes in the weights of the nodes in their neural networks to assist other entities in updating their own neural networks.
[0020] Corresponding reference characters indicate corresponding parts throughout the drawings.
DETAILED DESCRIPTION OF THE INVENTION
[0021] The present system provides a cloud-based solution that uses federated learning to achieve the goal of unified, holistic and accurate detection and analysis of money laundering (or other types of financial crime) behavior for financial entities, without the need to cross-share client data between the entities themselves.
[0022] In practice, deep learning detection models are first developed and trained for each individual entity. Every single entity possesses properties that make it unique such as the composition of their customers, the entity location, and usage and frequency of specific financial products, which entails that each entity has a certain kind of specificity that sets it apart from others. Each entity is assigned a model for individual behaviors (e.g. for money laundering) so that complex nuances and differences across entities can be learned by the model, thus optimizing the model's suitability for detection in that entity. This also ensures that model accuracy is not eroded by cross training which would result in the generalization of inference such that the important structural differences between entities would be disregarded. Models for a specific behavior have the same architectural properties across all entities and are re-trained using the specific entity's data and feedback.
[0023] Breakthroughs in Artificial Intelligence and advanced applied Machine Learning in recent years have made it possible to explain previously black-box algorithms through several mathematical techniques. In its simplest form, these calculations provide insights into the inner workings of deep neural networks and how they learn and optimize. These state-of-the-art techniques are then applied to each entity's model. The information extracted through this process mathematically explains how the models' weights have changed after being re-trained using their own feedback data. These scores, technically referred to as feature importance differential values, are then inputted into a supra deep learning neural network, which sits on top of all models concerning this behavior. The scores extracted from each entity are aggregated, as all entities will update their models based on insight learnt from the behavior of their clients. This aggregation of their differential scores is then combined with the weights of a single entity's neural network model, and then inputted into a supra neural network. This supra neural network is specifically trained offline to extract information from these differential scores, which is then used to update the entity's neural weights, essentially shifting the entity's weights in a way that integrates both feedback learnt from their individual clients and information from the other entities' partial derivative scores, which implicitly impound information about those entities' feedback and contextual situation, whilst still preserving each entity's model specificity and without sharing any raw client data.
[0024] This approach elegantly handles several issues that used to exist in this domain. Firstly, it completely maintains the integrity and safety of each entity's data as the data itself is never shared in any form. Thus, data does not leave the entity's own firewalls set up within a secure cloud or other systems such as systems on-premise at the client. Secondly, it maintains specificity in the models such that models are optimized based on the individual circumstances of entities. Thirdly, it learns from partial derivative information derived from other participating entities in a way that improves accuracy and detection.
[0025] In use, the Feature Importance Delta scores, Δ, are calculated for each and every weight in the deep learning models so as to be able to match them to the current neuronal weight scores. Delta scores are calculated by chaining derivative equations together for each weight, propagating backwards from the output towards the input, and then calculating the differential between the MA0 weights and the MA1 weights and/or the differential between the MA0 and MA1 feature importance scores. These delta scores and the model weights of MA1 are then flattened and concatenated into a single vector. This new vector serves as input into a supra learning model. This model has been trained to calculate adjusted weights based on the delta scores given the current weights of MA1, so as not to lose the specificity of MA1 while integrating the learned findings of the differential score ΔMB of model B. These adjustments are not simply made in a new_weights = MA1 − ΔMB fashion, as this would distort the delicately trained weights too much to maintain the clients' specificity. Instead, a learning approach (and, preferably, a deep learning approach) is taken to find optimally recalibrated weights and arrive at model MA2. Model MA2 is then used in the production process instead of MA0, and the entire process is repeated indefinitely.
[0026] As a matter of security, some entities might prefer to use autoencoder-based data anonymization systems and methods to encrypt their data at the outset before attempting to detect particular behaviors. FIGS. 1A to 4 disclose such systems and methods.
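As a rough illustration of this vector construction, the following minimal sketch assumes flat NumPy weight vectors; all names are illustrative, not part of the disclosed system. It computes per-weight delta scores and concatenates them with the current weights to form the supra-model input:

```python
import numpy as np

def delta_scores(weights_before, weights_after):
    """Per-weight differential between two versions of a model,
    e.g. ΔMB01 = MB1 - MB0, computed on flat weight vectors."""
    return weights_after - weights_before

def build_supra_input(current_weights, delta_other):
    """Flatten and concatenate the current weights of model MA1 with the
    delta scores learned by another entity's model, forming the single
    input vector for the supra learning model."""
    return np.concatenate([np.ravel(current_weights), np.ravel(delta_other)])

# Illustrative shapes only: 128 weights per model.
mb0 = np.random.randn(128)                 # Entity B's model MB0
mb1 = mb0 + 0.01 * np.random.randn(128)    # MB1 after Entity B's retraining
ma1 = np.random.randn(128)                 # Entity A's current model MA1
supra_input = build_supra_input(ma1, delta_scores(mb0, mb1))
```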
[0027] More particularly, an autoencoder system can maintain anonymity and preserve the relational content between and among PII data while still encoding it in a safe manner. Therefore, the data can still be used for network analysis and deduplication efforts, and can generally serve as an input into machine-learning models to detect complex patterns, whose accuracy and veracity are enhanced by the inclusion of this encoded PII data in the analysis. Business and research areas alike can utilize this encoded data for analysis without needing access to the original data. This is especially applicable in (but not restricted to) the financial sector for the purposes of fraud detection and anti-money laundering efforts, and in the healthcare sectors, allowing third-party providers and researchers to work with a more complete dataset than ever before without revealing any actual PII data.
[0028] Additionally, there may be reasons to encode forms of data other than data typically considered to be PII. For example, there may be a need to encode financial, engineering, testing, or other data in order to ensure that the data itself is not easily digested by unauthorized sources. Regardless of the content of the data processed, conventional means for anonymizing data, such as hashing functions, may be less than ideal for the following reasons. First, conventional hashing functions do not suffice for the purposes of further elaborate and more complex analysis, as the information content within the data is lost. One of the main attributes of hashing is that two similar inputs into a hashing algorithm produce, whenever possible, very different output hashes so as to maximize the security of the encrypted data. However, this means that slightly misspelled names, or zip codes that are nearly identical, produce very different hashes, and it is mathematically near impossible to ascertain which data has a relational connection, be it geo-spatial proximity or the detection of related entities. In order to analyze such data, it is thus normally decrypted, leaving it vulnerable. However, the inventions disclosed herein may provide data that can be analyzed in such situations without access to the original, unencoded data.
[0029] The autoencoder system, such as that generally shown in FIG. 1A, takes PII data as input, increases its dimensionality in a latent space, performs mathematical operations including a form of dimensionality reduction, and then arrives at an encoded output of data which can be used for further analysis. The novelty of this approach is two-fold: firstly, the usage of deep learning algorithms as a system for encryption; and secondly, the usability of PII data after being unidentifiably encoded, while maintaining the relational position of the PII data to each other. The mathematical theory of pattern recognition and the near-impossible exact replicability of a model are harnessed as the main strengths of the autoencoder system to encode personal identifiable information (PII) for the purpose of further analysis.
[0030] Two systems are devised to achieve this result for different applications. As seen in FIG. 2, the first system uses a 'single' autoencoder that amalgamates all relevant PII data fields and trains a unique autoencoder model with attached neuron weights. As seen in FIG. 3, the second system contains 'multiple' autoencoders, where each autoencoder is assigned and trained on a specific PII data field, mapping each input to its own autoencoder; e.g., first names and last names each have their own autoencoder. This maximizes security because all parameters, hyper-parameters, architectural properties, and the training dataset must be present to even attempt decryption of the output. Neither of these systems has been previously used to provide useful, anonymized data.
[0031] More particularly, FIG. 2 shows a graphic that depicts the PII data schematic 210 which indicates the directional flow of data through the Singular Autoencoder (AE-S) system 200. The PII Data 210 is transformed into a feature vector format and serves as an input into the input nodes AE-S 212. The autoencoder 200 is represented by its neurons and their connections. A neuron is a mathematical entity in which an activation function is applied to a calculated value to arrive at an interim transitional output value, which through a series of directional connections informs the mathematical transformations applied to the data as it flows through the AE-S system, analogous to a computational graph, visually from left to right.
[0032] The PII data 210, which is split into a feature vector, is fed into the autoencoder AE-S system as a single data vector at 212. The solid lines (214, 216, 218, 220) connecting the input 212, through each of the layers of neurons (222, 224, 226), to the output layer 228 represent a complex mathematical transformation in which a myriad of combinatorial compositions of the input is analyzed. Output layer 228 has the same dimensionality as the input node 212. Concretely, this means that the input feature vector $x_0$ is transformed as $n_1 = w_0 x_0 + b_0$, where $w$ is a matrix of trainable weights, $b$ is a bias term vector, and the subscript indexes the relevant neuron in the input layer, to compute the neuronal input of a neuron in the adjacent layer. Further, within each neuron itself, an activation function is applied so that $z_i = \varphi(n_i)$, where $\varphi$ represents the chosen activation function and $z_i$ is the neuronal output. The input into a neuron in the next layer would thus be $n_2 = z_1 w_{1,k} + \cdots + z_j w_{j,k} + b_{j,k}$, where $j$ is the relevant neuron in the previous layer, $k$ is the relevant neuronal connection in the current layer, $b_{j,k}$ is the relevant bias, and the weight subscripts indicate the neuronal layer and the relative position of the neuron; this amounts to $n_2 = w_k^\top z + b_k$ in matrix format, which is activated again in the new neuron. This creates a deep abstraction from the original input data through chained equations.
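This layer-by-layer computation can be expressed directly in code. The following is a minimal NumPy sketch of the two transformations above, with randomly initialized weights and tanh chosen as the activation function purely for illustration (the disclosure does not fix a particular activation):

```python
import numpy as np

def phi(n):
    """Chosen activation function; tanh is used purely for illustration."""
    return np.tanh(n)

x0 = np.random.randn(4)                        # input feature vector x0
w0, b0 = np.random.randn(8, 4), np.random.randn(8)
n1 = w0 @ x0 + b0                              # neuronal inputs: n1 = w0 x0 + b0
z1 = phi(n1)                                   # neuronal outputs: z = phi(n)

w1, b1 = np.random.randn(6, 8), np.random.randn(6)
n2 = w1 @ z1 + b1                              # next layer in matrix form: n2 = w z + b
z2 = phi(n2)                                   # activated again in the new neurons
print(z2.shape)                                # (6,)
```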
[0033] An additional layer of abstraction is provided by the architecture of the autoencoder itself, as the dimensionality of the data is significantly increased, as shown by arrow 230, from a number of input neurons, a, to a larger number of neurons, b > a, in the deeper layers of the network. Dimensionality reduction, as shown by arrow 232, thereafter occurs to transform the larger layers, e.g. layer 224, to an output layer 228 having the same dimensionality as the input node 212. The output of the system, provided at schematic box 234, is a deep abstraction of the original PII input data 210 and thus is not replicable without the exact same autoencoder system 200 in place; even then, replication is a very complex undertaking.
[0034] In a preferred embodiment, the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may preferably contain the same number of nodes in the first layer of nodes as in the third layer of nodes. In another preferred embodiment, the first, second and third layers of nodes in the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may contain three nodes, five nodes, up to 25 nodes, up to 50 nodes, or up to 500 nodes. In another preferred embodiment, the input node and the output node in the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may be single nodes. In another preferred embodiment, the input vector and the output vector of the autoencoders 200 in FIG. 2 and 334a, 334b & 334c in FIG. 3 may have the same length. The features of these preferred embodiments may also be combined together.
[0035] The AE-S outputs provide a transformed representation of the original PII vector data 210: the resulting output vector at 234 pseudonymizes the data, while the system has also been trained to create a 'DNA', or representation, of the data that is analyzable and comparable with other output vectors. This is achieved by the training process of the system (explained more fully in FIG. 4, below) before the output vectors at 234 are used for analysis. The aforementioned trainable weights vector w is optimized during a process called backpropagation, during which the model is exposed to synthetic data to learn the optimal abstract representation of it, thereby preserving the inherent information content in the data.
[0036] Natural language processing distances are calculated from various base features to transform the PII data 210 into numerical data, which is provided as input into AE-S at node 212. Autoencoders aim to find deep abstractions of the data as originally input while minimizing the reconstruction error, which describes the distortions and shifts of the underlying distributions of the recreated abstract data compared to the original input data. An output vector reconstruction error is determined by calculating a function of the encoded output vector and the input vector. The objective of minimizing the reconstruction error through backpropagation is attained by back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes. This results in the weights iteratively being recalibrated to minimize the reconstruction error in each training step. Generally speaking, these models undergo thousands, if not more, training steps to arrive at the optimal setting.
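Paragraph [0036] describes standard autoencoder training by minimizing a reconstruction error through backpropagation. The following is a minimal PyTorch sketch under stated assumptions: mean-squared error is used as the reconstruction loss and tanh as the activation, and random tensors stand in for the numerical PII feature vectors; none of these choices are fixed by the disclosure. Note the wider inner layers, matching the over-complete architecture of FIG. 2:

```python
import torch
import torch.nn as nn

class OvercompleteAutoencoder(nn.Module):
    """Autoencoder whose inner layers are wider than the input, mirroring
    the dimensionality increase (arrow 230) and reduction (arrow 232)."""
    def __init__(self, n_in=16, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Tanh(),   # expand: b > a
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_in),              # reduce back to input size
        )

    def forward(self, x):
        return self.net(x)

model = OvercompleteAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()        # reconstruction error, one common choice

# Synthetic stand-in for numerical PII feature vectors.
inputs = torch.randn(256, 16)

for step in range(1000):                  # typically thousands of steps
    optimizer.zero_grad()
    encoded_out = model(inputs)           # encoded output vector
    loss = loss_fn(encoded_out, inputs)   # output vector reconstruction error
    loss.backward()                       # back-propagate via chained derivatives
    optimizer.step()                      # recalibrate weights to minimize error
```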
[0037] The graphic in FIG. 3 depicts the schematic of the PII data 310 flowing through the developed Multiple Autoencoder (AE-M) system 300. The PII data 310 is split into its respective parts (310a, 310b, 310c ... 3 lOx) and a natural language processing distance is calculated from various base features to turn the data into numerical values. The PII data categories are then used as an input vector into the first node (312a, 312b, 312c ... 312x) of their own respective autoencoder (334a, 334b, 334c ... 334x) to arrive at a partial output (336a, 336b, 336c ... 336x). All of the partial outputs (336a, 336b, 336c ... 336x) from every autoencoder are then mathematically combined to arrive at the final output 338. Concatenation is a preferred method of combining the encoded output vectors, although any other combination of the encoded output vectors is within the scope of the invention.
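The AE-M combination step can be sketched as follows, reusing the hypothetical OvercompleteAutoencoder class from the previous sketch; the field names and dimensions are illustrative assumptions, and concatenation is used as the preferred combination method named in [0037]:

```python
import torch

# One autoencoder per PII field (the field list is illustrative only).
fields = ["first_name", "last_name", "address"]
encoders = {f: OvercompleteAutoencoder(n_in=16) for f in fields}

def encode_record(record):
    """Run each PII field through its own autoencoder (312a..312x ->
    336a..336x) and concatenate the partial outputs into the final
    combined output 338."""
    partials = [encoders[f](record[f]) for f in fields]
    return torch.cat(partials, dim=-1)

record = {f: torch.randn(1, 16) for f in fields}   # numerical field vectors
combined = encode_record(record)                    # shape (1, 48)
```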
[0038] FIG. 1B is an expansion of the memory 104 in FIG. 1A to show a non-exclusive list in memory 104a of the additional types of data that may be stored in memories 104 and 104a concerning characteristics of entities.
[0039] In view of the above, it is seen that FIGS. 1A, 1B, 2 & 3 show an auto-encoder system 100 for anonymizing data associated with a population of entities. A computer memory 104 stores specific computer-executable instructions for a neural network, wherein the neural network comprises: input nodes; a first layer of nodes for receiving an output from the input nodes; a second layer of nodes for receiving an output from the first layer of nodes; one or more additional layers of nodes for receiving an output from the second layer of nodes; and output nodes for receiving an output from the last inner layer of nodes to provide an encoded output vector. An inner layer of nodes includes a number of nodes that is greater than a number of nodes in a layer of nodes on the input side of such inner layer and is also greater than a number of nodes in a layer of nodes on the output side of such inner layer. One or more processors 102 are in communication with the computer-readable memory 104 and are programmed by the computer-executable instructions to at least obtain data identifying a plurality of characteristics associated with at least a subset of the entities in the population and prepare a plurality of input vectors that include at least one of the plurality of characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from a human recognizable text. The one or more processors 102 also train the neural network with the plurality of input vectors, wherein the training comprises a plurality of training cycles. In practice, it is contemplated that up to 10 processors 102, up to 50 processors 102, up to 100 processors 102, up to 500 processors 102, or even up to 1000 processors 102 may be used. The preferred embodiments can be made scalable such that any number of processors may be used based on the number of entities and the number of characteristics to be encoded or tracked. In practice, the neural network can have 7 inner layers of nodes, 11 inner layers of nodes, 21 inner layers of nodes, or even 51 inner layers of nodes - so long as the inner layers of nodes between the input nodes and a central layer of nodes provide increasing dimensionality and so long as the inner layers of nodes between such central layer of nodes and the output node provide decreasing dimensionality.
[0040] FIG. 1A also includes input devices 106 such as a keypad, mouse, touchscreen, graphic user interface and such other commonly known input devices to those of ordinary skill in the art. Input devices 106 as well as an internet connection 108 and a display 110 are provided for use in storing computer executable instructions in memory 104 and retrieving same, operating the processors in system 102, providing inputs needed to train the various neural networks disclosed herein, storing and retrieving data needed for such training in memory 104, storing and retrieving encoded data in memory 104, reviewing the results of the operation of the preferred embodiments, and such other uses as required for the functioning of the preferred embodiments.
[0041] As seen in FIG. 4, a training cycle begins at the START 400. A training cycle comprises: the step 402 of inputting one of the input vectors at the input node; the step 403 of processing said input vector with the neural network to provide an encoded output vector at the output node; the step 404 of determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; the step 406 of back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
[0042] The one or more processors 102 can also be programmed to set a threshold for a total number of training cycles and to stop the training of the neural network at step 408 in response to the number of training cycles exceeding the threshold. The one or more processors 102 can also be programmed to set a threshold as a function of a loss plane of the output vector reconstruction error and stop the training of the neural network at step 410 in response to the output vector reconstruction error being less than the threshold. The one or more processors can also be programmed to determine whether one of the characteristics in a plurality of selected input vectors is not also found in a human recognizable form in the respective encoded output vectors. This detection method may be based on the use of additional input vectors having the same length as the additional encoded output vectors, and on detecting that the output vector is not equal to the input vector, or on detecting that more than 10%, 25%, or 50% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors. Upon such detection, the one or more processors may fix the weights and biases in one or more of the nodes in the neural network.
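A possible implementation of this threshold test is sketched below; the helper names and the numeric tolerance are assumptions, not part of the disclosure:

```python
import numpy as np

def fraction_changed(input_vec, output_vec, tol=1e-6):
    """Fraction of positions where the encoded output differs from the
    input; both vectors must have the same length as required in [0042]."""
    input_vec, output_vec = np.asarray(input_vec), np.asarray(output_vec)
    return np.mean(np.abs(input_vec - output_vec) > tol)

def sufficiently_anonymized(input_vec, output_vec, threshold=0.10):
    """True if more than the chosen share (10%, 25%, or 50% in the text)
    of values changed, i.e. the input does not survive recognizably."""
    return fraction_changed(input_vec, output_vec) > threshold
```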
[0043] In use after training, the one or more processors 102 may be programmed by the computer-executable instructions to fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node. A plurality of respective additional encoded output vectors will contain a plurality of characteristics, but said plurality of respective additional encoded output vectors will not contain said plurality of characteristics in a human recognizable form using any of the detection methods described above.
[0044] In use after training, the one or more processors 102 may be programmed by the computer-executable instructions to fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node. The majority of the respective additional encoded output vectors will contain a plurality of characteristics, but said majority of respective additional encoded output vectors will not contain said plurality of characteristics in a human recognizable form using any of the detection methods described above.
[0045] In use after training, the one or more processors 102 may be programmed by the computer-executable instructions to fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node. More than 90% of the respective additional encoded output vectors will contain a plurality of characteristics, but more than 90% of the respective additional encoded output vectors will not contain said plurality of characteristics in a human recognizable form using any of the detection methods described above.
[0046] The one or more processors 102 are also programmed to determine whether one of the plurality of characteristics in one of the input vectors is also found in a human recognizable form in the respective encoded output vector; and perform a plurality of additional training cycles in response to the respective encoded output vector containing said one of the plurality of characteristics in the human recognizable form using any of the detection methods described above.
[0047] The one or more processors 102 may be programmed to perform more than 100 training cycles, more than 1,000 training cycles, or more than 5,000 training cycles.
[0048] As seen in FIG. 1B, the plurality of characteristics may comprise data stored in the memory 104 which data is associated with any three or more of the following: a piece of personally identifiable information, a name, an age, a residential address, a business address, an address of a family relative, an address of a business associate, an educational history, an employment history, an address of any associate, a data from a social media site, a bank account number, a plurality of data providing banking information, a banking location, a purchase history, a purchase location, an invoice, a transaction date, a financial history, a credit history, a criminal record, a criminal history, a drug use history, a medical history, a hospital record, a police report, or a tracking history.
[0049] As also seen in FIG. 1A, the computer memory 104 may store specific computer-executable instructions for a second neural network and a third neural network, wherein the second and third neural networks each comprise: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes for receiving an output from the first layer of nodes; a third layer of nodes for receiving an output from the second layer of nodes; and an output node for receiving an output from the third layer of nodes to provide an encoded output vector; wherein the second layer of nodes includes a number of nodes that is greater than a number of nodes in the first layer of nodes and is greater than a number of nodes in the third layer of nodes. The one or more processors are also programmed by the computer-executable instructions to train the second and third neural networks with the plurality of input vectors, wherein the training comprises a plurality of training cycles wherein the training cycles comprise, for each of the respective second and third neural networks: inputting one of the input vectors at the input node; processing said input vector with the respective neural network to provide an encoded output vector at the output node; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the respective neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the respective neural network to minimize the output vector reconstruction error. The one or more processors are programmed by the computer-executable instructions to combine the encoded output vector of the neural network, the second neural network and the third neural network to provide a combined encoded output vector. These three outputs may also be concatenated to provide a concatenated combined encoded output vector.
[0050] Additional neural networks may also be used and their respective encoded output vectors may also be combined with the encoded output vectors of the neural network, the second neural network, and the third neural network. Such additional neural networks would be used so that there is one neural network for each of the data fields that have to be encrypted. And since there can be 50, 100, 200 or more data fields, an equal number of neural networks will be used within the scope of the invention.
[0051] The one or more processors 102 may also be programmed by the computer- executable instructions to prepare an input vector for the entities in the population; process said input vector with the neural network to provide an encoded output vector at the output node for each of the entities; and store the encoded output vectors in the memory 104 for subsequent use in identifying a common characteristic between two or more of the entities. The one or more processors 102 may also be programmed by the computer-executable instructions to compare the encoded output vectors to identify the two or more entities with the common characteristic.
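As an illustration of comparing stored encoded output vectors to surface entities with a common characteristic, the following sketch uses cosine similarity with an arbitrary threshold; the disclosure does not prescribe a particular comparison metric, so this is only one plausible choice:

```python
import numpy as np

def find_related_entities(encoded, threshold=0.95):
    """Pairwise cosine similarity between stored encoded output vectors;
    pairs above the threshold are flagged as sharing a common
    characteristic. The threshold value is an illustrative assumption."""
    unit = encoded / np.linalg.norm(encoded, axis=1, keepdims=True)
    sims = unit @ unit.T
    n = len(encoded)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sims[i, j] > threshold]

# Hypothetical store of encoded output vectors for five entities (dim 48).
encoded_store = np.random.randn(5, 48)
related = find_related_entities(encoded_store)
```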
[0052] FIG. 5 shows a federated learning system 500 for use by, for example, four independent entities A, B, C, and D, which are also indicated, respectively, by reference numbers 502, 504, 506 and 508.
[0053] For example, for Entity A, the vertically aligned elements that bear an "A" in the left-most vertical position show the elements of the deep learning computer system used exclusively by Entity A. These include the data silo 512, neural network MA0 indicated by reference number 520, updated neural network MA1 indicated by reference number 552, the delta score in the weights for the neural network as it updates from MA0 to MA1 indicated with the nomenclature ΔMA01 and reference number 560, and the updated neural network MA2 indicated by reference number 580.
[0054] Immediately to the right of the system used by Entity A is the deep learning computer system used for Entity B. The vertically aligned elements that bear a "B" show the elements of the deep learning computer system used exclusively by Entity B. These include the data silo 514, neural network MB0 indicated by reference number 522, updated neural network MB1 indicated by reference number 554, the delta change in the weights for the neural network as it updates from MB0 to MB1 indicated with the nomenclature ΔMB01 and reference number 562, and the updated neural network MB2 indicated by reference number 582.
[0055] Immediately to the right of the system used by Entity B is the deep learning computer system used for Entity C. The vertically aligned elements that bear a "C" show the elements of the deep learning computer system used exclusively by Entity C. These include the data silo 516, neural network MC0 indicated by reference number 524, updated neural network MC1 indicated by reference number 556, the delta change in the weights for the neural network as it updates from MC0 to MC1 indicated with the nomenclature ΔMC01 and reference number 564, and the updated neural network MC2 indicated by reference number 584.
[0056] Immediately to the right of the system used by Entity C is the deep learning computer system used for Entity D. The vertically aligned elements that bear a "D" show the elements of the deep learning computer system used exclusively by Entity D. These include the data silo 518, neural network MD0 indicated by reference number 526, updated neural network MD1 indicated by reference number 558, the delta change in the weights for the neural network as it updates from MD0 to MD1 indicated with the nomenclature ΔMD01 and reference number 566, and the updated neural network MD2 indicated by reference number 586.
[0057] Although four Entities are shown in FIG. 5, any number of Entities are contemplated within the scope of the invention.
[0058] In use, Entity A stores its data in a very secure location indicated by data silo 512. As an extra precaution, Entity A may use the autoencoder disclosed above in Figures 1A, 1B, 2, 3 and 4 to encrypt its data, thus rendering the data anonymous while simultaneously maintaining defining characteristics of the data available for analysis even in the encoded form. Either way, Entity A never shares its raw data or encoded data with any other third-party Entity.
[0059] Similar to Entity A, the other Entities B, C and D maintain their own respective data very securely in their own data silos 514, 516 and 518. Again, none of these Entities share their raw data or encoded data with any other Entity.
[0060] Turning back to Entity A, neural network MA0 is trained by Entity A (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 512 where Entity A stores its data. The particular behavior may indicate money laundering, financial criminality, or any other condition that Entity A may wish to detect. For a given piece of data processed by the neural network MA0, the output of the network is graded by an analyst at the user interface, UI, indicated by reference number 528. Once the outputs of the neural networks are shown to analysts via the user interface, the interface collects feedback data in various forms on features, outputs, their relevance, etc. Based on this feedback, the neural networks are retrained to become even more accurate in their decision making. The grade may be an "X" (not productive) or an "O" (productive). For a grade of "O," Entity A further investigates the underlying actors to determine whether a report should be made or any further action taken. The grade is also used to update the neural network as indicated by the curved arrow at reference number 544. The delta scores of the neural network are shown by ΔMA01 and are stored in a memory 568.
[0061] Turning now to Entity B, neural network MB0 is trained by Entity B (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 514 where Entity B stores its data. The particular behavior may indicate money laundering, financial criminality, or any other condition that Entity B may wish to detect. For a given piece of data processed by the neural network MB0, the output of the network is graded by a decision maker at the UI indicated by reference number 530. The grade may be an "X" (not productive) or an "O" (productive). For a grade of "O," Entity B further investigates the underlying actors to determine whether a report should be made or any further action taken. The grade is also used to update the neural network as indicated by the curved arrow at reference number 546. The change in the weights for the nodes of the neural network is shown by ΔMB01 at reference number 562 and is also stored in a memory 568.
[0062] Turning now to Entity C, neural network MC0 is trained by Entity C (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 516 where Entity C stores its data. The particular behavior may indicate money laundering, financial criminality, or any other condition that Entity C may wish to detect. For a given piece of data processed by the neural network MC0, the output of the network is graded by a decision maker at the UI indicated by reference number 532. The grade may be an "X" (not productive) or an "O" (productive). For a grade of "O," Entity C further investigates the underlying actors to determine whether a report should be made or any further action taken. The grade is also used to update the neural network as indicated by the curved arrow at reference number 548. The change in the weights for the nodes of the neural network is shown by ΔMC01 at reference number 564 and is also stored in a memory 568.
[0063] Finally, turning to Entity D, neural network MD0 is trained by Entity D (or a confidential service provider) to detect the presence of a particular behavior based on the data stored in data silo 518 where Entity D stores its data. The particular behavior may indicate money laundering, financial criminality, or any other condition that Entity D may wish to detect. For a given piece of data processed by the neural network MD0, the output of the network is graded by a decision maker at the UI indicated by reference number 534. The grade may be an "X" (not productive) or an "O" (productive). For a grade of "O," Entity D further investigates the underlying actors to determine whether a report should be made or any further action taken. The grade is also used to update the neural network as indicated by the curved arrow at reference number 550. The change in the weights for the nodes of the neural network is shown by ΔMD01 at reference number 566 and is also stored in a memory 568.
[0064] In addition to each Entity updating its own neural network based on its own experience processing its own data, as explained above, the changes in the weights (560, 562, 564 and 566, all stored in memory 568) experienced by the other Entities may also be used to update the various neural networks. To set this up, each of Entities A, B, C and D would use the same or similar architecture in neural networks 520, 522, 524 and 526, and each such network would be separately trained to detect the presence of the same or similar behavior. If the Entities chose to use autoencoded anonymous data per the disclosure above concerning Figures 1A to 4, then the autoencoder would be set up using the same parameters and the same or similar architecture across each of the Entities. Most important, however, is that no raw data and no encoded data ever needs to be shared, and the Entities are still able to assist each other with updating their respective neural networks.
[0065] This updating of neural networks between Entities occurs using Learning Neural Network 576, which has access to the changes in the weights stored in memory 568. For example, when network 520, owned by Entity A, is to be updated with the changes in the weights for network 522, owned by Entity B, a processor (not shown) forms a vector 570 by concatenating the then-current weights MA1 for Entity A's neural network with the changes in the weights ΔMB01 that occurred during the updating shown by arrow 546 of Entity B's neural network. Network 576 is trained to thereby provide new weights, at reference number 578, for Entity A's neural network, at reference number 580. If Entity A wishes to obtain additional updates from the neural networks of Entities C and D, then network 576 repeats the updating process using the changes in weights for Entity C (564) and then for Entity D (566).
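As an illustration, forming vector 570 and obtaining the new weights at 578 amounts to a concatenation followed by one forward pass through network 576. The sketch below uses NumPy with a single dense layer standing in for the trained network 576; the vector length and the stand-in layer are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # length of one flattened weight vector

weights_a = rng.normal(size=n)            # MA1: Entity A's current weights
delta_b = rng.normal(scale=0.01, size=n)  # ΔMB01: Entity B's stored delta (562)

vector_570 = np.concatenate([weights_a, delta_b])   # input of length 2n

# Stand-in for the trained network 576: one dense layer mapping 2n -> n.
W = rng.normal(scale=0.05, size=(n, 2 * n))
b = np.zeros(n)
new_weights_a = W @ vector_570 + b        # the 'new' weights at 578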
[0066] Network 576 is equally available to the networks of the other Entities, so each can update its own network in a similar fashion, as explained above for Entity A, by using the changes in weights experienced by the other networks. In updating the weights of one neural network using the changes in weights from another neural network, it is important that such updates not be too great, or else the update might overwhelm the original weights.
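The patent does not prescribe how to bound these updates; one plausible safeguard, sketched below, is to blend in only a fraction of the proposed change and cap the per-weight step. The function name and the alpha and clip values are illustrative assumptions.

import numpy as np

def damped_update(weights, proposed, alpha=0.1, clip=0.05):
    # Move only a fraction alpha toward the proposed weights, and cap
    # the per-weight change so the update cannot overwhelm the originals.
    step = np.clip(alpha * (proposed - weights), -clip, clip)
    return weights + step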
[0067] In practice, the neural networks of Entities A, B, C and D can be trained to detect many different behaviors in a data set. For each different behavior, Entities A, B, C and D set up a discrete neural network having the same architecture for the network and data files. In this manner, the Entities may share the changes in the weights for each node in the neural networks (but not any data) in order to assist one another in updating their respective neural networks.
[0068] Examples of behaviors that may be detected as indicative of money laundering activity include, but are not limited to, frequent changes of financial advisers or institutions; selection of financial advisers or institutions that are geographically distant from the entity or the location of the transaction; requests for increased speed in processing a transaction or making funds available; failure to disclose a real party to a transaction; a prior conviction for an acquisitive crime; a significant amount of private funding from a person who is associated with, or an entity that is, a cash-intensive business; a third party private funder without an apparent connection to the entity's business; a disproportionate amount of private funding or cash which is inconsistent with the socio-economic profile of the persons involved; finance provided by a lender, other than a financial institution, with no logical explanation or economic justification; business transactions in countries where there is a high risk of money laundering and/or terrorism funding; false documentation in support of transactions; an activity level that is inconsistent with the client's business or legitimate income level; and/or an overly complicated ownership structure for the entity.
[0069] More generally, operation and use of the preferred embodiments reveal two components: Model Inference and Model Training. Model Inference refers to the post-training process where, for example, Entity A's weights and Entity B's delta scores are input into the trained supra-neural network 576 (i.e., the model recalculation network), and network 576 outputs a 'new' weight vector that then replaces Entity A's original weights. Inference is thus the 'prediction' of these new weights by the supra-neural network 576. Model Training refers to the process of training this weight recalculation network.
[0070] Further with regard to model inference, the inference of the model to determine the new weights for Entity A's network 552, based on the changes in the weights 562 for Entity B's updated network 544, proceeds as follows: Take the weights of Entity A after its network 552 has learned from its own data. Then concatenate these weights from network 552 with the delta feature importance inference scores 562 and flatten the two matrices into a vector 570. Vector 570 is then the input vector into the "supra" or learning neural network 576. Within network 576, the new weights for neural network 552 are calculated so as to incorporate the learned feedback from neural network 554. This is preferably conducted recursively: network 552 is first updated with feedback from network 554 to arrive at the weight vector for network 580. The updated weights for network 552 are then updated again by concatenating them with the delta feature importance scores 564 to arrive at a new model weight vector for network 552. The process repeats until the weights for network 552 have been updated with all of the other relevant customers' feedback. The process next repeats in order to update network 554, and so on.
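A compact sketch of this recursive pass follows. Here supra_net is assumed to be the trained network 576 exposed as a callable, and the dictionaries of weights and deltas are illustrative stand-ins for the values stored in memory 568.

import numpy as np

def recalc(supra_net, weights, delta):
    # One inference step: concatenate and flatten into vector 570,
    # then predict the new weight vector with network 576.
    vec = np.concatenate([np.ravel(weights), np.ravel(delta)])
    return supra_net(vec)

def update_all(supra_net, weights_by_entity, deltas_by_entity):
    # Fold each peer's delta scores into each entity's weights in turn.
    new_weights = {}
    for target, w in weights_by_entity.items():
        for peer, d in deltas_by_entity.items():
            if peer != target:
                w = recalc(supra_net, w, d)
        new_weights[target] = w
    return new_weights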
[0071] A separate task is training learning neural network 576. This is completely separate from training the Entities' networks, which have already been trained and their weights and delta scores calculated. Once network 576 is trained, inference is conducted, and then the Entities' networks are updated using the process described above. The training of network 576 is based on the principle that there should not be significant changes in the weights for the Entities' networks, given the delta scores of the other networks. Rather, the changes in the weights should just nudge them in the right direction.
[0072] There are at least three methods for training:
First Training Method
[0073] A first, simple method of training is to include the delta scores of network 554 as bias terms/vectors in network 552, and then retrain network 552 given the addition of these biases. Another basic method is to apply to the weights an operation of some non-linear activation function of the delta scores. These processes are quite straightforward and do not need an overarching 'supra-neural network.' One issue is that they do not 'learn' or optimize the underlying relationship between the weights and the delta scores. For example, a simple subtraction would take away too much of the network's specificity for that particular Entity.
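A minimal sketch of these two simple methods follows; the tanh squashing and the 0.1 scale in the second method are illustrative choices, since the patent does not fix a particular activation function.

import numpy as np

def apply_delta_as_activation(weights, delta, scale=0.1):
    # Second simple method: nudge the weights by a squashed, scaled
    # function of the delta scores, avoiding the over-correction that a
    # raw subtraction would cause.
    return weights + scale * np.tanh(delta)

# The first simple method amounts to adding the delta scores to the
# relevant layer's bias vector and re-running the entity's normal
# training loop, so no separate supra-network is involved.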
Second Training Method
[0074] The second process trains the supra-neural network 576 using a cost function based on the Entities' underlying networks. The supra-neural network architecture is preferably a deep neural network that has an input dimension of 2x and an output dimension of x. The input could, for example, be composed of Entity A's weights and Entity B's delta scores. The output dimension is thus equal to the length of the weight vector. The supra-network 576 is trained by feeding in examples of concatenated Entity A weights and Entity B delta scores, and then outputting 'new' Entity A weights, which are supplanted onto Entity A's network. The accuracy of Entity A's updated neural network is then calculated. This accuracy serves as the cost function for the supra-neural network 576, which then undergoes backpropagation, sending the signal back through the network and thus updating the weights of the supra-neural network 576. Such training is conducted by feeding many samples of input vectors into network 576, calculating the cost function, and then updating the weights accordingly.
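The loop below sketches this second method in PyTorch. The downstream evaluation is stubbed out as eval_accuracy(); in practice the accuracy of Entity A's updated network is not directly differentiable, so a surrogate loss or gradient-free optimizer would be needed, a detail the patent does not specify. All layer sizes and names are assumptions.

import torch
import torch.nn as nn

x_len = 100                                  # length of one flattened weight vector
supra = nn.Sequential(                       # input dimension 2x, output dimension x
    nn.Linear(2 * x_len, 256), nn.ReLU(),
    nn.Linear(256, x_len))
optimizer = torch.optim.Adam(supra.parameters(), lr=1e-3)

def eval_accuracy(new_weights):
    # Stub: load new_weights into Entity A's network, run it on graded
    # data, and return accuracy as a scalar. A differentiable placeholder
    # is used here so the example runs end to end.
    return torch.sigmoid(new_weights.mean())

for _ in range(1000):                        # many samples of input vectors
    weights_a = torch.randn(x_len)           # example Entity A weights
    delta_b = 0.01 * torch.randn(x_len)      # example Entity B delta scores
    inp = torch.cat([weights_a, delta_b])    # concatenated input of length 2x
    new_weights = supra(inp)                 # 'new' Entity A weights
    loss = 1.0 - eval_accuracy(new_weights)  # cost from downstream accuracy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()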
Third Training Method
[0075] Similar to the second method, the training process consists of two parts. The first part is the neural network 576, which takes in the current model weights of network 552 and the delta feature importance scores 562 of network 554. It then runs the concatenated vector through the neural network (as explained above), which computes a set of new weights (reducing the dimension from the input to the output vector, since only one set of weights needs to be calculated for one network). The output of neural network 576 is then provided to the second part of training.
[0076] The second part of training consists of pre-trained networks that detect certain money laundering behaviors for a specific "entity"; i.e., they simulate network 552, network 554, network 556, etc. These networks could be trained on synthetic data, for example. Network 552 and network 554 would detect a behavior on two separate sets of data, Dataset A and Dataset B. After the networks have finished their own separate training processes, a "sleeper" actor is added into the dataset: an actor whose behavior is more specific to either network 552 or network 554 and that the networks at that moment would not detect. A network 554-specific actor is then inserted into Dataset A. If the output from part one is a new weight vector for network 552, based on the "fake network 554" from the training environment, then the current weights of network 552 are replaced with the new ones and inference is run on the new Dataset A. This provides an accuracy score (because the number of actors in the dataset that conduct this specific illicit behavior is known). The accuracy score is fed back into part one of training the supra-neural network, which learns from the given accuracy score and adapts its own weights according to this metric, which governs the cost function. This is quite an intensive training process; however, since it must train a network's architecture, it must be known how the accuracy impacts the result in order to learn the best adaptation operations.
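The sketch below illustrates the 'sleeper actor' evaluation step. The helper names and data shapes are assumptions; the key point is that the planted actor's label is known, so inference with the proposed weights yields an exact accuracy score to feed back into part one.

import numpy as np

def plant_sleeper(dataset_a, sleeper_row):
    # Insert one known illicit actor into Dataset A and record its index,
    # which serves as the ground truth for scoring.
    data = np.vstack([dataset_a, sleeper_row])
    return data, len(data) - 1

def score_proposed_weights(run_inference, proposed_weights, data, labels):
    # Load the proposed weights into the simulated network 552, run
    # inference on the augmented Dataset A, and return the accuracy that
    # governs the supra-network's cost function.
    predictions = run_inference(proposed_weights, data)
    return float((predictions == labels).mean())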
[0077] As used herein, second data has “the same or similar” predetermined data format as compared to first data in a predetermined data format when at least one of the following is true: (1) the data formats contain the same data fields; (2) the data formats contain the same data fields concatenated in the same order; (3) the data formats each contain a plurality of data fields and 95% of those data fields are the same; (4) the data formats each contain a plurality of data fields and 90% of those data fields are the same; (5) the data formats each contain a plurality of data fields and 80% of those data fields are the same; (6) the data formats each contain a plurality of data fields and 95% of those data fields have the same length; (7) the data formats each contain a plurality of data fields and 90% of those data fields have the same length; or (8) the data formats each contain a plurality of data fields and 80% of those data fields have the same length. Further, if two input vectors such as, for example, input vectors 212, 312a, 312b, 312c, as well as the input vectors to neural networks 520, 522, 524, and 526, are said to use the same or similar predetermined data formats, then, with respect to such same or similar input vectors, at least one of statements (1) to (8) herein would be true.
[0078] Further, as used herein, a second neural network has “the same or similar” predetermined network architecture as a first neural network with a predetermined network architecture when at least one of the following is true: (1) each neural network contains the same number of nodes as the other neural network; (2) each neural network contains the same number of nodes within 95% as the other neural network; (3) each neural network contains the same number of nodes within 90% as the other neural network; (4) each neural network contains the same number of nodes within 80% as the other neural network; (5) each neural network has the same number of layers and contains the same number of nodes in each layer as the other neural network; (6) each neural network has the same number of layers and contains the same number of nodes within 95% in each layer as the other neural network; (7) each neural network has the same number of layers and contains the same number of nodes within 90% in each layer as the other neural network; (8) each neural network has the same number of layers and contains the same number of nodes within 80% in each layer as the other neural network; (9) each neural network has the same number of layers and at least three of those layers contain the same number of nodes; or (10) each neural network has the same number of layers and at least the input and output layers contain the same number of nodes.
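As a concrete reading of tests (5) through (8) above, the helper below compares layer counts and per-layer node counts within a tolerance; the function name and the example sizes are illustrative only.

def same_or_similar_architecture(layers_a, layers_b, tolerance=0.95):
    # Test (5): same number of layers.
    if len(layers_a) != len(layers_b):
        return False
    # Tests (6)-(8): per-layer node counts match within the tolerance.
    for a, b in zip(layers_a, layers_b):
        if min(a, b) / max(a, b) < tolerance:
            return False
    return True

# Example: same depth, middle layer within 95% (122/128 is about 0.95).
print(same_or_similar_architecture([64, 128, 64], [64, 122, 64]))  # True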
[0079] An "entity" as used herein means a person, a company, a business, an organization, an institution, an establishment, a governing body, a corporation, a partnership, a unit of a government, a department, a team, a cooperative, or other group with whom it is possible to transact (e.g., to conduct business, or to communicate with, for example, on the internet or social media).
[0080] The data utilized in the methods of the invention include, but are not limited to, data regarding identity (e.g., height, weight, physical attributes, age, and/or sex); health-related data (e.g., blood pressure, pulse, genetic data, respiratory data, blood analysis, medical test results, personal disease history, and/or family disease history); personal data (e.g., relationship status, marital status, relatives, co-workers, place of work, previous workplaces, residence, neighbors, living address, previous living addresses, identity of household members, number of household members, usual modes of transportation, vehicles owned or leased, educational history, institutions of higher learning attended, degrees or certifications obtained, grades received, government or private grants, funding or support received, email addresses, criminal record, prior convictions, political contributions, and/or charitable contributions); personal information available from electronic devices used (e.g., phone records, text messages, voice messages, contact information, and app information); social media data (e.g., likes, comments, tags, mentions, photos, videos, ad interactions, and/or click information); credit data (e.g., household income, credit history and/or credit score); financial data (e.g., income sources, income amounts, assets, tax records, loan information, loan history, loan repayments, banking history, banking transactions, financial institutions involved in such transactions, transaction locations, mortgage information, mortgage history, account balances, number of accounts, counterparty information, fraud activity, and/or fraud alerts); and insurance information (e.g., insurance claims, insurance policies, and/or insurance payments received).
[0081] The methods of the invention are useful in analyzing data of entities in various sectors including, but not limited to, compliance for banks or other financial institutions, securities investigations, investigations of counterfeiting, illicit trade, or contraband, compliance regarding technology payments, regulatory investigations, healthcare, life sciences, pharmaceuticals, social networking, online or social media marketing, marketing analytics and agencies, urban planning, political campaigns, insurance analytics, real estate analytics, education, tax compliance and government analytics.
[0082] Having described the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims.
[0083] When introducing elements of the present invention or the preferred embodiment(s) thereof, the articles "a", "an", "the" and "said" are intended to mean that there are one or more of the elements. The terms "comprising", "including" and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
[0084] In view of the above, it will be seen that the several objects of the invention are achieved and other advantageous results attained.
[0085] As various changes could be made in the above constructions and methods without departing from the scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.

Claims

WHAT IS CLAIMED IS:
1. A method of updating a first neural network comprising: providing a computer system with a computer-readable memory storing computer-executable instructions for the first neural network and a second neural network separate from the first neural network; and providing one or more processors in communication with the computer-readable memory, wherein the one or more processors are programmed by the computer-executable instructions to at least: process a first data with the first neural network; process a second data with the second neural network; update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network; and update a weight in a node of the first neural network as a function of the delta amount.
2. The method of claim 1 wherein the computer-readable memory: stores the first data in a first memory location; and stores the second data in a second memory location; wherein the first data is isolated from the second data.
3. The method of claim 1 or 2 further comprising: organizing the first data in a predetermined data format; and organizing the second data in the same or similar predetermined data format.
4. The method of any one of claims 1 to 3 further comprising: structuring the first neural network with a predetermined network architecture; and structuring the second neural network with the same or similar predetermined network architecture.
5. The method of any one of claims 1 to 4 further comprising: providing a third neural network separate from the second neural network and the first neural network; processing a third data with the third neural network; updating a weight in a node of the third neural network by a second delta amount as a function of the processing of the third data with the third neural network; updating a weight in a node of the first neural network as a function of the delta amount and of the second delta amount.
6. The method of any one of claims 1 to 5 wherein the computer-readable memory: stores the third data in a third memory location; wherein the first data is isolated from the second data and the third data; and wherein the second data is isolated from the third data.
7. The method of any one of claims 1 to 6 further comprising organizing the third data in the same or similar predetermined data format.
8. The method of any one of claims 1 to 7 further comprising structuring the third neural network with the same or similar predetermined network architecture.
9. A computer system for updating a first neural network comprising: a computer-readable memory storing specific computer-executable instructions for the first neural network and a separate second neural network; and one or more processors in communication with the computer-readable memory, wherein the one or more processors are programmed by the computer-executable instructions to at least: process a first data with the first neural network; process a second data with the second neural network; update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network; and update a weight in a node of the first neural network as a function of the delta amount.
10. The computer system of claim 9 wherein the computer-readable memory: stores the first data in a first memory location; and stores the second data in a second memory location; wherein the first data is isolated from the second data.
11. The computer system of claim 9 or 10 wherein the computer-readable memory: stores the first data in a predetermined data format; and stores the second data in the same or similar predetermined data format.
12. The computer system of any one of claims 9 to 11 wherein: the computer-readable memory stores specific computer-executable instructions for the first neural network and the second neural network; the first neural network is structured with a predetermined network architecture; and the second neural network is structured with the same or similar predetermined network architecture.
13. The computer system of any one of claims 9 to 12 wherein: the computer-readable memory stores specific computer-executable instructions for a third neural network separate from the second neural network and the first neural network; and the one or more processors are programmed by the computer-executable instructions to at least: process a third data with the third neural network; update a weight in a node of the third neural network by a second delta amount as a function of the processing of the third data with the third neural network; update a weight in a node of the first neural network as a function of the delta amount and of the second delta amount.
14. The computer system of any one of claims 9 to 13 wherein the computer-readable memory: stores the third data in a third memory location; wherein the first data is isolated from the second data and the third data; and wherein the second data is isolated from the third data.
15. The computer system of any one of claims 9 to 14 wherein the computer-readable memory stores the third data in the same or similar predetermined data format.
16. The computer system of any one of claims 9 to 15 wherein the computer-readable memory stores specific computer-executable instructions for a third neural network separate from the second neural network and the first neural network; and the third neural network is structured with the same or similar predetermined network architecture.
17. A method of providing an auto-encoder for anonymizing data associated with a population of entities, the method comprising: providing a computer system with a memory storing specific computer-executable instructions for a neural network, wherein the neural network comprises: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes positioned downstream of the first layer of nodes; a third layer of nodes positioned downstream of the second layer of nodes; and an output node for receiving an output from the third layer of nodes to provide an encoded output vector; wherein the second layer of nodes includes a number of nodes that is greater than a number of nodes in the first layer of nodes and is greater than a number of nodes in the third layer of nodes; identifying a plurality of characteristics associated with at least a subset of the entities in the population; preparing a plurality of input vectors that include at least one of the plurality of characteristics, wherein the characteristics appear in the respective input vectors as numerical information transformed from a human recognizable text; and training the neural network with the plurality of input vectors, wherein the training comprises a plurality of training cycles wherein the training cycles comprise: inputting one of the input vectors at the input node; processing said input vector with the neural network to provide an encoded output vector at the output nodes; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input node by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
18. The method of claim 17 further comprising: setting a threshold as a function of a loss plane of the output vector reconstruction error; and stopping the training step in response to the output vector reconstruction error being less than the threshold.
19. The method of claim 17 further comprising: determining whether one of the characteristics in a plurality of selected input vectors is also found in the respective encoded output vectors but not in a human recognizable form; and fixing the weights in one or more of the nodes in the neural network in response to the respective encoded output vector containing said characteristic but not in the human recognizable form.
20. The method of claim 17 wherein a plurality of the encoded output vectors during training include at least one of the plurality of characteristics but wherein said plurality of the encoded output vectors does not contain said at least one of the plurality of characteristics in a human recognizable form.
21. The method of claim 17 further comprising: fixing the weights in one or more of the nodes in the neural network; and processing a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node; wherein the additional input vectors have a same length as the additional encoded output vectors; and wherein more than 10% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors.
22. The method of claim 17 further comprising: fixing the weights in one or more of the nodes in the neural network; and processing a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node; wherein the additional input vectors have a same length as the additional encoded output vectors; and wherein more than 25% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors.
23. The method of claim 17 further comprising: fixing the weights in one or more of the nodes in the neural network; and processing a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node; wherein the additional input vectors have a same length as the additional encoded output vectors; and wherein more than 50% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors.
24. The method of claim 17 further comprising: determining whether one of the plurality of characteristics in one of the input vectors is also found in a human recognizable form in the respective encoded output vector; and performing a plurality of additional training cycles in response to the respective encoded output vector containing said one of the plurality of characteristics in the human recognizable form.
25. The method of claim 17 wherein the training step comprises performing more than 100 training cycles.
26. The method of claim 17 wherein the training step comprises performing more than 1,000 training cycles.
27. The method of claim 17 wherein the training step comprises performing more than 5,000 training cycles.
28. The method of claim 17 further comprising: determining whether one of the plurality of characteristics in one of the input vectors is also found in a human recognizable form in the respective encoded output vector; and fixing the weights in one or more of the nodes in the neural network in response to the respective encoded output vector not containing said one of the plurality of characteristics in the human recognizable form.
29. The method of claim 17 wherein the plurality of characteristics comprises data associated with any three or more of the following: a piece of personally identifiable information, a name, an age, a residential address, a business address, an address of a family relative, an address of a business associate, an educational history, an employment history, an address of any associate, a data from a social media site, a bank account number, a plurality of data providing banking information, a banking location, a purchase history, a purchase location, an invoice, a transaction date, a financial history, a credit history, a criminal record, a criminal history, a drug use history, a medical history, a hospital record, a police report, or a tracking history.
30. The method of claim 17 wherein the first layer of nodes contains a same number of nodes as the third layer of nodes.
31. The method of claim 30 wherein the first and third layers of nodes contain up to 25 nodes.
32. The method of claim 30 wherein the first and third layers of nodes contain up to 50 nodes.
33. The method of claim 30 wherein the second layer of nodes contains up to 500 nodes.
34. The method of claim 17 wherein the input node is a single node and the output node is a single node.
35. The method of claim 17 wherein the input vector has a length and wherein the encoded output vector has the same length.
36. The method of claim 17 further comprising: programming the computer system with a second neural network and with a third neural network, wherein the second and third neural networks each comprise: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes positioned downstream of the first layer of nodes; a third layer of nodes positioned downstream of the second layer of nodes; and an output node for receiving an output from the third layer of nodes to provide an encoded output vector; wherein the second layer of nodes includes a number of nodes that is greater than a number of nodes in the first layer of nodes and is greater than a number of nodes in the third layer of nodes; training the second and third neural networks with the plurality of input vectors, wherein the training comprises a plurality of training cycles wherein the training cycles comprise, for each of the respective second and third neural networks: inputting one of the input vectors at the input nodes; processing said input vector with the respective neural network to provide an encoded output vector at the output nodes; determining an output vector reconstruction error by calculating a function of the encoded output vector and the input vector; back-propagating the output vector reconstruction error back through the respective neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the respective neural network to minimize the output vector reconstruction error; and combining the encoded output vector of the neural network, the second neural network and the third neural network to provide a combined encoded output vector.
37. The method of claim 36 wherein the combining step further comprises concatenating the encoded output vector of the neural network, the second neural network and the third neural network to provide a concatenated combined encoded output vector.
38. The method of claim 17 further comprising: preparing an input vector for the entities in the population; processing said input vector with the neural network to provide an encoded output vector at the output node for the entities; and storing the encoded output vectors for subsequent use in identifying a common characteristic between two or more of the entities.
39. The method of claim 38 further comprising: comparing the encoded output vectors to identify the two or more entities with the common characteristic.
40. An auto-encoder system for anonymizing data associated with a population of entities, the system comprising: a computer memory storing specific computer-executable instructions for a neural network, wherein the neural network comprises: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes positioned downstream of the first layer of nodes; a third layer of nodes positioned downstream of the second layer of nodes; and an output node for receiving an output from the third layer of nodes to provide an encoded output vector; wherein the second layer of nodes includes a number of nodes that is greater than a number of nodes in the first layer of nodes and is greater than a number of nodes in the third layer of nodes; one or more processors in communication with the computer-readable memory, wherein the one or more processors are programmed by the computer-executable instructions to at least: obtain data identifying a plurality of characteristics associated with at least a subset of the entities in the population; prepare a plurality of input vectors that include at least one of the plurality of characteristics, wherein the characteristics appear in the respective input vectors in a human recognizable form; and train the neural network with the plurality of input vectors, wherein the training comprises a plurality of training cycles wherein the training cycles comprise: inputting one of the input vectors at the input nodes; processing said input vector with the neural network to provide an encoded output vector at the output nodes; determining an output vector reconstruction error by calculating a function of the encoded output vector and the respective input vector; back-propagating the output vector reconstruction error back through the neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the neural network to minimize the output vector reconstruction error.
41. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: set a threshold as a function of a loss plane of the output vector reconstruction error; and stop the training of the neural network in response to the output vector reconstruction error being less than the threshold.
42. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: determine whether one of the characteristics in a plurality of selected input vectors is also found in the respective encoded output vectors but not in a human recognizable form; and fix the weights in one or more of the nodes in the neural network in response to the respective encoded output vector containing said characteristic but not in the human recognizable form.
43. The system of claim 40 wherein a plurality of the encoded output vectors during training include at least one of the plurality of characteristics but wherein said plurality of the encoded output vectors does not contain said at least one of the plurality of characteristics in a human recognizable form.
44. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node; wherein more than 10% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors.
45. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node; wherein more than 25% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors.
46. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: fix the weights in one or more of the nodes in the neural network; and process a plurality of additional input vectors through the neural network to provide a plurality of respective additional encoded output vectors at the output node; wherein more than 50% of a plurality of values comprising the additional input vectors are different than a plurality of corresponding values in the respective additional encoded output vectors.
47. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: determine whether one of the plurality of characteristics in one of the input vectors is also found in a human recognizable form in the respective encoded output vector; and perform a plurality of additional training cycles in response to the respective encoded output vector containing said one of the plurality of characteristics in the human recognizable form.
48. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to perform more than 100 training cycles.
49. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to perform more than 1,000 training cycles.
50. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to perform more than 5,000 training cycles.
51. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: determine whether one of the plurality of characteristics in one of the input vectors is also found in a human recognizable form in the respective encoded output vector; and fix the weights in one or more of the nodes in the neural network in response to the respective encoded output vector not containing said one of the plurality of characteristics in the human recognizable form.
52. The system of claim 40 wherein the plurality of characteristics comprises data associated with any three or more of the following: a piece of personally identifiable information, a name, an age, a residential address, a business address, an address of a family relative, an address of a business associate, an educational history, an employment history, an address of any associate, a data from a social media site, a bank account number, a plurality of data providing banking information, a banking location, a purchase history, a purchase location, an invoice, a transaction date, a financial history, a credit history, a criminal record, a criminal history, a drug use history, a medical history, a hospital record, a police report, or a tracking history.
53. The system of claim 40 wherein the first layer of nodes in the neural network contains a same number of nodes as the third layer of nodes.
54. The system of claim 53 wherein the first and third layers of nodes in the neural network contain up to 25 nodes.
55. The system of claim 53 wherein the first and third layers of nodes in the neural network contain up to 50 nodes.
56. The system of claim 53 wherein the second layer of nodes in the neural network contains up to 500 nodes.
57. The system of claim 40 wherein the input node in the neural network is a single node and the output node in the neural network is a single node.
58. The system of claim 40 wherein the input vector has a length and wherein the encoded output vector has the same length.
59. The system of claim 40: wherein the computer memory stores specific computer-executable instructions for a second neural network and a third neural network, wherein the second and third neural networks each comprise: an input node; a first layer of nodes for receiving an output from the input node; a second layer of nodes positioned downstream of the first layer of nodes; a third layer of nodes positioned downstream of the second layer of nodes; and an output node for receiving an output from the third layer of nodes to provide an encoded output vector; wherein the second layer of nodes includes a number of nodes that is greater than a number of nodes in the first layer of nodes and is greater than a number of nodes in the third layer of nodes; wherein the one or more processors are programmed by the computer-executable instructions to train the second and third neural networks with the plurality of input vectors, wherein the training comprises a plurality of training cycles wherein the training cycles comprise, for each of the respective second and third neural networks: inputting one of the input vectors at the input nodes; processing said input vector with the respective neural network to provide an encoded output vector at the output nodes; determining an output vector reconstruction error by calculating a function of the encoded output vector and the respective input vector; back-propagating the output vector reconstruction error back through the respective neural network from the output nodes back to the input nodes by a chained derivative of the outputs and weights of the intervening nodes; and recalibrating a weight in one or more of the nodes in the respective neural network to minimize the output vector reconstruction error; and wherein the one or more processors are programmed by the computer-executable instructions to combine the encoded output vector of the neural network, the second neural network and the third neural network to provide a combined encoded output vector.
60. The system of claim 59 wherein the one or more processors are programmed by the computer-executable instructions to concatenate the encoded output vector of the neural network, the second neural network and the third neural network to provide a concatenated combined encoded output vector.
61. The system of claim 40 wherein the one or more processors are programmed by the computer-executable instructions to: prepare an input vector for the entities in the population; process said input vector with the neural network to provide an encoded output vector at the output node for the entities; and store the encoded output vectors for subsequent use in identifying a common characteristic between two or more of the entities.
62. The system of claim 61 wherein the one or more processors are programmed by the computer-executable instructions to: compare the encoded output vectors to identify the two or more entities with the common characteristic.
PCT/IB2020/058732 2019-09-19 2020-09-18 A federated learning system and method for detecting financial crime behavior across participating entities WO2021053615A2 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201962902505P 2019-09-19 2019-09-19
US201962902503P 2019-09-19 2019-09-19
US62/902,505 2019-09-19
US62/902,503 2019-09-19
US17/020,496 2020-09-14
US17/020,453 2020-09-14
US17/020,453 US11227067B2 (en) 2019-09-19 2020-09-14 Autoencoder-based information content preserving data anonymization method and system
US17/020,496 US20210089899A1 (en) 2019-09-19 2020-09-14 Federated learning system and method for detecting financial crime behavior across participating entities

Publications (2)

Publication Number Publication Date
WO2021053615A2 true WO2021053615A2 (en) 2021-03-25
WO2021053615A3 WO2021053615A3 (en) 2021-04-29

Family

ID=74882979

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/058732 WO2021053615A2 (en) 2019-09-19 2020-09-18 A federated learning system and method for detecting financial crime behavior across participating entities

Country Status (1)

Country Link
WO (1) WO2021053615A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023081183A1 (en) * 2021-11-03 2023-05-11 Liveramp, Inc. Differentially private split vertical learning


Also Published As

Publication number Publication date
WO2021053615A3 (en) 2021-04-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20781092

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.07.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20781092

Country of ref document: EP

Kind code of ref document: A2