WO2020018392A1 - Monitoring and control of continuous stochastic processes based on events in time series data

Monitoring and control of continuous stochastic processes based on events in time series data

Info

Publication number
WO2020018392A1
Authority
WO
WIPO (PCT)
Prior art keywords
event, events, risk, interaction, medium
Application number
PCT/US2019/041733
Other languages
English (en)
Inventor
Jean Belanger
Alain Briançon
James Stojanov
Gabriel M. Silberman
Original Assignee
Cerebri AI Inc.
Priority claimed from US application 16/127,933 (US 10,402,723 B1)
Application filed by Cerebri AI Inc.
Publication of WO2020018392A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • the present disclosure relates to artificial intelligence (AI) and more particularly to machine learning used for monitoring or controlling continuous stochastic processes based on discrete samples and other events in time series data.
  • Some aspects include a process, including: obtaining, with a computer system, one or more of a plurality of datasets having a plurality of interaction-event records, wherein: the interaction-event records describe respective interaction events, the interaction events are interactions in which a first entity has experiences with, or obtains other information pertaining to, a second entity, and at least some of the interaction-event records are associated with respective risks by which sequences of at least some of the interaction events relative to one another are ascertainable; and determining, with the computer system, based on at least some of the interaction-event records, sets of event-risk scores, the sets corresponding to at least some of the interaction events, wherein: at least some respective event-risk scores are indicative of an effect of a respective risk ascribed by the first entity to a respective aspect of the second entity; and at least some respective event-risk scores are based on both: respective contributions of respective corresponding events to a subsequent event in the one or more of the plurality of datasets, and a risk ascribed to the subsequent event.
  • Some aspects include a tangible, non-transitory, machine-readable medium storing instructions that when executed by a data processing apparatus cause the data processing apparatus to perform operations including the above-mentioned process.
  • Some aspects include a system, including: one or more processors; and memory storing instructions that when executed by the processors cause the processors to effectuate operations of the above-mentioned process.
  • Figure 1 is a logical architecture block diagram of an example continuous stochastic process controller and its computing environment in accordance with some embodiments of the present techniques.
  • Figure 2 is a flow chart of a process executed by the continuous stochastic process controller in accordance with some embodiments of the present techniques.
  • Figure 3 illustrates an example of a data model including an interaction-event record operated upon by the continuous stochastic process controller in accordance with some embodiments of the present techniques.
  • Figure 4 is an example of a computing device upon which the present techniques may be implemented.
  • Managing risk as a continuous variable or signal provides the advantage of managing risk as a journey similar to the customer journeys described in US Patent Application 15/456,059, the contents of which are hereby incorporated by reference.
  • risk is no longer a static factor, in contrast to some traditional approaches such as a credit score or FICO score.
  • Risk, as used herein, refers to, e.g., a likelihood of an undesirable outcome, or an expected utility thereof.
  • Some embodiments improve upon the techniques in this earlier patent filing to address technical challenges that arise when applying related techniques to risk management, e.g., characterizing risk as a continuous stochastic process.
  • many conventional approaches used by suppliers to understand customers lack a way to quantify how a customer’s risky behavior impacts a supplier and the supplier’s willingness to sell certain types of products involving risk management on a continuing and consistent basis (e.g., by the microsecond, second, minute, hourly, daily, weekly, monthly, per interaction, per location, per trajectory, depending on the context).
  • Risk management concerns are often material in sales of products such as loans, insurance policies, offering discounted or free phones or other equipment for which cost is recouped over the life of a follow-on service contract (and for which the seller often bears the risk of the subscriber canceling the contract before the cost is recouped), wallet services offering credit, credit cards, and other products having an implicit or explicit credit risk or moral hazard component. Risk management also impacts the management of overbooking and rerouting of passengers for airlines.
  • a supplier, by operation of some embodiments, can quantify how each customer experience provides a better reflection of that customer's behavior.
  • the supplier, by operation of some embodiments, can measure potential revenue from each individual customer and the risk of the customer defaulting on, or pre-paying, payments, and can determine a value for each event (e.g., interaction) in an event timeline.
  • the described approaches may be applied to systems, whether single or multi- component, for which risk-related incidents represent an undesired outcome.
  • the information processed includes risk-related behavior of the system's components, its environment, and the people and systems it relates to, both for the system under examination and for similar systems.
  • the goal of a supplier is often to efficiently and cost-effectively maintain a close relationship with each customer while managing the risk the supplier takes on by extending de facto credit or carrying a negative balance for a time.
  • the number of customers may be large, e.g., more than 10,000, more than a million, or more than ten million. This is often the case with loans, subsidized purchases, leases, and the like.
  • for insurance (which, as used herein, includes warranties), the premium charged is dependent on the risk a consumer takes.
  • a supplier typically desires to understand what motivates a customer's actions to enable the supplier to (i) optimize the economic value that the supplier derives from the customer, (ii) provide offerings tailored to the customer's needs, (iii) provide offerings with terms that cause the customer to finalize a transaction, (iv) ensure that the customer's risk of defaulting on financial obligations, or health-related risk, does not cause the value of said transaction to become negative, and (v) in some cases, help customers understand how their individual behaviors can impact their personal credit.
  • a customer may have several interactions with banks, but if the customer visits countries having no extradition treaty with the home country, the risk is much different than a FICO score alone would suggest; if such information is available to either party, the bank or the individual can change their behavior accordingly.
  • Some embodiments create a customer risk journey (e.g., a temporally (or at least sequence) indexed record of events that may affect risk-related metrics and indicators) in the form of an event timeline integrating the different events that impact or otherwise reflect the risk behavior of a customer.
  • Machine learning may be used to extract the appropriate patterns.
  • the models built and trained with the risk journey time series may be used to score a step’s (in the journey) risk posture in the form of a risk index.
  • Journeys may be encoded in memory as a set of time-stamped or sequenced entries in an interaction-event record, each entry, in some cases, including an event and information about that event.
  • the ability to assess the risk index is not limited to past and present events, in some embodiments, but it may also be used to predict the risk index for future events.
  • the models may be used to predict the likelihood a risk incident may happen, as well as plan actions (future steps) to decrease the risk index and thus improve continuous risk posture.
  • the best (estimated) possible next step (or set of steps) may be identified to meet a specific risk management objective in some embodiments.
  • Some embodiments manage (e.g., infer and effectuate decisions based on) continuous risk as a time series of events and actions taken (or not) within a system's context (which may include human, computing, and other types of components) and implement a methodology to continuously assess a continuous risk posture, or "continuous risk index" ("risk index" for short).
  • The higher the continuous risk index, the more vulnerable the system is to disruption.
  • Some embodiments dynamically change the risk management models to adapt to new economic conditions, in some cases with active learning, and in some cases by batch re-training of models. This is expected to result in more accurate risk management than stationary credit-scoring models, which suffer from statistical deficiencies and degradation over time. Some embodiments further dynamically change the risk management models based on international incidents that might later affect the economy.
  • the risk management models may be updated continuously (e.g., within less than 10, 50, or 500 milliseconds, 1 second, 10 seconds, an hour, or a day, of an event that causes a risk score to change) by incoming data and re-trained through an adaptive modeling framework by incorporating new information as soon as it becomes available. By integrating new information, risk management models may detect changes, and the occurrence of a change may point to eventual corrective actions applicable to the models.
  • risk management is assessed using a batch data processing model which classifies data in a supervised training environment.
  • the model may use a full-memory time window incorporating all previous data and new data, and the model may use a temporary-memory time window in which only the new data is appended to the training set. Prediction of the trained models using full-memory along with the temporary-memory may provide a dynamic risk assessment.
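  • The following is a minimal, illustrative sketch of that dual-window idea, assuming scikit-learn is available; the class name AdaptiveRiskTrainer and the feature/label interface are hypothetical and not taken from the disclosure.

```python
# Sketch only: contrast a full-memory retraining window with a
# temporary-memory (sliding) window, then blend their predictions.
from collections import deque

import numpy as np
from sklearn.linear_model import LogisticRegression


class AdaptiveRiskTrainer:
    def __init__(self, window_size=10_000):
        self.full_X, self.full_y = [], []            # full-memory store
        self.recent = deque(maxlen=window_size)      # temporary-memory store
        self.full_model = LogisticRegression(max_iter=1000)
        self.recent_model = LogisticRegression(max_iter=1000)

    def ingest(self, features, label):
        """Append a newly observed (event features, risk outcome) pair."""
        self.full_X.append(features)
        self.full_y.append(label)
        self.recent.append((features, label))

    def retrain(self):
        """Re-fit both models; callers may blend their predictions."""
        self.full_model.fit(np.array(self.full_X), np.array(self.full_y))
        recent_X, recent_y = zip(*self.recent)
        self.recent_model.fit(np.array(recent_X), np.array(recent_y))

    def risk_probability(self, features, blend=0.5):
        """Dynamic risk assessment mixing long-memory and recent views."""
        x = np.asarray(features).reshape(1, -1)
        p_full = self.full_model.predict_proba(x)[0, 1]
        p_recent = self.recent_model.predict_proba(x)[0, 1]
        return blend * p_full + (1 - blend) * p_recent
```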
  • multiheaded attention mechanisms may be applied, e.g., for more detailed, longer time-series data to capture longer range effects of more significant events and interactions, e.g., with a transformer architecture that maps events and relationships therebetween to a continuous space vector representation of lower dimensionality than the events, where location in the vector space, in some embodiments, is indicative of risk.
  • machine learning models may be trained using adaptive network-based fuzzy inference systems to convert the current static risk management models into a nonlinear mapping system.
  • the model may be configured to adapt itself to new input data and gradually minimize error based on gradient descent training.
  • Trained models may classify customers into different clusters (e.g., with unsupervised learning approaches or supervised learning approaches), using various clustering techniques such as k-means, fuzzy c-means (FCM), and subtractive approaches, based on a risk analysis.
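  • As one illustrative sketch (not the disclosed implementation), k-means from scikit-learn can cluster customers into risk tiers from a handful of assumed risk-related features:

```python
# Illustrative only: cluster customers by simple risk-related features
# using k-means, one of the clustering techniques mentioned above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: customers; columns (assumed): [avg_days_late, utilization, missed_payments]
features = np.array([
    [0.0, 0.15, 0],
    [2.0, 0.40, 1],
    [25.0, 0.95, 6],
    [1.0, 0.30, 0],
    [30.0, 0.90, 8],
])

scaled = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)

# Cluster labels can then be mapped to risk tiers by inspecting centroids.
print(kmeans.labels_, kmeans.cluster_centers_)
```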
  • the assessment may be used to identify risk factors of the system that are used to determine the risk level.
  • Anomalies can be spatial (referred to as spatial anomalies), which means the values of transactions are outside the typical range.
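  • A hedged sketch of one simple way to flag such spatial anomalies, using a robust z-score over transaction amounts (the threshold and field choice are assumptions, not from the disclosure):

```python
# Flag transaction amounts whose values fall far outside the typical range.
import numpy as np


def spatial_anomalies(amounts, z_threshold=3.5):
    """Return indices of transaction amounts outside the typical range."""
    amounts = np.asarray(amounts, dtype=float)
    median = np.median(amounts)
    mad = np.median(np.abs(amounts - median)) or 1e-9   # robust spread
    robust_z = 0.6745 * (amounts - median) / mad
    return np.flatnonzero(np.abs(robust_z) > z_threshold)


print(spatial_anomalies([12.5, 9.9, 11.2, 10.7, 950.0, 10.1]))  # -> [4]
```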
  • Some embodiments mitigate risk in a risk management system using an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs) that considers the entire variable set concurrently to capture the latent interactions amongst the variables.
  • Some embodiments mitigate risk in a risk management system using a real-time anomaly detection algorithm based on Hierarchical Temporal Memory (HTM) and a Bayesian Network (BN).
  • a machine learning system (such as a system described herein referred to as the Cerebri value system, a recurrent neural network (like an LSTM network), or a hidden Markov model (like a multi-stage model, a continuous-time model, or other dynamic Bayesian networks)) works on (e.g., ingests and responds to) time series of events (steps) leading to reference events by assigning relative weights to each event in the series to reflect their relative contributions to the occurrence and magnitude of reference events.
  • the assignment in some embodiments, is based on information contained in a large number of time series similar to the one under examination.
  • the system in some embodiments, thus provides information for events positioned in a time series prior to, or between reference events.
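  • The sketch below illustrates one possible reading of this weighting idea: a small recurrent (LSTM) model in PyTorch that emits a per-step weight over an event sequence. It is not the Cerebri value system itself, and the model and feature dimensions are hypothetical.

```python
# Sketch: per-step weights over an event sequence, loosely reflecting each
# step's relative contribution toward a reference (risk) event.
import torch
import torch.nn as nn


class EventSequenceScorer(nn.Module):
    def __init__(self, n_event_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_event_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-step relative weight (logit)

    def forward(self, events):             # events: (batch, steps, features)
        states, _ = self.lstm(events)
        step_logits = self.head(states).squeeze(-1)
        # Softmax across steps -> weights that sum to 1 within each journey.
        return torch.softmax(step_logits, dim=1)


scorer = EventSequenceScorer(n_event_features=8)
weights = scorer(torch.randn(4, 20, 8))    # 4 journeys, 20 events each
print(weights.shape, weights.sum(dim=1))   # torch.Size([4, 20]), ~1.0 each
```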
  • reference events are "risk events" (e.g., data breaches, denial of service attacks, or regulatory or compliance violations), and their impact may be measured by (and documented in memory with associated values of) remediation costs, brand value, write-downs, recovery of finders' fees, revenue loss, etc. Events or steps are then actions and events taking place over time, both internal to an entity or part of the environment in which it operates, having some (positive or negative) impact on continuous risk. Costs may be characterized as a score, which may be denominated in currency or other units.
  • the risk index calculated as per the above reflects the continuous risk at every step of a time series. Responsive to these values, some embodiments output a continuous readout on current risk posture and tracking of trends over time. Some embodiments also give (e.g., output) insights into what sub-systems, adjudication functions, or workflows may be contributing to the current state of affairs, guiding proactive and corrective actions when needed. Regulatory and compliance risks may include lack of proper procedures and workflows, failure to follow them, or failing regulatory tests (e.g., information assurance, reserve levels).
  • an event timeline or other interaction-event record that includes one or more interactions between a customer and a supplier may be determined or otherwise obtained (e.g., from historical logs of a CRM system, complaint logs, invoicing systems, and the like).
  • a starting risk value may be assigned to individual events in the event timeline.
  • a sub-sequence comprising a portion of the event timeline that includes at least one reference event may be selected.
  • a classifier may be used to determine a previous relative risk value for a previous event that occurred before the reference event and to determine a next relative risk value for a next event that occurred after the reference event until all events in the event timeline have been processed.
  • the events in the event timeline may be traversed and a risk value assigned to individual events in the event timeline in some embodiments.
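  • A minimal sketch of this traversal, assuming a simple decay factor stands in for the trained classifier's relative risk values (names and numbers are illustrative only):

```python
# Walk backward and forward from a reference event, assigning relative risk
# values to neighboring events in the timeline.
from dataclasses import dataclass


@dataclass
class Event:
    name: str
    risk: float = 0.0          # starting risk value, refined during traversal


def assign_relative_risk(timeline, ref_index, ref_risk, decay=0.8):
    timeline[ref_index].risk = ref_risk
    for i in range(ref_index - 1, -1, -1):            # previous events
        timeline[i].risk = timeline[i + 1].risk * decay
    for i in range(ref_index + 1, len(timeline)):     # next events
        timeline[i].risk = timeline[i - 1].risk * decay
    return timeline


journey = [Event("site visit"), Event("loan application"),
           Event("missed payment"), Event("collections call")]
for e in assign_relative_risk(journey, ref_index=2, ref_risk=1.0):
    print(e.name, round(e.risk, 3))
```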
  • the system facilitates compliance with Know Your Client/Customer (KYC) rules and regulations, for example, implementing an anti-money-laundering system that detects suspicious interaction-events, especially recognizing bots and groups of individuals or companies. Key rules vary from country to country. In Australia, the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 (AML/CTF Act) gives effect to KYC laws. The Anti-Money Laundering and Counter-Terrorism Financing Rules Instrument 2007 provides guidance for applying the powers and requirements of the Act. Compliance is governed by the Australian Government agency, the Australian Transaction Reports and Analysis Centre (AUSTRAC), established in 1989.
  • risk is a trigger for marketing activities, e.g., a user may be added to an audience, selected to receive content via an email or mail, or a bid price for an online ad auction to present content to the user may be calculated based on a risk score.
  • risk information is passed to the customer. This is expected to allow individuals to understand their own credit profile or how their behavior impacts credit: informing customers of their score, what they can do to improve their score, and what they should stop doing.
  • Some embodiments may compute a measure of contribution of input features (like events or types thereof), indicating a relative contribution of the feature compared to other features, e.g., by weighting deviation of an input feature from a mean or reference value by a measure of feature importance of that input feature of the model.
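  • For example, such a contribution measure might be computed as in the following sketch, where the mean values and importances are assumed inputs (e.g., from a trained model):

```python
# Weight each feature's deviation from a reference (mean) value by that
# feature's importance in the model, then normalize.
import numpy as np


def feature_contributions(x, feature_means, importances):
    """Relative contribution of each input feature to a risk score."""
    x, mu, w = map(np.asarray, (x, feature_means, importances))
    raw = np.abs(x - mu) * w
    return raw / raw.sum() if raw.sum() else raw


contrib = feature_contributions(
    x=[5, 0.9, 2],                 # this customer's feature values
    feature_means=[1, 0.4, 2],     # population reference values
    importances=[0.2, 0.7, 0.1],   # e.g., from a trained model
)
print(contrib)                     # first two features dominate
```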
  • Models may be trained with various, model-appropriate, training algorithms, including Baum-Welch, gradient descent, and the like.
  • some or all of the weights or biases of a recurrent neural network described herein may be calculated by executing a machine learning algorithm on a training set of historical customer journeys. Some embodiments may execute a gradient descent optimization to reduce the error rate and select appropriate weighting and bias values. In some cases, a predictive model (e.g., a vector of weights) may be calculated as a batch process run periodically.
  • Some embodiments may construct the model by, for example, assigning randomly selected weights or biases; calculating an error amount with which the model describes the historical data and a rate of change in that error as a function of the weights in the model in the vicinity of the current weights (e.g., a derivative, or local slope); and incrementing the weights in a downward (or error-reducing) direction.
  • these steps may be iteratively repeated until a change in error between iterations is less than a threshold amount, indicating at least a local minimum, if not a global minimum.
  • some embodiments may repeat the gradient descent optimization with multiple initial random values to confirm that iterations converge on a likely global minimum error.
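  • A generic sketch of this training loop (gradient descent on a squared-error objective, with several random restarts) is shown below; it is illustrative only and not the disclosed trainer.

```python
# Fit a linear risk model by gradient descent, restarting from several
# random initializations and keeping the best local minimum found.
import numpy as np


def fit_linear_risk_model(X, y, lr=0.01, tol=1e-8, max_iter=10_000,
                          restarts=5, seed=0):
    rng = np.random.default_rng(seed)
    best_w, best_err = None, np.inf
    for _ in range(restarts):
        w = rng.normal(size=X.shape[1])                  # random initial weights
        prev_err = np.inf
        for _ in range(max_iter):
            resid = X @ w - y
            err = float(resid @ resid) / len(y)
            if abs(prev_err - err) < tol:                # converged locally
                break
            w -= lr * (2.0 / len(y)) * (X.T @ resid)     # step down the gradient
            prev_err = err
        if err < best_err:
            best_w, best_err = w.copy(), err
    return best_w, best_err


X = np.c_[np.ones(100), np.random.default_rng(1).normal(size=(100, 2))]
y = X @ np.array([0.5, 2.0, -1.0])
print(fit_linear_risk_model(X, y))
```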
  • Other embodiments may iteratively adjust other machine learning models to reduce the error function, e.g., with a greedy algorithm that optimizes for the current iteration.
  • the resulting trained model, e.g., a vector of weights or biases, may be stored in memory and later retrieved for application in new calculations of risk scores.
  • cyclic loops in the network may be unrolled during training.
  • Some embodiments may execute a Hidden Markov Model.
  • each hidden state may be mapped to a corresponding risk.
  • the model may be trained with the Baum-Welch algorithm, and the risk may be inferred with the Viterbi algorithm.
  • a subset of the training set may be withheld in each of several iterations of training the model to cross validate the model.
  • the model may be trained periodically, e.g., monthly, in advance of use of the model.
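  • The sketch below uses the hmmlearn package (an assumption; any HMM library would do) to fit a Gaussian HMM with Baum-Welch and decode hidden states with Viterbi, after which each hidden state can be mapped to a nominal risk level:

```python
# Baum-Welch training (GaussianHMM.fit) and Viterbi decoding (predict) on a
# toy 1-D observation sequence: a calm regime followed by a volatile one.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0.0, 0.3, 200),
                      rng.normal(2.0, 1.0, 200)]).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100,
                    random_state=0)
model.fit(obs)                       # Baum-Welch (EM) training
hidden_states = model.predict(obs)   # Viterbi decoding

# Map each hidden state to a nominal risk via its mean observation.
risk_by_state = {s: float(model.means_[s, 0]) for s in range(2)}
print(hidden_states[:5], hidden_states[-5:], risk_by_state)
```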
  • the above techniques may be implemented in a computing environment 10 shown in figure 1, for example, with the illustrated continuous stochastic process controller 12, in some cases executing a process described below with reference to figure 2, operating, for example, on data models like those described below with reference to figure 3, on computers like those described with reference to figure 4.
  • computing environment 10 may be hosted by different entities, for instance in remote datacenters that communicate via the Internet or other networks, or in some embodiments, some or all of the components of the computing environment 10 may be co-located within a network of a single entity, for instance co-located in a single datacenter.
  • the computing environment 10 and the components thereof may be implemented as a monolithic application, for instance, with different illustrated components implemented as different software modules or processes that communicate with one another, for instance via function calls, or in some cases, some or all of the components may be implemented as different processes executing concurrently on a single computing device.
  • some or all of the illustrated components may be implemented as distinct services executing on different network hosts that communicate with one another via messages exchanged via network stacks of the respective hosts, for instance, according to application program interfaces of each of the distinct services.
  • some or all of these services may be replicated, for instance, behind load balancers, to afford a relatively scalable architecture, in some cases, with elastic scaling that automatically spins up or down new instances based on load.
  • Some embodiments implement a service-oriented architecture, such as a microservices architecture, with different services hosted on virtual machines or containers executing images corresponding to different services, in some cases with some services implemented in a serverless architecture, for instance, as lambda functions.
  • the controller 12 is configured to train a risk scoring machine learning model based upon historical interaction-event records 14 and then use the trained model to characterize risk as a continuous stochastic variable that is updated as current events are received via the event streams 17.
  • the computing environment 10 includes various interaction-event record data sets 14, an exogenous event repository 16, various event streams 17, and various action-channel servers 18.
  • the controller 12 may train and apply various machine learning models to inputs from the components 14, 16, and 17 to effectuate various actions implemented via the action-channel servers 18.
  • interaction-event records reflecting previous customer risk journeys may be obtained from the interaction-event record repositories 14.
  • Three repositories are shown, but some embodiments may include substantially more. Examples of such repositories include historical process logs, customer relationship management databases, credit reports, insurance claims, records with fraudulent transactions, and the like.
  • each record may be time-series of events for one of a relatively large number of independent entities for which actions are selected to influence behavior or responsive to predicted behavior, such as of different people in a population, or in some embodiments, the entity may be non-human, for instance, a state of a robot, a manufacturing process, a market, or a datacenter’s HVAC systems.
  • the number of entities for which interaction-event records are obtained may be more than 1000, more than 10,000, more than 100,000, and in many commercially-relevant use cases, more than a million.
  • each of the interaction-event records may include a timeseries of events experienced by the corresponding subject entity, such as person, robot, industrial process, or datacenter.
  • the events may include actions taken by a system controlled by the controller 12 at the direction of the controller 12 based upon a trained model thereof. Examples include the examples described below with reference to action-channel servers 18. Some examples further include directing a thruster in a particular direction and applying a particular amount of thrust for a duration of time in the example of a subsea robot.
  • examples include extending credit, offering subsidized hardware to support a subscription, offering insurance, sending an email, causing an advertisement exposure, offering a voucher, offering a seat on an airplane for a specific leg of travel, the assignment of finite resources, sending a coupon or discount offer, calling a person, sending a text message, adjusting a price for a particular person, adding a task to a customer-relationship management (CRM) system, or the like in the example of members of a population to be influenced.
  • examples include applying a particular set point for temperature or humidity for some duration of time, setting a fan speed for some duration of time, adjusting a balance between external and internal air recirculation, and the like.
  • the events may include actions taken by nonhuman subjects, for instance, changing a process setpoint, actuating a thruster in a particular direction for a particular duration, or undertaking a computing load in a datacenter for some duration.
  • the events may include actions taken by different subjects, for instance, people navigating to a website of a merchant, calling a help center of a merchant, contacting a salesperson of a merchant, clicking on an advertisement of a merchant, engaging with a native application of a merchant, physically visiting a storefront of a merchant, being exposed to an advertisement, submitting a review classified as having a particular sentiment among a plurality of different types of sentiments, or the like.
  • the events may further include exogenous events, which are events that are not caused by the controller 12 or the subject entity to which a record pertains, but to which the subject entity is exposed or potentially exposed. Examples include phenomena like the weather, ocean currents, and contact by sea creatures in the example of a subsea robot. Other examples include phenomena like recessions, changes in interest rates, and other macro-economic phenomena, along with cultural phenomena, like sporting events, holidays, the Olympics, elections, and the like in the example of members of a human population to be influenced.
  • the exogenous events are stochastic, and some embodiments may associate with exogenous events an estimated or known probability distribution, like likelihoods of occurring within threshold durations of time. Records of such events may be obtained from repository 16 in some embodiments.
  • each interaction-event record may correspond to a different subject entity, such as a different robot, person, or datacenter.
  • each interaction-event record may have associated therewith a unique identifier of the subject, in some cases a pseudonymous identifier.
  • each interaction-event record may further include attributes of the subject, like a maximum amount of thrust available in a subsea robot, a range of thruster angles available in such a robot, drag of the robot, and inertial properties of a tether.
  • the attributes may include attributes of people, like psychometric or demographic attributes, like age, gender, geolocation of residence, geolocation of a workplace, income, number and age of children, whether they are married, and the like.
  • the attributes may include attributes of a datacenter, for instance, a cooling capacity, an inventory of HVAC equipment therein, a volumetric flow rate maximum for fans, and the like. In some cases, such attributes may include values indicating transient responses to stimulus as well.
  • each interaction-event record or records in the repository 16 may further include a timeseries of events experienced by the subject-entity or potentially experienced by the subject entity, such as exogenous events that may or may not have impinged upon the subject entity.
  • the time series of events are stored in a format that indicates a sequence of the events, for instance with timestamps of each event, or some embodiments may indicate sequence but not time, which is not to suggest that any other described feature is limiting.
  • the number of events may be relatively large, such as more than 10, more than 100, or more than 1,000 on average for the collection of subject entities for which interaction-event records are stored in the repository 14.
  • events may be associated with a confidence score indicative of a confidence that the event occurred or was experienced by the respective subject entity.
  • events may have other attributes, like an intensity of the events, a monetary value of the events, a dwell time on a website, a value indicating whether a person clicks through to an advertisement, and the like for use cases in which humans are the subject.
  • events for robots or datacenters may include, for example, a thermal load applied in a datacenter, an amount of current experienced by a subsea robot, and the like.
  • the events may be arranged in an ontology or a hierarchical taxonomy.
  • a value mapping the event to an entry in a taxonomy may indicate whether the event is an action taken by the subject-entity being controlled, an exogenous event, an event to be avoided and for which risk is to be calculated, or an act upon the subject-entity at the direction of the controller or other system to be controlled by the controller.
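  • One hedged sketch of how such an interaction-event record and its taxonomy tags might be encoded in memory (all field and class names are illustrative, not from the disclosure):

```python
# An interaction-event record: subject identifier, subject attributes, and a
# sequence of events, each tagged with a taxonomy category, optional
# timestamp, confidence, and attributes.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional


class EventCategory(Enum):
    SUBJECT_ACTION = "subject_action"        # action taken by the subject entity
    CONTROLLER_ACTION = "controller_action"  # action directed by the controller
    EXOGENOUS = "exogenous"                  # e.g., weather, recession
    RISK_EVENT = "risk_event"                # reference event to be avoided


@dataclass
class Event:
    category: EventCategory
    label: str
    timestamp: Optional[datetime] = None     # sequence may be kept without time
    confidence: float = 1.0                  # confidence the event occurred
    attributes: dict = field(default_factory=dict)


@dataclass
class InteractionEventRecord:
    subject_id: str                          # possibly pseudonymous
    subject_attributes: dict = field(default_factory=dict)
    events: list = field(default_factory=list)


record = InteractionEventRecord(
    subject_id="cust-001",
    subject_attributes={"age": 42, "region": "TX"},
    events=[
        Event(EventCategory.SUBJECT_ACTION, "website_visit",
              datetime(2019, 7, 1), attributes={"dwell_seconds": 75}),
        Event(EventCategory.RISK_EVENT, "missed_payment",
              datetime(2019, 8, 3), confidence=0.98),
    ],
)
print(len(record.events), record.events[-1].category)
```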
  • events corresponding to actions by the subject-entity may have attributes indicating whether the respective event is advancing a goal, such as a goal reflected in an objective function composed by a developer to effectuate the goal, like reducing or minimizing risk of bad outcomes, maximizing likelihood of good outcomes, or a net result based on a combination of both.
  • Examples include an attribute of a robot movement event indicating whether the robot movement event resulted in the robot moving closer to a barrier to be avoided, an attribute of a consumer event indicating whether the consumer purchased a product, an attribute of a user event indicating whether the user engaged in some other desired or undesired behavior, like recycling or getting a health checkup, or an attribute of a datacenter event indicating whether a datacenter remains within a targeted band of temperature for a targeted duration of time.
  • the event attributes may not express whether such attributes are good or bad, but merely provide a metric that may be interpreted with, for example, a reward function of the controller 12.
  • the reward function may be a part of an objective function, or some embodiments may apply other types of objective functions that do not include a reward function, which is not to suggest that any other description is limiting.
  • the interaction-event records are or include the event timelines described in US patent application 15/456,059, titled BUSINESS ARTIFICIAL INTELLIGENCE MANAGEMENT ENGINE, the contents of which are hereby incorporated by reference.
  • interaction-event records may be synthetic interaction-event records.
  • some embodiments may include a generative adversarial network having a generator model and discriminator model trained to, respectively, generate synthetic (e.g., partially or entirely fake) interaction-event records and detect which records are synthetic.
  • the two models may be trained concurrently to attempt to defeat the other, and the end result generator model after training may be configured to output synthetic interaction event records that exhibit properties like real records.
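  • A heavily reduced sketch of that generator/discriminator pairing is shown below, applied to fixed-length numeric encodings of interaction-event records rather than full variable-length journeys (a simplifying assumption), using PyTorch:

```python
# Minimal GAN: the generator learns to emit synthetic record encodings, the
# discriminator learns to tell real encodings from synthetic ones.
import torch
import torch.nn as nn

RECORD_DIM, NOISE_DIM = 16, 8

generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, RECORD_DIM))
discriminator = nn.Sequential(nn.Linear(RECORD_DIM, 64), nn.LeakyReLU(0.2),
                              nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_records = torch.randn(512, RECORD_DIM)      # stand-in for encoded records

for step in range(200):
    real = real_records[torch.randint(0, 512, (64,))]
    fake = generator(torch.randn(64, NOISE_DIM))

    # Train discriminator to tell real records from synthetic ones.
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train generator to produce records the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

synthetic = generator(torch.randn(5, NOISE_DIM)).detach()
print(synthetic.shape)   # 5 synthetic record encodings
```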
  • state to which controller 12 is responsive may be ingested from various event streams 17 (some may be continuous feeds, and some may be batch feeds), which may take the form of a series of events like those described above.
  • event streams 17 may be a stream of readings from sensors of a robot, such as inertial measurement unit sensors or range finding sensors.
  • an event stream may be attributes of additional members of a population to be evaluated for risk or a new population to be evaluated for risk, or in some cases attributes of events in an event history thereof.
  • the event stream may be temperature or load assignment signals from a datacenter.
  • the ingested stream of data may have associated therewith identifiers like those described above distinguishing different subjects controlled and associated with the inbound events or attributes.
  • the stream may be a batch process, such as a set of records of subject entities that are members of a population for which a new campaign is being designed to influence the members of the population to engage in some targeted behavior.
  • the stream may be a real time stream, for instance, with data being supplied as it is obtained by, or in relation to, subject entities, for instance, in queries sent as the data is obtained to request a recommended responsive action in view of the new information.
  • the controller 12 may exercise control (which may be outcome determinative control or merely control over downstream inputs that merely influence a downstream process), via one or more of the action-channel servers 18.
  • examples of actions include setting a process parameter setpoint (like temperature, rate of acceleration, robot route, workload allocation among datacenters, and the like).
  • Examples of resulting actions may also include outputting a score indicative of whether a consumer should be sold or marketed a particular product like those described above.
  • Examples of resulting actions may also include configuring a parameter of a product, like an interest rate, amount of collateral, deposit, requirements for a guarantor, amount loaned, amount insured, etc.
  • Examples of resulting actions also include selecting or composing a message to be sent to a consumer based on the risk score of that consumer.
  • Other examples include various dashboards and reports indicative of risk scores for different types of consumers or different types of products, like population statistics, such as measures of central tendency or variation.
  • different servers 18 may communicate with various downstream systems, like loan or insurance underwriting computer systems, ERP systems, or CRM systems, to update those systems on predicted current state of risk, e.g., responsive to queries or by pushing updates.
  • the continuous stochastic process controller 12 includes a data classifier 20, a model trainer 22, a risk-scoring model 24, a risk-score repository 26, a control module 28, and an explainability module 30.
  • Classifying performed by the data classifier 20 may take a variety of different forms. Some embodiments may classify interaction-event records according to whether an undesirable outcome occurred as one of the events. In some cases, the undesirability of the outcome may be quantified as well, e.g., with a score. The results may be applied by the classifier 20 to the interaction-event records as, e.g., labels of a labeled training set used for training the model 24. In some embodiments, the classifier 20 may further compress the data that is ingested by removing duplicate records, such as records that are identical but for a subject identifier, and some embodiments provide the resulting filtered de-duplicated records to subsequent processing in a data pipeline like that shown.
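  • A small sketch of those two classifier responsibilities, labeling records by undesirable outcomes and de-duplicating records that differ only in their subject identifier, under the assumption that records are simple dictionaries:

```python
# Label each record with whether an undesirable outcome occurred (and its
# severity), and drop records identical but for the subject identifier.
import json


def label_and_deduplicate(records,
                          undesirable=frozenset({"default", "chargeback"})):
    seen, labeled = set(), []
    for rec in records:
        # Deduplicate on everything except the subject identifier.
        key = json.dumps({k: v for k, v in rec.items() if k != "subject_id"},
                         sort_keys=True)
        if key in seen:
            continue
        seen.add(key)
        outcomes = [e for e in rec["events"] if e["label"] in undesirable]
        rec = dict(rec, bad_outcome=bool(outcomes),
                   severity=sum(e.get("cost", 0.0) for e in outcomes))
        labeled.append(rec)
    return labeled


records = [
    {"subject_id": "a", "events": [{"label": "purchase"}]},
    {"subject_id": "b", "events": [{"label": "purchase"}]},        # duplicate
    {"subject_id": "c", "events": [{"label": "default", "cost": 120.0}]},
]
print(label_and_deduplicate(records))
```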
  • the model trainer 22 may be configured to train the risk scoring model 24, for example, as a batch process or with active learning.
  • the model and corresponding form of training may take any of the forms described herein, for instance.
  • the risk scoring model 24 may update risk scores in the risk-score repository 26 responsive to new events in event streams 17 as classified by classifier 20.
  • the risk scores may account for both a likelihood of an event and an expected cost or other measure of undesirability (e.g., downtime, amount a product is out of tolerance, loss of yield, etc.) of that event, e.g., with the product of the two values.
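  • In its simplest form, that combination is just the product of the two values, as in this illustrative sketch (the numbers are made up):

```python
# An event's risk score as the product of its estimated likelihood and its
# expected cost (or other measure of undesirability).
def event_risk_score(probability: float, expected_cost: float) -> float:
    return probability * expected_cost


# e.g., 3% chance of default on a loan with an expected loss of $4,000
print(event_risk_score(0.03, 4_000.0))   # 120.0
```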
  • the resulting risk scores may be accessed by control module 28 to effectuate various actions via the action-channel servers 18, like those actions described above.
  • the explainability module 30 may access the risk scores and trained model parameters from the model 24 to advise users, like consumers, those offering products, regulators or auditors, on the causes of the risk scores that are calculated. Examples include outputting one or more metrics associated with one or more events or types of events indicating a measure of a contribution of those events to a given risk score.
  • a user’s computer may query the controller 12 with a request to explain a score, and in response, the controller 12 may determine contributions of various features in that user's history that contribute to the score and magnitudes of those contributions. Such results may be presented to the user to inform the user on how to adjust their behavior to reduce risk in the future.
  • controller 12 may execute a process 50 shown in figure 2. In some embodiments, different subsets of this process 50 may be executed by the illustrated components of the controller 12. It should be emphasized, though, that embodiments of the process 50 are not limited to implementations with the architecture of figure 1, and that the architecture of figure 1 may execute processes different from that described with reference to figure 2, none of which is to suggest that any other description herein is limiting.
  • the process 50 and the other functionality described herein may be implemented with program code or other instructions stored on a tangible, non-transitory, machine-readable medium, such that when the instructions are executed by one or more processors (a term which, as used herein, refers to physical processors, e.g., implemented on a semiconductor device), the described functionality is effectuated.
  • the medium may be distributed, with different subsets of the instructions stored on different computing devices that effectuate those different subsets, an arrangement consistent with use of the singular term "medium" along with monolithic applications on a single device.
  • the described operations may be executed in a different order, some or all of the operations may be executed multiple times, operations may be executed concurrently with one another or multiple instances of the described process, additional operations may be inserted, operations may be omitted, operations may be executed serially, recursively, or the processes described may otherwise be varied, again none of which is to suggest that any other description herein is limiting.
  • the process 50 includes obtaining historical interaction-event records, as indicated by block 52, for example, with the above-described classifier 20 from the interaction-event record repositories 14. In some embodiments, this may further include obtaining exogenous event records from the repository 16 described above. Next, some embodiments may train a machine learning model on the historical interaction-event records, as indicated by block 54. In some embodiments, this may be performed by the above-described model trainer 22. Some embodiments may then obtain current interaction-event records, as indicated by block 56, again for example, with the classifier 20 from the repositories 14 or streams 17 described above.
  • Some embodiments may determine, based on at least some of the interaction-event records a set of event risk scores, as indicated by block 58. In some embodiments, this may be performed by the above-described risk-scoring model 24. Some embodiments may then store the sets of event-risk scores in memory, as indicated by block 60, for example, in the above-described risk-score repository 26.
  • Some embodiments may then determine various actions, for example, selecting among candidates, like launching or applying various campaigns or other programs to consumers, based on the sets of event-risk scores, as indicated by block 62. In some embodiments, this may be performed by the above-described control module 28.
  • Some embodiments may further determine a measure of contribution of events or types of events to the sets of event-risk scores, as indicated by block 64, for example with the above-described explainability module 30. Some embodiments may cause the measure of contribution to be presented to a user to instruct the user on how to modulate risk, as indicated by block 60.
  • Figure 3 depicts an example data model 400 used to analyze interaction-event records according to some embodiments.
  • one or more interaction-event records 402 may be analyzed by the model 24 to produce one or more results 406, which in some cases may be a risk score stored in repository 26 (which may be persistent or non-persistent memory).
  • the interaction-event records 402 may include previous purchase data, finance data (e.g., associated with consumer financing), demographic data (e.g., customer's age, income, zip code, and the like), CSR contact data (e.g., why, when, and how the customer has contacted a CSR), website access data (e.g., what type of information the customer obtains from the website), warranty data (e.g., warranty claim is indicative of certain customer activities), parts and service data, and marketing data (e.g., how the customer has been provided marketing), for example.
  • the data sets 402 may include other types of customer-related data.
  • the interaction-event records 402 may include previous purchase data 404(1), finance data (e.g., associated with consumer financing) 404(2), demographic data (e.g., customer's age, income, zip code, and the like) 404(3), CSR contact data (e.g., why, when, and how the customer has contacted a CSR) 404(4), website access data (e.g., what type of information the customer obtains from the website) 404(5), warranty data 404(6) (e.g., warranty claim is indicative of certain customer activities), and marketing data 404(M) (e.g., how the customer has been provided marketing), where M>0.
  • the data sets in the record 402 may include other types of customer-related data. These different types of data may be determined to make various contributions 408(1), 408(2), and 408(3) to an integrated value 406, like a risk score, by the presently described machine learning techniques.
  • controller 12 may perform an analysis of the data sets 402 and determine value indexes 404 based on the analysis and various contributions to an integrated value, like a risk score 406. As illustrated, events may occur in a sequence, with events occurring in a forward direction 412 and backward direction 410 relative to a given event.
  • Figure 4 is a diagram that illustrates an exemplary computing system 1000 in accordance with embodiments of the present technique.
  • Various portions of systems and methods described herein may include or be executed on one or more computer systems similar to computing system 1000. Further, processes and modules described herein may be executed by one or more processing systems similar to that of computing system 1000.
  • Computing system 1000 may include one or more processors (e.g., processors 1010a-1010n) coupled to system memory 1020, an input/output I/O device interface 1030, and a network interface 1040 via an input/output (I/O) interface 1050.
  • a processor may include a single processor or a plurality of processors (e.g., distributed processors).
  • a processor may be any suitable processor capable of executing or otherwise performing instructions.
  • a processor may include a central processing unit (CPU) that carries out program instructions to perform the arithmetical, logical, and input/output operations of computing system 1000.
  • a processor may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions.
  • a processor may include a programmable processor.
  • a processor may include general or special purpose microprocessors.
  • a processor may receive instructions and data from a memory (e.g., system memory 1020).
  • Computing system 1000 may be a uniprocessor system including one processor (e.g., processor 1010a), or a multi-processor system including any number of suitable processors (e.g., 1010a-1010n). Multiple processors may be employed to provide for parallel or sequential execution of one or more portions of the techniques described herein.
  • Processes, such as logic flows, described herein may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating corresponding output. Processes described herein may be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • Computing system 1000 may include a plurality of computing devices (e.g., distributed computer systems) to implement various processing functions.
  • I/O device interface 1030 may provide an interface for connection of one or more I/O devices 1060 to computer system 1000.
  • I/O devices may include devices that receive input (e.g., from a user) or output information (e.g., to a user).
  • I/O devices 1060 may include, for example, graphical user interface presented on displays (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor), pointing devices (e.g., a computer mouse or trackball), keyboards, keypads, touchpads, scanning devices, voice recognition devices, gesture recognition devices, printers, audio speakers, microphones, cameras, or the like.
  • I/O devices 1060 may be connected to computer system 1000 through a wired or wireless connection.
  • I/O devices 1060 may be connected to computer system 1000 from a remote location.
  • I/O devices 1060 located on a remote computer system, for example, may be connected to computer system 1000 via a network and network interface 1040.
  • Network interface 1040 may include a network adapter that provides for connection of computer system 1000 to a network.
  • Network interface 1040 may facilitate data exchange between computer system 1000 and other devices connected to the network.
  • Network interface 1040 may support wired or wireless communication.
  • the network may include an electronic communication network, such as the Internet, a local area network (LAN), a wide area network (WAN), a cellular communications network, or the like.
  • System memory 1020 may be configured to store program instructions 1100 or data 1110.
  • Program instructions 1100 may be executable by a processor (e.g., one or more of processors 1010a-1010n) to implement one or more embodiments of the present techniques.
  • Instructions 1100 may include modules of computer program instructions for implementing one or more techniques described herein with regard to various processing modules.
  • Program instructions may include a computer program (which in certain forms is known as a program, software, software application, script, or code).
  • a computer program may be written in a programming language, including compiled or interpreted languages, or declarative or procedural languages.
  • a computer program may include a unit suitable for use in a computing environment, including as a stand-alone program, a module, a component, or a subroutine.
  • a computer program may or may not correspond to a file in a file system.
  • a program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
  • a computer program may be deployed to be executed on one or more computer processors located locally at one site or distributed across multiple remote sites and interconnected by a communication network.
  • System memory 1020 may include a tangible program carrier having program instructions stored thereon.
  • a tangible program carrier may include a non-transitory computer readable storage medium.
  • a non-transitory computer readable storage medium may include a machine-readable storage device, a machine-readable storage substrate, a memory device, or any combination thereof.
  • Non-transitory computer readable storage medium may include non-volatile memory (e.g., flash memory, ROM, PROM, EPROM, EEPROM memory), volatile memory (e.g., random access memory (RAM), static random access memory (SRAM), synchronous dynamic RAM (SDRAM)), bulk storage memory (e.g., CD-ROM and/or DVD-ROM, hard-drives), or the like.
  • System memory 1020 may include a non-transitory computer readable storage medium that may have program instructions stored thereon that are executable by a computer processor (e.g., one or more of processors 1010a-1010n) to effectuate the subject matter and the functional operations described herein.
  • I/O interface 1050 may be configured to coordinate I/O traffic between processors 1010a-1010n, system memory 1020, network interface 1040, I/O devices 1060, and/or other peripheral devices. I/O interface 1050 may perform protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory 1020) into a format suitable for use by another component (e.g., processors 1010a-1010n). I/O interface 1050 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard.
  • Embodiments of the techniques described herein may be implemented using a single instance of computer system 1000 or multiple computer systems 1000 configured to host different portions or instances of embodiments. Multiple computer systems 1000 may provide for parallel or sequential processing/execution of one or more portions of the techniques described herein.
  • Computer system 1000 is merely illustrative and is not intended to limit the scope of the techniques described herein.
  • Computer system 1000 may include any combination of devices or software that may perform or otherwise provide for the performance of the techniques described herein.
  • computer system 1000 may include or be a combination of a cloud-computing system, a data center, a server rack, a server, a virtual server, a desktop computer, a laptop computer, a tablet computer, a server device, a client device, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a vehicle-mounted computer, or a Global Positioning System (GPS), or the like.
  • Computer system 1000 may also be connected to other devices that are not illustrated, or may operate as a stand-alone system.
  • the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components.
  • the functionality of some of the illustrated components may not be provided or other additional functionality may be available.
  • instructions stored on a computer-accessible medium separate from computer system 1000 may be transmitted to computer system 1000 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network or a wireless link.
  • Various embodiments may further include receiving, sending, or storing instructions or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present techniques may be practiced with other computer system configurations.
  • illustrated components are depicted as discrete functional blocks, but embodiments are not limited to systems in which the functionality described herein is organized as illustrated.
  • the functionality provided by each of the components may be provided by software or hardware modules that are differently organized than is presently depicted, for example such software or hardware may be intermingled, conjoined, replicated, broken up, distributed (e.g. within a data center or geographically), or otherwise differently organized.
  • the functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine readable medium.
  • third party content delivery networks may host some or all of the information conveyed over networks, in which case, to the extent information (e.g., content) is said to be supplied or otherwise provided, the information may be provided by sending instructions to retrieve that information from a content delivery network.
  • the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must).
  • the words "include", "including", and "includes" and the like mean including, but not limited to.
  • the singular forms "a," "an," and "the" include plural referents unless the content explicitly indicates otherwise.
  • Statements in which a plurality of attributes or functions are mapped to a plurality of objects encompasses both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the attributes or functions (e.g., both all processors each performing steps A-D, and a case in which processor 1 performs step A, processor 2 performs step B and part of step C, and processor 3 performs part of step C and step D), unless otherwise indicated.
  • reference to“a computer system” performing step A and“the computer system” performing step B can include the same computing device within the computer system performing both steps or different computing devices within the computer system performing steps A and B.
  • statements that one value or action is“based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors.
  • statements that“each” instance of some collection have some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every.
  • a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, with a computer system, one or more of a plurality of datasets having a plurality of interaction-event records, wherein: the interaction-event records describe respective interaction events, the interaction events are interactions in which a first entity has experiences with, or obtains other information pertaining to, a second entity, and at least some of the interaction-event records are associated with respective risks by which sequences of at least some of the interaction events relative to one another are ascertainable; determining, with the computer system, based on at least some of the interaction-event records, sets of event-risk scores, the sets corresponding to at least some of the interaction events, wherein: at least some respective event-risk scores are indicative of an effect of a respective risk ascribed by the first entity to a respective aspect of the second entity; and at least some respective event-risk scores are based on both:
  • each of at least some of the sets of event-risk scores includes a plurality of different scores corresponding to different requests of the second entity to which the first entity ascribes respective risks.
  • the request includes a request of a collateralized offering, a request of a new offering, a modification of an existing offering, obtaining a product at a cost that is subsidized in exchange for a promise of future behavior, or insurance.
  • different offerings are associated with different sets of risk profiles.
  • each respective set includes both: a respective weight corresponding to a contribution of the respective interaction event toward influencing the first entity to cause the subsequent event, and a respective risk index based on a product of the respective weight and a risk ascribed to the subsequent event by the second entity.
  • the events include both transaction and non-transaction events; the events include both direct and indirect interactions; the events include factors, demographic, or macro-economic events; the events include credit score; the events include social media information; the events include the timing
  • the interaction-event records are obtained from diverse data sets; or at least some of the event-risk scores are determined at least in part with a machine learning classifier.
  • the events include both transaction and non-transaction events; the events include both direct and indirect interactions; the events include factors, demographic, or macro-economic events; the events include credit score; the events include social media information; the interaction-event records are obtained from diverse data sets; and at least some of the event-risk scores are determined at least in part with a machine learning classifier.
  • the events include factors, demographic, or macro-economic events; the events include credit score; the events include social media information; and the events include Know Your Client rules; the interaction-event records are obtained from diverse data sets; or at least some of the event-risk scores are determined at least in part with a machine learning classifier.
  • determining sets of event-risk scores comprises: determining initial risks of at least one type of score in the sets of event-risk scores; and iteratively adjusting the at least one type of score with machine learning.
  • any one of embodiments 1-14, comprising: obtaining a designation of one of the events as a reference event; obtaining a risk ascribed to the reference event by the first entity; selecting a portion of an event timeline including a subset of the events among which is the reference event; determining, using a classifier, relative risks for at least some events in the subset; assigning a risk index to individual events among the subset; and setting or resetting parameters of one or more transactions with the second entity.
  • determining sets of event-risk scores comprises determining sets of event-risk scores with a dynamic Bayesian network.
  • determining sets of event-risk scores comprises determining sets of event-risk scores with a directed cyclic graph of perceptrons.
  • a computing system comprising: a logic subsystem including one or more processors; and a storage subsystem comprising a tangible, non-transitory storage device or devices, the storage subsystem comprising a machine-readable medium storing instructions executable by the logic subsystem to implement a fraud detection platform and, for a journey of interaction-events available in the fraud detection platform, extract or receive information from the journey of interaction-events regarding attributes related to an entity’s progress in the journey of interaction-events, the attributes provided by the journey of interaction-events to the fraud detection platform for each entity of a plurality of entities; for each entity of the plurality of entities, apply a classifying function to classify the entity’s progress in the journey of interaction-events as normal or as outlying based upon the attributes provided by the journey of interaction-events to the fraud detection platform, the classifying function being trained via machine learning; based at least in part on entity progress in the journey of interaction-events being classified as outlying, then take an action based upon classification as outlying; and based at least in part on entity progress in the journey of interaction-events being classified as normal, then take no action based upon classification as outlying (a hedged code sketch of this classification step follows this list).
  • a method comprising: the operations of any one of embodiments 1-19.
  • a system comprising: one or more processors; and memory storing instructions that when executed by the processors cause the processors to effectuate operations comprising: the operations of any one of embodiments 1-19.
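
The fraud-detection embodiment above extracts attributes from an entity's journey of interaction-events and classifies the entity's progress as normal or outlying with a classifying function trained via machine learning. The following is a minimal sketch of that classification step, not the claimed implementation: scikit-learn's IsolationForest stands in for whatever classifier the platform would actually train, and the attribute vector, entity identifiers, and flag_for_review action are hypothetical.

```python
# Minimal sketch of the outlier-classification embodiment above.
# Assumptions not taken from the specification: journey attributes are summarized
# as fixed-length numeric feature vectors, and IsolationForest stands in for the
# machine-learning-trained classifying function.
from dataclasses import dataclass
from typing import List

import numpy as np
from sklearn.ensemble import IsolationForest


@dataclass
class JourneyAttributes:
    entity_id: str
    features: List[float]  # e.g., event counts, gaps between events, amounts (hypothetical)


def train_classifying_function(histories: List[JourneyAttributes]) -> IsolationForest:
    """Fit the classifying function on attributes extracted from past journeys."""
    X = np.array([h.features for h in histories])
    return IsolationForest(random_state=0).fit(X)


def classify_and_act(model: IsolationForest, journey: JourneyAttributes) -> str:
    """Classify an entity's journey progress as normal or outlying and act on it."""
    label = model.predict(np.array([journey.features]))[0]  # 1 = normal, -1 = outlying
    if label == -1:
        return f"flag_for_review:{journey.entity_id}"  # hypothetical action on an outlying journey
    return f"no_action:{journey.entity_id}"


# Example usage with made-up attribute vectors.
history = [JourneyAttributes(f"e{i}", [i % 3, 1.0, 0.1 * i]) for i in range(50)]
model = train_classifying_function(history)
print(classify_and_act(model, JourneyAttributes("e_new", [40.0, 9.0, 7.5])))
```

The attribute extraction, the choice of classifier, and the action taken on an outlying classification are all left open by the embodiments above; an isolation forest is only one plausible choice for unsupervised outlier detection over journey attributes.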

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

A process is disclosed, including: obtaining interaction-event records; determining, based on at least some of the interaction-event records, sets of event-risk scores, as follows: at least some respective event-risk scores are indicative of a respective effective risk ascribed by a first entity to a respective aspect of a second entity; and at least some respective event-risk scores are based on both of the following: respective contributions of respective corresponding events to a subsequent event, and a risk ascribed to a subsequent event; and storing the sets of event-risk scores in memory.
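
To make the process concrete, the sketch below computes one set of event-risk scores for a single subsequent event: each earlier interaction event receives a contribution weight, and its score is that weight scaled by the risk ascribed to the subsequent event. This is a minimal illustration under stated assumptions, not the claimed method: exponential-decay weights stand in for the learned attribution models contemplated in the embodiments (e.g., a dynamic Bayesian network or a directed cyclic graph of perceptrons), and the event names, half-life, and risk value are hypothetical.

```python
# Minimal sketch of the scoring process summarized in the abstract.
# Assumption not taken from the specification: a prior event's contribution to a
# subsequent event decays exponentially with the time between them.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class InteractionEvent:
    event_id: str
    timestamp: float  # ordering information by which the sequence is ascertainable


def event_risk_scores(
    events: List[InteractionEvent],
    subsequent: InteractionEvent,
    risk_of_subsequent: float,
    half_life: float = 30.0,
) -> Dict[str, float]:
    """Return one set of event-risk scores: contribution weight times ascribed risk."""
    prior = [e for e in events if e.timestamp < subsequent.timestamp]
    decay = {e.event_id: 0.5 ** ((subsequent.timestamp - e.timestamp) / half_life) for e in prior}
    total = sum(decay.values()) or 1.0
    # Normalize the contributions, then scale by the risk ascribed to the subsequent event.
    return {eid: (w / total) * risk_of_subsequent for eid, w in decay.items()}


# "Storing the sets of event-risk scores in memory" is modeled here as a dict keyed
# by the subsequent event's identifier.
events = [InteractionEvent("inquiry", 0.0), InteractionEvent("application", 20.0)]
approval = InteractionEvent("approval", 45.0)
scores_in_memory: Dict[str, Dict[str, float]] = {
    approval.event_id: event_risk_scores(events, approval, risk_of_subsequent=0.12)
}
print(scores_in_memory)
```

In a fuller implementation the contribution weights would come from a trained model rather than a fixed decay, and each interaction event would accumulate one score per type of subsequent event or request to which the first entity ascribes a risk.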
PCT/US2019/041733 2018-07-16 2019-07-12 Surveillance et commande de processus stochastiques continus sur la base d'événements dans des données de séries chronologiques WO2020018392A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862698769P 2018-07-16 2018-07-16
US62/698,769 2018-07-16
US16/127,933 US10402723B1 (en) 2018-09-11 2018-09-11 Multi-stage machine-learning models to control path-dependent processes
US16/127,933 2018-09-11

Publications (1)

Publication Number Publication Date
WO2020018392A1 (fr) 2020-01-23

Family

ID=69163721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/041733 WO2020018392A1 (fr) 2018-07-16 2019-07-12 Surveillance et commande de processus stochastiques continus sur la base d'événements dans des données de séries chronologiques

Country Status (1)

Country Link
WO (1) WO2020018392A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100094791A1 (en) * 2008-06-12 2010-04-15 Tom Miltonberger Fraud Detection and Analysis
US8949164B1 (en) * 2011-09-08 2015-02-03 George O. Mohler Event forecasting system
WO2014075108A2 (fr) * 2012-11-09 2014-05-15 The Trustees Of Columbia University In The City Of New York Forecasting system using ensemble-based methods and machine learning
KR20160030996A (ko) * 2013-07-12 2016-03-21 Microsoft Technology Licensing, LLC Interactive segment extraction in computer-human interactive learning
US20160358090A1 (en) * 2015-06-08 2016-12-08 The Charles Stark Draper Laboratory, Inc. Method and system for obtaining and analyzing information from a plurality of sources

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11316750B1 (en) 2020-10-22 2022-04-26 Cisco Technology, Inc. Peer risk benchmarking using generative adversarial networks
US11533241B2 (en) 2020-10-22 2022-12-20 Cisco Technology, Inc. Peer risk benchmarking using generative adversarial networks
US11843515B2 (en) 2020-10-22 2023-12-12 Cisco Technology, Inc. Peer risk benchmarking using generative adversarial networks
CN113223392A (zh) * 2021-05-18 2021-08-06 信阳农林学院 Hybrid ensemble model for hourly PM2.5 concentration prediction

Similar Documents

Publication Publication Date Title
US11941691B2 (en) Dynamic business governance based on events
US11416896B2 (en) Customer journey management engine
US11810204B2 (en) Artificial intelligence transaction risk scoring and anomaly detection
US11715164B2 (en) Robotic process automation system for negotiation
US10861028B2 (en) Detecting and reducing bias (including discrimination) in an automated decision making process
US20210035152A1 (en) Predicting the effectiveness of a marketing campaign prior to deployment
US20200349641A1 (en) System and method for determining credit and issuing a business loan using tokens and machine learning
CA3118313A1 (fr) Methods and systems for improving machines and systems that automate execution of distributed ledger and other transactions in spot and forward markets for energy, compute, storage, and other resources
Ogwueleka et al. Neural network and classification approach in identifying customer behavior in the banking sector: A case study of an international bank
CA3177410A1 (fr) Market orchestration system for facilitating electronic marketplace transactions
WO2017205463A1 (fr) Systems and methods for providing user-specific results based on a test of a product or service
WO2020018392A1 (fr) Monitoring and controlling continuous stochastic processes based on events in time series data
Leventhal Predictive Analytics for Marketers: Using Data Mining for Business Advantage
US11544713B1 (en) Fraud detection using augmented analytics
Njeru Detection of Fraudulent Vehicle Insurance Claims Using Machine Learning
US20240161117A1 (en) Trigger-Based Electronic Fund Transfers
Vercosa et al. Impact of unusual features in credit scoring problem
Tanlamai et al. Identifying arbitrage opportunities in retail markets with artificial intelligence
Ezeji Artificial Intelligence for detecting and preventing procurement fraud
Akintunde et al. An Ensemble-based Shill Bidding Prediction Model in Car* Auction System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19837508

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19837508

Country of ref document: EP

Kind code of ref document: A1