US20180046475A1 - Detecting scripted or otherwise anomalous interactions with social media platform - Google Patents
- Publication number
- US20180046475A1 (application Ser. No. 15/675,319)
- Authority
- United States (US)
- Prior art keywords
- api call
- call sequence
- api
- account creation
- account
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/552—Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44589—Program code verification, e.g. Java bytecode verification, proof-carrying code
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9032—Query formulation
- G06F16/90324—Query formulation using system suggestions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/90335—Query processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44552—Conflict resolution, i.e. enabling coexistence of conflicting executables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1433—Vulnerability analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Abstract
Description
- This application is a Non-Provisional of, and claims priority to, U.S. Provisional Application No. 62/373,946, titled “Data Science and Machine Learning at Scale”, filed on Aug. 11, 2016, the disclosure of which is incorporated herein by reference.
- Due to the popularity of social media, many people are interacting with, and consuming content from, a variety of social networks that are available electronically. Some social media platforms require that a person go through a formal signup process to register themselves before they can engage with the services or other functionality offered. For example, the person can create an account for that particular social media platform.
- As with many other online activities, social media platforms are being subjected to different forms of misuse. For example, spammers are interested in spreading their message or other content to as many people as possible, and they have been known to use software (e.g., a script) to try to automatically create accounts with a social media platform. Such illicit automated creation of numerous accounts within a short time period can be an attempt to maximize the number of people that the spamming campaign reaches before the platform can act against it and terminate or deactivate the accounts. Moreover, spam messaging is not the only undesirable consequence that can follow when social media accounts are created by scripts. Rather, such fake accounts are often associated with harassment or abuse of other participants, or with other violations of the policies applicable to the social media platform.
- In a first aspect, a method includes receiving, in a computer system, an account creation request for a social media platform. The account creation request can be created and sent to the computer system using a frontend component. The method includes receiving, from the frontend component, an application programming interface (API) call sequence associated with the account creation request. The API call sequence can reflect API calls registered by the frontend component in connection with creation of the account creation request, and timings of the registered API calls. The method includes applying an API call sequence model to the received API call sequence. The API call sequence model can be generated by providing training API call sequences to a machine learning component. The method includes taking at least one action in response to the application of the API call sequence model indicating that the received API call sequence is anomalous. The action can be taken with regard to the account creation request, or with regard to an account created in response to the account creation request.
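The first-aspect method is not tied to any particular implementation, but its control flow can be sketched as follows. All names here are hypothetical, and the trained API call sequence model is reduced to a stand-in predicate:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# An API call as registered by the frontend: (call name, timestamp in seconds).
ApiCall = Tuple[str, float]

@dataclass
class AccountCreationRequest:
    username: str                 # information entered during signup
    call_sequence: List[ApiCall]  # API calls and their timings

def handle_request(request: AccountCreationRequest,
                   is_anomalous: Callable[[List[ApiCall]], bool]) -> str:
    """Apply a trained API call sequence model (here, any predicate) and
    take an action on the account creation request accordingly."""
    if is_anomalous(request.call_sequence):
        return "challenge"        # e.g. require validation before creating
    return "create_account"
```

For instance, with a toy model that flags empty sequences, `handle_request(AccountCreationRequest("alice", []), lambda seq: not seq)` yields `"challenge"`.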
- Implementations can include any or all of the following features. Applying the API call sequence model can include an evaluation of whether the received API call sequence is missing a particular API call of the frontend component. Some of the training API call sequences can correspond to valid account creation requests and others of the training API call sequences can correspond to invalid account creation requests. The particular API call can be identified for use in the evaluation based on the particular API call having a greater frequency of occurrence for the valid account creation requests than for the invalid account creation requests. The method can further include, in response to determining that the received API call sequence is missing the particular API call of the frontend component, evaluating whether the received API call sequence is missing another particular API call of the frontend component. Applying the API call sequence model can include evaluating the timing of the API calls. Evaluating the timing of the API calls can include determining whether a temporal separation of the API calls is less than a threshold. Evaluating the timing of the API calls can include determining whether a temporal separation of the API calls is randomized. Applying the API call sequence model can include counting the API calls in the received API call sequence.
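The missing-call evaluation described above can be illustrated with a toy sketch: pick the call whose frequency of occurrence among valid training sequences most exceeds its frequency among invalid ones, then check whether a received sequence lacks it. Names and data here are illustrative, not taken from the patent:

```python
from collections import Counter
from typing import List

def call_frequencies(sequences: List[List[str]]) -> dict:
    """Fraction of sequences in which each API call name occurs."""
    counts = Counter()
    for seq in sequences:
        counts.update(set(seq))  # count each call at most once per sequence
    return {name: n / len(sequences) for name, n in counts.items()}

def discriminative_call(valid: List[List[str]], invalid: List[List[str]]) -> str:
    """Pick the call whose frequency among valid signups most exceeds its
    frequency among invalid ones."""
    fv, fi = call_frequencies(valid), call_frequencies(invalid)
    return max(fv, key=lambda name: fv[name] - fi.get(name, 0.0))

def missing_discriminative_call(seq: List[str],
                                valid: List[List[str]],
                                invalid: List[List[str]]) -> bool:
    """True when the received sequence lacks the discriminative call."""
    return discriminative_call(valid, invalid) not in seq
```

A fuller implementation could repeat the check for the next-most-discriminative call when the first one is missing, as the text suggests.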
- Multiple API call sequences can be received, the multiple API call sequences corresponding to respective account creation requests. The method can further include storing the received API call sequences in a log, and evaluating the log to determine whether any of the received API call sequences are essentially identical to each other. The account can be created in response to the account creation request, and the method can further include: receiving engagement data regarding the account, the engagement data reflecting use of the frontend component to interact with the social media platform; applying an engagement model to the received engagement data, the engagement model generated by providing training engagement data to the machine learning component; and in response to the application of the engagement model indicating that the use of the frontend component is anomalous, taking at least one action with regard to the account.
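One plausible way to evaluate the log for essentially identical sequences is to normalize each sequence to its call names plus coarsely rounded inter-call gaps and group by that key, so near-identical scripted replays collapse together. This is an illustrative sketch, not the patented method:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

ApiCall = Tuple[str, float]  # (call name, timestamp in seconds)

def sequence_key(seq: List[ApiCall], resolution: float = 0.1) -> tuple:
    """Normalize a sequence to its call names plus inter-call gaps rounded
    to `resolution` seconds, so near-identical replays share one key."""
    names = tuple(name for name, _ in seq)
    gaps = tuple(round((seq[i + 1][1] - seq[i][1]) / resolution)
                 for i in range(len(seq) - 1))
    return (names, gaps)

def find_duplicates(log: List[List[ApiCall]]) -> List[List[int]]:
    """Groups of log indices whose sequences are essentially identical."""
    groups: Dict[tuple, List[int]] = defaultdict(list)
    for i, seq in enumerate(log):
        groups[sequence_key(seq)].append(i)
    return [idxs for idxs in groups.values() if len(idxs) > 1]
```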
- Applying the API call sequence model can include determining a score for the received API call sequence. The application of the API call sequence model can indicate that the received API call sequence is anomalous in response to the determined score not meeting a threshold for account creation normalcy. The connection between the API calls and the creation of the account creation request can include that at least one of the API calls was registered by the frontend component during a predefined period of time after the account generation request was generated. Taking the at least one action can include attempting to contact a person associated with the account generation request, and determining whether the account generation request was generated by a script interacting with the frontend component.
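The score-and-threshold decision might be sketched like this, with a toy coverage score standing in for the model's output; both the score and the threshold are invented for illustration:

```python
from typing import Callable, List, Set, Tuple

ApiCall = Tuple[str, float]  # (call name, timestamp in seconds)

def coverage_score(seq: List[ApiCall], expected_calls: Set[str]) -> float:
    """Toy stand-in for a model score: fraction of calls expected during a
    human signup that actually appear in the received sequence."""
    names = {name for name, _ in seq}
    return len(names & expected_calls) / len(expected_calls)

def is_anomalous(seq: List[ApiCall],
                 score_fn: Callable[[List[ApiCall]], float],
                 normalcy_threshold: float) -> bool:
    """Anomalous when the score does not meet the normalcy threshold."""
    return score_fn(seq) < normalcy_threshold
```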
- The method can further include receiving additional training API call sequences after applying the API call sequence model to the received API call sequence; generating an updated API call sequence model by providing the additional training API call sequences to the machine learning component; receiving another account creation request for the social media platform after generating the updated API call sequence model; applying the updated API call sequence model to the received other API call sequence; and in response to the application of the updated API call sequence model indicating that the received other API call sequence is anomalous, taking at least one action with regard to the other account creation request, or with regard to an other account created in response to the other account creation request. The method can further include applying the updated API call sequence model to a previous account creation request, including at least the received account creation request.
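The backfill over previously analyzed requests could be orchestrated roughly as follows (hypothetical names; the updated model is again reduced to a predicate):

```python
from typing import Callable, List

Sequence = List[str]                # ordered API call names for one request
Model = Callable[[Sequence], bool]  # returns True when a sequence is anomalous

def backfill(updated_model: Model, logged_sequences: List[Sequence]) -> List[int]:
    """Re-evaluate previously analyzed requests with the updated model and
    return the indices that the new model now flags as anomalous."""
    return [i for i, seq in enumerate(logged_sequences) if updated_model(seq)]
```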
- In a second aspect, a non-transitory computer-readable storage medium has stored therein instructions that when executed cause at least one processor to perform operations. The operations can include: receiving, in a computer system, an account creation request for a social media platform, the account creation request created and sent to the computer system using a frontend component; receiving, from the frontend component, an application programming interface (API) call sequence associated with the account creation request, the API call sequence reflecting API calls registered by the frontend component in connection with creation of the account creation request, and timings of the registered API calls; applying an API call sequence model to the received API call sequence, the API call sequence model generated by providing training API call sequences to a machine learning component; and in response to the application of the API call sequence model indicating that the received API call sequence is anomalous, taking at least one action with regard to the account creation request, or with regard to an account created in response to the account creation request.
- In a third aspect, a computer system includes an interface configured to receive an account creation request for a social media platform. The account creation request can be created and sent to the computer system using a frontend component. The interface can also be configured to receive an application programming interface (API) call sequence associated with the account creation request. The API call sequence can reflect API calls registered by the frontend component in connection with creation of the account creation request, and timings of the registered API calls. The system further includes a log in which the computer system records received API call sequences. The system further includes a bot configured to apply an API call sequence model to at least the received API call sequence recorded in the log. The API call sequence model can be generated by providing training API call sequences to a machine learning component. In response to the bot indicating that the received API call sequence is anomalous, the computer system can take at least one action with regard to the account creation request, or with regard to an account created in response to the account creation request.
- Implementations can include any or all of the following features. In applying the API call sequence model the bot can evaluate the timing of the API calls. In evaluating the timing of the API calls the bot can determine whether a temporal separation of the API calls is less than a threshold. In evaluating the timing of the API calls the bot can determine whether a temporal separation of the API calls is randomized.
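The bot's timing checks can be illustrated with a sketch that flags separations that are all nearly identical (as with a scripted fixed delay) or all below a plausible human reaction gap; the threshold values are invented for illustration:

```python
from typing import List

def temporal_separations(timestamps: List[float]) -> List[float]:
    """Gaps between consecutive API call timestamps (seconds)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def timing_anomalies(timestamps: List[float],
                     min_gap: float = 0.05,
                     uniform_tolerance: float = 1e-3) -> List[str]:
    """Flag separations that are all (nearly) identical, as with a scripted
    fixed delay, or all below a plausible human reaction gap. A statistical
    test could likewise flag separations that look deliberately randomized."""
    gaps = temporal_separations(timestamps)
    flags: List[str] = []
    if not gaps:
        return flags
    if max(gaps) - min(gaps) < uniform_tolerance:
        flags.append("uniform_gaps")
    if all(g < min_gap for g in gaps):
        flags.append("sub_threshold_gaps")
    return flags
```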
- The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
- FIG. 1 shows an example of a system that can detect anomalies in interactions with a social media platform.
- FIG. 2 schematically shows an example of analysis of an API call sequence.
- FIG. 3 schematically shows an example of logging and analysis of interactions with a social media platform.
- FIG. 4 shows an example of a system that performs analysis and creates a log.
- FIG. 5 shows an example of a system that creates a machine learning model.
- FIG. 6 shows an example of checking for anomalies in an API call sequence.
- FIG. 7 shows an example of a system that can train a machine learning model and apply the model to detect anomalies.
- FIG. 8 shows an example of a process.
- FIG. 9 shows another example of a process.
- FIG. 10 shows an example of a distributed computer device that can be used to implement the described techniques.
Like reference symbols in the various drawings indicate like elements.
- This disclosure describes systems and techniques for detecting anomalous interactions with a social media platform. In some implementations, a machine learning model can be trained based on valid and/or invalid examples of signup sequences, and the model can then be applied to determine whether one or more accounts were established illegitimately. For example, a sequence of application programming interface (API) calls for an account can be analyzed using the machine learning model. If the API call sequence shows signs of anomaly, further investigation can be performed and/or the account can be terminated.
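As a minimal, hypothetical illustration of this idea, a Markov-style model can record which call-to-call transitions occur in valid signup sequences and flag any sequence containing a transition never seen in training; all names and data are invented for the sketch:

```python
from typing import List, Set, Tuple

Transition = Tuple[str, str]  # one API call name followed by another

def train_transitions(valid_sequences: List[List[str]]) -> Set[Transition]:
    """Record every call-to-call transition seen in valid signup sequences."""
    seen: Set[Transition] = set()
    for seq in valid_sequences:
        seen.update(zip(seq, seq[1:]))
    return seen

def has_unseen_transition(seq: List[str], seen: Set[Transition]) -> bool:
    """Flag a sequence containing any transition never seen in training,
    e.g. a script that jumps straight from loading the form to submitting."""
    return any(t not in seen for t in zip(seq, seq[1:]))
```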
- Implementations can provide a technical solution to the problem of spammers or other online abusers gaining unwanted access to a social media platform and its participants by using scripts or other automated procedures to go through the account signup process. For example, having fake accounts on a social media platform can lead to unwanted spamming or other abuse of legitimate users and/or to degradation in system performance due to excessive illegitimate traffic. As another example, implementations can provide the technical advantage of making the signup process more streamlined, because validation measures (to ensure that the user is a human being and not a script) need only be applied if an indication of anomaly has been detected.
-
FIG. 1 shows an example of a system 100 that can detect anomalies in interactions with a social media platform 102. Some components of the system 100 are illustrated as boxes for simplicity. A box can represent one or more components of software, hardware, firmware, or combinations thereof. For example, one or more instances similar or identical to the device shown in FIG. 10 can be used for implementing the system 100. In some implementations, two or more of the components can be combined into a common component, and/or one or more of the shown boxes can be distributed over two or more components. - The
system 100 includes one or more user devices 104 that can communicate with the social media platform 102. The user devices 104 can include any type of device, including, but not limited to, a personal computer, a laptop, a handheld electronic device, a smartphone, a wearable electronic device, a tablet device, and combinations thereof. - The user device 104 includes one or
more frontend components 106 for the social media platform 102. The frontend component 106 can allow the user to register with the social media platform 102 and to engage in activities or interaction with other users by way of the social media platform 102. In some implementations, the frontend component 106 is a software application (e.g., an “app”) that is being executed on a smartphone. For example, the frontend component 106 can be obtained from an online provider of software programs and downloaded to, and installed on, the user device 104. - The
social media platform 102 can facilitate interaction with users or prospective users in one or more ways. Here, the social media platform 102 provides one or more social media functions 108. In some implementations, the social media functions 108 can include one or more ways of distributing messages or other content between two or more of the user devices 104. For example, a news-and-social-network service can be provided that enables users to post and interact with messages using the frontend component 106. - The
social media platform 102 may require that an account be created before a user can engage with one or more of the social media functions 108. Here, the social media platform 102 includes an account management module 110 that can manage the process of creating new accounts and maintaining created accounts. In some implementations, the frontend component 106 can be configured to allow the user to enter therein information required for the signup process, and this information (and optionally additional information) can then be provided to the account management module 110 for processing. - The
account management module 110 can receive information 112 from the frontend component 106. The information can be sent as a single communication or as multiple communications. The information here includes an account creation request. In some implementations, the social media platform 102 can include an interface 111 that facilitates communication with the user device(s) 104. The interface 111 can facilitate communications over one or more networks, including, but not limited to, a private network or the internet. For example, the interface 111 allows the social media platform 102 to receive account creation requests and API call sequence data from the frontend components 106 on the respective user devices 104. - In principle, the
account management module 110 can accept the request if the required information has been provided, and create a new account as a result. If some required information has not been provided, or if another basis exists for not opening a new account, the account management module 110 can deny the request or request additional information. For example, such measures may be triggered if the account creation request is deemed to be anomalous. - The
social media platform 102 can include an anomaly detection module 113. The anomaly detection module 113 can be used for detecting one or more types of anomaly with regard to how the user devices 104 interact with the social media platform 102. In some implementations, the anomaly detection module 113 can include one or more machine learning components 114 that are trained to detect anomalous behavior. The machine learning component 114 can be trained using one or more sets of training data 116. For example, training data 116 can include valid and/or invalid examples of interactions, such that the machine learning component 114 can create a model of valid and/or invalid behavior. For example, invalid behavior can include using a script that engages with the frontend component 106 to perform the signup procedure. The social media platform 102 can collect and store a log 118 in the anomaly detection module 113. For example, the log can include account creation requests and/or other interactions created by the frontend component 106. - In some implementations, the
anomaly detection module 113 can analyze one or more interactions with the frontend component 106. The frontend component can be designed to have a number of application programming interfaces (APIs). The APIs can relate to any or all interactions that a user can have with the frontend component 106. The signup procedure by which the user generates the account creation request is defined in terms of APIs that reflect what information the user can enter and submit to the social media platform 102. As another example, other interactions—such as whether the user places a cursor on a screen, or performs a search, or visits a home page, or uploads a picture, or clicks a social media message—can correspond to respective APIs and accordingly be detected. In particular, the invocation of an API can be referred to as an API call, and two or more such API calls that take place on a particular user device—that is, that are registered by the same frontend component 106—can be referred to as an API call sequence. In some implementations, the anomaly detection module 113 can analyze one or more API call sequences to detect anomalies. For example, the anomaly detection module 113 can detect whether the API calls were made by a script executing on the user device 104, as opposed to by a human interacting directly with the frontend component 106. - One or more types of machine learning approaches can be used in the
machine learning component 114. In some implementations, such approaches can include one or more of Markov models, logistic regression, decision tree analysis, random forest analysis, neural nets, and combinations thereof. Generally, machine learning is the field where a computer learns to perform classes of tasks using the feedback generated from experience or data that the machine learning process acquires during computer performance of those tasks. In supervised machine learning, the computer can learn one or more rules or functions to map between example inputs and desired outputs as predetermined by an operator or programmer. Labeled data points can then be used in training the computer. Unsupervised machine learning can involve using unlabeled data, and the computer can then identify implicit relationships in the data, for example by reducing the dimensionality of the data set. - One type of machine learning approach that can be used in the
machine learning component 114 is a classifier. Objects can be labeled—for example, by a human—and be used for generating a training set. The training set can form at least part of the training data 116. For example, a subsampling and/or reweighting of the labeled objects can be performed to generate the training set. The training set based on the labeled objects can be provided to a classifier to train the classifier as to relationships within the training set. This can be an iterative process where the classifier is re-trained using objects that have been labeled according to an output of the classifier. Once trained, the classifier can receive one or more unlabeled objects and apply the trained machine learning model thereto, in order to arrive at an output of a proper labeling. - Thus, the
system 100 can be used to perform one or more methods or processes relating to anomaly detection with regard to a social media platform. For example, the social media platform 102 can receive an account creation request from the frontend component 106 via the information 112. The intended use of the frontend component 106 is that a human should be the one using the frontend component 106 to establish an account for themselves on the social media platform 102 and then interact with others by way of the social media functions 108. However, as indicated above, it can happen that illicit signup attempts are made by executing a script on the user device 104, the script interacting with the frontend component 106 and invoking some or all of its APIs. Thus, it can be desirable to determine whether the account creation request was generated by a human or in an automated way, such as by an executed script. - The
social media platform 102 can therefore receive, within the information 112 sent from the frontend component 106, an API call sequence associated with the account creation request. The API call sequence can reflect API calls registered by the frontend component 106 in connection with creation of the account creation request. The API call sequence can also reflect timings of the registered API calls. The social media platform 102 can store the account creation request in the log 118. - The
machine learning component 114 can apply an API call sequence model to the received API call sequence. Such an API call sequence model may have been generated by providing training API call sequences—for example, from the training data 116—to the machine learning component 114. The API call sequence model can be different depending on the type of client or on the implementation environment. The API call sequence model can determine whether the received API call sequence is consistent with the API call sequence that would be expected from a human user, or whether it has any anomaly that suggests it may have been created by execution of a script. - That is, the
anomaly detection module 113—for example, the machine learning component 114—can indicate whether the received API call sequence is anomalous. In response to the application of the API call sequence model indicating that the received API call sequence is anomalous, the social media platform 102 (e.g., the anomaly detection module 113 thereof) can take at least one action with regard to the account creation request. - One or more types of remedial actions can be performed after an anomaly is detected. For example, the user can be electronically prompted for further information, as schematically indicated by part of the
information 112 also being sent to the user device 104. As another example, the anomaly detection module 113 can initiate a phone challenge to the registering user, where a phone call is placed to gather more information. As such, taking the at least one action can include attempting to contact a person associated with the account generation request, and determining whether the account generation request was generated by a script interacting with the frontend component 106. The action(s) taken in response to the detected anomaly can be performed before any account is created for the received account creation request. That is, the processing can be performed essentially in real time before the account creation request is accepted or rejected. As another example, if an account has already been created in response to the account creation request, then the remedial measures can be taken with regard to the created account. For example, the account can be terminated or placed in a restrictive mode. - The
machine learning module 114 can be updated. For example, if the frontend component 106 is modified so as to introduce an additional API or remove an existing one, it may be necessary or desirable to retrain the machine learning module 114. This can allow the anomaly detection module 113 to change its analysis in one or more regards, for example so as to accept an API call sequence as normal that might earlier have been flagged as anomalous, and/or to flag an API call sequence as anomalous that might earlier have been considered normal. As such, the machine learning module 114 can receive additional training API call sequences (e.g., stored in the training data 116) after the machine learning module 114 has been trained. The machine learning module 114 can then be trained using (also) the new training data 116. An updated API call sequence model can be generated by providing the additional training API call sequences to the machine learning module 114. At a later time, another account creation request for the social media platform can be received after the updated API call sequence model is generated. The updated API call sequence model can be applied to the received other API call sequence and at least one action can be taken if necessary or desirable. - Backfill processes can be performed. For example, when the
machine learning component 114 has been retrained, it can be applied not only to future account creation requests, but also to one or more requests in the log 118. As such, an account creation request that was already analyzed using the previous API call sequence model, and then found to be legitimate, can again be evaluated using the updated API call sequence model. - Some implementations can analyze whether any anomalies exist in how a user of an already created account interacts with the
social media platform 102. For example, even if the account was originally created in response to a human going through the signup process, the resulting account may later have been compromised. The user's engagement with the social media function(s) 108 can therefore be analyzed to determine whether it is consistent with the user being a human, or whether it appears to be the result of an automated process (e.g., a script or other software) interacting with the frontend component 106. That is, this processing takes place after the account has been created. The machine learning component 114 can then use an engagement model to detect anomalies. - The
social media platform 102 can receive engagement data regarding the account, such engagement data reflecting use of the frontend component 106 to interact with the social media platform 102. The machine learning component 114 can apply the engagement model to the received engagement data. Similar to the API call sequence model exemplified above, the engagement model may have been generated by providing training engagement data (e.g., from training data 116) to the machine learning component 114. The engagement model will indicate whether the engagement data is normal or appears to be anomalous in one or more ways (e.g., by being the result of scripted or otherwise automated interactions). If an anomaly is detected, one or more actions can be taken with regard to the account. - Examples herein mention that a machine learning model may be trained to detect what is anomalous, and this can be referred to as anomaly detection. In some implementations, however, the machine learning model can instead be trained to recognize that which is normal, and this can then be referred to as signature detection. As such, when an API call sequence does not fit any applied signature-detection model, the system can infer that the API call sequence is not a normal one and accordingly flag it as anomalous. Yet other implementations can apply both some form of anomaly detection and some form of signature detection.
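Signature detection, as contrasted with anomaly detection, can be illustrated by accepting only sequences that match a known-normal signature and inferring that anything else is anomalous. This is a deliberately simplified sketch with invented names; a real system would match more loosely than exact equality:

```python
from typing import List, Set, Tuple

Signature = Tuple[str, ...]  # an exact, known-normal ordering of API calls

def matches_signature(seq: List[str], signatures: Set[Signature]) -> bool:
    """Signature detection: accept only sequences that fit a known-normal
    signature learned or curated in advance."""
    return tuple(seq) in signatures

def flag_as_anomalous(seq: List[str], signatures: Set[Signature]) -> bool:
    """Anything that fits no signature is inferred to be anomalous."""
    return not matches_signature(seq, signatures)
```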
-
FIG. 2 schematically shows an example of analysis of an API call sequence 200. Some components of FIG. 1 will be mentioned for illustrative purposes. The API call sequence 200 is here illustrated using a timeline 202. The API call sequence 200 includes individual API calls 204 marked on the timeline 202. Each of the API calls 204 corresponds to an API call that was registered by the frontend component 106 on the user device 104 (FIG. 1 ). One or more of the API calls 204 can correspond to the user clicking a button on a graphical user interface of the frontend component, or entering text in a field, or making a selection, or simply placing a cursor on the screen, to name just a few examples. At some point in time, an account creation request 206 is generated by the frontend component 106 and sent by the user device 104 to the social media platform 102. One or more of the API calls 204 can be collected after the account creation request 206 is generated. For example, API call data 208 can later be transmitted by the frontend component 106 to the social media platform 102. As such, the connection between the API calls 204 and the creation of the account creation request 206 can be that at least one of the API calls 204 was registered by the frontend component 106 during a predefined period of time after the account creation request 206 was generated. - When the
anomaly detection module 113 applies the API call sequence model to the API call sequence 200, the anomaly detection module 113 can evaluate one or more aspects of the API call sequence 200, including, but not limited to, the length of the API call sequence 200, the number of the API calls 204 therein, the timing of the API calls 204, or combinations thereof. - Applying the API call sequence model can include evaluating the timing of the API calls 204. In some implementations, the
anomaly detection module 113 can determine whether a temporal separation of the API calls 204 is less than a threshold. Here, temporal separations 210 a-c between individual ones of the API calls 204 are schematically marked for illustrative purposes. If the temporal separations 210 a-c are all identical, the API call sequence model can flag this as an anomaly. If the temporal separations 210 a-c are all less than a threshold, the API call sequence model can flag this as an anomaly. If the temporal separations 210 a-c are all randomized, the API call sequence model can flag this as an anomaly. - The API calls 204 can be counted to determine whether the signup process is anomalous. For example, it can be known from analysis that human users make on the order of 10000 API calls per minute during a normal signup procedure, whereas a typical automated signup script makes perhaps 10 API calls per minute. As such, the
machine learning component 114 can use this metric to determine whether the received API call sequence is anomalous. - The
anomaly detection module 113 can determine a score for the received API call sequence 200. Account creation normalcy can then be evaluated based on whether the score meets a threshold. For example, the number of the API calls 204, the timing of the API calls 204, and/or other factors relating to potential abnormality, can be assigned numerical values using one or more metrics and then compiled into a score for the account creation request. - Anomalous behavior can also or instead be detected by looking at more than one account creation request. For example, a spike in signups can be a sign of anomalous behavior. In some implementations, the
anomaly detection module 113 can receive multiple API call sequences, store them in the log 118, and determine whether any of them are essentially identical to each other. For example, if an automated signup script is used repeatedly (to create separate accounts), it may give rise to essentially identical API call sequences each time. As such, this can be a sign that two or more signups are illicit. -
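The temporal-separation checks and the check for essentially identical sequences described above can be sketched as follows. The threshold value, function names, and data shapes are illustrative assumptions (the "randomized separations" check is omitted here for brevity):

```python
# Illustrative checks over API call timestamps (in seconds). The threshold is
# made up for the sketch; the description leaves such values to the implementation.

def timing_anomalies(timestamps, min_gap=0.1):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    flags = []
    if len(set(gaps)) == 1:
        flags.append("identical_gaps")            # perfectly regular: likely scripted
    if gaps and all(g < min_gap for g in gaps):
        flags.append("all_gaps_below_threshold")  # faster than a human plausibly interacts
    return flags

def find_duplicate_sequences(sequences):
    """Return sequences (as tuples) that occur more than once across signups."""
    seen, dupes = set(), set()
    for seq in sequences:
        key = tuple(seq)
        (dupes if key in seen else seen).add(key)
    return dupes

assert timing_anomalies([0.0, 0.5, 1.0, 1.5]) == ["identical_gaps"]
assert "all_gaps_below_threshold" in timing_anomalies([0.00, 0.01, 0.03])
assert find_duplicate_sequences([["a", "b"], ["c"], ["a", "b"]]) == {("a", "b")}
```
-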
FIG. 3 schematically shows an example of logging and analysis of interactions with a social media platform. A user actions component 300 here represents the actions that can be undertaken using a frontend component, such as to create an account for a social media platform and to engage with one or more other users and their contents via the platform. For example, a detector component 302 can receive an account creation request from the user, analyze the account creation request and related metadata, such as using a trained machine learning model, and at the same time store the account creation request and related model in a Hadoop Distributed File System (HDFS) 304, and invoke a bot 306 to score users in real time and take actions (e.g., through phone challenges). For an account that has been created, a detector 308 can receive one or more interactions from the user actions component 300 (e.g., a message to one or more other social media users). The detector can analyze the interaction, such as using the same or a different machine learning model, and at the same time log the received information and model in the HDFS 304 and trigger the bot to take action with regard to a user. A response path 310 indicates that the bot 306 can take one or more actions with regard to the user actions component 300, depending on the outcomes of the analyses and models. A connection 312 between the bot 306 and the HDFS 304 indicates that the bot 306 can read metadata and the model(s) from the HDFS 304, and/or log results of bots to the HDFS 304. -
FIG. 4 shows an example of a system 400 that performs analysis and creates a log. An event 402 can be provided as input to a bot making component 404. For example, the input can include an identifier of a social media message, a text of a social media message, a mention of a social media message, a reported user of a social media platform, a reported reason for an anomaly flagging on a social media platform, or combinations thereof. The bot making component 404 can analyze the event(s) 402 and generate one or more derived features 406. For example, the derived features can include a bi- or tri-gram of a social media message, a model score of a social media message, a safeguard relating to a social media message, a user role for a social media platform, and combinations thereof. The derived features 406 and/or the event 402 can be stored in a log 408. The bot making component 404 can also generate one or more verdicts and actions 410 regarding input it receives. -
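The bi- and tri-gram derived features mentioned above can be illustrated with a minimal sketch; the function name and the example message are assumptions, not from the disclosure:

```python
# Sketch of deriving word n-gram features from a social media message, one of
# the derived features mentioned for the bot making component.

def word_ngrams(text, n):
    words = text.lower().split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

message = "win free coins now"
bigrams = word_ngrams(message, 2)
trigrams = word_ngrams(message, 3)

assert bigrams == [("win", "free"), ("free", "coins"), ("coins", "now")]
assert trigrams == [("win", "free", "coins"), ("free", "coins", "now")]
```
-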
FIG. 5 shows an example of a system 500 that creates a machine learning model 502. The system 500 includes a log 504. For example, this can include valid and invalid examples of user interactions, such as account creation requests and/or engagements with a social media platform. Some or all contents of the log 504 can be subjected to a formatter 506. Human evaluation results 508 can be provided, such as to provide examples of valid and/or invalid interactions. This can provide one or more labels 510. The labels 510 and the data from the formatter 506 can be provided in a data/record format 512. For example, this can be the format required by a machine learning infrastructure 514. For example, the machine learning infrastructure 514 can be configured to take labeled data as input, analyze the data in one or more iterations, and generate the machine learning model 502 as an output. A bot 516 can receive the machine learning model 502 from the machine learning infrastructure 514. For example, the bot 516 can apply the machine learning model 502 to received user interaction data to detect indications of anomalous behavior. -
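The flow from the log and the human evaluation results to labeled records can be sketched as follows; the field names and the record layout are assumptions made for illustration only:

```python
# Sketch of the FIG. 5 data flow: logged interactions joined with
# human-evaluation labels into the record format a machine learning
# infrastructure might expect. Field names are illustrative assumptions.

def format_training_records(log_entries, human_labels):
    """Pair each logged interaction with its label, dropping unlabeled rows."""
    records = []
    for entry in log_entries:
        label = human_labels.get(entry["user_id"])
        if label is not None:
            records.append({"features": entry["api_calls"], "label": label})
    return records

log = [{"user_id": "u1", "api_calls": ["load", "type", "submit"]},
       {"user_id": "u2", "api_calls": ["submit"]},
       {"user_id": "u3", "api_calls": ["load", "submit"]}]
labels = {"u1": "valid", "u2": "invalid"}  # u3 has no human evaluation yet

records = format_training_records(log, labels)
assert len(records) == 2
assert records[0] == {"features": ["load", "type", "submit"], "label": "valid"}
```
-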
FIG. 6 shows an example of checking for anomalies in an API call sequence. Through analysis of valid and invalid examples of API call sequences (e.g., those relating to the making of an account creation request), it may be known that certain API calls occur often in the valid examples, and seldom in the invalid examples. For example, such particular API calls may be those that a user very often makes in connection with a signup procedure, but that a scammer probably does not think about including in the script when trying to deceive the signup functionality. For example, the particular API calls may frequently be made in the first, say, two hours after signing up for the social media platform. As such, the valid example(s) may have a greater frequency of occurrence of the API call(s) than the invalid example(s). If the particular API call is one that is not exposed to users, but rather is an internal one to the frontend component and the social media platform, this can make it very difficult or impossible for a spammer to adapt their script to artificially create such an API call. As such, applying the API call sequence model can include an evaluation of whether the received API call sequence is missing a particular API call of the frontend component. - In this example, two API calls have been identified as suitable candidates for this analysis. For example, several API call endpoints can be identified as occurring frequently in valid user data, but almost never in invalid user data. From that information, a machine learning model can be trained, such as by way of decision tree training, to identify one of the API calls as the most significant feature, and one or more other API calls as a secondary (or later) feature. Here, the API call “i/anonymize” corresponding to a request for anonymization is represented by a
feature 600 and the API call "/:userid" corresponding to a visit to a user profile page is represented by a feature 602. The feature 600 here indicates that the false positive rate on the training dataset is almost zero while maintaining almost 100% recall, with cross-validation implemented. That is, if the situation at the element 600 is that the API call "i/anonymize" is present in the API call sequence, then, at feature 604, the API call sequence can be deemed valid as far as the machine learning model can tell. In principle, a valid API call sequence can lead to the user being granted an account as requested. If other information about the API call sequence later becomes available, this assessment can be reconsidered or changed. To ensure that this approach does not lead to over-fitting, one can test the method on, say, everyone who used a particular signup mode (e.g., a smartphone) during a particular time period. - If the API call "i/anonymize" is not present in the API call sequence at the
feature 600, then the API call "/:userid" can be tested at the feature 602. Similarly to feature 600 above, if this API call is present in the API call sequence at feature 602, then, at feature 606, the machine learning model can deem the API call sequence to be valid. On the other hand, if this API call is not present in the API call sequence at feature 602, then, at feature 608, the machine learning model can deem the API call sequence to be invalid. -
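The two-feature decision procedure of FIG. 6 can be written out directly. The endpoint strings come from the description above, while the function name and return values are illustrative:

```python
# The FIG. 6 decision procedure, spelled out as nested membership tests.
# "i/anonymize" and "/:userid" are the endpoints named in the description;
# the labels "valid"/"invalid" are illustrative stand-ins.

def classify_sequence(api_calls):
    if "i/anonymize" in api_calls:   # feature 600: most significant split
        return "valid"               # feature 604
    if "/:userid" in api_calls:      # feature 602: secondary split
        return "valid"               # feature 606
    return "invalid"                 # feature 608

assert classify_sequence(["signup", "i/anonymize"]) == "valid"
assert classify_sequence(["signup", "/:userid"]) == "valid"
assert classify_sequence(["signup", "submit"]) == "invalid"
```
-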
FIG. 7 shows an example of a system 700 that can train a machine learning model and apply the model to detect anomalies. In some implementations, the system 700 can be considered as operating according to at least three phases: a prepare training data phase 702, a train machine learning models phase 704, and a score users and phone challenge phase 706. In the prepare training data phase 702, the system 700 can include a label feeder aspect 708 and a feature data aspect 710. In the label feeder aspect 708, the system 700 can include a bad user IDs collection 712 and a good user IDs collection 714. This can be the result of performing clustering on user IDs. The bad user IDs collection 712 can include user IDs known or strongly suspected to be the result of scripted signups, whereas the good user IDs collection 714 can include user IDs known or believed to correspond to human users. For example, the bad user IDs collection 712 can be generated based on a manual suspension batch 716 of user IDs and/or on alerts 718 which may have been generated based on behavior believed to be generated by software. The good user IDs collection 714 can be generated based on one or more sources 720, which may identify user IDs that have earned credibility based on observed behavior. That is, the label feeder aspect 708 can provide user IDs (good ones and bad ones) that can be used for labeling data relating to the respective users, such as their API call sequences. - In the
feature data aspect 710, data such as client-side features, user behaviors, email addresses, usernames, internet protocol (IP) addresses, and combinations thereof, can be involved. Here, the system 700 can gain access to a frontend log 722. In some implementations, the frontend log 722 is a log of data collected from frontend components that have been executed on respective user devices. For example, the data can include API call sequences. The frontend log 722 can be populated with data based on one or more requests 724. For example, a social media platform can request data collected by the respective frontend components. - The
system 700 can prepare jobs for data processing 726. In some implementations, this can be done using a framework (e.g., Apache Hadoop) for MapReduce processing. For example, a so-called Scalding job involves the Cascading abstraction layer for Hadoop, which can be implemented by way of an API using the Scala programming language. The data processing 726 can result in one or more labeled data sets 728. For example, the labeled data set 728 can include API call sequences collected until 10 minutes after signup (e.g., the account creation request), labeled with the bad user IDs 712 and the good user IDs 714. - In the train machine
learning models phase 704, the system 700 can include a train models aspect 730. Any machine learning approach can be used to train one or more models based on the labeled data set(s) 728. In some implementations, decision trees and/or random forest analysis can be used. Model files 732 are generated based on the train models aspect 730. In some implementations, the model files use a markup language format. For example, predictive model markup language (PMML) format can be used. A loading phase 734 involves loading the model(s) into a bot making component. - In the score users and
phone challenge phase 706, the system can gather one or more new user signups 736. In some implementations, a social media platform can receive the new user signup(s) 736 from the frontend(s) on one or more user devices. For example, the score users and phone challenge phase 706 can serve as an anomaly detection module for the social media platform. - The new user signup(s) 736 and the model files 732 are provided to a
bot 738 that is configured to apply the trained model of the model files 732 to the user signup(s) 736. In some implementations, the application of the trained model will cause a particular user signup either to be flagged as anomalous or not. When the user signup is flagged as anomalous, one or more actions can be taken. For example, the system 700 can cause a phone challenge 740 to be performed, wherein the system attempts to contact the user by phone to verify that the signup was not performed by executing software. -
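A minimal sketch of this score-and-challenge step follows; the stand-in scoring model, the threshold, and all names are assumptions rather than the disclosed implementation:

```python
# Sketch of the score users and phone challenge step: a trained model scores
# each new signup, and flagged signups trigger a phone challenge.

def score_signup(model, signup):
    return model(signup["api_calls"])

def handle_signup(model, signup, threshold=0.5):
    """Return the action to take for one new signup."""
    if score_signup(model, signup) >= threshold:
        return "phone_challenge"
    return "allow"

# Stand-in model: very short sequences look scripted, so they score high.
toy_model = lambda calls: 1.0 if len(calls) < 3 else 0.0

assert handle_signup(toy_model, {"api_calls": ["submit"]}) == "phone_challenge"
assert handle_signup(toy_model, {"api_calls": ["load", "type", "submit"]}) == "allow"
```
-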
FIG. 8 shows an example of a process 800, and FIG. 9 shows an example of a process 900. Either or both of the processes can be performed by one or more processors executing instructions stored in a computer readable medium. In some implementations, more or fewer operations can be performed. Two or more operations can be merged into a common step, and/or one or more steps can be split up into multiple operations. - At 802, one or more bad user IDs can be prepared. In some implementations, this can be for the purpose of labeling data (e.g., user interaction data) that is to serve as examples of anomalous behavior. For example, the system 700 (
FIG. 7 ) can prepare the bad user IDs collection 712. - At 804, signup client and signup date can be obtained. In some implementations, this information is obtained only for those bad user IDs that were prepared at 802. For example, information can be obtained from a log of API call data (e.g., log 722 in
FIG. 7 ). - At 806, a processing job can be run to prepare machine learning model training data. In some implementations, the processing job 726 (
FIG. 7 ) can be run. For example, the processing can label API call sequence data with the bad user IDs. - At 808, training data can be moved from an HDFS to a local resource. In some implementations, the system 700 (
FIG. 7 ) can move the labeled data 728 from the prepare training data phase 702 to the train machine learning models phase 704. - At 810, the training data can be preprocessed. In some implementations, this can involve executing a script that preprocesses the data. In some implementations, preprocessing of training data includes applying statistical and data science techniques. For example, this can include examining training data quality and cleaning up the data to ensure good model quality, and/or performing exploratory data analysis (EDA) to visualize and understand the structure of the data. Based on results of the EDA, it can be possible to recommend the best models which could fit the data. As other examples, preprocessing can additionally or alternatively include filtering out noise, missing data and outliers, and/or down-sampling or over-sampling data to get a balanced dataset for training.
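- The down-sampling mentioned at 810 can be sketched as follows; this is a minimal illustration, and the seeding, record layout, and function name are assumptions:

```python
# Sketch of one preprocessing step: down-sampling the majority class so that
# the training set is balanced. Deterministic seeding is an illustrative choice.

import random

def balance_by_downsampling(records, seed=0):
    """Down-sample every label class to the size of the smallest one."""
    by_label = {}
    for r in records:
        by_label.setdefault(r["label"], []).append(r)
    smallest = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, smallest))
    return balanced

data = [{"label": "good"}] * 8 + [{"label": "bad"}] * 2
balanced = balance_by_downsampling(data)
assert len(balanced) == 4
assert sum(r["label"] == "bad" for r in balanced) == 2
```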
- At 812, a machine learning model can be trained, and machine learning model file output can be produced. In some implementations, this can be done in the train machine learning models phase 704 (
FIG. 7 ) of the system 700. - At 814, a model file can be uploaded to the HDFS and model files can be prepared. In some implementations, the system 700 (
FIG. 7 ) can do this in the loading phase 734. - At 816, a bot making component's configuration bot can be modified and a new bot can be created. In some implementations, the bot 738 (
FIG. 7 ) can be created. For example, the new bot is created so as to have the ability to apply an API call sequence model to a received API call sequence, in order to determine whether any anomaly exists in the API calls. - At 818, the bot can be run in test mode, the results can be evaluated, and a decision can be made whether to launch the bot in production. For example, this can be a preliminary step performed in order to determine whether a bot is sufficiently reliable to be applied to actual user interactions with a social media platform.
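- The test-mode evaluation at 818 can be sketched as a precision check against labeled interactions; the toy bot, the precision gate, and all names are illustrative assumptions, not the disclosed criteria:

```python
# Sketch of evaluating a bot in test mode before a production launch: compare
# its verdicts on labeled interactions with the known labels.

def evaluate_bot(bot, labeled_interactions, min_precision=0.95):
    flagged = [label for seq, label in labeled_interactions if bot(seq)]
    if not flagged:
        return False  # a bot that never fires gives no evidence that it works
    precision = sum(1 for label in flagged if label == "bad") / len(flagged)
    return precision >= min_precision

bot = lambda seq: "submit" in seq and len(seq) < 3   # toy bot: flags terse signups
data = [(["load", "type", "submit"], "good"),
        (["submit"], "bad"),
        (["load", "submit"], "bad")]

assert evaluate_bot(bot, data)   # both flagged signups were truly bad
```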
- At 820, a backfill job can be launched. In some implementations, user interaction data that has been collected previously (e.g., in a log) can be processed in the backfill job to determine whether any of those user interactions show signs of being anomalous according to the machine learning model that has been generated using the
process 800. For example, the log can contain API call sequences that were evaluated using an earlier version or iteration of the machine learning model, and that should now be re-evaluated according to the principles of the newly created machine learning model. - Turning now to
FIG. 9 , at 910 of the process 900 an account creation request can be received in a computer system. The account creation request can be for a social media platform, and may have been created and sent to the computer system using a frontend component. For example, with reference to FIG. 1 , the frontend component 106 can send, by way of information 112 being received at interface 111, an account creation request to the social media platform 102. - At 920, an API call sequence can be received. The API call sequence can be received from the frontend component and can reflect API calls registered by the frontend component in connection with creation of the account creation request. The API call sequence can also reflect timings of the registered API calls. For example, with reference to
FIG. 1 , the social media platform 102 can, via the interface 111, receive the information 112 from the user device 104, which information 112 can include the API call logs registered by the frontend component 106. - At 930, the API call sequence model can be applied to the received API call sequence. The API call sequence model may have been generated by providing training API call sequences to a machine learning component. For example, with reference to
FIG. 1 , the anomaly detection module 113 can cause the machine learning component 114 to apply the machine learning model based on the training data 116 to some or all of the contents in the log 118. - At 940, one or more actions can be taken. In response to the application of the API call sequence model indicating that the received API call sequence is anomalous, at least one action can be taken with regard to the account creation request, or with regard to an account created in response to the account creation request. For example, with reference to
FIG. 1 , if the machine learning component 114 determines that the received API call sequence is anomalous, then the anomaly detection module 113 (or the account management module 110, to name another example) can cause a phone challenge to be made. -
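The four operations of the process 900 can be composed into a single sketch; the stand-in timing-based model, the action names, and the data shapes are assumptions made for illustration:

```python
# Sketch of process 900 end to end: receive the account creation request and
# the API call sequence, apply a sequence model, and take an action when the
# sequence is flagged as anomalous.

def process_account_creation(request, api_call_sequence, sequence_model):
    if sequence_model(api_call_sequence):                            # 930
        return {"request": request, "action": "phone_challenge"}     # 940
    return {"request": request, "action": "create_account"}

# Stand-in model that flags perfectly regular call timing.
def toy_model(seq):
    gaps = [b - a for (_, a), (_, b) in zip(seq, seq[1:])]
    return len(gaps) > 1 and len(set(gaps)) == 1

human = [("load", 0.0), ("type", 1.3), ("submit", 4.1)]
script = [("load", 0.0), ("type", 0.5), ("submit", 1.0)]

assert process_account_creation("req-1", human, toy_model)["action"] == "create_account"
assert process_account_creation("req-2", script, toy_model)["action"] == "phone_challenge"
```
-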
FIG. 10 illustrates a diagrammatic representation of a machine in the example form of a computing device 1000 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. The computing device 1000 may be a mobile phone, a smart phone, a netbook computer, a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer etc., within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In one implementation, the computing device 1000 may present an overlay UI to a user (as discussed above). In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server machine in client-server network environment. The machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
example computing device 1000 includes a processing device (e.g., a processor) 1002, a main memory 1004 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1006 (e.g., flash memory, static random access memory (SRAM)) and a data storage device 1018, which communicate with each other via a bus 1030. -
Processing device 1002 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1002 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 1002 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 1002 is configured to execute instructions 1026 (e.g., instructions for an application ranking system) for performing the operations and steps discussed herein. - The
computing device 1000 may further include a network interface device 1008 which may communicate with a network 1020. The computing device 1000 also may include a video display unit 1010 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1012 (e.g., a keyboard), a cursor control device 1014 (e.g., a mouse) and a signal generation device 1016 (e.g., a speaker). In one implementation, the video display unit 1010, the alphanumeric input device 1012, and the cursor control device 1014 may be combined into a single component or device (e.g., an LCD touch screen). - The
data storage device 1018 may include a computer-readable storage medium 1028 on which is stored one or more sets of instructions 1026 (e.g., instructions for the application ranking system) embodying any one or more of the methodologies or functions described herein. The instructions 1026 may also reside, completely or at least partially, within the main memory 1004 and/or within the processing device 1002 during execution thereof by the computing device 1000, the main memory 1004 and the processing device 1002 also constituting computer-readable media. The instructions may further be transmitted or received over a network 1020 via the network interface device 1008. - While the computer-
readable storage medium 1028 is shown in an example implementation to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media and magnetic media. The term “computer-readable storage medium” does not include transitory signals. - In the above description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that implementations of the disclosure may be practiced without these specific details. Moreover, implementations are not limited to the exact order of some operations, and it is understood that some operations shown as two steps may be combined and some operations shown as one step may be split. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description.
- Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
- It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying,” “determining,” “calculating,” “updating,” “transmitting,” “receiving,” “generating,” “changing,” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- Implementations of the disclosure also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memory, or any type of media suitable for storing electronic instructions.
- The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
- The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/675,319 US20180046475A1 (en) | 2016-08-11 | 2017-08-11 | Detecting scripted or otherwise anomalous interactions with social media platform |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662373946P | 2016-08-11 | 2016-08-11 | |
US15/675,319 US20180046475A1 (en) | 2016-08-11 | 2017-08-11 | Detecting scripted or otherwise anomalous interactions with social media platform |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180046475A1 true US20180046475A1 (en) | 2018-02-15 |
Family
ID=59677438
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/675,696 Active 2040-05-30 US11132602B1 (en) | 2016-08-11 | 2017-08-11 | Efficient online training for machine learning |
US15/675,319 Abandoned US20180046475A1 (en) | 2016-08-11 | 2017-08-11 | Detecting scripted or otherwise anomalous interactions with social media platform |
US15/675,671 Active 2038-03-11 US10649794B2 (en) | 2016-08-11 | 2017-08-11 | Aggregate features for machine learning |
US15/929,375 Active 2037-10-11 US11416268B2 (en) | 2016-08-11 | 2020-04-29 | Aggregate features for machine learning |
US17/819,134 Abandoned US20220382564A1 (en) | 2016-08-11 | 2022-08-11 | Aggregate features for machine learning |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/675,696 Active 2040-05-30 US11132602B1 (en) | 2016-08-11 | 2017-08-11 | Efficient online training for machine learning |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/675,671 Active 2038-03-11 US10649794B2 (en) | 2016-08-11 | 2017-08-11 | Aggregate features for machine learning |
US15/929,375 Active 2037-10-11 US11416268B2 (en) | 2016-08-11 | 2020-04-29 | Aggregate features for machine learning |
US17/819,134 Abandoned US20220382564A1 (en) | 2016-08-11 | 2022-08-11 | Aggregate features for machine learning |
Country Status (5)
Country | Link |
---|---|
US (5) | US11132602B1 (en) |
EP (2) | EP3497625A1 (en) |
CN (1) | CN109643347A (en) |
DE (1) | DE202017007517U1 (en) |
WO (2) | WO2018031921A1 (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11093852B2 (en) * | 2016-10-19 | 2021-08-17 | Accertify, Inc. | Systems and methods for recognizing a device and/or an instance of an app invoked on a device |
US11403563B2 (en) | 2016-10-19 | 2022-08-02 | Accertify, Inc. | Systems and methods for facilitating recognition of a device and/or an instance of an app invoked on a device |
US11315685B2 (en) * | 2017-01-25 | 2022-04-26 | UCB Biopharma SRL | Method and system for predicting optimal epilepsy treatment regimes |
US11748653B2 (en) * | 2017-10-05 | 2023-09-05 | DataRobot, Inc. | Machine learning abstraction |
US11695783B2 (en) * | 2018-08-13 | 2023-07-04 | Ares Technologies, Inc. | Systems, devices, and methods for determining a confidence level associated with a device using heuristics of trust |
US11824882B2 (en) * | 2018-08-13 | 2023-11-21 | Ares Technologies, Inc. | Systems, devices, and methods for determining a confidence level associated with a device using heuristics of trust |
US11620300B2 (en) * | 2018-09-28 | 2023-04-04 | Splunk Inc. | Real-time measurement and system monitoring based on generated dependency graph models of system components |
US11429627B2 (en) | 2018-09-28 | 2022-08-30 | Splunk Inc. | System monitoring driven by automatically determined operational parameters of dependency graph model with user interface |
US11470101B2 (en) | 2018-10-03 | 2022-10-11 | At&T Intellectual Property I, L.P. | Unsupervised encoder-decoder neural network security event detection |
US20210166157A1 (en) * | 2018-11-30 | 2021-06-03 | Apple Inc. | Private federated learning with protection against reconstruction |
US11941513B2 (en) * | 2018-12-06 | 2024-03-26 | Electronics And Telecommunications Research Institute | Device for ensembling data received from prediction devices and operating method thereof |
US10897402B2 (en) * | 2019-01-08 | 2021-01-19 | Hewlett Packard Enterprise Development Lp | Statistics increment for multiple publishers |
US11544621B2 (en) * | 2019-03-26 | 2023-01-03 | International Business Machines Corporation | Cognitive model tuning with rich deep learning knowledge |
US11032150B2 (en) * | 2019-06-17 | 2021-06-08 | International Business Machines Corporation | Automatic prediction of behavior and topology of a network using limited information |
US11443235B2 (en) * | 2019-11-14 | 2022-09-13 | International Business Machines Corporation | Identifying optimal weights to improve prediction accuracy in machine learning techniques |
US11354596B2 (en) | 2020-02-03 | 2022-06-07 | Kaskada, Inc. | Machine learning feature engineering |
US11238354B2 (en) * | 2020-02-03 | 2022-02-01 | Kaskada, Inc. | Event-based feature engineering |
US11792877B2 (en) * | 2020-02-21 | 2023-10-17 | Qualcomm Incorporated | Indication triggering transmission of known data for training artificial neural networks |
US11948352B2 (en) * | 2020-03-26 | 2024-04-02 | Amazon Technologies, Inc. | Speculative training using partial gradients update |
US11363057B1 (en) * | 2020-04-17 | 2022-06-14 | American Express Travel Related Services Company, Inc. | Computer-based system for analyzing and quantifying cyber threat patterns and methods of use thereof |
US11609924B2 (en) * | 2021-04-14 | 2023-03-21 | Citrix Systems, Inc. | Database query execution on multiple databases |
US20230252033A1 (en) * | 2022-02-09 | 2023-08-10 | Microsoft Technology Licensing, Llc | Impression discounting for followfeed |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050039177A1 (en) * | 1997-07-12 | 2005-02-17 | Trevor Burke Technology Limited | Method and apparatus for programme generation and presentation |
US20100274815A1 (en) * | 2007-01-30 | 2010-10-28 | Jonathan Brian Vanasco | System and method for indexing, correlating, managing, referencing and syndicating identities and relationships across systems |
US20150350174A1 (en) * | 2014-05-30 | 2015-12-03 | Ca, Inc. | Controlling application programming interface transactions based on content of earlier transactions |
US20160026438A1 (en) * | 2013-11-20 | 2016-01-28 | Wolfram Research, Inc. | Cloud Storage Methods and Systems |
US20170178027A1 (en) * | 2015-12-16 | 2017-06-22 | Accenture Global Solutions Limited | Machine for development and deployment of analytical models |
US20170346802A1 (en) * | 2016-05-27 | 2017-11-30 | Dropbox, Inc. | Out of box experience application api integration |
Family Cites Families (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050125401A1 (en) * | 2003-12-05 | 2005-06-09 | Hewlett-Packard Development Company, L. P. | Wizard for usage in real-time aggregation and scoring in an information handling system |
US20050125280A1 (en) * | 2003-12-05 | 2005-06-09 | Hewlett-Packard Development Company, L.P. | Real-time aggregation and scoring in an information handling system |
CN1691576A (en) * | 2004-04-27 | 2005-11-02 | 微软公司 | Account creation via a mobile device |
US8504575B2 (en) * | 2006-03-29 | 2013-08-06 | Yahoo! Inc. | Behavioral targeting system |
US8027938B1 (en) * | 2007-03-26 | 2011-09-27 | Google Inc. | Discriminative training in machine learning |
US7974974B2 (en) | 2008-03-20 | 2011-07-05 | Microsoft Corporation | Techniques to perform relative ranking for search results |
US8150723B2 (en) * | 2009-01-09 | 2012-04-03 | Yahoo! Inc. | Large-scale behavioral targeting for advertising over a network |
US8595194B2 (en) * | 2009-09-15 | 2013-11-26 | At&T Intellectual Property I, L.P. | Forward decay temporal data analysis |
US20130304818A1 (en) * | 2009-12-01 | 2013-11-14 | Topsy Labs, Inc. | Systems and methods for discovery of related terms for social media content collection over social networks |
US8904149B2 (en) * | 2010-06-24 | 2014-12-02 | Microsoft Corporation | Parallelization of online learning algorithms |
US8260826B2 (en) | 2010-09-23 | 2012-09-04 | Hewlett-Packard Development Company, L.P. | Data processing system and method |
US9558339B2 (en) * | 2010-11-29 | 2017-01-31 | Biocatch Ltd. | Method, device, and system of protecting a log-in process of a computerized service |
US9037483B1 (en) * | 2011-04-07 | 2015-05-19 | Aggregate Knowledge, Inc. | Multi-touch attribution model for valuing impressions and other online activities |
US8719273B2 (en) * | 2011-08-26 | 2014-05-06 | Adobe Systems Incorporated | Analytics data indexing system and methods |
US9218573B1 (en) * | 2012-05-22 | 2015-12-22 | Google Inc. | Training a model using parameter server shards |
US9817827B2 (en) * | 2012-10-04 | 2017-11-14 | Netflix, Inc. | Relationship-based search and recommendations |
US10152676B1 (en) * | 2013-11-22 | 2018-12-11 | Amazon Technologies, Inc. | Distributed training of models using stochastic gradient descent |
US9535897B2 (en) | 2013-12-20 | 2017-01-03 | Google Inc. | Content recommendation system using a neural network language model |
US9450978B2 (en) * | 2014-01-06 | 2016-09-20 | Cisco Technology, Inc. | Hierarchical event detection in a computer network |
US20150324690A1 (en) * | 2014-05-08 | 2015-11-12 | Microsoft Corporation | Deep Learning Training System |
US20150324686A1 (en) * | 2014-05-12 | 2015-11-12 | Qualcomm Incorporated | Distributed model learning |
US20160011732A1 (en) * | 2014-07-11 | 2016-01-14 | Shape Security, Inc. | Disrupting automated attacks on client-server interactions using polymorphic application programming interfaces |
US20190147365A1 (en) * | 2014-08-19 | 2019-05-16 | Google Inc. | Deep vector table machine systems |
US9984337B2 (en) * | 2014-10-08 | 2018-05-29 | Nec Corporation | Parallelized machine learning with distributed lockless training |
US9836480B2 (en) * | 2015-01-12 | 2017-12-05 | Qumulo, Inc. | Filesystem capacity and performance metrics and visualizations |
US10445641B2 (en) * | 2015-02-06 | 2019-10-15 | Deepmind Technologies Limited | Distributed training of reinforcement learning systems |
CN106156810B (en) * | 2015-04-26 | 2019-12-03 | 阿里巴巴集团控股有限公司 | General-purpose machinery learning algorithm model training method, system and calculate node |
US9953063B2 (en) * | 2015-05-02 | 2018-04-24 | Lithium Technologies, Llc | System and method of providing a content discovery platform for optimizing social network engagements |
US10540608B1 (en) * | 2015-05-22 | 2020-01-21 | Amazon Technologies, Inc. | Dynamically scaled training fleets for machine learning |
US10579750B2 (en) * | 2015-06-05 | 2020-03-03 | Uptake Technologies, Inc. | Dynamic execution of predictive models |
CN104980518B (en) * | 2015-06-26 | 2018-11-23 | 深圳市腾讯计算机系统有限公司 | The methods, devices and systems of more learning agent parallel training models |
US10679145B2 (en) * | 2015-08-07 | 2020-06-09 | Nec Corporation | System and method for balancing computation with communication in parallel learning |
US10229357B2 (en) * | 2015-09-11 | 2019-03-12 | Facebook, Inc. | High-capacity machine learning system |
US10671916B1 (en) * | 2015-09-29 | 2020-06-02 | DataRobot, Inc. | Systems and methods to execute efficiently a plurality of machine learning processes |
US20170091668A1 (en) * | 2015-09-30 | 2017-03-30 | Nec Laboratories America, Inc. | System and method for network bandwidth aware distributed learning |
US10402469B2 (en) * | 2015-10-16 | 2019-09-03 | Google Llc | Systems and methods of distributed optimization |
US10474951B2 (en) * | 2015-10-23 | 2019-11-12 | Nec Corporation | Memory efficient scalable deep learning with model parallelization |
CN105446834B (en) * | 2015-11-30 | 2018-10-19 | 华为技术有限公司 | The generation method and device of virtual machine snapshot |
CN106980900A (en) * | 2016-01-18 | 2017-07-25 | 阿里巴巴集团控股有限公司 | A kind of characteristic processing method and equipment |
US10922620B2 (en) * | 2016-01-26 | 2021-02-16 | Microsoft Technology Licensing, Llc | Machine learning through parallelized stochastic gradient descent |
US10762539B2 (en) * | 2016-01-27 | 2020-09-01 | Amobee, Inc. | Resource estimation for queries in large-scale distributed database system |
CN107025205B (en) * | 2016-01-30 | 2021-06-22 | 华为技术有限公司 | Method and equipment for training model in distributed system |
US10482392B2 (en) * | 2016-02-12 | 2019-11-19 | Google Llc | Robust large-scale machine learning in the cloud |
WO2017156791A1 (en) * | 2016-03-18 | 2017-09-21 | Microsoft Technology Licensing, Llc | Method and apparatus for training a learning machine |
CN107229518B (en) * | 2016-03-26 | 2020-06-30 | 阿里巴巴集团控股有限公司 | Distributed cluster training method and device |
CN107292326A (en) * | 2016-03-31 | 2017-10-24 | 阿里巴巴集团控股有限公司 | The training method and device of a kind of model |
US10338931B2 (en) * | 2016-04-29 | 2019-07-02 | International Business Machines Corporation | Approximate synchronization for parallel deep learning |
WO2018005489A1 (en) * | 2016-06-27 | 2018-01-04 | Purepredictive, Inc. | Data quality detection and compensation for machine learning |
CN109716346A (en) * | 2016-07-18 | 2019-05-03 | 河谷生物组学有限责任公司 | Distributed machines learning system, device and method |
US20180039905A1 (en) * | 2016-08-03 | 2018-02-08 | International Business Machines Corporation | Large scale distributed training of data analytics models |
CN107784364B (en) * | 2016-08-25 | 2021-06-15 | 微软技术许可有限责任公司 | Asynchronous training of machine learning models |
US10643150B2 (en) * | 2016-10-11 | 2020-05-05 | International Business Machines Corporation | Parameter version vectors used for deterministic replay of distributed execution of workload computations |
WO2018101862A1 (en) * | 2016-11-29 | 2018-06-07 | Telefonaktiebolaget Lm Ericsson (Publ) | A master node, a local node and respective methods performed thereby for predicting one or more metrics associated with a communication network |
US20180189609A1 (en) * | 2017-01-04 | 2018-07-05 | Qualcomm Incorporated | Training data for machine-based object recognition |
CN110168580B (en) * | 2017-01-10 | 2022-10-04 | 华为技术有限公司 | Fault tolerant recovery system and method when training classifier models using distributed systems |
US20180218287A1 (en) * | 2017-02-01 | 2018-08-02 | Facebook, Inc. | Determining performance of a machine-learning model based on aggregation of finer-grain normalized performance metrics |
US11544740B2 (en) * | 2017-02-15 | 2023-01-03 | Yahoo Ad Tech Llc | Method and system for adaptive online updating of ad related models |
EP3443508B1 (en) * | 2017-03-09 | 2023-10-04 | Huawei Technologies Co., Ltd. | Computer system for distributed machine learning |
US10649806B2 (en) * | 2017-04-12 | 2020-05-12 | Petuum, Inc. | Elastic management of machine learning computing |
US10360500B2 (en) * | 2017-04-20 | 2019-07-23 | Sas Institute Inc. | Two-phase distributed neural network training system |
US10614356B2 (en) * | 2017-04-24 | 2020-04-07 | International Business Machines Corporation | Local multicast in single-host multi-GPU machine for distributed deep learning systems |
US10891156B1 (en) * | 2017-04-26 | 2021-01-12 | EMC IP Holding Company LLC | Intelligent data coordination for accelerated computing in cloud environment |
KR102372423B1 (en) * | 2017-05-16 | 2022-03-10 | 한국전자통신연구원 | Apparatus for sharing parameter and method for using the same |
KR102197247B1 (en) * | 2017-06-01 | 2020-12-31 | 한국전자통신연구원 | Parameter server and method for sharing distributed deep learning parameter using the same |
2017
- 2017-08-11 DE DE202017007517.2U patent/DE202017007517U1/en not_active Expired - Lifetime
- 2017-08-11 US US15/675,696 patent/US11132602B1/en active Active
- 2017-08-11 US US15/675,319 patent/US20180046475A1/en not_active Abandoned
- 2017-08-11 CN CN201780048673.6A patent/CN109643347A/en active Pending
- 2017-08-11 WO PCT/US2017/046569 patent/WO2018031921A1/en unknown
- 2017-08-11 EP EP17755014.2A patent/EP3497625A1/en not_active Ceased
- 2017-08-11 EP EP17755397.1A patent/EP3497609B1/en active Active
- 2017-08-11 US US15/675,671 patent/US10649794B2/en active Active
- 2017-08-11 WO PCT/US2017/046641 patent/WO2018031958A1/en unknown

2020
- 2020-04-29 US US15/929,375 patent/US11416268B2/en active Active

2022
- 2022-08-11 US US17/819,134 patent/US20220382564A1/en not_active Abandoned
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11641343B2 (en) | 2015-05-27 | 2023-05-02 | Ping Identity Corporation | Methods and systems for API proxy based adaptive security |
US11582199B2 (en) | 2015-05-27 | 2023-02-14 | Ping Identity Corporation | Scalable proxy clusters |
US11411923B2 (en) * | 2016-10-26 | 2022-08-09 | Ping Identity Corporation | Methods and systems for deep learning based API traffic security |
US11924170B2 (en) | 2016-10-26 | 2024-03-05 | Ping Identity Corporation | Methods and systems for API deception environment and API traffic control and security |
US11855968B2 (en) | 2016-10-26 | 2023-12-26 | Ping Identity Corporation | Methods and systems for deep learning based API traffic security |
US10530795B2 (en) * | 2017-03-17 | 2020-01-07 | Target Brands, Inc. | Word embeddings for anomaly classification from event logs |
US20180270261A1 (en) * | 2017-03-17 | 2018-09-20 | Target Brands, Inc. | Word embeddings for anomaly classification from event logs |
US11783033B2 (en) * | 2017-10-13 | 2023-10-10 | Ping Identity Corporation | Methods and apparatus for analyzing sequences of application programming interface traffic to identify potential malicious actions |
US20220292190A1 (en) * | 2017-10-13 | 2022-09-15 | Ping Identity Corporation | Methods and apparatus for analyzing sequences of application programming interface traffic to identify potential malicious actions |
US11537931B2 (en) * | 2017-11-29 | 2022-12-27 | Google Llc | On-device machine learning platform to enable sharing of machine-learned models between applications |
US11640420B2 (en) | 2017-12-31 | 2023-05-02 | Zignal Labs, Inc. | System and method for automatic summarization of content with event based analysis |
US11755915B2 (en) | 2018-06-13 | 2023-09-12 | Zignal Labs, Inc. | System and method for quality assurance of media analysis |
US11356476B2 (en) * | 2018-06-26 | 2022-06-07 | Zignal Labs, Inc. | System and method for social network analysis |
US11430312B2 (en) * | 2018-07-05 | 2022-08-30 | Movidius Limited | Video surveillance with neural networks |
US20230056418A1 (en) * | 2018-07-05 | 2023-02-23 | Movidius Limited | Video surveillance with neural networks |
US11928112B2 (en) * | 2018-10-03 | 2024-03-12 | The Toronto-Dominion Bank | Systems and methods for intelligent responses to queries based on trained processes |
US20210240778A1 (en) * | 2018-10-03 | 2021-08-05 | The Toronto-Dominion Bank | Systems and methods for intelligent responses to queries based on trained processes |
US11181894B2 (en) * | 2018-10-15 | 2021-11-23 | Uptake Technologies, Inc. | Computer system and method of defining a set of anomaly thresholds for an anomaly detection model |
US11496475B2 (en) | 2019-01-04 | 2022-11-08 | Ping Identity Corporation | Methods and systems for data traffic based adaptive security |
US11843605B2 (en) | 2019-01-04 | 2023-12-12 | Ping Identity Corporation | Methods and systems for data traffic based adaptive security |
EP3918500B1 (en) * | 2019-03-05 | 2024-04-24 | Siemens Industry Software Inc. | Machine learning-based anomaly detections for embedded software applications |
US11170064B2 (en) | 2019-03-05 | 2021-11-09 | Corinne David | Method and system to filter out unwanted content from incoming social media data |
US11126678B2 (en) | 2019-03-05 | 2021-09-21 | Corinne Chantal David | Method and system to filter out harassment from incoming social media data |
US11128649B1 (en) | 2019-03-06 | 2021-09-21 | Trend Micro Incorporated | Systems and methods for detecting and responding to anomalous messaging and compromised accounts |
US20200366459A1 (en) * | 2019-05-17 | 2020-11-19 | International Business Machines Corporation | Searching Over Encrypted Model and Encrypted Data Using Secure Single-and Multi-Party Learning Based on Encrypted Data |
US11677703B2 (en) | 2019-08-15 | 2023-06-13 | Rovi Guides, Inc. | Systems and methods for automatically identifying spam in social media comments based on context |
US11258741B2 (en) * | 2019-08-15 | 2022-02-22 | Rovi Guides, Inc. | Systems and methods for automatically identifying spam in social media comments |
US20220210172A1 (en) * | 2020-12-29 | 2022-06-30 | Capital One Services, Llc | Detection of anomalies associated with fraudulent access to a service platform |
US11516240B2 (en) * | 2020-12-29 | 2022-11-29 | Capital One Services, Llc | Detection of anomalies associated with fraudulent access to a service platform |
US11340968B1 (en) * | 2021-04-21 | 2022-05-24 | EMC IP Holding Company LLC | Executing repetitive custom workflows through API recording and playback |
US20230161691A1 (en) * | 2021-11-19 | 2023-05-25 | Bank Of America Corporation | Electronic system for machine learning based anomaly detection in program code |
Also Published As
Publication number | Publication date |
---|---|
EP3497625A1 (en) | 2019-06-19 |
US11416268B2 (en) | 2022-08-16 |
WO2018031958A1 (en) | 2018-02-15 |
US10649794B2 (en) | 2020-05-12 |
US20200257543A1 (en) | 2020-08-13 |
WO2018031921A1 (en) | 2018-02-15 |
EP3497609B1 (en) | 2020-10-07 |
DE202017007517U1 (en) | 2022-05-03 |
US20180046918A1 (en) | 2018-02-15 |
US20220382564A1 (en) | 2022-12-01 |
CN109643347A (en) | 2019-04-16 |
EP3497609A1 (en) | 2019-06-19 |
US11132602B1 (en) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3497609B1 (en) | Detecting scripted or otherwise anomalous interactions with social media platform | |
US10699010B2 (en) | Methods and apparatus for analyzing sequences of application programming interface traffic to identify potential malicious actions | |
CN106992994B (en) | Automatic monitoring method and system for cloud service | |
US10740411B2 (en) | Determining repeat website users via browser uniqueness tracking | |
US9350749B2 (en) | Application attack monitoring | |
US8756698B2 (en) | Method and system for managing computer system vulnerabilities | |
US11533330B2 (en) | Determining risk metrics for access requests in network environments using multivariate modeling | |
KR20190109427A (en) | Ongoing Learning for Intrusion Detection | |
US10958657B2 (en) | Utilizing transport layer security (TLS) fingerprints to determine agents and operating systems | |
US10944791B2 (en) | Increasing security of network resources utilizing virtual honeypots | |
US11563727B2 (en) | Multi-factor authentication for non-internet applications | |
US20230199025A1 (en) | Account classification using a trained model and sign-in data | |
US11074337B2 (en) | Increasing security of a password-protected resource based on publicly available data | |
US20230224325A1 (en) | Distributed endpoint security architecture enabled by artificial intelligence | |
CN108804501B (en) | Method and device for detecting effective information | |
US9904661B2 (en) | Real-time agreement analysis | |
CN114301713A (en) | Risk access detection model training method, risk access detection method and risk access detection device | |
US20240111890A1 (en) | Systems and methods for sanitizing sensitive data and preventing data leakage from mobile devices | |
US20240111892A1 (en) | Systems and methods for facilitating on-demand artificial intelligence models for sanitizing sensitive data | |
US20230283634A1 (en) | Determining intent of phishers through active engagement | |
US20230101198A1 (en) | Computer-implemented systems and methods for application identification and authentication | |
US20230094066A1 (en) | Computer-implemented systems and methods for application identification and authentication | |
CN117193994A (en) | Method, device, electronic equipment and medium for compliance detection | |
Annamalai et al. | FP-Fed: Privacy-Preserving Federated Detection of Browser Fingerprinting | |
Cui et al. | Potentially Unwanted App Detection for Blockchain-Based Android App Marketplace | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: TWITTER, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WEI, WENDY RAN;SHEN, SIWEI;SIGNING DATES FROM 20170821 TO 20170828;REEL/FRAME:043492/0878 |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | ADVISORY ACTION MAILED |
| STCV | Information on status: appeal procedure | NOTICE OF APPEAL FILED |
| STCV | Information on status: appeal procedure | APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STCV | Information on status: appeal procedure | EXAMINER'S ANSWER TO APPEAL BRIEF MAILED |
| STCV | Information on status: appeal procedure | APPEAL READY FOR REVIEW |
| STCV | Information on status: appeal procedure | ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS |
| STCV | Information on status: appeal procedure | BOARD OF APPEALS DECISION RENDERED |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| AS | Assignment | Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND. Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:062079/0677 (effective date: 20221027); REEL/FRAME:061804/0086 (effective date: 20221027); REEL/FRAME:061804/0001 (effective date: 20221027) |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STCB | Information on status: application discontinuation | ABANDONED -- FAILURE TO PAY ISSUE FEE |