US20210326436A1 - Malicious behavior detection and mitigation in a document execution environment - Google Patents
- Publication number
- US20210326436A1 (application US16/854,802)
- Authority
- US
- United States
- Prior art keywords
- document
- malicious behavior
- activity
- execution
- execution environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/567—Computer malware detection or handling, e.g. anti-virus arrangements using dedicated hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/034—Test or assess a computer or a system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2111—Location-sensitive, e.g. geographical location, GPS
Definitions
- the disclosure generally relates to the field of document execution, and specifically to detecting and preventing malicious behavior in a document execution environment.
- An entity may provide or create a document for execution within an online document execution environment (or simply “online document system”). Since online environments are subject to malicious threats and activity, the document within the document execution environment may be at risk for corruption and/or exploitation. Such threats may be associated with compromised accounts, may be geographically centered, and may be associated with particular activities within the document execution environment. Thus, there is a need for a system that identifies and acts in response to such malicious activity.
- a method for detecting and preventing malicious behavior in a document execution environment accesses a training set of information that represents incidents of malicious behavior in the document execution environment as well as remedial actions taken in response to the incidents that resulted in some level of mitigation of the malicious behavior.
- the method trains a machine learned model based on the training set of information such that the trained machine learned model is configured to detect malicious behavior based on activity that occurs within the document execution environment and recommend remedial actions in response.
- the method receives a document for execution, detects activity associated with the document, and applies the trained machine learned model to the detected activity.
- the trained machine learned model determines whether the detected activity represents malicious behavior and identifies remedial actions that can mitigate the malicious behavior.
- the method subsequently provides, to a device of a user, a recommendation to perform the identified remedial actions.
- a system and/or a non-transitory computer readable storage medium performs the steps described above.
- FIG. 1 illustrates an example document execution environment in which malicious behavior can be detected and prevented, in accordance with one or more embodiments.
- FIG. 2 illustrates training and applying a machine learning model configured to detect and prevent malicious behavior in the document execution environment, in accordance with one or more embodiments.
- FIG. 3 illustrates example detected activity that may be representative of malicious behavior in the document execution environment, in accordance with one or more embodiments.
- FIG. 4 illustrates an example process for detecting and preventing malicious behavior in a document execution environment, in accordance with one or more embodiments.
- a document execution environment enables a party (e.g., individuals, organizations, etc.) to create and send documents to one or more receiving parties for negotiation, collaborative editing, and electronic execution (e.g., signature).
- a receiving party may review content and/or terms presented in a document, and in response to agreeing to the content and/or terms, can execute the document.
- the receiving party provides the sending party (e.g., the party that created and sent the document for execution) with feedback on the content and/or terms in the document received for execution.
- the receiving party completes and/or contributes to a portion of the content and/or terms in the document.
- the sending party may access and/or share data associated with the document within the document execution environment, such as a time and location at which the receiving party accesses, views, and/or executes the document.
- the document execution environment enables payments between the receiving and sending parties.
- DocuSign, Inc.'s e-Signature product is an example functionality that is implemented within a document execution environment. A document execution environment and example functionality are further described in U.S. Pat. No. 9,634,875, issued Apr. 25, 2017, and U.S. Pat. No. 10,430,570, issued Oct. 1, 2019, which are hereby incorporated by reference in their entireties.
- While the document execution environment described herein implements security measures to help ensure the security and confidentiality of documents sent to receiving parties for execution, threats to online environments generally continue to increase. Thus, documents created, collaboratively modified, and sent for execution are also at risk of malicious behavior and corruption.
- the methods and systems described herein help ensure timely detection of such malicious activity associated with documents within a document execution environment, and help provide recommendations for remedial actions that, when performed, help mitigate any detected malicious activity.
- FIG. 1 illustrates an example document execution environment 100 in which malicious behavior can be detected and prevented, in accordance with one or more embodiments.
- the document execution environment 100 enables a sending party to create and send documents for execution to one or more receiving parties. The receiving parties may review, modify, and execute the documents.
- the document execution environment 100 uses a machine learned model to detect activity associated with a document sent for execution that may be indicative of malicious behavior.
- the document execution environment includes a document for execution 110 , a client device 120 , a set of training documents 130 , and a malicious behavior detection engine 140 , each communicatively interconnected via a network 180 .
- the document execution environment includes components other than those described herein. For the purposes of concision, the web servers, data centers, and other components associated with an online document execution environment are not shown in the embodiment of FIG. 1 .
- the document for execution 110 is analyzed for associated activity that is indicative of malicious behavior.
- documents for execution include but are not limited to: a sales contract, a permission slip, a rental and/or lease agreement, a liability waiver, a financial document, an investment term sheet, a purchase order, an employment agreement, a mortgage application, and so on.
- the document execution environment 100 receives the document for execution 110 from the sending party via the client device 120 (or receives instructions to create the document within the document execution environment 100 from the client device 120 ) and provides it to the receiving party (not illustrated in the embodiment of FIG. 1 ), for instance for signing.
- the client device 120 provides the document for execution 110 to the document execution environment 100 .
- the client device 120 is a computing device capable of transmitting and/or receiving data over the network 180 .
- the client device 120 may be a conventional computer (e.g., a laptop or a desktop computer), a cell phone, or a similar device.
- the client device 120 enables a user (e.g., of the sending party) to create and/or provide the document for execution 110 to the document execution environment 100 .
- after the document execution environment 100 determines that some activity associated with the document for execution 110 is malicious, the client device 120 notifies the user of the malicious behavior and/or provides, to the user, recommended remedial actions.
- the client device 120 notifies the user of recommended remedial actions based on user input specifying types of malicious behavior and/or recommended actions that warrant notifications.
- the client device 120 includes a user interface that displays the detected malicious activity and recommended remedial actions.
- Incidents and/or activity associated with the training documents 130 serve as a training set of information for training the machine learned model to detect malicious behavior and/or suggest recommended remedial actions.
- one or more users responsible for creating and/or managing the training documents 130 manually curate and/or provide the malicious incidents and activity to the document execution environment 100 .
- Remedial actions, associated with the training documents 130 , taken in response to each of the malicious incidents and/or activity are also added to the training set of information.
- the training set of information can include historical documents associated with the document execution environment 100 , historical activity and/or incidents that have been identified as malicious, historical remedial actions taken by other users in response to the malicious activity and/or incidents, and measures of mitigation representative of the effectiveness of the historical remedial actions taken.
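For illustration only (the class and field names below are invented for this sketch, not taken from the disclosure), one entry in such a training set could be modeled as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingRecord:
    """One hypothetical training-set entry: a historical incident, the
    remedial action taken, and a measure of how well that action worked."""
    document_id: str
    activity: str                   # e.g. "geo_anomaly", "login_failures"
    is_malicious: bool              # label assigned by a curator or algorithm
    remedial_action: Optional[str] = None
    mitigation_score: float = 0.0   # 0.0 (ineffective) .. 1.0 (fully mitigated)

record = TrainingRecord(
    document_id="doc-42",
    activity="geo_anomaly",
    is_malicious=True,
    remedial_action="suspend_account",
    mitigation_score=0.9,
)
```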
- the malicious behavior detection engine 140 detects malicious behavior within the document execution environment 100 associated with the document for execution 110 using a machine learned model 160 and in response, recommends remedial actions to a user of the client device 120 .
- the malicious behavior detection engine 140 includes a server 150 , which hosts and/or executes a machine learned model 160 and a database 170 .
- the server 150 stores and receives information from the document execution environment 100 .
- the server 150 may be located on a local or remote physical computer and/or may be located within a cloud-based computing system.
- the server 150 receives, from the client device 120 , the document for execution 110 and/or any associated activity or incidents that have occurred within the document execution environment 100 .
- the activity associated with the document for execution 110 may occur in devices other than the client device 120 .
- the activity may be performed by client devices of the receiving party of the document for execution 110 or by a third-party, and therefore may not occur on the client device 120 .
- the document for execution 110 is provided to and stored by a system other than the server 150 . In these embodiments, the malicious behavior detection engine 140 can implement one or more monitoring routines configured to monitor activity associated with the document and/or with the system that stores the document.
- the machine learned model 160 is configured to detect malicious behavior based on activity associated with the document for execution 110 within the document execution environment 100 . In some embodiments, the machine learned model 160 is further configured to identify remedial actions that may mitigate the detected malicious behavior.
- the machine learned model 160 is trained on a training set of information. The training set of information includes incidents representative of malicious behavior in the document execution environment 100 , remedial actions taken in response to the incidents, and resulting levels of mitigation of the malicious behavior. After being trained, the machine learned model 160 is applied to the detected activity associated with the document for execution 110 . The machine learned model 160 can then output information indicating whether the activity is likely malicious behavior or not. In some embodiments, in response to identifying malicious behavior, the machine learned model 160 also outputs recommendations of remedial actions that, when performed, may help mitigate or end the malicious behavior. The training and application of the machine learned model 160 is further discussed with respect to FIG. 2 .
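The disclosure does not specify a model architecture, so as a toy stand-in only, the train-then-apply flow can be sketched with a frequency-based classifier that also remembers the best-mitigating remedial action per activity type (all names and thresholds here are invented):

```python
from collections import defaultdict

class ToyIncidentModel:
    """Stand-in for the machine learned model 160: estimates how often an
    activity type was labeled malicious in the training set, and remembers
    the best-mitigating remedial action observed for it."""

    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # activity -> [malicious, total]
        self.best_action = {}                      # activity -> (score, action)

    def train(self, records):
        # records: (activity, is_malicious, remedial_action, mitigation_score)
        for activity, is_malicious, action, score in records:
            mal, total = self.counts[activity]
            self.counts[activity] = [mal + int(is_malicious), total + 1]
            if is_malicious and action is not None:
                if score > self.best_action.get(activity, (-1.0, None))[0]:
                    self.best_action[activity] = (score, action)

    def predict(self, activity, threshold=0.5):
        # Returns (likelihood of malicious behavior, recommended action or None).
        mal, total = self.counts.get(activity, (0, 0))
        likelihood = mal / total if total else 0.0
        action = self.best_action.get(activity, (None, None))[1]
        return likelihood, (action if likelihood > threshold else None)

model = ToyIncidentModel()
model.train([
    ("geo_anomaly", True, "suspend_account", 0.9),
    ("geo_anomaly", True, "notify_admin", 0.4),
    ("geo_anomaly", False, None, 0.0),
    ("fast_execution", False, None, 0.0),
])
likelihood, action = model.predict("geo_anomaly")  # 2/3, "suspend_account"
```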
- the database 170 stores information relevant to the malicious behavior detection engine 140 .
- the stored data includes, but is not limited to, the document for execution 110 , activity associated with the document for execution 110 , the training set of information, the training documents 130 , and so on.
- the database 170 stores information representative of detected activity determined to be malicious and detected activity determined not to be malicious, representative of remedial actions taken in response to activity determined to be malicious, and representative of the mitigation of such remedial actions.
- the malicious behavior detection engine 140 can add such information to the training set of information, and can retrain the machine learned model 160 based on this information.
- the network 180 transmits data within the document execution environment 100 .
- the network 180 may be a local area and/or wide area network using wireless and/or wired communication systems, such as the Internet.
- the network 180 transmits data over a single connection (e.g., a data component of a cellular signal, or WiFi, among others), and/or over multiple connections.
- the network 180 may include encryption capabilities to ensure the security of customer data.
- encryption technologies may include secure sockets layers (SSL), transport layer security (TLS), virtual private networks (VPNs), and Internet Protocol security (IPsec), among others.
- FIG. 2 illustrates training and applying the machine learning model 160 configured to detect and prevent malicious behavior in the document execution environment 100 , in accordance with one or more embodiments.
- the machine learning model 160 takes, as input, information representative of activity within the document execution environment 100 associated with the document for execution 110 to determine whether the activity is indicative of malicious behavior. Based on the information representative of the activity associated with the document, the machine learning model 160 outputs a likelihood that the activity is indicative of malicious behavior, and (if the likelihood exceeds a threshold) the machine learning model 160 provides recommendations on remedial actions that may mitigate the malicious behavior.
- the document information 210 includes information characterizing each of the training documents 130 .
- the document information 210 includes a type of the document, size of the document, languages within the document, region in which the document originated, characteristics associated with the sending and receiving party of the document (e.g., size, industry, location of headquarters, revenue, corporate structure), types or categories of information or passages within the document, and the like.
- the incidents 220 include activity associated with each of the training documents 130 that has occurred within the document execution environment 100 .
- the incidents 220 may correspond to content or actions taken with regards to content of the training documents 130 , such as the modification, addition, and/or removal of any terms and/or conditions in the document; dates recited in the document; parties designated for execution of the document; discrepancies between the document and other similar documents; and the like.
- the incidents 220 can be classified as malicious or non-malicious behavior based on the document information 210 , for instance by users of the document execution environment 100 , by network administrators, by security personnel, automatically (for instance, by algorithm), or by any other entity associated with a document or the document execution environment.
- the classification of normal (e.g., non-malicious) behavior may depend on the document information 210 . For example, the time between the access and execution of a 10-page document may be longer than the time between the access and execution of a 1-page document. Accordingly, users may classify an above-threshold amount of time between the access and execution of the 1-page document as representative of malicious behavior.
- a licensing agreement sent to a 500+ person company may require more signatories than the same licensing agreement sent to a company with fewer than 50 employees.
- users may designate access to the document in a geographic region where neither the sending nor the receiving party have any employees as malicious behavior. Accordingly, the document information 210 can be leveraged to determine whether detected activity is normal or malicious behavior for the document and the parties involved.
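The document-information-dependent checks above can be sketched as follows; the per-page threshold and the function name are inventions of this sketch, not values from the disclosure:

```python
def is_suspicious(doc_pages, seconds_to_execute, access_region, party_regions,
                  allowed_seconds_per_page=3600):
    """Flag an above-threshold time between access and execution (with the
    threshold scaled by document length), or access from a geographic region
    where neither party has employees."""
    too_slow = seconds_to_execute > doc_pages * allowed_seconds_per_page
    unknown_region = access_region not in party_regions
    return too_slow or unknown_region

# Two hours to execute a 1-page document exceeds its 1-hour threshold:
slow_one_pager = is_suspicious(1, 7200, "US", {"US"})
# The same two hours are unremarkable for a 10-page document:
normal_ten_pager = is_suspicious(10, 7200, "US", {"US"})
```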
- the remedial actions 230 include actions taken in response to malicious behavior determined from the incidents 220 .
- the remedial actions 230 include, for the training documents 130 associated with malicious activity, restoring a deleted document within the document execution environment 100 , providing a document to additional individuals from the receiving party for additional review and/or signatures, limiting access to the document, deleting and/or suspending an account of a suspicious user associated with the document, notifying an entity (such as the receiving party, the sending party, an account manager, or a network administrator) of the malicious behavior, and the like.
- the measure of mitigation is determined based on the document information 210 . For example, suspending an account of a suspicious user may be most effective for a document found to have been accessed in a region where neither the sending party nor the receiving party has employees. For a document with an abnormal review time before execution, sending the document for additional review may be sufficient.
- the measures of mitigation are determined based on feedback from a sending and/or receiving party of the document, from an account or network manager, automatically by algorithm, based on a change in activity corresponding to the malicious behavior, or based on any other suitable criteria.
- the measures of mitigation can be represented numerically, for instance, as a likelihood that the remedial action ended the malicious behavior, categorically (e.g., “successful”, “partially successful”, “not successful”), or in any other suitable way.
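A small helper can normalize both representations to a single numeric scale; the mapping values below are arbitrary choices for this sketch:

```python
MITIGATION_SCORES = {
    "successful": 1.0,
    "partially successful": 0.5,
    "not successful": 0.0,
}

def mitigation_score(measure):
    """Convert a measure of mitigation, either a categorical label or a
    numeric likelihood, into a number clamped to [0, 1]."""
    if isinstance(measure, str):
        return MITIGATION_SCORES[measure]
    return max(0.0, min(1.0, float(measure)))
```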
- the remedial actions 230 include preventative measures, specific to the document information 210 , to prevent malicious behavior.
- preventative measures may include identifying times at which activities within the document execution environment 100 may be indicative of malicious behavior, but in reality, are not. These include, for example, high activity periods (e.g., a time of year during which the execution of documents occurs rapidly) and management periods (e.g., a time at which an administrator of the receiving or sending party is likely to delete or modify a large number of documents within the document execution environment 100 ).
- the preventative measures include monitoring and flagging activity within a geographic region identified as a location associated with prior malicious activity.
- the preventative measures include performing the remedial actions 230 prior to detecting the incidents 220 that are representative of malicious behavior. For example, administrators of the sending party of the document may require additional review prior to an individual from the receiving party signing the document.
- the training set 200 may be separated into a positive training set and a negative training set.
- the positive training set includes the document information 210 associated with the incidents 220 that are designated (for instance, by the users of the document execution environment 100 ) as malicious behavior, as well as the associated remedial actions 230 taken in response to the malicious incidents 220 .
- the negative training set includes the document information 210 associated with the incidents 220 that are designated as non-malicious. In some embodiments, the negative training set includes the preventative measures from the remedial actions 230 .
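The split described above can be sketched as follows (the tuple layout for an incident is an assumption of this sketch):

```python
def split_training_set(incidents):
    """Separate labeled incidents into the positive (malicious) and negative
    (non-malicious) training sets. Each incident is a
    (document_info, incident, is_malicious, remedial_action) tuple."""
    positive, negative = [], []
    for doc_info, incident, is_malicious, action in incidents:
        if is_malicious:
            # Positive examples keep the remedial action taken in response.
            positive.append((doc_info, incident, action))
        else:
            negative.append((doc_info, incident))
    return positive, negative

positive, negative = split_training_set([
    ({"pages": 1}, "geo_anomaly", True, "suspend_account"),
    ({"pages": 10}, "normal_signing", False, None),
])
```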
- the malicious behavior detection engine 140 uses supervised or unsupervised machine learning to train the machine learned model 160 using the positive and negative training sets of the training set 200 .
- Different machine learning techniques may be used in various embodiments, such as linear support vector machines (linear SVM), boosting for other algorithms (e.g., AdaBoost), neural networks, logistic regression, naïve Bayes, memory-based learning, random forests, bagged trees, decision trees, boosted trees, or boosted stumps.
- the training of the machine learned model 160 helps the machine learned model 160 identify relationships between the document information 210 , the incidents 220 , and the remedial actions 230 .
- the trained machine learned model 160 notifies the user of the client device 120 of the malicious behavior and/or the recommended actions 250 .
- the user provides feedback on whether the behavior determined by the machine learned model 160 to be malicious is accurate or not, which is subsequently added to the training set 200 for re-training of the machine learned model 160 .
- the user can manually re-define types of malicious activity 240 , threshold amounts of activity that qualify as malicious, types of remedial actions that can be recommended, and the like, and can provide these re-definitions to the malicious behavior detection engine 140 for re-training the machine learned model.
- the malicious behavior detection engine 140 may present the notifications to the user via the display of the client device 120 .
- the display of the client device 120 includes a user interface including interface elements for each of the recommended actions 250 ; when selected by the user, each interface element causes the corresponding recommended action 250 to be automatically performed.
- each recommended action 250 is displayed with a likelihood that the action will resolve or address the identified malicious behavior. The likelihoods are determined based on measures of mitigation associated with similar actions in the training set 200 .
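One plausible way to derive such a likelihood, sketched here with an invented data layout, is to average the measures of mitigation recorded for the same action in the training set:

```python
def action_likelihood(action, history):
    """Estimate the likelihood that a recommended action resolves the
    malicious behavior as the mean mitigation measure observed for that
    same action historically. `history` is a list of
    (action, mitigation_score) pairs."""
    scores = [score for past_action, score in history if past_action == action]
    return sum(scores) / len(scores) if scores else 0.0

history = [("suspend_account", 0.9), ("suspend_account", 0.7), ("notify_admin", 0.4)]
likelihood = action_likelihood("suspend_account", history)  # mean of 0.9 and 0.7
```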
- the machine learned model 160 classifies more than three login failures within a predetermined time interval as malicious, and notifies the user of the detected malicious activity. Likewise, the machine learned model classifies an above-threshold number of active sessions within a particular geographic region as malicious, but does not notify the user of the detected malicious activity originating from China, Sudan, and Russia (for instance, because user-specified notification preferences do not warrant a notification). Similarly, recommended actions to mitigate detected malicious behavior (e.g., the recommended actions 250 ) may be presented via the same interface or a different interface of the document execution environment 100 .
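The login-failure rule above can be sketched as a sliding-window count; the 5-minute window is an invented default, since the disclosure says only "a predetermined time interval":

```python
def excessive_login_failures(failure_times, window_seconds=300, max_failures=3):
    """Return True if more than `max_failures` login failures occur within
    any `window_seconds`-long sliding window. Timestamps are in seconds."""
    times = sorted(failure_times)
    start = 0
    for end in range(len(times)):
        # Shrink the window until it spans at most `window_seconds`.
        while times[end] - times[start] > window_seconds:
            start += 1
        if end - start + 1 > max_failures:
            return True
    return False
```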
- FIG. 4 illustrates an example process for detecting and preventing malicious behavior in a document execution environment, in accordance with one or more embodiments.
- a malicious behavior detection engine of the document execution environment accesses 410 a training set including (for example) information representative of malicious and non-malicious activity within the document execution environment associated with one or more documents, information representative of a set of training documents, and remedial and/or preventative actions taken in response to detected malicious behavior.
- the training set also includes a measure of mitigation achieved by performing the remedial actions and/or a measure of prevention achieved by performing the preventative actions.
- the malicious behavior detection engine trains 420 the machine learned model based on the accessed training set.
- the malicious behavior detection engine receives 430 a document for execution.
- the document can be a contract or employment agreement uploaded to the document execution environment by a client device.
- the document can be created and collaboratively modified within the document execution environment by a number of parties.
- the malicious behavior detection engine detects 440 activity within the document execution environment associated with the document.
- the document execution environment applies 450 the trained machine learned model to characteristics of the detected activity to determine whether the activity is likely malicious behavior.
- if the activity is determined to be malicious, the malicious behavior detection engine recommends 460 the remedial actions to the user, who may perform the remedial actions.
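The numbered steps above can be tied together in a short pipeline sketch; the callables and their signatures are assumptions of this sketch, not interfaces from the disclosure:

```python
def detection_pipeline(training_set, document_activity, train, predict):
    """Sketch of the FIG. 4 flow: access the training set (410) and train a
    model on it (420), then for each activity detected on a received
    document (430/440), apply the model (450) and collect recommended
    remedial actions (460)."""
    model = train(training_set)
    recommendations = []
    for activity in document_activity:
        is_malicious, action = predict(model, activity)
        if is_malicious and action:
            recommendations.append((activity, action))
    return recommendations

# Trivial stand-ins: the "model" is the set of known-malicious activity types.
train = lambda labeled_malicious: set(labeled_malicious)
predict = lambda model, activity: (activity in model, "notify_admin")

recommendations = detection_pipeline(
    {"geo_anomaly"}, ["geo_anomaly", "normal_signing"], train, predict)
```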
- a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
- Embodiments may also relate to an apparatus for performing the operations herein.
- This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
- a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
- any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Abstract
- A method for detecting and preventing malicious behavior in a document execution environment is disclosed. The method accesses a training set of information that represents incidents of malicious behavior in the document execution environment as well as remedial actions taken in response to the incidents that resulted in some level of mitigation of the malicious behavior. The method trains a machine learned model based on the training set of information such that the trained machine learned model is configured to detect malicious behavior based on activity that occurs within the document execution environment and recommend remedial actions in response. The method receives a document for execution, detects activity associated with the document, and applies the trained machine learned model to the detected activity. The trained machine learned model determines whether the detected activity represents malicious behavior and identifies remedial actions that can mitigate the malicious behavior. The method subsequently provides, to a device of a user, a recommendation to perform the identified remedial actions. In some embodiments, a system and/or a non-transitory computer readable storage medium performs the steps described above.
Description
- The disclosed embodiments have other advantages and features which will be more readily apparent from the detailed description, the appended claims, and the accompanying figures (or drawings). A brief introduction of the figures is below.
- FIG. 1 illustrates an example document execution environment in which malicious behavior can be detected and prevented, in accordance with one or more embodiments.
- FIG. 2 illustrates training and applying a machine learning model configured to detect and prevent malicious behavior in the document execution environment, in accordance with one or more embodiments.
- FIG. 3 illustrates example detected activity that may be representative of malicious behavior in the document execution environment, in accordance with one or more embodiments.
- FIG. 4 illustrates an example process for detecting and preventing malicious behavior in a document execution environment, in accordance with one or more embodiments.
- The Figures (FIGS.) and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
- Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
- The methods described herein use machine learning to detect and prevent malicious behavior in a document execution environment. A document execution environment enables a party (e.g., individuals, organizations, etc.) to create and send documents to one or more receiving parties for negotiation, collaborative editing, and electronic execution (e.g., signature). Within the document execution environment, a receiving party may review content and/or terms presented in a document, and in response to agreeing to the content and/or terms, can execute the document. In some embodiments, the receiving party provides the sending party (e.g., the party that created and sent the document for execution) with feedback on the content and/or terms in the document received for execution. In some embodiments, the receiving party completes and/or contributes to a portion of the content and/or terms in the document. Additionally, the sending party may access and/or share data associated with the document within the document execution environment, such as a time and location at which the receiving party accesses, views, and/or executes the document. In some embodiments, the document execution environment enables payments between the receiving and sending parties. DocuSign, Inc.'s e-Signature product is an example of functionality implemented within a document execution environment. A document execution environment and example functionality are further described in U.S. Pat. No. 9,634,875, issued Apr. 25, 2017, and U.S. Pat. No. 10,430,570, issued Oct. 1, 2019, which are hereby incorporated by reference in their entireties.
- While the document execution environment described herein implements security measures to help ensure the security and confidentiality of documents sent to receiving parties for execution, threats to online environments in general are increasingly common. Thus, documents created, collaboratively modified, and sent for execution are also at risk of malicious behavior and corruption. The methods and systems described herein help ensure timely detection of such malicious activity associated with documents within a document execution environment, and help provide recommendations for remedial actions that, when performed, help mitigate any detected malicious activity.
-
FIG. 1 illustrates an example document execution environment 100 in which malicious behavior can be detected and prevented, in accordance with one or more embodiments. As described above, the document execution environment 100 enables a sending party to create and send documents for execution to one or more receiving parties. The receiving parties may review, modify, and execute the documents. The document execution environment 100 uses a machine learned model to detect activity associated with a document sent for execution that may be indicative of malicious behavior. As illustrated in FIG. 1, the document execution environment includes a document for execution 110, a client device 120, a set of training documents 130, and a malicious behavior detection engine 140, each communicatively interconnected via a network 180. In some embodiments, the document execution environment includes components other than those described herein. For the purposes of concision, the web servers, data centers, and other components associated with an online document execution environment are not shown in the embodiment of FIG. 1. - The document for
execution 110 is analyzed for associated activity that is indicative of malicious behavior. Examples of documents for execution include but are not limited to: a sales contract, a permission slip, a rental and/or lease agreement, a liability waiver, a financial document, an investment term sheet, a purchase order, an employment agreement, a mortgage application, and so on. The document execution environment 100 receives the document for execution 110 from the sending party via the client device 120 (or receives instructions to create the document within the document execution environment 100 from the client device 120) and provides it to the receiving party (not illustrated in the embodiment of FIG. 1), for instance for signing. - The
client device 120 provides the document for execution 110 to the document execution environment 100. The client device 120 is a computing device capable of transmitting and/or receiving data over the network 180. The client device 120 may be a conventional computer (e.g., a laptop or a desktop computer), a cell phone, or a similar device. The client device 120 enables a user (e.g., of the sending party) to create and/or provide the document for execution 110 to the document execution environment 100. After the document execution environment 100 determines that some activity associated with the document for execution 110 is malicious, the client device 120 notifies the user of the malicious behavior and/or provides, to the user, recommended remedial actions. In some embodiments, the client device 120 notifies the user of recommended remedial actions based on user input specifying types of malicious behavior and/or recommended actions that warrant notifications. In some embodiments, the client device 120 includes a user interface that displays the detected malicious activity and recommended remedial actions. - Incidents and/or activity associated with the
training documents 130 serve as a training set of information for training the machine learned model to detect malicious behavior and/or suggest recommended remedial actions. In some embodiments, one or more users responsible for creating and/or managing the training documents 130 manually curate and/or provide the malicious incidents and activity to the document execution environment 100. Remedial actions, associated with the training documents 130, taken in response to each of the malicious incidents and/or activity are also added to the training set of information. For example, the training set of information can include historical documents associated with the document execution environment 100, historical activity and/or incidents that have been identified as malicious, historical remedial actions taken by other users in response to the malicious activity and/or incidents, and measures of mitigation representative of the effectiveness of the historical remedial actions taken. - The malicious behavior detection engine 140 detects malicious behavior within the
document execution environment 100 associated with the document for execution 110 using a machine learned model 160 and, in response, recommends remedial actions to a user of the client device 120. The malicious behavior detection engine 140 includes a server 150, which hosts and/or executes a machine learned model 160 and a database 170. - The
server 150 stores and receives information from the document execution environment 100. The server 150 may be located on a local or remote physical computer and/or may be located within a cloud-based computing system. The server 150 receives, from the client device 120, the document for execution 110 and/or any associated activity or incidents that have occurred within the document execution environment 100. The activity associated with the document for execution 110 may occur on devices other than the client device 120. As mentioned above, while the user of the client device 120 may have access to the associated activity, the activity may be performed by client devices of the receiving party of the document for execution 110 or by a third party, and therefore may not occur on the client device 120. It should be noted that in some embodiments, the document for execution 110 is provided to and stored by a system other than the server 150; in these embodiments, the malicious behavior detection engine 140 can implement one or more monitoring routines configured to monitor activity associated with the document and/or with the system that stores the document. - The machine learned
model 160 is configured to detect malicious behavior based on activity associated with the document for execution 110 within the document execution environment 100. In some embodiments, the machine learned model 160 is further configured to identify remedial actions that may mitigate the detected malicious behavior. The machine learned model 160 is trained on a training set of information. The training set of information includes incidents representative of malicious behavior in the document execution environment 100, remedial actions taken in response to the incidents, and resulting levels of mitigation of the malicious behavior. After being trained, the machine learned model 160 is applied to the detected activity associated with the document for execution 110. The machine learned model 160 can then output information indicating whether the activity is likely malicious behavior or not. In some embodiments, in response to identifying malicious behavior, the machine learned model 160 also outputs recommendations of remedial actions that, when performed, may help mitigate or end the malicious behavior. The training and application of the machine learned model 160 is further discussed with respect to FIG. 2. - The
database 170 stores information relevant to the malicious behavior detection engine 140. The stored data includes, but is not limited to, the document for execution 110, activity associated with the document for execution 110, the training set of information, the training documents 130, and so on. In some embodiments, the database 170 stores information representative of detected activity determined to be malicious and detected activity determined not to be malicious, representative of remedial actions taken in response to activity determined to be malicious, and representative of the mitigation of such remedial actions. The malicious behavior detection engine 140 can add such information to the training set of information, and can retrain the machine learned model 160 based on this information. - The
network 180 transmits data within the document execution environment 100. The network 180 may be a local area and/or wide area network using wireless and/or wired communication systems, such as the Internet. In some embodiments, the network 180 transmits data over a single connection (e.g., a data component of a cellular signal, or WiFi, among others), and/or over multiple connections. The network 180 may include encryption capabilities to ensure the security of customer data. For example, encryption technologies may include secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), and Internet Protocol security (IPsec), among others. -
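As an aside on the encryption capabilities mentioned above, a minimal sketch of enforcing TLS for connections carrying customer data, using Python's standard library, might look like the following. This is an illustration only; the patent does not specify an implementation.

```python
import ssl

# Build a client-side TLS context that verifies server certificates and
# refuses legacy protocol versions, per the encryption capabilities
# (SSL/TLS) described above.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL/early TLS
context.check_hostname = True                     # verify server identity
context.verify_mode = ssl.CERT_REQUIRED           # require a valid cert

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Such a context would then wrap the sockets over which the document execution environment 100 exchanges documents and activity data.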
FIG. 2 illustrates training and applying the machine learning model 160 configured to detect and prevent malicious behavior in the document execution environment 100, in accordance with one or more embodiments. As described with respect to FIG. 1, the machine learning model 160 takes, as input, information representative of activity within the document execution environment 100 associated with the document for execution 110 to determine whether the activity is indicative of malicious behavior. Based on the information representative of the activity associated with the document, the machine learning model 160 outputs a likelihood that the activity is indicative of malicious behavior, and (if the likelihood exceeds a threshold) the machine learning model 160 provides recommendations on remedial actions that may mitigate the malicious behavior. - The malicious behavior detection engine 140 trains the machine learned
model 160 using a training set of information 200 (e.g., "the training set 200"). The training set 200 includes document information 210 (e.g., information about the training documents 130), information representative of activity within the document execution environment 100 determined to be malicious and activity determined to not be malicious ("incidents 220"), and remedial actions 230 (e.g., actions taken in response to incidents identified as malicious behavior). In some embodiments, the training set 200 additionally includes measures of mitigation resulting from the remedial actions 230 (not illustrated in FIG. 2). The document information 210, the incidents 220, the remedial actions 230, and the measures of mitigation may be provided via client devices to the document execution environment 100. In other embodiments, the document execution environment 100 may automatically collect the document information 210, the incidents 220, the remedial actions 230, and/or the measures of mitigation to add to the training set 200. In other embodiments, a user of the document execution environment 100 may manually input or curate a subset of the document information 210, the incidents 220, the remedial actions 230, and/or the measures of mitigation in the training set 200. It should be noted that the information included in the training set 200 may be representative of historical documents, activity, malicious behavior, remedial actions, and measures of mitigation within the document execution environment 100. - The
document information 210 includes information characterizing each of the training documents 130. For example, for each of the training documents 130, the document information 210 includes a type of the document, a size of the document, languages within the document, a region in which the document originated, characteristics associated with the sending and receiving parties of the document (e.g., size, industry, location of headquarters, revenue, corporate structure), types or categories of information or passages within the document, and the like. - The
incidents 220 include activity associated with each of the training documents 130 that has occurred within the document execution environment 100. The incidents 220 may correspond to content or actions taken with regard to content of the training documents 130, such as the modification, addition, and/or removal of any terms and/or conditions in the document; dates recited in the document; parties designated for execution of the document; discrepancies between the document and other similar documents; and the like. The incidents 220 may also correspond to the management of the training documents 130 within the document execution environment 100. These include, for each of the training documents 130, modifying, adding, and/or removing parties and administrators that can access, view, and/or edit the document and associated documents; modifying permissions associated with the document; downloading, exporting, and/or sending the document to another party; modifying account credentials or security requirements necessary to access and/or edit the document (e.g., requiring two-factor authentication); login attempts, successes, and failures to access, view, and/or modify the document execution environment 100; modifying, removing, and/or adding an email of an administrator and/or party that can access, view, and/or edit the document; modifying, adding, and/or removing recovery instructions and/or notifications in case the document is deleted; modifying integrations with other documents and/or partnering products compatible with the document execution environment 100; modifying, creating, and/or removing a template for the document; a time, geographic location, and/or IP address at which the document is accessed and/or executed; a network and/or device from which the document is accessed and/or executed; a number of devices accessing and/or executing the document concurrently and/or within a threshold of time of one another; an amount and/or time of payment corresponding to access
and/or execution of the document; and the like. - The
incidents 220 can be classified as malicious or non-malicious behavior based on the document information 210, for instance by users of the document execution environment 100, by network administrators, by security personnel, automatically (for instance, by algorithm), or by any entity associated with a document or the document execution environment. The classification of normal (e.g., non-malicious) behavior may depend on the document information 210. For example, the time between the access and execution of a 10-page document is typically longer than the time between the access and execution of a 1-page document. Accordingly, users may classify an above-threshold amount of time between the access and execution of the 1-page document as representative of malicious behavior. In another example, a licensing agreement sent to a 500+ person company may require more signatories than the same licensing agreement sent to a company with fewer than 50 employees. Similarly, users may designate access to the document in a geographic region where neither the sending nor the receiving party has any employees as malicious behavior. Accordingly, the document information 210 can be leveraged to determine whether detected activity is normal or malicious behavior for the document and the parties involved. - The
remedial actions 230 include actions taken in response to malicious behavior determined from the incidents 220. The remedial actions 230 include, for the training documents 130 associated with malicious activity, restoring a deleted document within the document execution environment 100, providing a document to additional individuals from the receiving party for additional review and/or signatures, limiting access to the document, deleting and/or suspending an account of a suspicious user associated with the document, notifying an entity (such as the receiving party, the sending party, an account manager, or a network administrator) of the malicious behavior, and the like. Additional examples of remedial behavior include increasing the security requirements or access criteria for the document, revoking privileges (such as edit privileges, signing privileges, and the like) of parties associated with the document, limiting a number of documents a party can access or a number of actions a party can take for parties associated with the document, implementing one or more encryption or encoding protocols for the document, disabling access to a document or one or more actions that can be taken on it for a threshold amount of time or for a particular time interval, disabling access to one or more actions that can be taken on a document for parties within a particular geographic region, limiting access or action privileges for one or more client devices or user accounts, and the like. In some embodiments, the remedial actions 230 are associated with a resulting measure of mitigation. In some embodiments, the measure of mitigation is determined based on the document information 210. For example, suspending an account of a suspicious user may be most effective for a document found to have been accessed in a region where neither the sending nor the receiving party has employees.
For a document with an abnormal review time before execution, sending the document for additional review may be sufficient. In some embodiments, the measures of mitigation are determined based on feedback from a sending and/or receiving party of the document, from an account or network manager, automatically by algorithm, based on a change in activity corresponding to the malicious behavior, or based on any other suitable criteria. The measures of mitigation can be represented numerically, for instance, as a likelihood that the remedial action ended the malicious behavior, categorically (e.g., “successful”, “partially successful”, “not successful”), or in any other suitable way. - In some embodiments, the
remedial actions 230 include preventative measures, specific to the document information 210, to prevent malicious behavior. For example, preventative measures may include identifying times at which activities within the document execution environment 100 may appear indicative of malicious behavior but in reality are not. These include, for example, high-activity periods (e.g., a time of year during which the execution of documents occurs rapidly) and management periods (e.g., a time at which an administrator of the receiving or sending party is likely to delete or modify a large number of documents within the document execution environment 100). In another example, the preventative measures include monitoring and flagging activity within a geographic region identified as a location associated with prior malicious activity. In some embodiments, the preventative measures include performing the remedial actions 230 prior to detecting the incidents 220 that are representative of malicious behavior. For example, administrators of the sending party of the document may require additional review prior to an individual from the receiving party signing the document. - The training set 200 may be separated into a positive training set and a negative training set. The positive training set includes the
document information 210 associated with the incidents 220 that are designated (for instance, by the users of the document execution environment 100) as malicious behavior, as well as the associated remedial actions 230 taken in response to the malicious incidents 220. The negative training set includes the document information 210 associated with the incidents 220 that are designated as non-malicious. In some embodiments, the negative training set includes the preventative measures from the remedial actions 230. - The malicious behavior detection engine 140 uses supervised or unsupervised machine learning to train the machine learned
model 160 using the positive and negative training sets of the training set 200. Different machine learning techniques may be used in various embodiments, such as linear support vector machines (linear SVM), boosting for other algorithms (e.g., AdaBoost), neural networks, logistic regression, naïve Bayes, memory-based learning, random forests, bagged trees, decision trees, boosted trees, or boosted stumps. The training of the machine learned model 160 helps the machine learned model 160 identify relationships between the document information 210, the incidents 220, and the remedial actions 230. In other words, training the machine learned model 160 enables it to identify the relationships between activity within the document execution environment 100, documents within the document execution environment 100, and remedial actions taken, in order to classify subsequent activity as malicious and to recommend remedial actions to take in response. - The trained machine learned
model 160, when applied to detected activity 240 associated with the document for execution 110, determines whether the activity 240 is representative of malicious behavior and outputs recommended actions 250 that will help mitigate the malicious behavior. In some embodiments, the trained machine learned model 160 determines a likelihood that the activity 240 is malicious, or determines a likelihood that the activity 240 is one or more of a set of different types of malicious behavior, and selects one or more remedial actions in response to one or more of the determined likelihoods exceeding a threshold. The activity 240 associated with the document for execution 110 may be substantially similar to any of the incidents 220 in the training set 200. Likewise, the recommended actions 250 may be substantially similar to any of the remedial actions 230 of the training set 200. In response to determining that the activity 240 is malicious (or exceeds a malicious likelihood threshold), the trained machine learned model 160 notifies the user of the client device 120 of identified malicious behavior and/or the recommended remedial actions 250. The malicious behavior detection engine 140 may automatically perform the recommended actions 250 to mitigate the threat presented by the detected malicious behavior associated with the activity 240. In some embodiments, the malicious behavior detection engine 140 automatically performs the recommended actions 250 after determining that the identified activity 240 is above a threshold level of severity or risk and/or after a passage of a threshold amount of time without the user to whom the actions are recommended performing the actions. The threshold level of severity or risk may be specified by the user of the client device 120. - The trained machine learned
model 160 notifies the user of the client device 120 of the malicious behavior and/or the recommended actions 250. In some embodiments, the user provides feedback on whether the behavior determined by the machine learned model 160 to be malicious is accurate or not, which is subsequently added to the training set 200 for re-training of the machine learned model 160. In some embodiments, the user can manually re-define types of malicious activity 240, threshold amounts of activity that qualify as malicious, types of remedial actions that can be recommended, and the like to the malicious behavior detection engine 140 for re-training the machine learned model. - The malicious behavior detection engine 140 may present the notifications to the user via the display of the
client device 120. The display of the client device 120 includes a user interface including interface elements for each of the recommended actions 250; when selected by the user, each interface element causes the corresponding recommended action 250 to be automatically performed. In some embodiments, each recommended action 250 is displayed with a likelihood that the action will resolve or address the identified malicious behavior. The likelihoods are determined based on measures of mitigation associated with similar actions in the training set 200. - Example Detected Activity within Document Execution Environment
-
FIG. 3 illustrates example detected activity (e.g., the activity 240) that may be representative of malicious behavior in the document execution environment 100, in accordance with one or more embodiments. After the detected activity is input to the machine learned model 160 and the machine learned model 160 classifies the detected activity as indicative of malicious behavior, the user can confirm whether the activity is indeed indicative of malicious behavior and whether the user wants notifications of the detected activity. In FIG. 3, an interface of the document execution environment 100 shows two types of detected activity: login failures and active sessions, accessible and viewable by users with access to the document execution environment 100. As mentioned with respect to FIG. 2, users may choose types of activity to be notified about. In this particular example, the user has selected alerts 310 for login failures but silenced alerts 320 for active sessions. Accordingly, the machine learned model 160 classifies more than 3 login failures within a predetermined time interval as malicious, and notifies the user of the detected malicious activity. Likewise, the machine learned model classifies an above-threshold number of active sessions within a particular geographic region as malicious, but does not notify the user of the detected malicious activity originating from China, Sudan, and Russia. Similarly, recommended actions to mitigate detected malicious behavior (e.g., the recommended actions 250) may be presented via the same interface or a different interface of the document execution environment 100. -
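The alert behavior in the example above, flagging more than 3 login failures while leaving active-session notifications silenced, can be sketched as follows. The threshold values and activity-type names here are illustrative, not taken from the patent:

```python
def should_notify(activity_kind, count, thresholds, silenced):
    """Classify activity as malicious when it exceeds its threshold, but
    only surface a notification when alerts for that kind are enabled."""
    malicious = count > thresholds.get(activity_kind, float("inf"))
    return malicious, malicious and activity_kind not in silenced

thresholds = {"login_failures": 3, "active_sessions": 10}
silenced = {"active_sessions"}   # alert types the user chose to silence

print(should_notify("login_failures", 5, thresholds, silenced))    # (True, True)
print(should_notify("active_sessions", 25, thresholds, silenced))  # (True, False)
```

In both calls the activity is classified as malicious, but only the login-failure case produces a user-facing notification, mirroring the selected alerts 310 and silenced alerts 320 of FIG. 3.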
FIG. 4 illustrates an example process for detecting and preventing malicious behavior in a document execution environment, in accordance with one or more embodiments. A malicious behavior detection engine of the document execution environment accesses 410 a training set including (for example) information representative of malicious and non-malicious activity within the document execution environment associated with one or more documents, information representative of a set of training documents, and remedial and/or preventative actions taken in response to detected malicious behavior. In some embodiments, the training set also includes a measure of mitigation achieved by performing the remedial actions and/or a measure of prevention achieved by performing the preventative actions. - The malicious behavior detection engine trains 420 a machine learned model using the training set. The machine learned model determines relationships between documents, activity, and remedial actions within the document execution environment. For instance, the machine learned model may be a convolutional neural network that, when applied to subsequent activity (such as a pattern of failed login attempts from a particular geographic region) associated with a document within the document execution environment, can output a likelihood that the activity is malicious.
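The training step 420 can be illustrated with one of the simpler techniques the description lists (logistic regression) rather than a convolutional network. The following is a self-contained toy sketch over two hypothetical activity features, login failures and concurrent sessions, and is not the engine's actual implementation:

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=500):
    """Fit weights w and bias b of sigmoid(w . x + b) by gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Clamp z to avoid overflow in exp() on extreme inputs.
            z = max(min(sum(wi * xi for wi, xi in zip(w, x)) + b, 30), -30)
            p = 1.0 / (1.0 + math.exp(-z))   # predicted malicious likelihood
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = max(min(sum(wi * xi for wi, xi in zip(w, x)) + b, 30), -30)
    return 1.0 / (1.0 + math.exp(-z))

# Toy incidents: (login failures, concurrent sessions); label 1 = malicious.
X = [(0, 1), (1, 1), (6, 4), (8, 5)]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
print(predict(w, b, (7, 4)) > 0.5)   # True: resembles the malicious examples
print(predict(w, b, (0, 1)) < 0.5)   # True: resembles the benign examples
```

A production system would train on the far richer document information, incidents, and remedial actions described above, but the structure is the same: labeled positive and negative examples in, a likelihood-producing model out.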
- The malicious behavior detection engine receives 430 a document for execution. For instance, the document can be a contract or employment agreement uploaded to the document execution environment by a client device. Likewise, the document can be created and collaboratively modified within the document execution environment by a number of parties.
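Document information of the kind the model conditions on (the document information 210 described with respect to FIG. 2) could be captured at intake, step 430, as a simple record. All field names here are hypothetical:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class DocumentRecord:
    """Intake record for a document received for execution (step 430)."""
    doc_type: str                 # e.g. "employment agreement"
    page_count: int
    origin_region: str
    sender_industry: str
    receiver_headcount: int
    uploaded_by: Optional[str] = None   # client device / account identifier

doc = DocumentRecord(doc_type="employment agreement", page_count=12,
                     origin_region="US", sender_industry="software",
                     receiver_headcount=550)
features = asdict(doc)   # flat dict usable as model input features
print(features["page_count"])  # 12
```

Capturing these attributes at receipt is what later lets the model judge, for example, whether a review time or signatory count is normal for this particular document and these particular parties.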
- The malicious behavior detection engine detects 440 activity within the document execution environment associated with the document. The document execution environment applies 450 the trained machine learned model to characteristics of the detected activity to determine whether the activity is likely malicious behavior. If the activity is determined to be malicious, the malicious behavior detection engine recommends 460 the remedial actions to the user, who may perform the remedial actions.
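The detection, application, and recommendation steps (440 through 460) reduce to a short pipeline. In this sketch the activity detector and trained model are stand-in stubs, not the components the patent describes:

```python
def detect_and_recommend(document, detect_activity, model, threshold=0.8):
    """Steps 440-460: detect activity associated with a received document,
    apply the trained model, and recommend remedial actions if malicious."""
    activity = detect_activity(document)              # step 440
    likelihood, actions = model(document, activity)   # step 450
    if likelihood >= threshold:                       # step 460
        return {"alert": True, "likelihood": likelihood,
                "recommended_actions": actions}
    return {"alert": False, "likelihood": likelihood,
            "recommended_actions": []}

# Stubs standing in for the detection engine and the trained model.
detect = lambda doc: {"kind": "login_failures", "count": 6}
trained_model = lambda doc, act: ((0.92, ["limit_access", "notify_sender"])
                                  if act["count"] > 3 else (0.05, []))

result = detect_and_recommend({"name": "employment_agreement"}, detect,
                              trained_model)
print(result["alert"], result["recommended_actions"])
```

The user (or, for sufficiently severe activity, the engine itself) would then carry out the returned actions, and the outcome would feed back into the training set for re-training.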
- The foregoing description of the embodiments has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the patent rights to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
- Some portions of this description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like.
- Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
- Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
- Embodiments may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
- Embodiments may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
- Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the patent rights. It is therefore intended that the scope of the patent rights be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent rights, which is set forth in the following claims.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/854,802 US20210326436A1 (en) | 2020-04-21 | 2020-04-21 | Malicious behavior detection and mitigation in a document execution environment |
US18/355,825 US20230367874A1 (en) | 2020-04-21 | 2023-07-20 | Malicious behavior detection and mitigation in a document execution environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/854,802 US20210326436A1 (en) | 2020-04-21 | 2020-04-21 | Malicious behavior detection and mitigation in a document execution environment |
Related Child Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/355,825 Continuation US20230367874A1 (en) | 2020-04-21 | 2023-07-20 | Malicious behavior detection and mitigation in a document execution environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210326436A1 (en) | 2021-10-21 |
Family
ID=78081134
Family Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/854,802 Abandoned US20210326436A1 (en) | 2020-04-21 | 2020-04-21 | Malicious behavior detection and mitigation in a document execution environment |
US18/355,825 Pending US20230367874A1 (en) | 2020-04-21 | 2023-07-20 | Malicious behavior detection and mitigation in a document execution environment |
Family Applications After (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/355,825 Pending US20230367874A1 (en) | 2020-04-21 | 2023-07-20 | Malicious behavior detection and mitigation in a document execution environment |
Country Status (1)
Country | Link |
---|---|
US (2) | US20210326436A1 (en) |
- 2020-04-21: US application US16/854,802, published as US20210326436A1, status: Abandoned (not active)
- 2023-07-20: US application US18/355,825, published as US20230367874A1, status: Pending (active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080114710A1 (en) * | 2006-11-09 | 2008-05-15 | Pucher Max J | Method For Training A System To Specifically React On A Specific Input |
US8607353B2 (en) * | 2010-07-29 | 2013-12-10 | Accenture Global Services Gmbh | System and method for performing threat assessments using situational awareness |
US20180191770A1 (en) * | 2016-12-30 | 2018-07-05 | X Development Llc | Remedial actions based on user risk assessments |
US20180239959A1 (en) * | 2017-02-22 | 2018-08-23 | Anduin Transactions, Inc. | Electronic data parsing and interactive user interfaces for data processing |
US20210029170A1 (en) * | 2018-03-26 | 2021-01-28 | Virsec Systems, Inc. | Trusted Execution Security Policy Platform |
US20200311646A1 (en) * | 2019-03-28 | 2020-10-01 | Eric Koenig | Blockchain-based system for analyzing and tracking work performance |
US20200351285A1 (en) * | 2019-05-03 | 2020-11-05 | EMC IP Holding Company LLC | Anomaly detection based on evaluation of user behavior using multi-context machine learning |
US20210042180A1 (en) * | 2019-08-06 | 2021-02-11 | Oracle International Corporation | Predictive system remediation |
Non-Patent Citations (1)
Title |
---|
Buczak et al., "A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection," IEEE Communications Surveys & Tutorials, vol. 18, no. 2, Oct. 25, 2015, pp. 1153-1176. (Year: 2015) * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210200955A1 (en) * | 2019-12-31 | 2021-07-01 | Paypal, Inc. | Sentiment analysis for fraud detection |
US20220368656A1 (en) * | 2020-04-30 | 2022-11-17 | Beijing Bytedance Network Technology Co., Ltd. | Information interaction method and apparatus, and non-transitory computer-readable storage medium |
US20220394002A1 (en) * | 2020-04-30 | 2022-12-08 | Beijing Bytedance Network Technology Co., Ltd. | Information exchange method and apparatus, electronic device, and storage medium |
US11706170B2 (en) * | 2020-04-30 | 2023-07-18 | Beijing Bytedance Network Technology Co., Ltd. | Collaborative editing method of an electronic mail, electronic device, and storage medium |
US20210342441A1 (en) * | 2020-05-01 | 2021-11-04 | Forcepoint, LLC | Progressive Trigger Data and Detection Model |
US20230111652A1 (en) * | 2020-06-16 | 2023-04-13 | Paypal, Inc. | Training a Recurrent Neural Network Machine Learning Model with Behavioral Data |
US11588830B1 (en) * | 2020-06-30 | 2023-02-21 | Sequoia Benefits and Insurance Services, LLC | Using machine learning to detect malicious upload activity |
US11936670B2 (en) | 2020-06-30 | 2024-03-19 | Sequoia Benefits and Insurance Services, LLC | Using machine learning to detect malicious upload activity |
US20230342460A1 (en) * | 2022-04-25 | 2023-10-26 | Palo Alto Networks, Inc. | Malware detection for documents with deep mutual learning |
WO2024015576A3 (en) * | 2022-07-14 | 2024-03-28 | Iqvia Inc. | Harmonized quality (hq) |
Also Published As
Publication number | Publication date |
---|---|
US20230367874A1 (en) | 2023-11-16 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US20230367874A1 (en) | Malicious behavior detection and mitigation in a document execution environment | |
JP6476339B2 (en) | System and method for monitoring, controlling, and encrypting per-document information on corporate information stored on a cloud computing service (CCS) | |
US20210084063A1 (en) | Insider threat management | |
US11587177B2 (en) | Joined and coordinated detection, handling, and prevention of cyberattacks | |
US10339309B1 (en) | System for identifying anomalies in an information system | |
US20220377093A1 (en) | System and method for data compliance and prevention with threat detection and response | |
US8607353B2 (en) | System and method for performing threat assessments using situational awareness | |
CN112703712A (en) | Supervised learning system for identity hazard risk calculation | |
US20200106793A1 (en) | Methods, systems, and computer program products for continuous cyber risk monitoring | |
JP2021039754A (en) | Endpoint agent expansion of machine learning cyber defense system for electronic mail | |
US11722510B2 (en) | Monitoring and preventing remote user automated cyber attacks | |
EP4229532B1 (en) | Behavior detection and verification | |
US10445514B1 (en) | Request processing in a compromised account | |
US20220129990A1 (en) | Multidimensional assessment of cyber security risk | |
US20230004806A1 (en) | High-risk passage automation in a digital transaction management platform | |
US20230300153A1 (en) | Data Surveillance In a Zero-Trust Network | |
Thomas et al. | ETHICAL ISSUES OF USER BEHAVIORAL ANALYSIS THROUGH MACHINE LEARNING. | |
He et al. | Healthcare security incident response strategy-a proactive incident response (ir) procedure | |
US20230068946A1 (en) | Integrated cybersecurity threat management | |
WO2020102601A1 (en) | Comprehensive data loss prevention and compliance management | |
US20230021423A1 (en) | Generating entity risk scores and mitigating risk based on the risk scores | |
Reddy | Data breaches in healthcare security systems | |
Thapliyal et al. | Security Threats in Healthcare Big Data: A Comparative Study | |
Aswathy et al. | 10 Privacy Breaches | |
Shivakumara et al. | Review Paper on Dynamic Mechanisms of Data Leakage Detection and Prevention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DOCUSIGN, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WEST, NICHOLAS WILLIAM;YECKLEY, BRIAN;SALVI, ABHIJIT;AND OTHERS;SIGNING DATES FROM 20200709 TO 20200806;REEL/FRAME:053531/0620 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |