US20170134423A1 - Decoy and deceptive data object technology - Google Patents
Decoy and deceptive data object technology
- Publication number
- US20170134423A1 (application US 15/414,850)
- Authority
- US
- United States
- Prior art keywords
- deception
- decoy
- implemented method
- environment
- computer implemented
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
- H04L63/1491—Countermeasures against malicious traffic using deception as countermeasure, e.g. honeypots, honeynets, decoys or entrapment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
Definitions
- the present invention in some embodiments thereof, relates to detecting potential unauthorized operations in a protected network, and, more specifically, but not exclusively, to detecting potential unauthorized operations in a protected network by monitoring interaction between dynamically updated deception data objects deployed in the protected system and deception applications hosted by a decoy endpoint.
- The staged approach involves tactical iterations through what is known in the art as the observe, orient, decide, act (OODA) loop.
- This tactic may be most useful for attackers who face an unknown environment and therefore begin by observing their surroundings, orienting themselves, then deciding on a course of action and carrying it out.
- a computer implemented method of detecting unauthorized access to a protected network by monitoring a dynamically updated deception environment comprising:
- the decoy endpoint is a member selected from a group consisting of: a physical device comprising one or more processors, and a virtual machine.
- the virtual machine is hosted by a local endpoint, a cloud service and/or a vendor service.
- Each of the plurality of deception data objects emulates a valid data object used for interacting with the one or more applications.
- Each of the plurality of deception data objects is a hashed credentials object, a browser cookie, a registry key, a Server Message Block (SMB) mapped share, a Mounted Network Storage element, a configuration file for remote desktop authentication credentials, a source code file with embedded database authentication credentials and/or a configuration file to a source-code version control system.
- SMB Server Message Block
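A deception data object of the kinds listed above only needs to look authentic; its contents can be random filler, because any later use of that filler can only come from something that read the decoy. A minimal sketch in Python (the function names, JSON field layout and 16-character password length are illustrative assumptions, not part of the disclosed method):

```python
import json
import secrets
import string


def make_deception_credentials(username: str, host: str) -> dict:
    """Build a decoy credentials object mimicking a real config entry.

    The password is random filler: it opens nothing real, so any later
    authentication attempt with it must have come from reading the decoy.
    """
    alphabet = string.ascii_letters + string.digits
    fake_password = "".join(secrets.choice(alphabet) for _ in range(16))
    return {
        "type": "remote_desktop_credentials",  # one of the object types listed above
        "username": username,
        "password": fake_password,
        "host": host,
    }


def write_breadcrumb(path: str, obj: dict) -> None:
    # Deploy the decoy object as a plausible-looking configuration file.
    with open(path, "w") as fh:
        json.dump(obj, fh, indent=2)
```

In this sketch the decoy file is indistinguishable in form from a valid remote-desktop configuration file, while the embedded password acts as the tripwire.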
- the usage indication comprises making it appear that the plurality of deception data objects are used to interact with the one or more deception applications.
- the one or more potential unauthorized operations are initiated by a user, a process, an automated tool and/or a machine.
- Each of the plurality of applications is an application, a tool, a local service and/or a remote service.
- Each of the plurality of applications is selected by one or more of: a user and an automated tool.
- the monitoring comprises one or more of:
- the one or more decoy operating systems, the plurality of deception applications and/or the plurality of deception data objects are divided into a plurality of groups according to one or more characteristics of the protected network.
- a plurality of templates is provided for creating the one or more decoy operating systems, the plurality of deception applications and/or the plurality of deception data objects.
- each of the plurality of templates comprises a definition of a relationship between at least two of the one or more decoy operating systems, the plurality of deception applications and/or the plurality of deception data objects.
- one or more of the templates is adjusted by one or more users adapting the one or more templates according to one or more characteristics of the protected network.
- an alert is generated at detection of the one or more potential unauthorized operations.
- the alert is generated at detection of a combination of a plurality of potential unauthorized operations to detect a complex sequence of the interaction.
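The combination-based alerting above can be pictured as a sliding-window correlator that stays silent on isolated decoy touches and fires only on a sequence. The threshold, window length and per-actor keying below are illustrative assumptions, not values specified in the disclosure:

```python
from collections import defaultdict


class SequenceAlerter:
    """Alert only when several potential unauthorized operations combine
    into a sequence, rather than on every single decoy interaction."""

    def __init__(self, threshold: int = 3, window: float = 300.0):
        self.threshold = threshold      # operations needed to raise an alert
        self.window = window            # sliding window in seconds
        self.events = defaultdict(list)  # actor -> timestamps of decoy touches

    def record(self, actor: str, timestamp: float) -> bool:
        """Record one decoy touch; return True if an alert should fire."""
        self.events[actor].append(timestamp)
        # keep only events inside the sliding window ending at `timestamp`
        self.events[actor] = [
            t for t in self.events[actor] if timestamp - t <= self.window
        ]
        return len(self.events[actor]) >= self.threshold
```

Keying by actor (host, user or session) keeps one attacker's slow sequence from being diluted by unrelated events elsewhere in the protected network.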
- the analysis comprises false-positive prevention to avoid identifying one or more legitimate operations as the one or more potential unauthorized operations.
- the one or more potential unauthorized operations are analyzed to identify an activity pattern.
- a learning process is applied on the activity pattern to classify the activity pattern in order to improve detection and classification of one or more future potential unauthorized operations.
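One simple way to classify an observed activity pattern against known attack signatures is ordered-step overlap, i.e. counting how many steps of a signature appear in order within the observed sequence. The signature names and matching rule below are illustrative assumptions, not the learning process specified in the disclosure:

```python
def classify_pattern(observed: list, signatures: dict) -> str:
    """Return the name of the signature sharing the most ordered steps
    with the observed sequence of operations."""

    def ordered_overlap(seq: list, sig: list) -> int:
        # Count signature steps matched in order while scanning `seq`.
        i = 0
        for step in seq:
            if i < len(sig) and step == sig[i]:
                i += 1
        return i

    return max(signatures, key=lambda name: ordered_overlap(observed, signatures[name]))
```

A real learning process would refine the signature set from classified patterns over time; this sketch only shows the classification half.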
- a system for detecting unauthorized access to a protected network by monitoring a dynamically updated deception environment comprising a program store storing a code and one or more processors on one or more decoy endpoints coupled to the program store for executing the stored code.
- the code comprising:
- a computer implemented method of containing a malicious attack within a deception environment by directing the malicious attack to a dynamically created deception environment comprising:
- the decoy endpoint is a member selected from a group consisting of: a local endpoint comprising one or more processors and a virtual machine, wherein the virtual machine is hosted by one or more of: a local endpoint, a cloud service and a vendor service.
- the potential attacker is a member selected from a group consisting of: a user, a process, an automated tool and a machine.
- the deception environment is created based on public information of the certain user.
- the public information is available in one or more networked processing nodes accessible over one or more networks.
- the false access information comprises credentials of the certain user.
- the attempt is not reported to the certain user.
- the false access information was provided to the potential attacker during a past attempt of the potential attacker to obtain a real version of the false access information of the certain user.
- the past attempt is a phishing attack to obtain the real version of the false access information of the certain user.
- the past attempt is based on attracting the certain user to register to a fictive service created by the potential attacker to obtain the real version of the false access information of the certain user.
- the past attempt is not reported to the certain user.
- the attempt is detected by comparing a password included in the false access information to one or more predicted passwords created based on an analysis of public information of the certain user.
- robustness of a real password created by the certain user is evaluated by comparing the real password to the one or more predicted passwords and alerting the certain user in case the real password is insufficiently robust, wherein the robustness is determined sufficient in case a variation between the predicted password and the real password exceeds a pre-defined number of characters.
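The character-variation test above can be realized with an edit distance between the real password and each predicted guess. The Levenshtein metric and the threshold of 4 characters below are illustrative choices; the disclosure only requires that the variation exceed some pre-defined number of characters:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum single-character insertions,
    deletions and substitutions turning `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def is_robust(real_password: str, predicted_passwords: list, min_variation: int = 4) -> bool:
    # Robust only if every predicted guess differs by more than the
    # pre-defined number of characters.
    return all(edit_distance(real_password, p) > min_variation
               for p in predicted_passwords)
```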
- the certain user is requested to change the real password in case the real password is insufficiently robust.
- the attack vector comprises one or more actions initiated by the potential attacker.
- the attack vector is a multi-stage attack vector comprising a plurality of actions initiated by the potential attacker. At least two of the actions are executed in one or more modes selected from: serial execution and parallel execution.
- the deception environment is dynamically updated based on analysis of the attack vector in order to deceive the potential attacker to presume the deception environment is a real processing environment.
- the update includes updating one or more of: an information item of the certain user, a structure of the deception environment and a deployment of the deception environment.
- the deception environment is extended dynamically based on analysis of the attack vector in order to contain the attack vector.
- a system for containing a malicious attack within a deception environment by directing the malicious attack to a dynamically created deception environment comprising a program store storing a code and one or more processors on one or more decoy endpoints in a deception environment.
- the processor(s) is coupled to the program store for executing the stored code, the code comprising:
- FIG. 1 is a flowchart of an exemplary process for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 2A is a schematic illustration of an exemplary first embodiment of a system for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 2B is a schematic illustration of an exemplary second embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 2C is a schematic illustration of an exemplary third embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 2D is a schematic illustration of an exemplary fourth embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 2E is a schematic illustration of an exemplary fifth embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 2F is a schematic illustration of an exemplary sixth embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 3A is a screenshot of an exemplary first configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention
- FIG. 3B is a screenshot of an exemplary second configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention.
- FIG. 3C is a screenshot of an exemplary third configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention.
- FIG. 4 is a block diagram of exemplary building blocks of a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 5 is a block diagram of an exemplary utilization of deception environment building blocks for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention
- FIG. 6A is a screenshot of an exemplary first status screen of a campaign manager dashboard presenting structural information of a deception campaign, according to some embodiments of the present invention
- FIG. 6B is a screenshot of an exemplary second status screen of a campaign manager dashboard for investigating potential threats detected during a deception campaign, according to some embodiments of the present invention.
- FIG. 7 is a flowchart of an exemplary process for containing a malicious attack within a deception environment created dynamically in a protected network, according to some embodiments of the present invention.
- the deception environment is created, maintained and monitored through one or more deception campaigns each comprising a plurality of deception components.
- the deception environment co-exists with a real (valid) processing environment of the protected network while separated from the real processing environment.
- the deception environment is based on deploying deception data objects (breadcrumbs), for example, credential files, password files, share lists, “cookies”, access protocols and/or the like in the real processing environment on one or more endpoints, for example, work stations, servers, processing nodes and/or the like in the protected network.
- the deception data objects interact with decoy operating system(s) (OS) and/or deception applications created and launched on one or more decoy endpoints in the protected system according to pre-defined relationship(s) applied in the deception environment.
- the decoy OS(s) and the deception application(s) may be adapted according to the characteristics of the real (valid) OS(s) and/or application used by the real processing environment of the protected network.
- the deception data objects are deployed to attract potential attacker(s) to use the deception data objects while observing, orienting, deciding and acting (OODA) within the protected network.
- the created deception data objects are of the same type(s) as valid data objects used in the real processing environment.
- the deception data objects interact with the decoy OS(s) and/or the deception application(s).
- the interaction as well as general activity in the deception environment is constantly monitored and analyzed. Since the deception environment may be transparent to legitimate users, applications, processes and/or the like in the real processing environment, operation(s) in the protected network that use the deception data objects may indicate that the operation(s) are potentially unauthorized operation(s) that may likely be performed by the potential attacker(s).
- the deception environment is updated dynamically and continuously to make the deception data objects look like they are in use by the real processing environment in the protected network and therefore seem as valid data objects to the potential attacker thus leading the potential attacker to believe the emulated deception environment is a real one.
- the provided methods, systems and computer program products further allow a user, for example, an IT person and/or a system administrator to create the deception environment using templates for the deception components, specifically, the decoy OS(s), the deception application(s) and the deception data object(s).
- Automated tools are provided to automatically create, adjust and/or adapt the deception environment according to the characteristics of the real processing environment and/or the protected network such that the deception environment mirrors the construction and/or operation of the real processing environment.
- the emulated deception environment may present significant advantages compared to currently existing methods for detecting potential attackers and/or preventing the potential attackers from accessing resources in the protected network.
- the presented deception environment deceives the potential attacker from the very first time the attacker enters the protected network by creating a false environment—the emulated deception environment.
- Engaging the attacker at the act stage and trying to block the attack may lead the attacker to search for an alternative path in order to circumvent the blocked path.
- Whereas the currently existing methods are responsive in nature, i.e. they respond to operations of the attacker, creating the false environment in which the attacker advances takes the initiative, such that the attacker may be directed and/or led to trap(s) that may reveal him (them).
- Honeypots are computer security mechanisms set to detect, deflect and/or counteract unauthorized attempts to use information systems.
- The honeypots, which usually emulate services and/or systems, are typically placed inside the target network(s) and/or at the edges. The honeypots are directed to attract the attacker to use them and generate an alert when usage of the honeypots is detected.
- The honeypots approach may provide some benefits when dealing with automated attack tools and/or unsophisticated attackers; however, the honeypots present some drawbacks.
- the honeypots may be difficult to scale to large organizations as each of the honeypot application(s) and/or service(s) may need to be individually installed and managed.
- the advanced attacker may learn of the presence and/or nature of the honeypot since it may be static and/or inactive within the active target network.
- the honeypots may not be able to gather useful forensic data about the attack and/or the attacker(s).
- multiple false positive alerts may be generated when legitimate activity is conducted with the honeypot.
- the presented deception environment may overcome the drawback of the currently existing deception methods by updating dynamically and constantly the deception environment such that the deception data objects appear to be used in the protected network. This may serve to create an impression of a real active environment and may lead the potential attacker(s) to believe the deception data objects are genuine (valid) data objects. As the potential attacker(s) may not detect the deception environment, he (they) may interact with the deception environment during multiple iterations of the OODA loop thus revealing his (their) activity pattern and possible intention(s). The activity pattern may be collected and analyzed to adapt the deception environment accordingly. Since the deception environment is transparent to legitimate users in the protected network, any operations involving the decoy OSs, the deception applications and/or the deception data objects may accurately indicate a potential attacker thus avoiding false positive alerts.
- the presented deception environment methods and systems may allow for high scaling capabilities over large organizations, networks and/or systems.
- Using the templates for creating the decoy OS(s) and/or the deception application(s) coupled with the automated tools to create and launch the decoy OS(s) and/or the deception application(s) as well as automatically deploy the deception data objects may significantly reduce the effort to construct the deception environment and improve the efficiency and/or integrity of the deception environment.
- the centralized management and monitoring of the deception environment may further simplify tracking the potential unauthorized operations and/or potential attacks.
- a deception environment created and/or updated dynamically in a protected network in response to detection of an access attempt of a potential attacker, for example, a human user, a process, an automated tool, a machine and/or the like.
- the deception environment may be created and/or updated in response, for example, to an attempt of a potential attacker to access the protected network using false access information of a certain user of the protected network.
- the deception environment may be further updated in response to one or more operations the potential attacker may apply as part of an attack vector.
- the potential attacker may be detected by identifying false access information the potential attacker uses to access the protected network.
- the false access information may be identified by predicting access information of the certain user based on public information of the certain user available online over one or more networks, for example, the Internet. Predicting the access information of the certain user may simulate methods and/or techniques applied by the potential attacker to predict (“guess”) the access information of the certain user.
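Predicting (“guessing”) access information from public profile data might be sketched as follows; the name/year/suffix combinations are illustrative assumptions about what such a predictor could try, not the technique specified in the disclosure:

```python
from itertools import product


def predict_passwords(first_name: str, last_name: str, birth_year: int, pet: str = None) -> set:
    """Generate candidate passwords the way an attacker might guess them
    from a user's public profile (names, birth year, pet name)."""
    words = {first_name.lower(), last_name.lower(),
             first_name.capitalize(), last_name.capitalize()}
    if pet:
        words |= {pet.lower(), pet.capitalize()}
    # common decorations attackers try after a base word
    suffixes = {"", str(birth_year), str(birth_year)[-2:], "123", "!"}
    return {w + s for w, s in product(words, suffixes)}
```

Any login attempt using one of these predicted strings, rather than the user's real credentials, can then be flagged as a probable attacker probing the account.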
- the false access information may be further identified as false access information that was provided to the potential attacker during one or more past access attempts and/or attacks directed at the certain user. Once detecting use of the false access information, the access attempt is determined to be initiated by the potential attacker.
- the potential attacker is granted access to a deception environment created dynamically according to public information of the certain user to make the deception environment consistent with what the potential attacker may know of the certain user thus leading the potential attacker to assume the deception environment is in fact a real (valid) processing environment of the protected network and/or part thereof.
- the deception environment may be dynamically updated in real time according to one or more actions made by the potential attacker as part of his attack vector to make the deception environment appear as the real (valid) processing environment and encourage detonation of the attack vector.
- Encouraging the potential attacker to access the deception environment and detonating the attack vector may present significant advantages compared to currently existing methods for detecting and/or protecting the protected network from potential attackers. While the existing methods may detect the access attempt made (attack) by the potential attacker, the existing methods may typically block the access attempt and/or inform an authorized person and/or system of the attempted access. This may allow preventing the current attack, however since the resources required by the potential attacker for launching such an attack are significantly low, the potential attacker may initiate multiple additional access attempts that may eventually succeed.
- the attack vector of the potential attacker may be analyzed and/or learned in order to improve protection from such access attempts and/or attacks.
- the potential attacker may spend extensive resources, for example, time, tools and/or the like for the attack. This may discourage the potential attacker from initiating additional attacks and/or significantly reduce the number of attacks initiated by the potential attacker.
- the potential attacker may be deceived to believe that the deception environment is actually the real (valid) processing environment. This may encourage the potential attacker to operate, for example, apply the attack vector hence detonating the attack vector. Doing so allows monitoring, analyzing and/or learning the attack vector and/or the intentions of the potential attacker while containing the attack within the deception environment thus protecting the real (valid) processing environment of the protected network from any malicious action(s) initiated by the potential attacker.
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium. Any combination of one or more computer readable medium(s) may be utilized.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- a network for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- LAN local area network
- WAN wide area network
- Internet Service Provider for example, AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- FPGA field-programmable gate arrays
- PLA programmable logic arrays
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- FIG. 1 is a flowchart of an exemplary process for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention.
- a process 100 is executed to launch one or more deception campaigns comprising a plurality of deception components to create, launch, maintain and monitor a deception environment that co-exists with a real processing environment of a protected network.
- the deception components comprise one or more decoy OS(s) and deception application(s) adapted according to the characteristics of the OS(s) and/or applications used in the protected network.
- the decoy OS(s) and/or the deception application(s) are launched on one or more decoy endpoints that may be physical endpoint and/or virtual endpoints.
- the deception components further comprise a plurality of deception data objects (breadcrumbs) interacting with the decoy OSs and/or the deception applications.
- the deception data objects are deployed within the real processing environment of the protected network to attract potential attacker(s) to use the deception data objects while performing the OODA loop within the protected network.
- the deception data objects are of the same type(s) as valid data objects used to interact with the real OSs and/or applications in the real processing environment such that the deception environment efficiently emulates and/or impersonates the real processing environment and/or a part thereof.
- When used, instead of interacting with the real operating systems and/or applications, the deception data objects interact with the decoy OS(s) and/or the deception application(s).
- the deception environment is transparent to legitimate users, applications, processes and/or the like of the protected network's real processing environment. Therefore, operation(s) in the protected network that use the deception data object(s) may be considered as potential unauthorized operation(s) that in turn may be indicative of a potential attacker.
- the deception data objects are updated constantly and dynamically to avoid stagnancy and mimic a real and dynamic environment with the deception data objects appearing as valid data objects such that the potential attacker believes the emulated deception environment is a real one.
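The dynamic-update step could, for example, periodically refresh decoy metadata so the objects appear recently used. The timestamp-jitter scheme below is an illustrative assumption, not the disclosed update mechanism:

```python
import random
import time


def refresh_breadcrumbs(breadcrumbs: list, now: float = None) -> list:
    """Touch decoy objects so they look recently and independently used.

    Each breadcrumb is a dict with a 'last_used' timestamp; timestamps
    are refreshed with jitter inside the last hour, so an attacker
    inspecting metadata sees a live environment rather than a set of
    objects that were all created at once and never touched again.
    """
    now = time.time() if now is None else now
    for bc in breadcrumbs:
        # jitter within the last hour so timestamps do not all match
        bc["last_used"] = now - random.uniform(0, 3600)
    return breadcrumbs
```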
- FIG. 2A, FIG. 2B, FIG. 2C, FIG. 2D, FIG. 2E and FIG. 2F are exemplary embodiments of a system for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention.
- One or more exemplary systems 200A, 200B, 200C, 200D, 200E and 200F may be used to execute a process such as the process 100 to launch one or more deception campaigns for detecting and/or alerting of potential unauthorized operations in a protected network 235.
- the deception campaign(s) include creating, maintaining and monitoring the deception environment in the protected network 235. While co-existing with the real processing environment of the protected network 235, the deception environment is separated from the real processing environment to maintain partitioning between the deception environment and the real processing environment.
- the systems 200A, 200B, 200C, 200D, 200E and 200F include the protected network 235 that comprises a plurality of endpoints 220 connected to a network 230 facilitated through one or more network infrastructures, for example, a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a metropolitan area network (MAN) and/or the Internet 240.
- the protected network 235 may be a local protected network that may be a centralized single location network where all the endpoints 220 are on premises or a distributed network where the endpoints 220 may be located at multiple physical and/or geographical locations.
- the protected network 235 may further be a virtual protected network hosted by one or more cloud services 245 , for example, Amazon Web Service (AWS), Google Cloud, Microsoft Azure and/or the like.
- the protected network 235 may also be a combination of the local protected network and the virtual protected network.
- the protected network 235 may be, for example, an organization network, an institution network and/or the like.
- the endpoint 220 may be a physical device, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, a modem, a hub, a bridge, a switch, a router, a printer and/or any network connected device having one or more processors.
- the endpoint 220 may further be a virtual device hosted by one or more of the physical devices, instantiated through one or more of the cloud services 245 and/or provided as a service through one or more hosted services available by the cloud service(s) 245 .
- Each of the endpoints 220 is capable of executing one or more real applications 222 , for example, an OS, an application, a service, a utility, a tool, a process, an agent and/or the like.
- the endpoint 220 may further be a virtual device, for example, a virtual machine (VM) executed by the physical device.
- the virtual device may provide an abstracted and platform-dependent and/or independent program execution environment.
- the virtual device may imitate operation of the dedicated hardware components, operate in a physical system environment and/or operate in a virtualized system environment.
- the virtual devices may serve as a platform for executing one or more of the real applications 222 utilized as system VMs, process VMs, application VMs and/or other virtualized implementations.
- the local protected network 235 as implemented in the systems 200 A and 200 B further includes a decoy server 201 , for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node and/or the like serving as the decoy endpoint.
- the decoy server 201 comprises a processor(s) 202 , a program store 204 , a user interface 206 for interacting with one or more users 260 , for example, an information technology (IT) person, a system administrator and/or the like and a network interface 208 for communicating with the network 230 .
- the processor(s) 202 may include one or more processing nodes arranged for parallel processing, as clusters and/or as one or more multi core processor(s).
- the user interface 206 may include one or more human-machine interfaces, for example, a text interface, a pointing devices interface, a display, a touchscreen, an audio interface and/or the like.
- the program store 204 may include one or more non-transitory persistent storage devices, for example, a hard drive, a Flash array and/or the like.
- the program store 204 may further comprise one or more network storage devices, for example, a storage server, a network accessible storage (NAS), a network drive, and/or the like.
- the program store 204 may be used for storing one or more software modules each comprising a plurality of program instructions that may be executed by the processor(s) 202 from the program store 204 .
- the software modules may include, for example, a decoy OS 210 and/or a deception application 212 that may be created, configured and/or executed by the processor(s) 202 to emulate a processing environment within the protected network 235 .
- the decoy OS(s) 210 and/or the deception application(s) 212 may be executed by the processor(s) 202 in a naive implementation as shown for the system 200 A and/or over a nested decoy VM 203 A hosted by the decoy server 201 as shown for the system 200 B and serving as the decoy endpoint.
- the software modules may further include a deception campaign manager 216 executed by the processor(s) 202 to create, control and/or monitor one or more deception campaigns to create the deception environment to detect potential unauthorized operations in the protected network 235 .
- the user 260 may use the campaign manager 216 to create, adjust, configure and/or launch one or more of the decoy OSs 210 and/or the deception application 212 on one or more of the decoy endpoints.
- the decoy endpoints are set to emulate the real endpoints 220 and as such may be physical and/or virtual endpoints.
- the user 260 may further use the campaign manager 216 to create, deploy and/or update a plurality of deception data objects 214 (breadcrumbs) deployed on one or more of the endpoints 220 in the protected network 235 .
- the deployed deception data objects 214 interact with respective one or more of the deception applications 212 .
- the deception data objects 214 are deployed to tempt the potential attacker(s) attempting to access resource(s) in the protected network 235 to use the deception data objects 214 .
- the deception data objects 214 are configured to emulate valid data objects that are available in the endpoints 220 for interacting with applications 222 .
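- The breadcrumb records described above may be sketched, for illustration, as simple data objects. The following is a minimal sketch; all field and instance names are illustrative assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass

# A minimal sketch of a deception data object ("breadcrumb") record.
# Field names and values are illustrative assumptions only.
@dataclass
class DeceptionDataObject:
    name: str             # attribute chosen to look attractive to an attacker
    object_type: str      # e.g. "browser_cookie", "mapped_share", "credentials"
    target_endpoint: str  # endpoint 220 on which the breadcrumb is deployed
    deception_app: str    # deception application 212 the breadcrumb points at

    def points_into_deception(self) -> bool:
        # A well-formed breadcrumb leads the attacker to a deception
        # application rather than to a real resource.
        return bool(self.deception_app)

breadcrumb = DeceptionDataObject(
    name="finance_backup_share",
    object_type="mapped_share",
    target_endpoint="ep-finance-07",
    deception_app="smb-decoy-01",
)
```

The key design point is that every breadcrumb is tied to a deception application, so any use of it stays within the partitioned deception environment.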
- the user 260 may interact with one or more of the software modules such as the campaign manager 216 , the decoy OS(s) 210 and/or the deception application(s) 212 using the user interface 206 .
- the user interface may include, for example, a graphic user interface (GUI) utilized through one or more of the human-machine interface(s).
- the user 260 interacts with the campaign manager 216 , the decoy OS(s) 210 and/or the deception application(s) 212 remotely over the network 230 by using one or more applications, for example, a local agent and/or a web browser executed on one or more of the endpoints 220 and/or from a remote location over the internet 240 .
- the user 260 executes the campaign manager 216 on one or more of the endpoints 220 to create, control and/or interact with the decoy OS 210 and/or the deception applications 212 over the network 230 .
- the decoy OS(s) 210 and/or the deception application(s) 212 may be executed as one or more decoy VMs 203 B serving as the decoy endpoint(s) over a virtualization infrastructure available by one or more hosting endpoints 220 A such as the endpoints 220 of the protected network 235 .
- the virtualization infrastructure may utilize, for example, Elastic Sky X (ESXi), XEN, Kernel-based Virtual Machine (KVM) and/or the like.
- the user 260 may interact with the campaign manager 216 , the decoy OS(s) 210 and/or the deception application(s) 212 through a user interface such as the user interface 206 provided by the hosting endpoint(s) 220 A. Additionally and/or alternatively, the user 260 may use one or more applications, for example a local agent and/or a web browser executed on one or more of the endpoints 220 to interact remotely over the network 230 with the campaign manager 216 , the decoy OS(s) 210 and/or the deception application(s) 212 executed by the hosting endpoint(s) 220 A. Optionally, one or more of the other endpoints 220 executes the campaign manager 216 that interacts with the hosting endpoint(s) 220 A OS 210 and/or the deception applications 212 over the network 230 .
- the decoy OS(s) 210 and/or the deception application(s) 212 may be executed through computing resources available from the one or more cloud services 245 serving as the decoy endpoint(s).
- the decoy OS(s) 210 and/or the deception application(s) 212 may be utilized as one or more decoy VMs 205 instantiated using the cloud service(s) 245 and/or through one or more hosted services 207 , for example, software as a service (SaaS), platform as a service (PaaS) and/or the like that may be provided by the cloud service(s) 245 .
- the campaign manager 216 may also be available through the cloud services 245 .
- the hosted service(s) 207 is provided by the vendor of the campaign manager 216 .
- the user 260 may use one or more applications, for example, the local agent and/or the web browser executed on one or more of the endpoints 220 to interact remotely over the network 230 and the internet 240 with the campaign manager 216 .
- the user 260 executes the campaign manager 216 on one or more of the endpoints 220 and interacts with the decoy OS(s) 210 and/or the deception application(s) 212 over the network 230 and the internet 240 .
- the protected network 235 and/or a part thereof is a virtual protected network that may be hosted and/or provided through the cloud service(s) 245 .
- the organization using the protected network 235 may transfer and/or set its infrastructure comprising one or more of the applications 222 , for example, a webserver, a database, an internal mail server, an internal web application and/or the like to the cloud, for example, through the cloud service(s) 245 .
- the protected network 235 may be distributed over two or more subnetworks such as the networks 235 A and 235 B that are part of the same logical protected network 235 while being physically distributed at a plurality of sites as a combination of the local network and the virtual network.
- the protected network 235 is virtual, hosted and/or provided by the cloud service 245 , i.e. the protected network 235 comprises only the subnetwork 235 B.
- the subnetwork 235 A is a local network similar to the network 235 as described before for the systems 200 A- 200 D and may include one or more of the endpoints 220 either as the physical devices and/or the virtual devices executing the application(s) 212 .
- the network 235 B on the other hand is a virtual network hosted and/or provided through the cloud service(s) 245 as one or more, for example, private networks, virtual private clouds (VPCs), private domains and/or the like.
- Each of the private cloud(s), private network(s) and/or private domain(s) may include one or more virtual endpoints 220 that may be, for example, instantiated through the cloud service(s) 245 , provided as the hosted service 207 and/or the like, where each of the endpoints 220 may execute one or more of the applications 212 .
- the decoy OS(s) 210 may be executed as independent instance(s) deployed directly to the cloud service(s) 245 using an account of the cloud service 245 , for example, AWS, for a VPC provided by the AWS for the organizational infrastructure.
- users of the virtual protected network 235 may remotely access, communicate and/or interact with the applications 212 by using one or more access applications 225 , for example, the local agent, a local service and/or the web browser executed on one or more of the endpoints 220 and/or one or more client terminals 221 .
- the client terminal 221 may include, for example, a computer, a workstation, a server, a processing node, a network node, a Smartphone, a tablet and/or the like.
- the decoy OS(s) 210 and/or the deception application(s) 212 may be executed through computing resources available from the cloud services 245 , serving as the decoy endpoint(s), similarly to the system 200 D.
- the campaign manager 216 may be executed and accessed as described for the system 200 D.
- the deception data objects 214 may be adapted and/or adjusted in the systems 200 E and/or 200 F according to the characteristics of the protected networks 235 A and/or 235 B with respect to the executed applications 222 and/or interaction with the user(s) of the applications 222 .
- the protected networks 235 , 235 A and 235 B are referred to hereinafter as the protected network 235 , whether implemented as the local protected network, as the virtual protected network and/or as a combination of the two.
- the process 100 may be executed using one or more software modules such as the campaign manager 216 to launch one or more deception campaigns.
- Each deception campaign comprises creating, updating and monitoring the deception environment in the protected network 235 in order to detect and/or alert of potential attackers accessing the protected network 235 .
- Each deception campaign may be defined according to a required deception scope and is constructed according to one or more characteristics of the protected network 235 processing environment.
- the deception environment may be designed, created and deployed to follow design patterns, which are general reusable solutions to common problems and are in general use.
- the deception campaign may be launched to emulate one or more design patterns and/or best-practice solutions that are widely used by a plurality of organizations.
- a virtual private network (VPN) link may exist to connect to a resource of the protected network 235 , for example, a remote branch, a database backup server and/or the like.
- the deception campaign may be created to include one or more decoy OSs 210 , deception applications 212 and respective deception data objects 214 to emulate the VPN link and/or one or more of the real resources of the protected network 235 .
- Using this approach may make the deception environment reliably appear as the real processing environment, thus effectively attracting and/or misleading the potential attacker, who may typically be familiar with the design patterns.
- Each deception campaign may define one or more groups to divide and/or delimit the organizational units in order to create an efficient deception environment that may allow better classification of the potential attacker(s).
- the groups may be defined according to one or more organizational characteristics, for example, business units of the organization using the protected network 235 , for example, human resources (HR), sales, finance, development, IT, data center, retail branch and/or the like.
- the groups may also be defined according to one or more other characteristics of the protected network 235 , for example, a subnet, a subdomain, an active directory, a type of application(s) 222 used by the group of users, an access permission on the protected network 235 , a user type and/or the like.
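- The group definitions described above may be sketched, for illustration, as a simple configuration mapping. Group names, subnets and breadcrumb types below are hypothetical examples, not values from the patent:

```python
# Hypothetical group definitions for a deception campaign: each organizational
# unit is assigned breadcrumb types matching the applications its users run.
campaign_groups = {
    "HR":      {"subnet": "10.0.1.0/24", "breadcrumbs": ["browser_cookie", "mapped_share"]},
    "Finance": {"subnet": "10.0.2.0/24", "breadcrumbs": ["saved_credentials"]},
    "IT":      {"subnet": "10.0.3.0/24", "breadcrumbs": ["ssh_key", "shell_history"]},
}

def breadcrumbs_for(group: str) -> list:
    # Return the breadcrumb types to deploy for a given group,
    # falling back to an empty deployment plan for unknown groups.
    return campaign_groups.get(group, {}).get("breadcrumbs", [])
```

Delimiting groups this way also aids classification of a detected attacker, since the triggered breadcrumb identifies the organizational unit it was planted in.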
- the process 100 for launching one or more deception campaigns starts with the user 260 using the campaign manager 216 to create one or more images of the decoy OSs 210 .
- the decoy OS 210 is a full stack operating system that contains baseline configurations and states that are relevant to the protected network 235 in which the decoy OS(s) 210 is deployed.
- the image of the decoy OS(s) 210 is selected according to one or more characteristics of the protected network 235 , for example, a type of OS(s), for example, Windows, Linux, CentOS and/or the like deployed on endpoints such as the endpoints 220 , a number of endpoints 220 and/or the like.
- the decoy OS(s) 210 may also be selected according to the deception application(s) 212 that the user 260 intends to use in the deception environment and are to be hosted by the decoy OS(s) 210 .
- the campaign manager 216 provides one or more generic templates for creating the image of the decoy OS(s) 210 .
- the templates may support one or more of a plurality of OSs, for example, Windows, Linux, CentOS and/or the like.
- the template(s) may be adjusted to include one or more applications and/or services such as the application 212 mapping respective applications 222 according to the configuration of the respective OS(s) in the real processing environment of the protected network 235 .
- the adjusted template(s) may be defined as a baseline idle state of the images of the decoy OS(s) 210 .
- the application(s) 212 included in the idle template may include, for example, generic OS applications and/or services that are part of the out-of-the-box manifest of services, as per the OS, for example, “explorer.exe” for the Windows OS.
- the application(s) 212 included in the idle state image may also include applications and/or services per the policy applied to the protected network 235 , for example, an organization policy.
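- The baseline idle-state template described above may be sketched as follows. The structure and the process names other than the patent's own "explorer.exe" example are illustrative assumptions:

```python
# A sketch of building a decoy OS template with a baseline idle state:
# generic out-of-the-box services per OS plus services mandated by the
# organization's policy. Names other than "explorer.exe" are hypothetical.
def build_decoy_template(base_os: str, policy_services: list) -> dict:
    out_of_box = {
        "windows": ["explorer.exe", "svchost.exe"],
        "linux": ["systemd", "sshd"],
    }
    return {
        "os": base_os,
        # The idle-state baseline against which later activity is compared.
        "idle_state": out_of_box.get(base_os, []) + list(policy_services),
    }

template = build_decoy_template("windows", ["av_agent.exe"])
```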
- the adjustment to the template may be done by the user 260 through the campaign manager 216 GUI and/or using one or more automated tools that analyze the endpoints 220 of the protected network 235 to identify application(s) 222 that are installed and used at the endpoints 220 .
- the campaign manager 216 supports defining the template(s) to include orchestration, provisioning and/or update services for the decoy OS(s) 210 to ensure that the instantiated templates of the decoy OS(s) 210 are up-to-date with the other OS(s) deployed in the protected network 235 .
- the user 260 using the campaign manager 216 creates one or more of the deception applications 212 to be hosted by the decoy OS(s) 210 .
- the deception applications 212 include a manifest of applications, services, tools, processes and/or the like selected according to applications and services such as the applications 222 characteristic to the protected network 235 .
- the deception applications 212 may be selected based on a desired scope of deception and/or characteristic(s) of the protected network 235 .
- the deception application(s) 212 are selected to match deception data objects such as the deception data objects 214 deployed in the endpoints 220 to allow interaction between the deception data objects 214 and the respective deception application(s) 212 .
- the selection of the deception applications 212 may be done by the user 260 using the campaign manager 216 .
- the campaign manager 216 may use one or more automated tools to explore the protected network 235 and identify the applications 222 executed on the endpoints 220 . Based on the identified applications 222 , the campaign manager 216 may automatically select the deception application(s) 212 to be included in the deception environment.
- the application(s) 212 may include one or more applications and/or services mapping respective application(s) 222 , for example, an off-the-shelf application, a custom application, a web based application and/or service, a remote service and/or the like. Naturally, the applications 212 are selected to operate with the decoy OS(s) 210 selected for the deception campaign.
- the campaign manager 216 provides one or more generic templates for one or more of a plurality of deception applications 212 .
- the templates of the deception applications 212 may be adjusted to adapt to the protected network 235 to maintain similarity of the deception environment with the real processing environment of the protected network such that the deception application(s) 212 appear to be valid applications such as the applications 222 .
- the campaign manager 216 may create, define and/or adjust the off-the-shelf application(s) for the deception environment through tools, packages and/or services customized to manipulate the off-the-shelf application(s).
- the campaign manager 216 may also use an Application Programming Interface (API) of a respective off-the-shelf application to create, define and/or adjust the template for creating the deception application 212 mapping the off-the-shelf application(s).
- the API may provide a record, for example, an XML file that describes the expected inputs and/or outputs of the off-the-shelf application available as a containerized application, a service and/or an executable.
- the record may further describe expected interaction of the off-the-shelf application with the OS in idle state(s), i.e. with no user interaction.
- the campaign manager 216 may use the interaction description of the off-the-shelf application with the OS to adjust the template of the respective deception application 212 to operate with the decoy OS 210 . Defining the idle state(s) may allow the campaign manager 216 to detect user interaction once the deception campaign is launched. Containerization and declaration may be required for the custom applications to allow the campaign manager 216 to take advantage of the template mechanism for use with the custom application(s).
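- The idle-state detection principle described above can be sketched very simply: once the decoy's idle interactions are declared, any observed event outside that baseline indicates interaction with the decoy. The event names below are hypothetical:

```python
# Sketch: detecting user interaction with a decoy by deviation from its
# declared idle-state baseline. Event names are illustrative assumptions.
IDLE_BASELINE = {"heartbeat", "log_rotation", "self_update"}

def detect_interaction(observed_events: set) -> set:
    # Any observed event not in the idle baseline indicates interaction
    # with the decoy, i.e. a potential unauthorized operation.
    return observed_events - IDLE_BASELINE

alerts = detect_interaction({"heartbeat", "smb_read", "login_attempt"})
```

Because legitimate users have no reason to touch the decoy at all, even a single out-of-baseline event is a high-fidelity signal.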
- the campaign manager 216 may use the API of the web based application(s) and/or service(s) and the remote service(s) similarly to what is done for the off-the-shelf application(s) and/or service(s) to define the expected inputs, outputs, web responses and/or back-end data structures.
- the campaign manager 216 defines relationship(s) between each of the deception applications 212 and the respective decoy OS(s) 210 to set the processing interaction between them during the deception campaign.
- the relationship(s) may be based on pre-defined declarations provided by the templates according to the type of the respective deception application 212 and the corresponding decoy OS 210 .
- the relationship declarations may be further adjusted automatically by the campaign manager 216 and/or by the user 260 using the campaign manager 216 to adapt to one or more operational, structural and/or organizational characteristics of the protected network.
- the operational, structural and/or organizational characteristics may include, for example, a network structure of the protected network, a mapping method of the application(s) 222 used in the protected network and/or the like.
- the deception environment may be further created and/or adapted to emulate one or more applications and/or services such as the applications 222 that are provided by the cloud services 245 .
- the applications 222 that are provided by the cloud services 245 may not be directly associated with the decoy OSs 210 but may rather be considered as decoy entities on their own.
- cloud services 245 such as, for example the AWS may provide an application 222 of type Simple Storage Service (S3) bucket service.
- the S3 bucket service has become a basic building block of any cloud deployment to the AWS.
- the S3 bucket service is used extensively for a plurality of storage purposes, for example, a dumb storage of large amounts of logs, an intermediate storage for software deployment, an actual storage mechanism used by web application(s) to store files and/or the like.
- the S3 bucket service provided by the AWS establishes a new bucket storage concept by providing an API allowing extensive capabilities and operability for the S3 bucket service, for example, monitoring of action(s) on the S3 bucket either read and/or write operations. This may serve to extend the deception environment to take advantage of the S3 bucket as a decoy, i.e. an S3 storage decoy.
- the S3 storage decoy may be created and deployed as an active part of the deception environment.
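- The monitoring idea behind the S3 storage decoy can be sketched as follows. Because a decoy bucket has no legitimate users, every read or write on it is suspect. The bucket and key names are hypothetical, and a real deployment would use the cloud provider's bucket-event API rather than this in-memory stand-in:

```python
# Sketch of an S3-style storage decoy monitor: all access to the decoy
# bucket (reads and writes alike) is recorded as an alert. Names are
# illustrative assumptions; a real system would subscribe to the cloud
# provider's bucket event notifications instead.
class DecoyBucketMonitor:
    def __init__(self, bucket_name: str):
        self.bucket_name = bucket_name
        self.alerts = []

    def on_access(self, operation: str, key: str, source_ip: str):
        # Record every operation on the decoy bucket as a potential
        # unauthorized operation.
        self.alerts.append({
            "bucket": self.bucket_name,
            "operation": operation,  # e.g. "GET" or "PUT"
            "key": key,
            "source_ip": source_ip,
        })

monitor = DecoyBucketMonitor("backup-logs-decoy")
monitor.on_access("GET", "payroll/2016.csv.enc", "203.0.113.7")
```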
- the campaign manager 216 is used to launch the decoy OS(s) 210 and the deception application(s) 212 .
- the decoy OS(s) 210 is instantiated in one or more forms as presented for the systems 200 A, 200 B, 200 C, 200 D, 200 E and/or 200 F.
- the instantiation of the decoy OS(s) 210 may be defined by the configuration of the groups declared for the deception campaign as well as by the configuration of the protected network.
- the instantiation of the decoy OS(s) 210 over the dedicated decoy server 201 and/or the virtualization infrastructure, for example, ESXi, XEN and/or KVM such as the decoy virtual machine(s) 203 B and/or 205 and/or the hosted service(s) 207 may be done manually by the user 260 and/or automatically using the campaign manager 216 .
- the campaign manager 216 is used to create the deception data objects 214 and define the interaction with one or more of the deception applications 212 by declaring the relationship(s) of each of the deception data objects 214 .
- the deception data objects 214 are created to emulate valid data objects used to interact with the application 222 .
- the deception data objects 214 may include, for example, one or more of the following:
- the deception data objects 214 are directed, once deployed, to attract the potential attackers during the OODA process in the protected network.
- the deception data objects 214 may be created with one or more attributes that may be attractive to the potential attacker, for example, a name, a type and/or the like.
- the deception data objects 214 may be created to attract the attention of the attacker using an attacker stack, i.e. tools, utilities, services, application and/or the like that are typically used by the attacker. As such, the deception data objects 214 may not be visible to users using a user stack, i.e. tools, utilities, services, application and/or the like that are typically used by a legitimate user.
- Taking this approach may allow creating the deception campaign in a manner in which the legitimate user would need to go out of his way and perform unnatural operations and/or actions to detect, find and/or use the deception data objects 214 , while doing so may be the most natural course of action or method of operation for the attacker.
- browser cookies are rarely accessed and/or reviewed by the legitimate user(s). At most, the cookies may be cleared en-masse.
- one of the main methods for the attacker(s) to obtain website credentials and/or discover internal websites visited by the legitimate user(s) is to look for cookies and analyze them.
- open shares, which indicate shares of network resources made by the legitimate user(s) using the application(s) 212 , are typically not available through the user stack, while reviewing them is a common method for the attacker, who may do so using, for example, a "net use" command from a shell.
- Other examples include, for example, web browsers history logs, files in temporary folders, shell command history logs, etc. that are typically not easily accessible using the user stack while they are easily available using the attacker stack.
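- Planting a breadcrumb in an artifact of the attacker stack, such as a shell command history log, can be sketched as follows. The file path, account name and decoy host address are hypothetical; the planted command must reference a decoy endpoint only, never a real resource:

```python
import os
import tempfile

# Sketch: plant a decoy entry in a shell-history-style file. Legitimate
# users rarely read history files, but attackers commonly harvest them.
# The decoy host address and command are illustrative assumptions.
def plant_history_breadcrumb(history_path: str, decoy_host: str):
    with open(history_path, "a") as f:
        # The command points the attacker at the decoy endpoint only.
        f.write(f"ssh backup@{decoy_host}\n")

history = os.path.join(tempfile.mkdtemp(), ".bash_history")
plant_history_breadcrumb(history, "10.9.9.9")
```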
- Each of the deception data objects 214 is configured to interact with one or more of the decoy OSs 210 and/or the deception applications 212 .
- the deception data objects 214 may be created and their relationships defined according to the deception policy and/or methods defined for the deception campaign. Naturally, the deception policy and/or methods that dictate the selection and configuration of the deception application(s) 212 also dictate the type and configuration of the deception data objects 214 .
- the deception data objects 214 may further be created according to the groups defined for the deception campaign. For example, the deceptive data object 214 of type “browser cookie” may be created to interact with a website and/or an application launched using an application 212 of type “browser” created during the deception campaign. As another example, a deceptive data object 214 of type “mapped share” may be created to interact with an application 212 of type “share service” created during the deception campaign.
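- The relationship declarations in the two examples above may be sketched as a simple mapping from breadcrumb type to deception application. The application names are illustrative assumptions:

```python
# Sketch of declaring breadcrumb-to-deception-application relationships,
# following the "browser cookie" -> browser-served site and
# "mapped share" -> share service examples. Names are hypothetical.
relationships = [
    {"breadcrumb": "browser_cookie", "interacts_with": "decoy_web_app"},
    {"breadcrumb": "mapped_share",   "interacts_with": "decoy_share_service"},
]

def application_for(breadcrumb_type: str):
    # Resolve which deception application a deployed breadcrumb leads to,
    # or None if the breadcrumb type has no declared relationship.
    for rel in relationships:
        if rel["breadcrumb"] == breadcrumb_type:
            return rel["interacts_with"]
    return None
```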
- the deception data objects 214 may be created and/or adapted according to the configuration of the protected network 235 and/or the construction of the deception environment. As an example, it is assumed that the deception campaign is launched to create the deception campaign for the virtual protected network 235 as described in the systems 200 E and/or 200 F.
- the deception environment may be created to place a stronger focus on a standard network setup, for example, remote access using Secure Shell (SSH), remote backup using SSH and/or Secure Copy (SCP), SSH using private keys (Privacy-enhanced Electronic Mail (PEM) files) and/or the like. Focusing on the standard network setup for these configuration(s) is done as opposed to, for example, the user/password combination deception data objects 214 created for the deception environment for the local implementation of the protected network 235 as described in the systems 200 A- 200 D.
- the deception data objects 214 may be created and deployed to interact with one or more deception applications 212 emulating one or more applications and/or services such as the applications 222 that are provided by the cloud services 245 .
- the deception data objects 214 may be created and deployed to interact with the S3 storage decoy. Due to regulation, it is common practice to encrypt the data that is stored through the S3 bucket service in order to protect the stored data from breaches that may be initiated by the cloud provider, for example, Amazon.
- the decryption key(s) may be stored in the same storage mechanism, for example, the AWS S3 bucket service; however, in order to increase the security level, the decryption key(s) are typically stored through a storage bucket service provided by one or more other cloud providers, for example, the Google Cloud Engine.
- the campaign manager 216 may be used to create an S3 storage decoy that may store data that is set to attract the attacker.
- Deception data object(s) 214 of a type decryption key may be created to interact with the S3 storage decoy.
- the decryption key deception data object(s) 214 may be deployed using the storage mechanism of the same cloud service(s) provider providing the S3 storage decoy and/or using the storage mechanism of one or more of the other cloud service(s) providers.
- This deception extension that takes advantage of the S3 bucket service may seem highly realistic, valid and attractive to the potential attacker seeking to obtain the encrypted data available at the supposedly valid S3 storage decoy.
- the campaign manager 216 is used to deploy the deception data objects 214 on one or more of the endpoints 220 in the protected network 235 to attract the potential attackers who attempt to OODA the protected network 235 .
- the deployment of the deception data objects 214 may be done using the groups' definition.
- the deceptive data object 214 of the type “browser cookie” may be deployed using a Group Policy Login Script throughout a respective network segment comprising a subset of the endpoints 220 .
- the deceptive data object 214 of the type “mapped share” may be deployed using a Windows Management Instrumentation (WMI) script to a specific subset of endpoints 220 in the domain of the protected network 235 .
- the deception data objects 214 may be deployed using automated tools, for example, provisioning and/or orchestration tools, for example, Group Policy, Puppet, Chef and/or the like.
- the deployment of the deception data objects 214 may also be done using local agents executed on the endpoints 220 .
- the local agents may be pre-installed on the endpoints 220 and/or they may be volatile agents that install the deception data objects 214 and then delete themselves.
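- The volatile-agent behavior described above, installing the breadcrumbs and then deleting itself, can be sketched as follows. All paths and the breadcrumb content are hypothetical; a real agent would also clean up its own logs and other artifacts:

```python
import os
import tempfile

# Sketch of a "volatile agent": it installs breadcrumb files on the
# endpoint and then deletes itself, leaving only the breadcrumbs behind.
# Paths and contents are illustrative assumptions.
def volatile_deploy(agent_path: str, breadcrumbs: dict):
    for path, content in breadcrumbs.items():
        with open(path, "w") as f:
            f.write(content)
    # Self-deletion: the agent removes its own executable after deployment.
    if os.path.exists(agent_path):
        os.remove(agent_path)

workdir = tempfile.mkdtemp()
agent = os.path.join(workdir, "agent.py")
with open(agent, "w") as f:
    f.write("# agent stub\n")
cookie = os.path.join(workdir, "decoy_cookie.txt")
volatile_deploy(agent, {cookie: "session=decoy"})
```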
- the deception environment and/or the campaign manager 216 may provide custom scripts and/or commands that may be executed by the user 260 in the protected network 235 to deploy the deception data objects 214 .
- the campaign manager 216 provides a GUI to allow the user 260 to create, configure, launch and/or deploy one or more of the deception components.
- the GUI may be provided by the campaign manager 216 locally when the user 260 interacts directly with the decoy server 201 and/or the decoy VM 203 A.
- the campaign manager 216 may perform as a server that provides the GUI to the user 260 through one or more applications for accessing the campaign manager 216 remotely, for example, the local agent and/or the web browser executed on one or more of the endpoints 220 .
- FIG. 3A , FIG. 3B and FIG. 3C are screenshots of an exemplary configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention.
- Screenshots 300 A, 300 B, 300 C and 300 D may be presented to one or more users such as the user 260 through a GUI of a campaign manager such as the campaign manager 216 .
- the GUI allows the user 260 to create and/or launch a deception campaign by creating, configuring and launching one or more deception components such as the decoy OS(s) 210 , the deception application(s) 212 and/or the deception data objects (breadcrumbs) 214 .
- the campaign manager 216 may use pre-defined templates that may be adjusted according to the protected network 235 characteristics in order to create the deception components.
- the screen shot 300 A presents an interface for creating one or more images of the decoy OS(s) 210 .
- the user 260 may select a decoys tab 310 A to create one or more images of the decoy OS(s).
- the campaign manager 216 presents an interface for creating an image for the decoy OS 210 to allow the user 260 to select an OS template, for example, Linux, Windows, CentOS and/or the like for creating an image for the decoy OS 210 .
- the user 260 may further assign a name designating the decoy OS 210 image and/or a host where the decoy OS 210 will be launched.
- the user 260 selected a template of Linux Ubuntu to create an image for a decoy OS 210 designated “HR_Server” that is hosted by an endpoint 220 designated “hrsrv01”.
- the screen shot 300 B presents an interface for creating one or more deception applications 212 .
- the user 260 may select a services tab 310 B to create one or more deception applications 212 .
- the campaign manager 216 presents an interface for creating one or more deception applications 212 to allow the user 260 to select a template for creating the deception application(s) 212 .
- the user 260 may further assign a name designating the created deception application 212 and/or define a relationship (interaction) between the created deception application 212 and one or more of the decoy OSs 210 .
- the user 260 selected a template of an SMB service for a deception application 212 designated “Personnel_Files” that is included in a services group designated “HR_Services” and connected to the decoy OS 210 “HR_Server”. Through the interface, the user 260 may activate/deactivate the selected deception application 212. The interface may further be used to display the deception data objects 214 that are attached to (interact with) the created deception application 212.
- the screenshot 300 C presents an interface for creating one or more deception data objects (breadcrumbs) 214 .
- the user 260 may select a breadcrumbs tab 310 C to create one or more deception data objects 214 .
- the campaign manager 216 presents an interface for creating one or more deception data objects 214 to allow the user 260 to select a template representing a type of a data object for creating the deception data object 214.
- the user 260 may further assign a name designating the created deception data object 214 and/or define a relationship (interaction) between the created deception data object 214 and one or more of the deception applications 212 .
- the user 260 selected a template of a Network share for a deception data object 214 designated “Personnel_Files_BC” that is included in a breadcrumbs group designated “HR_bc_group” and connected to the SMB deception application 212 “Personnel_Files” that is part of the services group “HR_Services”.
- the screenshot 300 D presents an interface for generating a script for deploying the created deception data object(s) 214. While the breadcrumbs tab 310 C is presented, the user 260 may select the generate button presented by the interface. The campaign manager 216 may then generate a script that, when executed by one or more of the endpoints 220, creates the deception data object(s) 214 on the respective endpoint(s) 220. The campaign manager 216 may create the script such that, once executed by the endpoint 220, it deletes itself, leaving no traces on the endpoint 220.
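As a non-limiting illustration, a generated deployment script of the kind described above may be sketched in Python; the breadcrumb file name, its content and the target directory are assumptions for illustration only, not the actual generated script:

```python
import os
import sys
import tempfile

def deploy_breadcrumb(directory, name, content):
    """Write a deception data object (breadcrumb) onto the endpoint,
    for example a file pointing at a decoy SMB share."""
    path = os.path.join(directory, name)
    with open(path, "w") as fh:
        fh.write(content)
    return path

def self_delete(script_path=None):
    """Best-effort removal of the deployment script itself so the
    deployment leaves no trace on the endpoint."""
    try:
        os.remove(script_path or os.path.abspath(sys.argv[0]))
    except OSError:
        pass  # some platforms keep the running script file locked

# Illustrative deployment: drop a breadcrumb naming the decoy SMB share.
breadcrumb_path = deploy_breadcrumb(tempfile.gettempdir(),
                                    "Personnel_Files_BC.txt",
                                    r"\\hrsrv01\Personnel_Files")
```

After deployment the script would call `self_delete()` as its last action, matching the "no traces" behavior described above.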
- the deception environment is operational and the relationships between the deception data objects 214 , the deception application(s) 212 and the decoy OS(s) 210 are applicable.
- FIG. 4 is a block diagram of exemplary building blocks of a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention.
- a deception environment 400 created using a campaign manager such as the campaign manager 216 comprises a plurality of deception data objects 214 deployed on one or more endpoints such as the endpoints 220 in a protected network such as the protected network 235 .
- the campaign manager 216 is used to define relationships 410 between each of the deception data objects 214 and one or more of a plurality of deception applications 212.
- the campaign manager 216 is also used to define relationships 412 between each of the deception applications 212 and one or more of a plurality of decoy OSs 210 .
- the deception data objects 214 , the deception applications 212 and/or the decoy OSs 210 may be arranged in one or more groups 402 , 404 and/or 406 respectively according to one or more of the characteristics of the protected network 235 .
- operations that use data available in the deception data objects 214 interact with the deception application(s) 212 according to the defined relationships 410 that in turn interact with the decoy OS(s) 210 according to the defined relationships 412 .
- the defined relationships 410 and/or 412 may later allow detection of one or more unauthorized operations by monitoring and analyzing the interaction between the deception data objects, the deception applications 212 and/or the decoy OSs 210 .
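As a non-limiting sketch, the relationships 410 and 412 may be pictured as a simple object graph; the class and attribute names below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class DecoyOS:
    name: str                          # e.g. "HR_Server" hosted on "hrsrv01"

@dataclass
class DeceptionApplication:
    name: str                          # e.g. the SMB service "Personnel_Files"
    decoy_os: DecoyOS                  # relationship 412: application to decoy OS

@dataclass
class DeceptionDataObject:
    name: str                          # e.g. the breadcrumb "Personnel_Files_BC"
    application: DeceptionApplication  # relationship 410: breadcrumb to application

def trace(breadcrumb):
    """Follow the defined relationships from a touched breadcrumb up to the
    decoy OS, the path an unauthorized operation would traverse."""
    app = breadcrumb.application
    return [breadcrumb.name, app.name, app.decoy_os.name]

hr_server = DecoyOS("HR_Server")
personnel_files = DeceptionApplication("Personnel_Files", hr_server)
personnel_bc = DeceptionDataObject("Personnel_Files_BC", personnel_files)
```

Monitoring then amounts to watching for operations that walk this graph, since legitimate traffic has no reason to touch any node in it.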
- FIG. 5 is a block diagram of an exemplary utilization of deception environment building blocks for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention.
- an exemplary deception environment 500 is created and launched, using a campaign manager such as the campaign manager 216, to protect a bank.
- the network of the bank, such as the network 230, is typically divided into two segments (groups): an internal office network comprising a plurality of workstations used by employees and a network of Automatic Teller Machines (ATMs) that are available to customers.
- Both the workstations and the ATMs are exemplary endpoints such as the endpoint 220 and/or the client terminal 221 .
- the deception environment 500 is created to comprise two groups A and B, each directed at one of two main deception “stories”: a first story for the ATM network and a second story for the internal network comprising the workstations.
- a plurality of deception data objects such as the deception data objects 214 that are grouped in a group 402 A are deployed on each of the workstations.
- the deception data objects 214 deployed on the workstations may include, for example, an open share deception data object 214 A for sharing and/or accessing various company documents, a browser cookie deception data object 214 B for an internal company website and a hashed-credentials deception data object 214 C used to access an internal company website and/or log into a faked domain.
- a plurality of deception data objects (breadcrumbs) such as the deception data objects 214 that are grouped in a group 402 B are deployed on each of the ATMs.
- the deception data objects 214 deployed on the ATMs may include, for example, the hashed-credentials deception data object 214 C and a configuration file deception data object 214 D for a faked ATM service.
- the deception applications 212 are created and launched.
- the deception applications 212 may be divided into two groups 404 A and 404 B to interact with the deception data objects 214 of the internal network and the ATM network respectively.
- the group 404 A may include, for example, the deception applications 212 A, 212 B and 212 C interacting with the deception data objects 214 of the group 402 A.
- the group 404 B may include, for example an ATM service deception application 212 D utilizing the faked ATM service and interacting with the deception data object 214 C of the group 402 B and the configuration file deception data object 214 D.
- Interaction and/or relationship 410 F and/or 410 H may be defined for the interaction of the deception data object 214 C and the deception data object 214 D respectively with the deception application 212 D.
- the deception applications 212 A through 212 D are hosted by decoy OSs such as the decoy OS 210 .
- the SMB share deception application 212 A and the IIS server deception application 212 B are hosted by a Windows Server 2003 decoy OS 210 A, while the domain controller deception application 212 C is hosted by a Windows Server 2008R2 decoy OS 210 B.
- the Windows Server 2003 decoy OS 210 A and the Windows Server 2008R2 decoy OS 210 B are grouped together in a group 406 A.
- the ATM service deception application 212 D is hosted by a Windows XP SP2 decoy OS 210 C that is associated with a group 406 B.
- Interaction and/or relationship 412 A and/or 412 B may be defined for the interaction of the deception application 212 A and the deception application 212 B respectively with the decoy OS 210 A.
- Interaction and/or relationship 412 C may be defined for the interaction of the deception application 212 C with the decoy OS 210 B.
- Interaction and/or relationship 412 D may be defined for the interaction of the deception application 212 D with the decoy OS 210 C.
- the campaign manager 216 dynamically and continuously updates the deception environment and/or the deception data objects 214 deployed on the endpoints 220.
- the deception environment is constantly updated to make the deception data objects 214 seem as valid data objects to the potential attacker.
- the campaign manager 216 updates usage indication(s), for example, footprints, traces, access residues, log records and/or the like in the respective deception applications 212, indicating usage of the deception data objects 214.
- the campaign manager 216 updates the usage indication(s) to create an impression (impersonate) that the deception data objects 214 are valid and/or real data objects used by users, applications, services and/or the like in the protected network 235.
- the campaign manager 216 may use one or more automated tools, for example, scripts to update the deception environment and/or the deception data objects 214 .
- the campaign manager 216 may be configured to continuously update the deception environment and/or the deception data objects 214 for a pre-defined time period, for example, a day, a week, a month, a year and/or for an unlimited period of time.
- the campaign manager 216 may apply a schedule for updating the deception environment.
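As a non-limiting sketch, the scheduled "freshening" of usage indications described above may look as follows in Python; the log format, user name and action fields are fabricated for illustration:

```python
import datetime

def freshen(app_logs, now=None):
    """Append a plausible usage record to each deception application's log
    so the deployed breadcrumbs keep looking like recently used objects."""
    now = now or datetime.datetime.now()
    stamp = now.strftime("%Y-%m-%d %H:%M:%S")
    for app_name, records in app_logs.items():
        # The user name and action below are fabricated usage indications.
        records.append(f"{stamp} user=jdoe action=read share={app_name}")
    return app_logs

# Illustrative state: one log per deception application, updated on a schedule.
app_logs = {"Personnel_Files": []}
freshen(app_logs, datetime.datetime(2017, 1, 25, 9, 30))
```

A scheduler would simply call `freshen` at the configured interval for the duration of the campaign.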
- the campaign manager 216 may therefore detect a returning potential attacker that attempted to access the protected network 235 in the past.
- the campaign manager 216 updates the deception environment according to a behavioral pattern of the potential attacker such that the deception data objects are adapted to trap the potential attacker.
- the campaign manager 216 may further adapt the deception environment and/or the deception data objects 214 according to one or more characteristics of the returning potential attacker.
- the campaign manager 216 continuously monitors the protected network 235 in order to detect the potential attacker.
- the potential attacker may be detected by identifying one or more unauthorized operations that are initiated in the protected network 235 .
- the unauthorized operation(s) may be initiated by a user, a process, a utility, an automated tool, an endpoint and/or the like.
- the unauthorized operation(s) may originate within the protected network 235 and/or from a remote location accessing the protected network 235 over the network 230 and/or the internet 240 .
- the campaign manager 216 monitors the decoy OS(s) 210 and/or the deception applications 212 at one or more levels and/or layers, for example:
- the campaign manager 216 analyzes the monitored data and/or activity to detect an unauthorized operation that may indicate the presence of the potential attacker. Based on the analysis, the campaign manager 216 creates one or more of a plurality of detection events, for example, a touch event, an interaction event, a code execution event, an OS interaction event and/or a hardware interaction event.
- the analysis conducted by the campaign manager 216 may include false positive analysis to avoid identification of one or more operations initiated by one or more legitimate users, processes, applications and/or the like as the potential unauthorized operation.
- the touch event(s) may be created when the campaign manager 216 detects network traffic on one or more ports.
- the interaction events may be created when the campaign manager 216 detects a meaningful interaction with one or more of the deception applications 212.
- the campaign manager 216 may create the interaction event when detecting usage of data that is included, provided and/or available from one or more of the deception data objects 214 for accessing and/or interacting with one or more of the deception applications 212 .
- the campaign manager 216 may create an interaction event when detecting an attempt to logon to a deception application 212 of type “remote desktop service” using credentials stored in a deception data object 214 of type “hashed credentials”.
- the campaign manager 216 may detect a file access on an SMB share deception application 212 where the file name is available from a deception data object 214 of type “SMB mapped shares”. Additionally, the campaign manager 216 may create an interaction event when detecting interaction with the deception application(s) 212 using data that is available from valid data objects, i.e. not one of the deception data objects 214. For example, the campaign manager 216 may detect an HTTP request to an IIS deception application 212. Optionally, the campaign manager 216 may be configured to create interaction events when detecting one or more pre-defined interaction types, for example, logging on to a specific deception application 212, executing a specific command, clicking a specific button(s) and/or the like.
- the user 260 may further define “scripts” that comprise a plurality of the pre-defined interaction types to configure the campaign manager 216 to create an interaction event at detection of complex interactions between one or more of the deception components, i.e. the decoy OS(s) 210 , the deception application(s) 212 and/or the deception data object(s) 214 .
- the code execution events may be created when the campaign manager 216 detects that foreign code is executed on the underlying OS of one or more of the decoy OSs 210 .
- the OS interaction event may be created when the campaign manager 216 detects that one or more applications such as the applications 222 attempt to interact with one or more of the decoy OSs 210 , for example, opening a port, changing a log and/or the like.
- the hardware interaction event may be created when the campaign manager 216 detects that one or more of the decoy OSs 210 and/or the deception applications 212 attempts to access one or more hardware components of the hardware platform on which the decoy OSs 210 and/or the deception applications 212 are executed.
- the user 260 may define complex sequences comprising a plurality of events to identify more complex operations and/or interactions detected with the deception components. Defining the complex sequences may further serve to avoid false positive identification.
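The event creation and sequence matching described above may be sketched, as a non-limiting illustration, in Python; the observation fields and event names are illustrative assumptions mirroring the event types listed above:

```python
def classify(observation):
    """Map a monitored observation on a decoy OS or deception application
    to one of the detection event types."""
    kind = observation["kind"]
    if kind == "port_traffic":
        return "touch"            # network traffic on a monitored port
    if kind == "logon" and observation.get("credentials_from"):
        return "interaction"      # data from a deception data object was used
    if kind == "process_start":
        return "code_execution"   # foreign code executed on the decoy OS
    return None

def matches_sequence(events, sequence):
    """A user-defined 'script': match only when the events contain this
    ordered sub-sequence, which also helps avoid false positives."""
    it = iter(events)
    return all(e in it for e in sequence)

observations = [
    {"kind": "port_traffic", "port": 445},
    {"kind": "logon", "app": "remote desktop service",
     "credentials_from": "hashed credentials"},
    {"kind": "process_start", "binary": "unknown.exe"},
]
events = [classify(o) for o in observations]
```

Requiring an ordered sub-sequence (rather than any single event) is one simple way a legitimate port scan or backup job would fail to trigger the detection.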
- the campaign manager 216 creates an activity pattern of the potential attacker by analyzing the identified unauthorized operation(s). Using the activity pattern, the campaign manager 216 may gather useful forensic data on the operations of the potential attacker and may classify the potential attacker in order to estimate a course of action and/or intentions of the potential attacker. The campaign manager 216 may then adapt the deception environment to tackle the estimated course of action and/or intentions of the potential attacker.
- the campaign manager 216 employs one or more machine learning processes, methods, algorithms and/or techniques on the identified activity pattern.
- the machine learning may serve to increase the accuracy of classifying the potential attacker based on the activity pattern.
- the machine learning may further be used by campaign manager 216 to adjust future deception environments and deception components to adapt to the learned activity pattern(s) of a plurality of potential attacker(s).
- the campaign manager 216 generates one or more alerts following the detection event indicating the potential unauthorized operation.
- the user 260 may configure the campaign manager 216 to set an alert policy defining one or more of the events and/or combination of events that trigger the alert(s).
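As a non-limiting sketch, such an alert policy, i.e. the event combination(s) that trigger an alert, may be modeled as follows; the policy contents are illustrative assumptions:

```python
def should_alert(observed_events, alert_policy):
    """Trigger an alert when every event of any one configured
    combination has been observed."""
    seen = set(observed_events)
    return any(combination <= seen for combination in alert_policy)

# Illustrative policy: a lone touch event is too noisy to alert on, but any
# interaction event, or a touch combined with code execution, raises an alert.
alert_policy = [{"interaction"}, {"touch", "code_execution"}]
```

The user 260 would edit `alert_policy` (through the GUI) rather than the matching logic itself.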
- the campaign manager 216 may be configured during the creation of the deception campaign and/or at any time after the deception campaign is launched.
- the alert may be delivered to the user 260 monitoring the campaign manager 216 and/or through any other method, for example, an email message, a text message, an alert in a mobile application and/or the like.
- the campaign manager 216 and/or the deception environment may be further configured to take one or more additional actions following the alert.
- One action may be pushing a log of potential unauthorized operation(s) using one or more external applications and/or services, for example, syslog, email and/or the like.
- the log may be pushed with varying levels of urgency according to the policy defined for the deception campaign.
- the external system(s) in turn may take additional actions such as, for example, mitigating the potential threat by blocking executables detected as malware, blocking network access to compromised endpoints 220 and/or the like.
- Another action may be taking a snapshot of the affected decoy OSs 210 and/or deception applications 212 and turning them off in order to limit the potential attacker's ability to use the decoy OSs 210 and/or the deception applications 212 as a staging point for further action(s).
- the snapshot may serve for later forensic analysis to analyze the data captured before and during the attack until the turn off time.
- Yet another action may be to trigger call back function(s) to one or more clients using an API supported by the deception environment. Details of the attack may be relayed to the client(s) that may be configured with user-defined procedure(s) and/or direction(s) to take further action.
- the client(s) may use the API of the deception environment to create, launch and/or deploy one or more additional deception elements, for example, the decoy OS 210 , the deception application 212 and/or the deception data object 214 .
- the campaign manager 216 presents the user(s) 260 with real time and/or previously captured status information relating to the deception campaign(s), for example, created events, detected potential attackers, attack patterns and/or the like.
- the campaign manager 216 may provide, for example, a dashboard GUI provided through the user interface 206 .
- the campaign manager 216 may also present the status information through a remote access application, for example, a web browser and/or a local agent executed on one of the endpoints 220 and/or at a remote location accessing the campaign manager 216 remotely over the network 230 and/or the internet 240.
- FIG. 6A is a screenshot of an exemplary first status screen of a campaign manager dashboard presenting structural information of a deception campaign, according to some embodiments of the present invention.
- a screenshot 600 A describing a deception campaign may be presented to one or more users such as the user 260 through a GUI of a campaign manager such as the campaign manager 216 .
- the user 260 may select a campaign tab 610 A to show an overall view of the deception campaign launched in the protected network 235 . Once the user 260 selects the campaign tab 610 A the campaign manager 216 presents status information on the deception campaign.
- the campaign manager 216 may present a structural diagram of the deception campaign including, for example, the deception components used during the deception campaign and/or the relationships (interactions) defined for each of the deception components. Furthermore, through the provided interface, the user 260 may define the type of events that may trigger alerts.
- FIG. 6B is a screenshot of an exemplary second status screen of a campaign manager dashboard for investigating potential threats detected during a deception campaign, according to some embodiments of the present invention.
- the user 260 may select an investigation tab 610 B to show potential threats, for example, unauthorized operation(s), suspected interactions and/or the like that may indicate a potential attacker operating within the protected network 235.
- the campaign manager 216 presents status information on potential threats.
- Each entry may present one or more potential threats, and the user 260 may select any one of the entries to further investigate the nature of the potential threat.
- a deception environment may be created and/or updated dynamically in a protected network in response to detection of the potential attacker.
- the deception environment may be created and/or updated in response, for example, to an attempt of a potential attacker to access the protected network using false access information of a certain user of the protected network.
- the deception environment may be further updated in response to one or more operations the potential attacker may apply as part of an attack vector.
- the potential attacker initiating the access attempt and/or the attack vector may be, for example, a human user, a process, an automated tool, a machine and/or the like.
- the potential attacker may predict (“guess”) the access information of the certain user, for example, a credential, a password, a password hint question and/or the like based on public information of the certain user, for example, an email address, a phone number, a work place, a home address, a parent name, a spouse name, a child name, a birth date and/or the like.
- the potential attacker may obtain the public information of the certain user from one or more publicly accessible networked resources, for example, an online news website, a workplace website, an online government service, an online social network (e.g. Facebook, Google+, LinkedIn, etc.) and/or the like.
- the potential attacker may assume a more active role. For example, the potential attacker may set up a fictive service and attract the certain user to open an account on the fictive service. Based on the access information the certain user used for creating the account on the fictive service, the potential attacker may predict the access information the certain user may use for accessing one or more valid (genuine) services. In another example, the potential attacker may apply one or more social engineering techniques, for example, phishing and/or the like, to get the certain user to reveal his password. During the phishing attack, the certain user is led to believe he is accessing one or more of the valid (genuine) services and may provide his real access information.
- the potential attacker may be led to believe he has entered a real processing environment of the protected network while in fact he is granted access into the deception environment. This may be done by identifying false access information used by the potential attacker while attempting to access the protected network.
- the false access information may be identified by predicting it using the public information of the certain user, thereby simulating the prediction process done by the potential attacker. Additionally and/or alternatively, the false access information may be identified as false access information that was provided to the potential attacker by intentionally (knowingly) following the path the potential attacker lays out to lead the certain user to reveal his access information at the fictive website and/or fictive service.
- In order to detonate the attack, i.e. cause the potential attacker to operate, for example, apply the attack vector, the potential attacker has to be convinced that the deception environment (also known as a “sandbox”) he unknowingly entered is a real (valid) processing environment. This may be done by dynamically updating the deception environment in real time in response to the access attempt and/or in response to one or more operations of the attack vector, which may be a multi-stage attack vector.
- a process 700 may be executed by a campaign manager such as the campaign manager 216 to protect a protected network such as the protected network 235 from a potential attacker attempting to access the protected network 235 .
- the process 700 may be carried out by the campaign manager 216 in one or more of the systems 200 A, 200 B, 200 C, 200 D, 200 E and/or 200 F, collectively referred to hereinafter as the system 200 for brevity.
- the process 700 starts with the campaign manager 216 detecting an attempt of the potential attacker to access the protected network 235 .
- the campaign manager 216 may detect the attempted access by identifying that the potential attacker uses false access information, for example, a credential, a password, a password hint question and/or the like of a certain user of the protected network 235 .
- the campaign manager 216 may identify the false access information the potential attacker uses by comparing the false access information to predicted access information of the certain user the campaign manager 216 predicts itself. By predicting (“guessing”) the access information of the certain user, the campaign manager 216 may simulate methods and/or techniques that may be used by the potential attacker to predict the access information of the certain user. Often the certain user may use his (own) personal information to create his access information in order to easily remember the access information. The potential attacker may therefore use public information available for the certain user, for example, an email address, a phone number, a work place, a work place address, a residence address, a parent name, a spouse name, a child name, a birth date and/or the like to predict (“guess”) the access information of the certain user.
- the potential attacker may obtain the public information of the certain user from one or more publicly accessible networked resources, for example, an online news website, a workplace website, an online government service, an online social media or network (e.g. Facebook, Google+, LinkedIn, etc.) and/or the like.
- the campaign manager 216 may create a list of predicted access information candidates the certain user may typically create for accessing one or more privileged resources on the protected network 235 , for example, a service, an account, a network, a database, a file and/or the like.
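As a non-limiting sketch, generating such a list of predicted access information candidates from collected public information may look as follows; the personal-detail fields and the suffix rules are illustrative assumptions about how users commonly derive passwords:

```python
import itertools

def predict_candidates(public_info, suffixes=("", "1", "2017")):
    """Build the list of predicted access information candidates the
    certain user might derive from publicly available personal details."""
    tokens = [t for t in public_info.values() if t]
    candidates = set()
    for token in tokens:
        for suffix in suffixes:
            candidates.add(token.capitalize() + suffix)
    # Also combine pairs of details, e.g. spouse name + street name.
    for a, b in itertools.permutations(tokens, 2):
        candidates.add(a.capitalize() + b.capitalize())
    return candidates

# Illustrative public information collected for the certain user.
public_info = {"spouse_name": "dana", "residence_street": "shorashim"}
candidates = predict_candidates(public_info)
```

The resulting set simulates the guesses the potential attacker could plausibly derive from the same public sources.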
- the campaign manager 216 may be configured to comply with one or more privacy laws, for example, according to a type of information, a geographical location of the certain user and/or the like, when collecting the public information of the certain user in order to avoid breaching privacy.
- the campaign manager 216 evaluates robustness of the created access information by comparing the created access information to the predicted access information candidates.
- the comparison applied by the campaign manager 216 may not be a strict comparison in which the created access information matches the predicted access information candidate(s) exactly.
- the campaign manager 216 may apply the comparison to evaluate similarity of the created access information to the predicted access information candidate(s), for example, evaluate the linguistic distance of the created access information compared to the predicted access information candidate(s).
- the campaign manager 216 may determine that the created access information is insufficiently robust, i.e. the created access information is similar to the predicted access information candidate(s) in case the linguistic distance (variation) between the created access information and the predicted access information candidate(s) does not exceed a pre-defined number of characters, for example, 2 characters.
- the campaign manager 216 may take one or more actions, for example, reject the created access information, request the certain user to change the access information and/or the like.
- the campaign manager 216 may further offer the certain user robust access information created by the campaign manager 216 .
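The robustness evaluation described above may be sketched, as a non-limiting illustration, using the Levenshtein edit distance as one plausible reading of the "linguistic distance"; the threshold mirrors the 2-character example above:

```python
def linguistic_distance(a, b):
    """Levenshtein edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def is_robust(created, candidates, min_distance=3):
    """Reject created access information that lies within the pre-defined
    distance (e.g. 2 characters) of any predicted candidate."""
    return all(linguistic_distance(created, c) >= min_distance
               for c in candidates)
```

For example, `Shorashim12` would be rejected against the candidate `Shorashim1` (distance 1), while an unrelated string would pass.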
- the list of predicted access information candidate(s) created by the campaign manager 216 may be updated according to the techniques and/or methods applied by the certain user to create his access information. Moreover, the campaign manager 216 verifies that the list of predicted access information candidate(s) does not include the actual access information created and used by the certain user in the protected network 235 .
- the campaign manager 216 identifies the false access information to be false access information provided during one or more past attempts to access the protected network 235.
- the potential attacker may apply, for example, a social engineering attack such as a phishing attack embedded, for example, in an email message to divert the certain user to a fictive website emulating a real (valid) website.
- the past attack may include luring the certain user to register to a fictive service created by the potential attacker.
- the objective of the (past) attempt(s) and/or attacks is to predict the access information used by the certain user to access one or more real (valid) services, accounts, networks, privileged resources and/or the like.
- the campaign manager 216 may intentionally (knowingly) “fall” in one or more traps laid out for the certain user by the potential attacker to lure the certain user to reveal his access information.
- the campaign manager 216 may detect the phishing attack using one or more techniques as known in the art.
- the campaign manager 216 may detect a suspected email message that may be identified to be a phishing attack. While typically, such a phishing attack may be blocked, reported and/or discarded, the campaign manager 216 may intentionally (knowingly) follow the sequence laid out by the phishing attack and provide the potential attacker with the false access information.
- the campaign manager 216 may intentionally (knowingly) follow the registration sequence in the fictive website/service providing the false access information.
- the campaign manager 216 may be configured to inform the certain user, other users and/or systems of the (past) attempt(s) and/or attack(s).
- the (past) attempt(s) and/or attack(s) are not reported to the certain user hence the certain user is unaware of the (past) attempt(s) and/or attack(s) made by the potential attacker.
- the false access information provided by the campaign manager 216 may be very similar to probable (predicted) access information that the certain user may use in order to lead the potential attacker to believe the false access information is in fact real (genuine).
- one or more of the predicted access information candidates are used as the false access information provided to the potential attacker as part of the registration process.
- the campaign manager 216 may classify the access information used during the access attempt to several access information categories:
- the campaign manager 216 may therefore detect the attempted access of the potential attacker into the protected network 235 by evaluating the access information used by the potential attacker against the access information categories.
- the campaign manager 216 may easily identify the attempt as made by the potential attacker.
- the campaign manager 216 may determine if wrong access information is entered by the certain user or by the potential attacker during the access attempt.
- the campaign manager 216 may also apply the linguistic distance comparison with the pre-defined number of characters to determine if the wrong access information is likely to be entered by the certain user or by the potential attacker. For example, assuming a real password of the certain user is GadiDean1, selected based on names of founders of a certain company using the protected network 235 .
- the certain user may be reasonably expected to make mistakes such as, for example, typing a password GadiDean or GadiDean2 when logging into the privileged resource(s)
- the certain user is less likely to make mistakes such as, for example, typing a password Shorashim1, selected based on a residence address of the certain user.
- the residence address of the certain user is publicly available, for example, on the Internet
- the password Shorashim1 is likely to be in the list of the predicted access information candidates.
- the campaign manager 216 may therefore identify the first incident (GadiDean or GadiDean2) to be an access attempt of the certain user, while the second incident (Shorashim1) may be an attempted access of the potential attacker.
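The linguistic-distance check described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the function names, the use of Levenshtein distance as the linguistic-distance measure, and the typo threshold are all assumptions.

```python
# Hypothetical sketch of classifying a failed login as a likely user typo
# or a likely attacker using a predicted password. The distance metric
# (Levenshtein) and the threshold value are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def classify_failed_login(attempt: str, real_password: str,
                          predicted_candidates: list,
                          max_typo_distance: int = 2) -> str:
    """Label a failed login attempt based on the categories above."""
    if attempt in predicted_candidates:
        # Matches a password predicted from the user's public information,
        # i.e. something an attacker would guess -> likely the attacker.
        return "attacker"
    if edit_distance(attempt, real_password) <= max_typo_distance:
        # Close to the real password -> plausible mistake by the real user.
        return "user-typo"
    return "unknown"
```

With the example from the text, `classify_failed_login("GadiDean2", "GadiDean1", ["Shorashim1"])` yields `"user-typo"`, while `classify_failed_login("Shorashim1", "GadiDean1", ["Shorashim1"])` yields `"attacker"`.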
- the campaign manager 216 may be configured to inform the certain user, other users and/or systems of the access attempt in case the access attempt is determined to be initiated by the potential attacker. Optionally, the access attempt is not reported to the certain user hence the certain user is unaware of the access attempt by the potential attacker.
- the campaign manager 216 creates and/or updates the deception environment in real time in response to the detected attempt of the potential attacker to access the protected network 235 . Based on the detected false access information, the campaign manager 216 may collect information on the certain user whose access information is used by the potential attacker in order to generate a false identity of the certain user, for example, an account, a working environment and/or the like as part of the deception environment.
- the campaign manager 216 may construct the false identity according to the public information of the certain user that may typically be available to the potential attacker. By exposing the real (public) information of the certain user to the potential attacker, the false identity may seem consistent and legitimate to the potential attacker.
- the campaign manager 216 may create a false account, for example, a Facebook account of the certain user that includes the same public information that is publicly available to other Facebook users from the real (genuine) Facebook account of the certain user. Specifically, the public information of the certain user is publicly available with no need for specific access permission(s).
- the campaign manager 216 may create a fake company account for the certain user in the deception environment in the protected network 235 .
- the fake company account may include information specific to the role and/or job title of the certain user within the company, for example, a programmer, an accountant, an IT person and/or the like.
- one or more generic fake identity templates may be used to create the false identity of the certain user.
- Each of the generic fake identity templates may be configured to include information typical, for example, to a role in the company, a job title holder in the company and/or the like.
- the campaign manager 216 may further combine one or more of the generic fake identity templates with the public information of the certain user to create the false identity associated with the certain user.
- the campaign manager 216 uses one or more of the generic fake identity templates in case the access attempt is not identified to be associated with any user such as the certain user of the protected network 235 .
- the campaign manager 216 adds additional information to the false identity to make it more attractive for the potential attacker to hack.
- the campaign manager 216 may create the fake identity to be consistent with information of the certain user as used during one or more of the past attempts and/or attacks. For example, assume that, based on the public information of the certain user, the potential attacker identified that the certain user attends dance classes and launched a past phishing attack using a phishing e-mail message targeting dancers, for example, one advertising a dancing event. During the current access attempt of the potential attacker, the campaign manager 216 may include in the fake identity, for example, information on the dancing habits of the certain user. This may make the false identity appear more consistent and legitimate to the potential attacker.
- the campaign manager 216 may include related information on the certain user that is not publicly available. For example, assuming the phishing attack was directed towards hunting interests of the certain user, the campaign manager 216 may include false hunting information of the certain user in the fake identity.
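The template-plus-public-information construction described above can be sketched as a simple merge. The template fields, function name and merge strategy below are illustrative assumptions, not details from the patent.

```python
# A minimal sketch of combining a generic fake-identity template with the
# user's public information and optional bait tailored to past attacks.
# All field names here are illustrative assumptions.

GENERIC_PROGRAMMER_TEMPLATE = {
    "role": "programmer",
    "tools": ["IDE", "build server", "version control"],
    "shares": ["//fileserver/dev"],
}

def build_fake_identity(template: dict, public_info: dict,
                        lure_info=None) -> dict:
    """Merge a role template with real public data (and optional bait)."""
    identity = dict(template)      # generic, role-typical content
    identity.update(public_info)   # real public facts make it consistent
    if lure_info:
        identity.update(lure_info) # extra bait, e.g. fake hobby details
    return identity
```

For instance, merging the programmer template with publicly known facts (`{"hobby": "dancing"}`) yields an identity that exposes only information the attacker could verify independently, keeping the deception consistent.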
- the deception environment created by the campaign manager 216 may include one or more decoy endpoints such as the decoy endpoint discussed before (physical endpoints and/or virtual endpoints) that may execute decoy OSs such as the decoy OSs 210 and/or deception application such as the deception application 212 .
- the campaign manager 216 may further create the deception environment to include a decoy network comprising a plurality of decoy endpoints networked together, further making the deception environment convincing to the potential attacker, who is led to believe the deception environment is a real (valid) processing environment.
- the campaign manager 216 creates and/or updates one or more of the decoy endpoints and/or the decoy network to comply with the fake identity created for the certain user in order to verify consistency of the deception environment as viewed by the potential attacker.
- the campaign manager 216 may create the decoy endpoint to include a typical programming environment consistent with the programming area of the certain user, for example, relevant programming tool(s), build tool(s) and/or programs appropriate for the programming area of the certain user and/or the company he works for.
- the campaign manager 216 may create the decoy network for the company X to include publicly available known data about the company X. The campaign manager 216 may use this publicly available data to create a believable deception environment and deception story.
- the created decoy network may include common network services found in most networks, for example, file shares, an exchange server and/or the like.
- the campaign manager 216 may simulate real activity in the fake identity, the decoy endpoint(s) and/or the decoy network.
- the campaign manager 216 may create and/or maintain (update dynamically) a plurality of usage indications, for example, a browsing history, a file edit history and/or the like as may be typically done by real users in the real (valid) processing environment of the protected network 235 .
- the real activity simulation may be done automatically by the campaign manager 216 , manually by one or more users of the protected network 235 and/or in combination of the automatic and manual simulations.
- updating one or more of the usage indications may be done automatically to make the usage indication appear as if dynamically changing over time.
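The automatic refresh of usage indications described above can be sketched as a periodic task that appends plausible, recent-looking entries to a decoy artifact. The entry format, site list and back-dating scheme below are illustrative assumptions.

```python
# Hypothetical sketch of keeping a decoy usage indication "alive": a decoy
# browsing history is appended to on a schedule so its timestamps look
# like ongoing real activity. Entry fields are illustrative assumptions.
import random
from datetime import datetime, timedelta

FAKE_SITES = ["intranet/wiki", "mail/inbox", "hr/payroll"]

def refresh_usage_indications(history: list, now: datetime) -> list:
    """Append a plausible, recent-looking entry to a decoy browsing history."""
    # Back-date the visit a few minutes so it does not look machine-generated.
    visited_at = now - timedelta(minutes=random.randint(1, 30))
    history.append({"url": random.choice(FAKE_SITES),
                    "visited": visited_at.isoformat()})
    return history
```

Running this on a schedule (or jittered interval) makes the file edit dates and browsing records change over time, as a real user's would.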
- the campaign manager 216 may further use the real processing environment of the protected network 235 and/or part thereof as the deception environment and/or part thereof. Doing so may be beneficial when useful elements of the real processing environment, for example, a file with a password, a file with associated credentials and/or the like, may be properly detected to serve, for example, the fake identity, the fake account and/or the like.
- the campaign manager 216 may use the real processing environment in which one or more of the detected payloads are modified to trap the potential attacker while the rest of the processing environment is left unaltered. The campaign manager 216 may need to exercise caution when employing such an approach since the potential attacker, in particular a skilled attacker, may take advantage of one or more aspects of the real processing environment, for example, the identity, the account and/or the like that are left unchanged.
- the campaign manager 216 grants the potential attacker access into the deception environment.
- the potential attacker may be convinced that he is actually entering the real (valid) processing environment of the protected network 235 .
- the campaign manager 216 analyzes the attack vector applied by the potential attacker in order to identify one or more intentions of the potential attacker.
- the campaign manager 216 may take one or more actions in response to the attack vector action(s). For example, the campaign manager 216 may alert one or more authorized persons and/or systems, for example, a user such as the user 260 , an Information technology (IT) person, a security system, security software and/or the like.
- a user such as the user 260
- IT Information technology
- the main purpose of the actions taken by the campaign manager 216 is to detonate the attack vector.
- Detonating the attack means allowing and/or encouraging the potential attacker to operate, for example, apply the attack vector, in the deception environment regarded as a safe “sandbox” to make the potential attacker detectable by the campaign manager 216 . This may be achieved by dynamically adjusting the deception environment and/or by responding to the action(s) applied through the attack vector in an authentic manner in order to convince the potential attacker that he actually entered the real (valid) processing environment of the protected network 235 .
- the campaign manager 216 may update the deception environment as described in step 704 to adapt according to the action(s) made by the potential attacker. Since the attack vector may be a multi-stage attack vector comprising a plurality of actions, the campaign manager 216 may continuously respond to the attack vector action(s) by constantly updating the deception environment, for example, adjusting the fake identity, adding/removing and/or adjusting one or more of the decoy endpoints and/or the like. For example, assuming the campaign manager 216 identifies that the potential attacker is trying to access another endpoint on the decoy network, the campaign manager 216 may create in real time one or more additional decoy endpoints that may be added to the decoy network.
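The real-time extension described above can be sketched as follows: when the attacker probes an address that does not yet exist in the decoy network, a new decoy endpoint is created on the fly. The class and method names are illustrative assumptions, not taken from the patent.

```python
# A sketch of extending the decoy network in real time in response to the
# attacker's actions. Names and the endpoint record format are assumptions.

class DecoyNetwork:
    def __init__(self):
        self.endpoints = {}

    def spawn_decoy(self, address: str, profile: str = "workstation") -> dict:
        """Create a new decoy endpoint record and register it."""
        endpoint = {"address": address, "profile": profile, "decoy": True}
        self.endpoints[address] = endpoint
        return endpoint

    def handle_probe(self, address: str) -> dict:
        """Return the probed endpoint, creating a decoy if it is missing."""
        if address not in self.endpoints:
            # The attacker expects an endpoint here -- extend the deception
            # environment dynamically rather than return "host unreachable".
            return self.spawn_decoy(address)
        return self.endpoints[address]
```

The design choice is to answer every probe consistently, so the attacker never receives a response that contradicts the deception story.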
- the campaign manager 216 may intentionally (knowingly) install the malware in the deception environment and initiate the actions expected by the malware. For example, in case the malware is a Word file, the campaign manager 216 may open the Word file in the deception environment, for example, on the decoy endpoint, using the typical tools for opening a Word file. In another example, in case the malware is a suspected browser tool, the campaign manager 216 may download the malware into the deception environment and launch it on the decoy endpoint for browsing the network(s). The campaign manager 216 may follow additional instructions initiated by the malware. However, the execution of the malware is contained within the deception environment.
- the attack vector and hence the potential attacker may be detected by the campaign manager 216 . This may allow the campaign manager 216 to further analyze the attack vector as done in step 708 and take additional actions in response to the attack vector based on the analysis.
- the campaign manager 216 may be configured to continuously update the deception environment for as long as defined, for example, a day, a week, a month, a year and/or for an unlimited period of time. This may allow the campaign manager 216 to identify one or more potential attackers that return to attempt to gain access into the protected network 235 .
- the campaign manager 216 may identify the returning attacker(s) by analyzing one or more Indicators of Compromise (IOC), for example, an attribute, an operational parameter and/or a behavioral characteristic of the returning attacker(s). For example, an originating IP of the attacker, a common attack tool used by the attacker, a common filename used by the attacker and/or the like may be detected to identify the potential attacker as the returning attacker.
- IOC Indicators of Compromise
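The returning-attacker detection described above can be sketched as matching a session's Indicators of Compromise (IOCs) against those recorded in past sessions. The IOC field names and the match threshold below are illustrative assumptions.

```python
# Hypothetical sketch of identifying a returning attacker by overlapping
# IOCs (originating IP, attack tool, filenames and the like). The fields
# and the minimum-match threshold are illustrative assumptions.

def ioc_overlap(current: dict, past: dict) -> int:
    """Count IOC fields shared (same key and same value) by two sessions."""
    return sum(1 for key in current
               if key in past and current[key] == past[key])

def is_returning_attacker(current: dict, past_sessions: list,
                          min_matches: int = 2) -> bool:
    """Flag the session if enough IOCs match any past attack session."""
    return any(ioc_overlap(current, past) >= min_matches
               for past in past_sessions)
```

A single shared attribute (e.g. only the source IP) is treated as too weak on its own; requiring several matching IOCs reduces false attribution.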
- the campaign manager 216 may take additional measures on detection of the returning potential attacker, for example, restore the deception environment to be adapted according to characteristics of the returning potential attacker and/or the attack vector(s) used by the returning potential attacker during previous access attempts into the protected network 235 . For example, assuming the campaign manager 216 identified during a past attempted access of the potential attacker that the attack vector of the potential attacker was directed towards obtaining technology aspects of one or more products of the company the certain user works for. On the current attempted access of the returning potential attacker, the campaign manager 216 may therefore create and/or update the deception environment to include, for example, fabricated information leading to an account and/or a decoy endpoint of a technology research leader that may be attractive to the returning potential attacker.
- the returning potential attacker may be further convinced that the deception environment is the real (valid) processing environment of the protected network 235 .
- the returning potential attacker sought to access a restricted financial file directory and the campaign manager 216 adjusted the deception environment to include a decoy endpoint designated with a financially oriented title, for example, a desktop of a secretary of the Chief Financial Officer (CFO).
- the campaign manager 216 may extend the deception environment to include a decoy endpoint designated, for example, “CFO Laptop” to attract the returning potential attacker to attempt to access the decoy endpoint.
- the campaign manager 216 , based on the analysis of the attack vector applied by the potential attacker, identifies one or more activity patterns of the potential attacker. Using the activity pattern(s), the campaign manager 216 may gather useful forensic data on the operations of the potential attacker and may classify the potential attacker in order to estimate a course of action and/or the intention(s) of the potential attacker. The campaign manager 216 may then further adapt the deception environment to tackle the estimated course of action and/or intention(s) of the potential attacker. This may allow learning the attack vector and applying protection means to real user accounts to protect them against future attack vector(s) and/or part thereof as detected by the campaign manager 216 applying the process 700 .
- composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
- a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
Abstract
Description
- This application is a Continuation-In-Part (CIP) of PCT/IB2016/054306 having international filing date of Jul. 20, 2016, which claims the benefit of priority under 35 USC 119(e) of U.S. Provisional Patent Application No. 62/194,863 filed on Jul. 21, 2015, the contents of which are incorporated herein by reference in their entirety.
- The present invention, in some embodiments thereof, relates to detecting potential unauthorized operations in a protected network, and, more specifically, but not exclusively, to detecting potential unauthorized operations in a protected network by monitoring interaction between dynamically updated deception data objects deployed in the protected system and deception applications hosted by a decoy endpoint.
- Organizations of all sizes and types face the threat of being attacked by advanced attackers who may be characterized as having substantial resources of time and tools, and are therefore able to carry out complicated and technologically advanced operations against targets to achieve specific goals, for example, retrieve sensitive data, damage infrastructure and/or the like.
- Generally, advanced attackers operate in a staged manner: first collecting intelligence about the target organizations, networks, services and/or systems, then initiating an initial penetration of the target, performing lateral movement and escalation within the target network and/or services, taking actions on detected objectives and finally leaving the target while covering their tracks. Each of the staged approach steps involves tactical iterations through what is known in the art as the observe, orient, decide, act (OODA) loop. This tactic may prove most useful for attackers who face an unknown environment and therefore begin by observing their surroundings, orienting themselves, then deciding on a course of action and carrying it out.
- According to an aspect of some embodiments of the present invention there is provided a computer implemented method of detecting unauthorized access to a protected network by monitoring a dynamically updated deception environment, comprising:
-
- Launching, on one or more decoy endpoints, one or more decoy operating systems (OS) managing one or more of a plurality of deception applications mapping a plurality of applications executed in a protected network.
- Updating dynamically a usage indication for a plurality of deception data objects deployed in the protected network to emulate usage of the plurality of deception data objects for accessing the one or more deception applications. The deception data objects are configured to trigger an interaction with the one or more deception applications when used.
- Detecting usage of data contained in one or more of the plurality of deception data objects by monitoring the interaction.
- Identifying one or more potential unauthorized operations based on analysis of the detection.
- The decoy endpoint is a member selected from a group consisting of: a physical device comprising one or more processors and a virtual machine.
- The virtual machine is hosted by a local endpoint, a cloud service and/or a vendor service.
- Each of the plurality of deception data objects emulates a valid data object used for interacting with the one or more applications.
- Each of the plurality of deception data objects is a hashed credentials object, a browser cookie, a registry key, a Server Message Block (SMB) mapped share, a mounted network storage element, a configuration file for remote desktop authentication credentials, a source code file with embedded database authentication credentials and/or a configuration file to a source-code version control system.
- The usage indication comprises indicating that the plurality of deception data objects are used to interact with the one or more deception applications.
- The one or more potential unauthorized operations are initiated by a user, a process, an automated tool and/or a machine.
- Each of the plurality of applications is an application, a tool, a local service and/or a remote service.
- Each of the plurality of applications is selected by one or more of: a user and an automated tool.
- The monitoring comprises one or more of:
-
- Monitoring network activity of one or more of the plurality of deception applications.
- Monitoring interaction of the one or more deception applications with the one or more decoy operating systems.
- Monitoring one or more log records created by the one or more deception applications.
- Monitoring interaction of one or more of the plurality of deception applications with one or more of a plurality of hardware components in the protected network.
- Optionally, the one or more decoy operating systems, the plurality of deception applications and/or the plurality of deception data objects are divided into a plurality of groups according to one or more characteristics of the protected network.
- Optionally, a plurality of templates is provided for creating the one or more decoy operating systems, the plurality of deception applications and/or the plurality of deception data objects.
- Optionally, each of the plurality of templates comprises a definition of a relationship between at least two of the one or more decoy operating systems, the plurality of deception applications and/or the plurality of deception data objects.
- Optionally, one or more of the templates is adjusted by one or more users adapting the one or more templates according to one or more characteristics of the protected network.
- Optionally, an alert is generated at detection of the one or more potential unauthorized operations.
- Optionally, the alert is generated at detection of a combination of a plurality of potential unauthorized operations to detect a complex sequence of the interaction.
- Optionally, the analysis comprises false positive prevention to avoid identifying one or more legitimate operations as the one or more potential unauthorized operations.
- Optionally, the one or more potential unauthorized operations are analyzed to identify an activity pattern.
- Optionally, a learning process is applied on the activity pattern to classify the activity pattern in order to improve detection and classification of one or more future potential unauthorized operations.
- According to an aspect of some embodiments of the present invention there is provided a system for detecting unauthorized access to a protected network by monitoring a dynamically updated deception environment, comprising a program store storing a code and one or more processors on one or more decoy endpoints coupled to the program store for executing the stored code. The code comprising:
-
- Code instructions to launch one or more decoy operating systems (OS) managing one or more of a plurality of deception applications mapping a plurality of applications executed in a protected network.
- Code instructions to update dynamically a usage indication for a plurality of deception data objects deployed in the protected network to emulate usage of the plurality of deception data objects for accessing the one or more deception applications. The plurality of deception data objects are configured to trigger an interaction with the one or more deception applications when used.
- Code instructions to detect usage of data contained in one or more of the plurality of deception data objects by monitoring the interaction.
- Code instructions to identify one or more potential unauthorized operations based on an analysis of the detection.
- According to an aspect of some embodiments of the present invention there is provided a computer implemented method of containing a malicious attack within a deception environment by directing the malicious attack to a dynamically created deception environment, comprising:
-
- Detecting an attempt of a potential attacker to access a protected network by identifying false access information used by the potential attacker. Wherein the false access information is associated with a certain user of the protected network.
- Creating dynamically a deception environment associated with the certain user within the protected network in response to the attempt. Wherein the deception environment comprises one or more members selected from a group consisting of: a false account, a decoy endpoint and a decoy network comprising a plurality of decoy endpoints.
- In response to the attempt, granting access to the potential attacker into the deception environment.
- Monitoring an attack vector applied by the potential attacker using the false access information in the deception environment.
- The decoy endpoint is a member selected from a group consisting of: a local endpoint comprising one or more processors and a virtual machine, wherein the virtual machine is hosted by one or more of: a local endpoint, a cloud service and a vendor service.
- The potential attacker is a member selected from a group consisting of: a user, a process, an automated tool and a machine.
- The deception environment is created based on public information of the certain user.
- The public information is available in one or more networked processing nodes accessible over one or more networks.
- The false access information comprises credentials of the certain user.
- Optionally, the attempt is not reported to the certain user.
- The false access information was provided to the potential attacker during a past attempt of the potential attacker to obtain a real version of the false access information of the certain user.
- The past attempt is a phishing attack to obtain the real version of the false access information of the certain user.
- The past attempt is based on attracting the certain user to register to a fictive service created by the potential attacker to obtain the real version of the false access information of the certain user.
- Optionally, the past attempt is not reported to the certain user.
- The attempt is detected by comparing a password included in the false access information to one or more predicted passwords created based on an analysis of public information of the certain user.
- Optionally, robustness of a real password created by the certain user is evaluated by comparing the real password to the one or more predicted passwords and alerting the certain user in case the real password is insufficiently robust, wherein the robustness is determined sufficient in case a variation between the predicted password and the real password exceeds a pre-defined number of characters.
- Optionally, the certain user is requested to change the real password in case the real password is insufficiently robust.
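The robustness check described in the two items above can be sketched as follows. The patent does not specify a variation metric, so the difflib-based character-difference count and the threshold below are assumptions for illustration.

```python
# A sketch of evaluating password robustness against predicted passwords:
# the password is flagged if any prediction is within a pre-defined number
# of characters. The variation metric here is an illustrative assumption.
import difflib

def char_variation(a: str, b: str) -> int:
    """Rough count of characters that differ between two strings."""
    matcher = difflib.SequenceMatcher(None, a, b)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return max(len(a), len(b)) - matched

def is_password_robust(real: str, predicted: list, min_variation: int = 3) -> bool:
    """Robust only if every predicted password differs by enough characters."""
    return all(char_variation(real, p) > min_variation for p in predicted)
```

A user whose real password is only one character away from a prediction derived from public information (e.g. `GadiDean2` vs. a predicted `GadiDean1`) would be alerted and asked to change it.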
- The attack vector comprises one or more actions initiated by the potential attacker.
- The attack vector is a multi-stage attack vector comprising a plurality of actions initiated by the potential attacker. At least two of the actions are executed in one or more modes selected from: a series execution, a parallel execution.
- The deception environment is dynamically updated based on analysis of the attack vector in order to deceive the potential attacker to presume the deception environment is a real processing environment. The update includes updating one or more of: an information item of the certain user, a structure of the deception environment and a deployment of the deception environment.
- Optionally, the deception environment is extended dynamically based on analysis of the attack vector in order to contain the attack vector.
- According to an aspect of some embodiments of the present invention there is provided a system for containing a malicious attack within a deception environment by directing the malicious attack to a dynamically created deception environment, comprising a program store storing a code and one or more processors on one or more decoy endpoints in a deception environment. The processor(s) is coupled to the program store for executing the stored code, the code comprising:
-
- Code instructions to detect an attempt of a potential attacker to access a protected network by identifying false access information used by the potential attacker. Wherein the false access information is associated with a certain user of the protected network.
- Code instructions to create dynamically a deception environment associated with the certain user within the protected network in response to the attempted access. Wherein the deception environment comprises one or more members selected from a group consisting of: a false account, a decoy endpoint and a decoy network comprising a plurality of decoy endpoints.
- Code instructions to grant access to the potential attacker into the deception environment.
- Code instructions to monitor an attack vector applied by the potential attacker using the false access information in the deception environment.
- Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
- Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
- In the drawings:
-
FIG. 1 is a flowchart of an exemplary process for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 2A is a schematic illustration of an exemplary first embodiment of a system for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 2B is a schematic illustration of an exemplary second embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 2C is a schematic illustration of an exemplary third embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 2D is a schematic illustration of an exemplary fourth embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 2E is a schematic illustration of an exemplary fifth embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 2F is a schematic illustration of an exemplary sixth embodiment of a system for creating a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 3A is a screenshot of an exemplary first configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention; -
FIG. 3B is a screenshot of an exemplary second configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention; -
FIG. 3C is a screenshot of an exemplary third configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention; -
FIG. 4 is a block diagram of exemplary building blocks of a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 5 is a block diagram of an exemplary utilization of deception environment building blocks for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention; -
FIG. 6A is a screenshot of an exemplary first status screen of a campaign manager dashboard presenting structural information of a deception campaign, according to some embodiments of the present invention; -
FIG. 6B is a screenshot of an exemplary second status screen of a campaign manager dashboard for investigating potential threats detected during a deception campaign, according to some embodiments of the present invention; and -
FIG. 7 is a flowchart of an exemplary process for containing a malicious attack within a deception environment created dynamically in a protected network, according to some embodiments of the present invention. - The present invention, in some embodiments thereof, relates to detecting potential unauthorized operations in a protected network, and, more specifically, but not exclusively, to detecting potential unauthorized operations in a protected network by monitoring interaction between dynamically updated deception data objects deployed in the protected system and deception applications hosted by a decoy endpoint.
- According to some embodiments of the present invention, there are provided methods, systems and computer program products for creating an emulated deception environment to allow detection of potential unauthorized operations in a protected network. The deception environment is created, maintained and monitored through one or more deception campaigns each comprising a plurality of deception components. The deception environment co-exists with a real (valid) processing environment of the protected network while separated from the real processing environment. The deception environment is based on deploying deception data objects (breadcrumbs), for example, credential files, password files, share lists, “cookies”, access protocols and/or the like in the real processing environment on one or more endpoints, for example, work stations, servers, processing nodes and/or the like in the protected network. The deception data objects interact with decoy operating system(s) (OS) and/or deception applications created and launched on one or more decoy endpoints in the protected network according to pre-defined relationship(s) applied in the deception environment. The decoy OS(s) and the deception application(s) may be adapted according to the characteristics of the real (valid) OS(s) and/or application(s) used by the real processing environment of the protected network. The deception data objects are deployed to attract potential attacker(s) to use the deception data objects while observing, orienting, deciding and acting (OODA) within the protected network. In order for the deception environment to effectively mimic and/or emulate the real processing environment, the created deception data objects are of the same type(s) as valid data objects used in the real processing environment. However, when used, instead of interacting with the real OS(s) and/or application(s), the deception data objects interact with the decoy OS(s) and/or the deception application(s). 
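The patent does not prescribe a concrete implementation of a deception data object, but the idea can be sketched minimally: a fake credentials file whose contents only ever point at a decoy endpoint. All names, fields and the decoy address below are illustrative assumptions, not part of the disclosed system.

```python
import json
import secrets

def make_credential_breadcrumb(decoy_host, username="svc_backup"):
    """Build a fake credentials record pointing at a decoy endpoint.

    The record mimics the shape of a real credentials file, but using it
    can only ever reach the decoy, never a genuine asset. The random
    password doubles as a unique tag tying later alerts to this breadcrumb.
    """
    return {
        "host": decoy_host,                     # decoy endpoint address (illustrative)
        "username": username,                   # plausible service-account name
        "password": secrets.token_urlsafe(12),  # unique per-breadcrumb token
        "protocol": "ssh",
        "port": 22,
    }

def deploy_breadcrumb(path, breadcrumb):
    """Write the breadcrumb where an attacker harvesting files might find it."""
    with open(path, "w") as f:
        json.dump(breadcrumb, f, indent=2)

bc = make_credential_breadcrumb("10.0.9.17")
```

Because the password token is unique per breadcrumb, any later attempt to authenticate with it can be traced back to the exact endpoint where the file was planted.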
The interaction as well as general activity in the deception environment is constantly monitored and analyzed. Since the deception environment may be transparent to legitimate users, applications, processes and/or the like in the real processing environment, operation(s) in the protected network that use the deception data objects may indicate that the operation(s) are potentially unauthorized operation(s) that may likely be performed by the potential attacker(s).
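As a concrete illustration (not the patent's implementation) of why such monitoring yields high-fidelity alerts: because no legitimate user ever contacts a decoy service, every connection to it can be recorded as a potential unauthorized operation, with no need to filter against normal traffic. The alert format is an assumption.

```python
import socket
import threading

alerts = []  # in practice these would be forwarded to a campaign manager

def decoy_listener(sock):
    """Accept connections on a decoy socket and record each one.

    Legitimate users, applications and processes never touch the decoy,
    so any connection at all is treated as a potential unauthorized
    operation rather than being filtered against normal traffic.
    """
    while True:
        try:
            conn, addr = sock.accept()
        except OSError:
            break  # socket was closed; stop serving
        alerts.append({"source_ip": addr[0], "source_port": addr[1]})
        conn.close()

# Bind to an ephemeral localhost port for this sketch; a real decoy would
# listen on whatever service port the deployed breadcrumbs advertise.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)
threading.Thread(target=decoy_listener, args=(server,), daemon=True).start()
```

Any client connecting to `server.getsockname()` immediately produces an alert; a deployed decoy would additionally capture protocol-level detail for forensic analysis.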
- The deception environment is updated dynamically and continuously to make the deception data objects appear to be in use by the real processing environment in the protected network and therefore seem to be valid data objects to the potential attacker, thus leading the potential attacker to believe the emulated deception environment is a real one.
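One simple way to realize this "appears in use" property (a hedged sketch; the patent does not specify the mechanism) is to periodically reset the timestamps of deployed breadcrumb files to random recent moments:

```python
import os
import random
import time

def freshen_breadcrumbs(paths, max_age_hours=8.0):
    """Make deployed breadcrumb files look recently used.

    Each file's access/modification time is moved to a random moment
    within the last max_age_hours, so the decoys resemble data objects
    in active use rather than static, stale bait an attacker could
    recognize as a honeypot.
    """
    now = time.time()
    for path in paths:
        stamp = now - random.uniform(0, max_age_hours * 3600)
        os.utime(path, (stamp, stamp))  # (atime, mtime)
```

A scheduler (e.g. cron, or the campaign manager itself) would invoke this at irregular intervals; richer refresh steps could rewrite file contents or rotate the planted passwords.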
- The provided methods, systems and computer program products further allow a user, for example, an IT person and/or a system administrator to create the deception environment using templates for the deception components, specifically, the decoy OS(s), the deception application(s) and the deception data object(s). Automated tools are provided to automatically create, adjust and/or adapt the deception environment according to the characteristics of the real processing environment and/or the protected network such that the deception environment maps the construction and/or operation of the real processing environment.
- The emulated deception environment may present significant advantages compared to currently existing methods for detecting potential attackers and/or preventing the potential attackers from accessing resources in the protected network. First, as opposed to some of the currently existing methods that engage with the potential attacker at the act stage, the presented deception environment deceives the potential attacker from the very first time the attacker enters the protected network by creating a false environment—the emulated deception environment. Engaging the attacker at the act stage and trying to block the attack may lead the attacker to search for an alternative path in order to circumvent the blocked path. Moreover, while the currently existing methods are responsive in nature, i.e. they respond to operations of the attacker, creating the false environment in which the attacker advances takes the initiative, such that the attacker may be directed and/or led to trap(s) that may reveal him (them).
- Some of the currently existing methods do try to deceive the attacker; however, the measures used may be basic and/or simple, for example, obscurity, i.e. hiding the valuable data out of plain sight. Since advanced attacker(s) may have the time and resources to explore the target network, the attacker(s) is (are) likely to find the valuable data. More advanced currently existing methods employ a higher level of deception, mostly by using honeypots (computer security mechanisms set to detect, deflect and/or counteract unauthorized attempts to use information systems). The honeypots, which usually emulate services and/or systems, are typically placed inside the target network(s) and/or at the edges. The honeypots are directed to attract the attacker to use them and generate an alert when usage of the honeypots is detected. The honeypot approach may provide some benefits when dealing with automated attack tools and/or unsophisticated attackers; however, the honeypots present some drawbacks. First, the honeypots may be difficult to scale to large organizations as each of the honeypot application(s) and/or service(s) may need to be individually installed and managed. In addition, the advanced attacker may learn of the presence and/or nature of the honeypot since it may be static and/or inactive within the active target network. Moreover, even if the attack is eventually blocked, the honeypots may not be able to gather useful forensic data about the attack and/or the attacker(s). Furthermore, due to the unsophisticated nature of the honeypot, in which alerts may be generated on every interaction with the honeypot, multiple false positive alerts may be generated when legitimate activity is conducted with the honeypot.
- The presented deception environment may overcome the drawbacks of the currently existing deception methods by dynamically and constantly updating the deception environment such that the deception data objects appear to be used in the protected network. This may serve to create an impression of a real active environment and may lead the potential attacker(s) to believe the deception data objects are genuine (valid) data objects. As the potential attacker(s) may not detect the deception environment, he (they) may interact with the deception environment during multiple iterations of the OODA loop, thus revealing his (their) activity pattern and possible intention(s). The activity pattern may be collected and analyzed to adapt the deception environment accordingly. Since the deception environment is transparent to legitimate users in the protected network, any operations involving the decoy OSs, the deception applications and/or the deception data objects may accurately indicate a potential attacker, thus avoiding false positive alerts.
- Moreover, the presented deception environment methods and systems may allow for high scaling capabilities over large organizations, networks and/or systems. Using the templates for creating the decoy OS(s) and/or the deception application(s) coupled with the automated tools to create and launch the decoy OS(s) and/or the deception application(s) as well as automatically deploy the deception data objects may significantly reduce the effort to construct the deception environment and improve the efficiency and/or integrity of the deception environment. The centralized management and monitoring of the deception environment may further simplify tracking the potential unauthorized operations and/or potential attacks.
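To make the scaling argument concrete, here is a minimal, hypothetical model (the class, field names and addresses are illustrative assumptions, not taken from the patent) of a campaign that pairs decoy services with the breadcrumbs deployed on real endpoints, so that adding an endpoint or a decoy automatically extends the deployment plan:

```python
from dataclasses import dataclass, field

@dataclass
class DeceptionCampaign:
    """Toy model of a deception campaign: decoy services plus the
    breadcrumbs on real endpoints that lead attackers to them."""
    name: str
    decoy_services: dict = field(default_factory=dict)  # service name -> decoy address
    deployments: list = field(default_factory=list)     # (endpoint, breadcrumb dict)

    def add_decoy(self, service, address):
        self.decoy_services[service] = address

    def plan_breadcrumbs(self, endpoints):
        """Pair every endpoint with one breadcrumb per decoy service so
        each decoy appears referenced from ordinary workstations."""
        for ep in endpoints:
            for service, address in self.decoy_services.items():
                self.deployments.append((ep, {"service": service, "target": address}))
        return self.deployments

campaign = DeceptionCampaign("finance-q3")
campaign.add_decoy("smb_share", "10.0.9.17:445")
campaign.add_decoy("ssh", "10.0.9.18:22")
plan = campaign.plan_breadcrumbs(["ws-101", "ws-102", "ws-103"])
```

Because the plan is derived from a central description rather than hand-configured per machine, the same campaign definition scales from a handful of workstations to an organization-wide deployment.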
- Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
- According to some embodiments of the present invention, there are provided methods, systems and computer program products for containing a malicious attack within a deception environment created and/or updated dynamically in a protected network in response to detection of an access attempt of a potential attacker, for example, a human user, a process, an automated tool, a machine and/or the like. The deception environment may be created and/or updated in response, for example, to an attempt of a potential attacker to access the protected network using false access information of a certain user of the protected network. The deception environment may be further updated in response to one or more operations the potential attacker may apply as part of an attack vector.
- The potential attacker may be detected by identifying false access information the potential attacker uses to access the protected network. The false access information may be identified by predicting access information of the certain user based on public information of the certain user available online over one or more networks, for example, the Internet. Predicting the access information of the certain user may simulate methods and/or techniques applied by the potential attacker to predict (“guess”) the access information of the certain user. The false access information may be further identified as false access information that was provided to the potential attacker during one or more past access attempts and/or attacks directed at the certain user. Once use of the false access information is detected, the access attempt is determined to be initiated by the potential attacker.
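The prediction step can be illustrated with a toy sketch: derive the candidate ("guessable") credentials an attacker could build from a user's public profile, then treat any login attempt that matches a candidate, but is not the real password, as likely coming from a potential attacker. The profile fields and mangling rules below are assumptions chosen for illustration, not the patent's technique.

```python
def candidate_passwords(public_info):
    """Enumerate passwords plausibly derivable from public information,
    simulating the guessing process of a potential attacker."""
    first = public_info.get("first_name", "").lower()
    last = public_info.get("last_name", "").lower()
    pet = public_info.get("pet_name", "").lower()
    year = str(public_info.get("birth_year", ""))
    bases = {b for b in (first, last, pet, first + last, first + "." + last) if b}
    guesses = set()
    for base in bases:
        for variant in (base, base.capitalize()):
            guesses.add(variant)
            if year:
                guesses.add(variant + year)       # e.g. dana1984
                guesses.add(variant + year[-2:])  # e.g. dana84
    return guesses

def is_predicted_false_credential(attempt, public_info, real_password):
    """True when a login attempt matches a guessable candidate without
    being the real password - a strong hint the attempt was derived
    from public information by a potential attacker."""
    return attempt != real_password and attempt in candidate_passwords(public_info)
```

In a deployment, a match would trigger redirection of the session into the deception environment instead of an outright rejection of the login.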
- The potential attacker is granted access to a deception environment created dynamically according to public information of the certain user to make the deception environment consistent with what the potential attacker may know of the certain user thus leading the potential attacker to assume the deception environment is in fact a real (valid) processing environment of the protected network and/or part thereof.
- The deception environment may be dynamically updated in real time according to one or more actions made by the potential attacker as part of his attack vector to make the deception environment appear as the real (valid) processing environment and encourage detonation of the attack vector.
- Encouraging the potential attacker to access the deception environment and detonate the attack vector may present significant advantages compared to currently existing methods for detecting and/or protecting the protected network from potential attackers. While the existing methods may detect the access attempt (attack) made by the potential attacker, the existing methods may typically block the access attempt and/or inform an authorized person and/or system of the attempted access. This may allow preventing the current attack; however, since the resources required by the potential attacker for launching such an attack are significantly low, the potential attacker may initiate multiple additional access attempts that may eventually succeed. By granting the potential attacker access into the deception environment that the potential attacker is led to believe is the real (valid) processing environment of the protected network, the attack vector of the potential attacker may be analyzed and/or learned in order to improve protection from such access attempts and/or attacks. Moreover, by allowing the potential attacker to access, explore and/or advance in the deception environment, the potential attacker may spend extensive resources, for example, time, tools and/or the like for the attack. This may discourage the potential attacker from initiating additional attacks and/or significantly reduce the number of attacks initiated by the potential attacker.
- By creating the deception environment according to the public information of the certain user and/or continuously updating the deception environment the potential attacker may be deceived to believe that the deception environment is actually the real (valid) processing environment. This may encourage the potential attacker to operate, for example, apply the attack vector hence detonating the attack vector. Doing so allows monitoring, analyzing and/or learning the attack vector and/or the intentions of the potential attacker while containing the attack within the deception environment thus protecting the real (valid) processing environment of the protected network from any malicious action(s) initiated by the potential attacker.
- Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
- The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. Any combination of one or more computer readable medium(s) may be utilized. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
- Reference is now made to
FIG. 1, which is a flowchart of an exemplary process for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention. A process 100 is executed to launch one or more deception campaigns comprising a plurality of deception components to create, launch, maintain and monitor a deception environment that co-exists with a real processing environment of a protected network. The deception components comprise one or more decoy OS(s) and deception application(s) adapted according to the characteristics of the OS(s) and/or applications used in the protected network. The decoy OS(s) and/or the deception application(s) are launched on one or more decoy endpoints that may be physical endpoints and/or virtual endpoints. The deception components further comprise a plurality of deception data objects (breadcrumbs) interacting with the decoy OSs and/or the deception applications. The deception data objects are deployed within the real processing environment of the protected network to attract potential attacker(s) to use the deception data objects while performing the OODA loop within the protected network. The deception data objects are of the same type(s) as valid data objects used to interact with the real OSs and/or applications in the real processing environment such that the deception environment efficiently emulates and/or impersonates the real processing environment and/or a part thereof. When used, instead of interacting with the real operating systems and/or applications, the deception data objects interact with the decoy OS(s) and/or the deception application(s). The deception environment is transparent to legitimate users, applications, processes and/or the like of the protected network's real processing environment. 
Therefore, operation(s) in the protected network that use the deception data object(s) may be considered as potential unauthorized operation(s) that in turn may be indicative of a potential attacker. The deception data objects are updated constantly and dynamically to avoid stagnancy and mimic a real and dynamic environment with the deception data objects appearing as valid data objects such that the potential attacker believes the emulated deception environment is a real one. - Reference is now made to
FIG. 2A, FIG. 2B, FIG. 2C, FIG. 2D, FIG. 2E and FIG. 2F, which are exemplary embodiments of a system for creating and maintaining a deception environment in order to detect potential unauthorized operations in a protected network, according to some embodiments of the present invention. One or more exemplary systems may execute the process 100 to launch one or more deception campaigns for detecting and/or alerting of potential unauthorized operations in a protected network 235. The deception campaign(s) include creating, maintaining and monitoring the deception environment in the protected network 235. While co-existing with the real processing environment of the protected network 235, the deception environment is separated from the real processing environment to maintain partitioning between the deception environment and the real processing environment. - The
systems include the protected network 235 that comprises a plurality of endpoints 220 connected to a network 230 facilitated through one or more network infrastructures, for example, a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a metropolitan area network (MAN) and/or the internet 240. The protected network 235 may be a local protected network that may be a centralized single location network where all the endpoints 220 are on premises or a distributed network where the endpoints 220 may be located at multiple physical and/or geographical locations. The protected network 235 may further be a virtual protected network hosted by one or more cloud services 245, for example, Amazon Web Service (AWS), Google Cloud, Microsoft Azure and/or the like. The protected network 235 may also be a combination of the local protected network and the virtual protected network. The protected network 235 may be, for example, an organization network, an institution network and/or the like. The endpoint 220 may be a physical device, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, a modem, a hub, a bridge, a switch, a router, a printer and/or any network connected device having one or more processors. The endpoint 220 may further be a virtual device hosted by one or more of the physical devices, instantiated through one or more of the cloud services 245 and/or provided as a service through one or more hosted services available by the cloud service(s) 245. Each of the endpoints 220 is capable of executing one or more real applications 222, for example, an OS, an application, a service, a utility, a tool, a process, an agent and/or the like. The endpoint 220 may further be a virtual device, for example, a virtual machine (VM) executed by the physical device. The virtual device may provide an abstracted and platform-dependent and/or independent program execution environment. 
The virtual device may imitate operation of the dedicated hardware components, operate in a physical system environment and/or operate in a virtualized system environment. The virtual devices may serve as a platform for executing one or more of the real applications 222 utilized as system VMs, process VMs, application VMs and/or other virtualized implementations. - The local protected
networks 235 as implemented in the systems 200A and 200B include a decoy server 201, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node and/or the like serving as the decoy endpoint. The decoy server 201 comprises a processor(s) 202, a program store 204, a user interface 206 for interacting with one or more users 260, for example, an information technology (IT) person, a system administrator and/or the like and a network interface 208 for communicating with the network 230. The processor(s) 202, homogenous or heterogeneous, may include one or more processing nodes arranged for parallel processing, as clusters and/or as one or more multi core processor(s). The user interface 206 may include one or more human-machine interfaces, for example, a text interface, a pointing devices interface, a display, a touchscreen, an audio interface and/or the like. The program store 204 may include one or more non-transitory persistent storage devices, for example, a hard drive, a Flash array and/or the like. The program store 204 may further comprise one or more network storage devices, for example, a storage server, a network accessible storage (NAS), a network drive, and/or the like. The program store 204 may be used for storing one or more software modules each comprising a plurality of program instructions that may be executed by the processor(s) 202 from the program store 204. The software modules may include, for example, a decoy OS 210 and/or a deception application 212 that may be created, configured and/or executed by the processor(s) 202 to emulate a processing environment within the protected network 235. The decoy OS(s) 210 and/or the deception application(s) 212 may be executed by the processor(s) 202 in a naive implementation as shown for the system 200A and/or over a nested decoy VM 203A hosted by the decoy server 201 as shown for the system 200B and serving as the decoy endpoint. 
The software modules may further include a deception campaign manager 216 executed by the processor(s) 202 to create, control and/or monitor one or more deception campaigns to create the deception environment to detect potential unauthorized operations in the protected network 235. - The
user 260 may use the campaign manager 216 to create, adjust, configure and/or launch one or more of the decoy OSs 210 and/or the deception applications 212 on one or more of the decoy endpoints. The decoy endpoints are set to emulate the real endpoints 220 and as such may be physical and/or virtual endpoints. The user 260 may further use the campaign manager 216 to create, deploy and/or update a plurality of deception data objects 214 (breadcrumbs) deployed on one or more of the endpoints 220 in the protected network 235. The deployed deception data objects 214 interact with respective one or more of the deception applications 212. The deception data objects 214 are deployed to tempt the potential attacker(s) attempting to access resource(s) in the protected network 235 to use the deception data objects 214. The deception data objects 214 are configured to emulate valid data objects that are available in the endpoints 220 for interacting with the applications 222. - The
user 260 may interact with one or more of the software modules such as the campaign manager 216, the decoy OS(s) 210 and/or the deception application(s) 212 using the user interface 206. The user interface 206 may include, for example, a graphic user interface (GUI) utilized through one or more of the human-machine interface(s). - Optionally, the
user 260 interacts with the campaign manager 216, the decoy OS(s) 210 and/or the deception application(s) 212 remotely over the network 230 by using one or more applications, for example, a local agent and/or a web browser executed on one or more of the endpoints 220 and/or from a remote location over the internet 240. - Optionally, the
user 260 executes the campaign manager 216 on one or more of the endpoints 220 to create, control and/or interact with the decoy OS 210 and/or the deception applications 212 over the network 230. - Optionally, for the local protected
networks 235 as implemented in the system 200C, the decoy OS(s) 210 and/or the deception application(s) 212 may be executed as one or more decoy VMs 203B serving as the decoy endpoint(s) over a virtualization infrastructure available by one or more hosting endpoints 220A such as the endpoints 220 of the protected network 235. The virtualization infrastructure may utilize, for example, Elastic Sky X (ESXi), XEN, Kernel-based Virtual Machine (KVM) and/or the like. The user 260 may interact with the campaign manager 216, the decoy OS(s) 210 and/or the deception application(s) 212 through a user interface such as the user interface 206 provided by the hosting endpoint(s) 220A. Additionally and/or alternatively, the user 260 may use one or more applications, for example a local agent and/or a web browser executed on one or more of the endpoints 220 to interact remotely over the network 230 with the campaign manager 216, the decoy OS(s) 210 and/or the deception application(s) 212 executed by the hosting endpoint(s) 220A. Optionally, one or more of the other endpoints 220 executes the campaign manager 216 that interacts over the network 230 with the hosting endpoint(s) 220A executing the decoy OS 210 and/or the deception applications 212. - Optionally, for the local protected
networks 235 as implemented in the system 200D, the decoy OS(s) 210 and/or the deception application(s) 212 may be executed through computing resources available from the one or more cloud services 245 serving as the decoy endpoint(s). The decoy OS(s) 210 and/or the deception application(s) 212 may be utilized as one or more decoy VMs 205 instantiated using the cloud service(s) 245 and/or through one or more hosted services 207, for example, software as a service (SaaS), platform as a service (PaaS) and/or the like that may be provided by the cloud service(s) 245. The campaign manager 216 may also be available through the cloud services 245. Optionally, the hosted service(s) 207 is provided by the vendor of the campaign manager 216. - The
user 260 may use one or more applications, for example, the local agent and/or the web browser executed on one or more of the endpoints 220 to interact remotely over the network 230 and the internet 240 with the campaign manager 216. Optionally, the user 260 executes the campaign manager 216 on one or more of the endpoints 220 and interacts with the decoy OS(s) 210 and/or the deception application(s) 212 over the network 230 and the internet 240. - Optionally, as presented in the
systems 200E and 200F, the protected network 235 and/or a part thereof is a virtual protected network that may be hosted and/or provided through the cloud service(s) 245. As a growing trend, many organizations may transfer and/or set their infrastructure comprising one or more of the applications 222, for example, a webserver, a database, an internal mail server, an internal web application and/or the like to the cloud, for example, through the cloud service(s) 245. In the system 200E, the protected network 235 may be distributed to two or more subnetworks such as the subnetworks 235A and 235B that may be logically regarded as the single protected network 235 while they may be physically distributed at a plurality of sites as a combination of the local network and the virtual network. In the system 200F, the protected network 235 is virtual, hosted and/or provided by the cloud service 245, i.e. the protected network 235 comprises only the subnetwork 235B. The subnetwork 235A is a local network similar to the network 235 as described before for the systems 200A-200D and may include one or more of the endpoints 220 either as the physical devices and/or the virtual devices executing the application(s) 212. The network 235B on the other hand is a virtual network hosted and/or provided through the cloud service(s) 245 as one or more, for example, private networks, virtual private clouds (VPCs), private domains and/or the like. Each of the private cloud(s), private network(s) and/or private domain(s) may include one or more virtual endpoints 220 that may be, for example, instantiated through the cloud service(s) 245, provided as the hosted service 207 and/or the like, where each of the endpoints 220 may execute one or more of the applications 212. In such configuration(s), the decoy OS(s) 210 may be executed as independent instance(s) deployed directly to the cloud service(s) 245 using an account for the cloud service 245, for example, AWS, for a VPC provided by the AWS for use for the organizational infrastructure. - Typically, users of the virtual protected
network 235 may remotely access, communicate and/or interact with the applications 212 by using one or more access applications 225, for example, the local agent, a local service and/or the web browser executed on one or more of the endpoints 220 and/or one or more client terminals 221. The client terminal 221 may include, for example, a computer, a workstation, a server, a processing node, a network node, a Smartphone, a tablet and/or the like. - For both
systems 200E and/or 200F, the decoy OS(s) 210 and/or the deception application(s) 212 may be executed through computing resources available from the cloud services 245 similarly to the system 200D, where they serve as the decoy endpoint(s). In the same fashion, the campaign manager 216 may be executed and accessed as described for the system 200D. The deception data objects 214 may be adapted and/or adjusted in the systems 200E and/or 200F according to the characteristics of the protected networks 235A and/or 235B with respect to the executed applications 222 and/or interaction with the user(s) of the applications 222. - For brevity, the protected
networks such as the subnetworks 235A and/or 235B are referred to hereinafter as the protected network 235, whether implemented as the local protected network 235, as the virtual protected network and/or as a combination of the two. - Reference is made once again to
FIG. 1 . The process 100 may be executed using one or more software modules such as the campaign manager 216 to launch one or more deception campaigns. Each deception campaign comprises creating, updating and monitoring the deception environment in the protected network 235 in order to detect and/or alert of potential attackers accessing the protected network 235. Each deception campaign may be defined according to a required deception scope and is constructed according to one or more characteristics of the protected network 235 processing environment. - In order to launch effective and/or reliable deception campaigns, the deception environment may be designed, created and deployed to follow design patterns, which are general reusable solutions to common problems in general use. The deception campaign may be launched to emulate one or more design patterns and/or best-practice solutions that are widely used by a plurality of organizations. For example, a virtual private network (VPN) link may exist to connect to a resource of the protected
network 235, for example, a remote branch, a database backup server and/or the like. The deception campaign may be created to include one or more decoy OSs 210, deception applications 212 and respective deception data objects 214 to emulate the VPN link and/or one or more of the real resources of the protected network 235. Using this approach may make the deception environment convincingly appear as the real processing environment, thus effectively attracting and/or misleading the potential attacker, who may typically be familiar with the design patterns. - Each deception campaign may define one or more groups to divide and/or delimit the organizational units in order to create an efficient deception environment that may allow better classification of the potential attacker(s). The groups may be defined according to one or more organizational characteristics, for example, business units of the organization using the protected
network 235, for example, human resources (HR), sales, finance, development, IT, data center, retail branch and/or the like. The groups may also be defined according to one or more other characteristics of the protected network 235, for example, a subnet, a subdomain, an active directory, a type of application(s) 222 used by the group of users, an access permission on the protected network 235, a user type and/or the like. - As shown at 102, the
process 100 for launching one or more deception campaigns starts with the user 260 using the campaign manager 216 to create one or more images of the decoy OSs 210. The decoy OS 210 is a full stack operating system that contains baseline configurations and states that are relevant to the protected network 235 in which the decoy OS(s) 210 is deployed. The image of the decoy OS(s) 210 is selected according to one or more characteristics of the protected network 235, for example, a type of OS(s), for example, Windows, Linux, CentOS and/or the like deployed on endpoints such as the endpoints 220, a number of endpoints 220 and/or the like. The decoy OS(s) 210 may also be selected according to the deception application(s) 212 that the user 260 intends to use in the deception environment and that are to be hosted by the decoy OS(s) 210. - Optionally, the
campaign manager 216 provides one or more generic templates for creating the image of the decoy OS(s) 210. The templates may support one or more of a plurality of OSs, for example, Windows, Linux, CentOS and/or the like. The template(s) may be adjusted to include one or more applications and/or services such as the application 212 mapping respective applications 222 according to the configuration of the respective OS(s) in the real processing environment of the protected network 235. The adjusted template(s) may be defined as a baseline idle state of the images of the decoy OS(s) 210. The application(s) 212 included in the idle template may include, for example, generic OS applications and/or services that are part of the out-of-the-box manifest of services, as per the OS, for example, “explorer.exe” for the Windows OS. The application(s) 212 included in the idle state image may also include applications and/or services per the policy applied to the protected network 235, for example, an organization policy. The adjustment to the template may be done by the user 260 through the campaign manager 216 GUI and/or using one or more automated tools that analyze the endpoints 220 of the protected network 235 to identify application(s) 222 that are installed and used at the endpoints 220. - Optionally, the
campaign manager 216 supports defining the template(s) to include orchestration, provisioning and/or update services for the decoy OS(s) 210 to ensure that the instantiated templates of the decoy OS(s) 210 are up-to-date with the other OS(s) deployed in the protected network 235. - As shown at 104, the
user 260 using the campaign manager 216 creates one or more of the deception applications 212 to be hosted by the decoy OS(s) 210. The deception applications 212 include a manifest of applications, services, tools, processes and/or the like selected according to applications and services such as the applications 222 characteristic to the protected network 235. The deception applications 212 may be selected based on a desired scope of deception and/or characteristic(s) of the protected network 235. The deception application(s) 212 are selected to match deception data objects such as the deception data objects 214 deployed in the endpoints 220 to allow interaction between the deception data objects 214 and the respective deception application(s) 212. The selection of the deception applications 212 may be done by the user 260 using the campaign manager 216. Optionally, the campaign manager 216 may use one or more automated tools to explore the protected network 235 and identify the applications 222 executed on the endpoints 220. Based on the identified applications 222, the campaign manager may automatically select the deception application(s) 212 to be included in the deception environment. The application(s) 212 may include one or more applications and/or services mapping respective application(s) 222, for example, an off-the-shelf application, a custom application, a web based application and/or service, a remote service and/or the like. Naturally, the applications 212 are selected to operate with the decoy OS(s) 210 selected for the deception campaign. - Optionally, the
campaign manager 216 provides one or more generic templates for one or more of a plurality of deception applications 212. The templates of the deception applications 212 may be adjusted to adapt to the protected network 235 to maintain similarity of the deception environment with the real processing environment of the protected network such that the deception application(s) 212 appear to be valid applications such as the applications 222. - The
campaign manager 216 may create, define and/or adjust the off-the-shelf application(s) for the deception environment through tools, packages and/or services customized to manipulate the off-the-shelf application(s). The campaign manager 216 may also use an Application Programming Interface (API) of a respective off-the-shelf application to create, define and/or adjust the template for creating the deception application 212 mapping the off-the-shelf application(s). The API may provide a record, for example, an XML file that describes the expected inputs and/or outputs of the off-the-shelf application available as a containerized application, a service and/or an executable. The record may further describe expected interaction of the off-the-shelf application with the OS in idle state(s), i.e. with no user interaction. The campaign manager 216 may use the interaction description of the off-the-shelf application with the OS to adjust the template of the respective deception application 212 to operate with the decoy OS 210. Defining the idle state(s) may allow the campaign manager 216 to detect user interaction once the deception campaign is launched. Containerization and declaration may be required for the custom applications to allow the campaign manager 216 to take advantage of the template mechanism for use with the custom application(s). - The
campaign manager 216 may use the API of the web based application(s) and/or service(s) and the remote service(s) similarly to what is done for the off-the-shelf application(s) and/or service(s) to define the expected inputs, outputs, web responses and/or back-end data structures. - The
campaign manager 216 defines relationship(s) between each of the deception applications 212 and the respective decoy OS(s) 210 to set the processing interaction between them during the deception campaign. The relationship(s) may be based on pre-defined declarations provided by the templates according to the type of the respective deception application 212 and the corresponding decoy OS 210. The relationship declarations may be further adjusted automatically by the campaign manager 216 and/or by the user 260 using the campaign manager 216 to adapt to one or more operational, structural and/or organization characteristics of the protected network. The operational, structural and/or organization characteristics may include, for example, a network structure of the protected network, a mapping method of the application(s) 222 used in the protected network and/or the like. - For configurations of the virtual protected
network 235 as described in the systems 200E and/or 200F, the deception environment may be further created and/or adapted to emulate one or more applications and/or services such as the applications 222 that are provided by the cloud services 245. The applications 222 that are provided by the cloud services 245 may not be directly associated with the decoy OSs 210 but may rather be considered as decoy entities on their own. - For example,
cloud services 245 such as the AWS may provide an application 222 of type Simple Storage Service (S3) bucket service. The S3 bucket service has become a basic building block of any cloud deployment to the AWS. The S3 bucket service is used extensively for a plurality of storage purposes, for example, a dumb storage of large amounts of logs, an intermediate storage for software deployment, an actual storage mechanism used by web application(s) to store files and/or the like. The S3 bucket service provided by the AWS establishes a new bucket storage concept by providing an API allowing extensive capabilities and operability for the S3 bucket service, for example, monitoring of action(s) on the S3 bucket, either read and/or write operations. This may serve to extend the deception environment to take advantage of the S3 bucket as a decoy, i.e. an S3 storage decoy. The S3 storage decoy may be created and deployed as an active part of the deception environment. - As shown at 106, the
campaign manager 216 is used to launch the decoy OS(s) 210 and the deception application(s) 212. The decoy OS(s) 210 is instantiated in one or more forms as presented for the systems 200A-200D, for example, as the dedicated decoy server 201, as the decoy virtual machine(s) 203B and/or 205 executed over the virtualization infrastructure, for example, ESXi, XEN and/or KVM, and/or as the hosted service(s) 207. The instantiation may be done manually by the user 260 and/or automatically using the campaign manager 216. - As shown at 108, the
campaign manager 216 is used to create the deception data objects 214 and define the interaction with one or more of the deception applications 212 by declaring the relationship(s) of each of the deception data objects 214. The deception data objects 214 are created to emulate valid data objects used to interact with the application 222. The deception data objects 214 may include, for example, one or more of the following:
- Hashed credentials in Windows 7 user workstations.
- Browser cookies to a web application or site.
- Windows registry keys referencing remote application settings.
- Server Message Block (SMB) mapped shares on a Windows machine.
- Mounted Network Storage element(s) on a Linux workstation.
- Configuration files referencing remote desktop authentication credentials.
- Source code files with embedded database authentication credentials.
- Configuration files for a source-code version control system such as, for example, Git.
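As a non-limiting illustration, the creation of one such deception data object, a configuration file referencing remote desktop authentication credentials, may be sketched as follows. The file name, host, user and password below are hypothetical bait values chosen for the sketch; they resolve only inside the deception environment:

```python
import configparser
import pathlib

# Sketch: plant a "configuration file" breadcrumb referencing remote
# desktop credentials for a decoy host. All values are bait; any later
# session opened with them indicates use of the deception data object.
def plant_rdp_breadcrumb(directory, decoy_host, decoy_user, decoy_password):
    cfg = configparser.ConfigParser()
    cfg["remote-desktop"] = {
        "host": decoy_host,
        "username": decoy_user,
        "password": decoy_password,  # deliberately left in cleartext as bait
    }
    path = pathlib.Path(directory) / "rdp_backup.ini"
    with open(path, "w") as fh:
        cfg.write(fh)
    return path
```

Such a file would then be deployed to one or more of the endpoints 220 by the deployment step described herein, with its relationship declared to the respective deception application 212 that owns the decoy host.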
- The deception data objects 214 are directed, once deployed, to attract the potential attackers during the OODA process in the protected network. To create an efficiently deceptive campaign, the deception data objects 214 may be created with one or more attributes that may be attractive to the potential attacker, for example, a name, a type and/or the like. The deception data objects 214 may be created to attract the attention of the attacker using an attacker stack, i.e. tools, utilities, services, applications and/or the like that are typically used by the attacker. As such, the deception data objects 214 may not be visible to users using a user stack, i.e. tools, utilities, services, applications and/or the like that are typically used by a legitimate user. Taking this approach may allow creating the deception campaign in a manner that the user may need to go out of his way and perform unnatural operations and/or actions to detect, find and/or use the deception data objects 214, while it may be the most natural course of action or method of operation for the attacker. For example, browser cookies are rarely accessed and/or reviewed by the legitimate user(s). At most, the cookies may be cleared en-masse. However, one of the main methods for the attacker(s) to obtain website credentials and/or discover internal websites visited by the legitimate user(s) is to look for cookies and analyze them. As another example, open shares that indicate shares with network resources made by the legitimate user(s) using the application(s) 212 are typically not available through the user stack, while reviewing them is a common method for the attacker, who may use, for example, a “net use” command from a shell. Other examples include, for example, web browser history logs, files in temporary folders, shell command history logs, etc. that are typically not easily accessible using the user stack while they are easily available using the attacker stack.
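For example, planting decoy entries in a shell command history log, an artifact routinely mined through the attacker stack but rarely read through the user stack, may be sketched as follows. The SSH host, key file and user name are hypothetical bait that exists only inside the deception environment:

```python
import pathlib

# Decoy command lines referencing a fictitious backup server; grepping
# history for "ssh"/"scp" is attacker-stack behavior, so any use of
# these bait values signals a potential attacker.
DECOY_HISTORY_LINES = [
    "ssh -i ~/.ssh/backup_srv.pem admin@10.9.8.7",
    "scp db_dump.tgz admin@10.9.8.7:/var/backups/",
]

def plant_history_breadcrumb(history_path):
    path = pathlib.Path(history_path)
    # Append rather than overwrite so any real history is preserved and
    # the decoy lines blend in at the plausible "most recent" end.
    with open(path, "a") as fh:
        for line in DECOY_HISTORY_LINES:
            fh.write(line + "\n")
    return path
```

In a deployment, `history_path` would point at the endpoint's actual history file (for example, a Bash history file in the user's home directory).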
- Each of the deception data objects 214 is configured to interact with one or more of the
decoy OSs 210 and/or the deception applications 212. The deception data objects 214 may be created and their relationships defined according to the deception policy and/or methods defined for the deception campaign. Naturally, the deception policy and/or methods that dictate the selection and configuration of the deception application(s) 212 also dictate the type and configuration of the deception data objects 214. The deception data objects 214 may further be created according to the groups defined for the deception campaign. For example, the deceptive data object 214 of type “browser cookie” may be created to interact with a website and/or an application launched using an application 212 of type “browser” created during the deception campaign. As another example, a deceptive data object 214 of type “mapped share” may be created to interact with an application 212 of type “share service” created during the deception campaign. - The deception data objects 214 may be created and/or adapted according to the configuration of the protected
network 235 and/or the construction of the deception environment. As an example, it is assumed that the deception campaign is launched to create the deception environment for the virtual protected network 235 as described in the systems 200E and/or 200F. The deception environment may be created to place a stronger focus on standard network setup, for example, remote access using Secure Shell (SSH), remote backup using SSH and/or Secure Copy (SCP), SSH using private keys (Privacy-enhanced Electronic Mail (PEM) files) and/or the like. Focusing on the standard network setup for these configuration(s) is done as opposed to, for example, user/password combination deception data objects 214 created for the deception environment for the local implementation of the protected network 235 as described in the systems 200A-200D. - For configurations of the virtual protected
network 235 as described in the systems 200E and/or 200F, the deception data objects 214 may be created and deployed to interact with one or more deception applications 212 emulating one or more applications and/or services such as the applications 222 that are provided by the cloud services 245. For example, the deception data objects 214 may be created and deployed to interact with the S3 storage decoy. Due to regulation, it is common practice to encrypt the data that is stored through the S3 bucket service in order to protect the stored data from breaches that may be initiated by the cloud provider, for example, Amazon. The decryption key(s) may be stored at the same storage mechanism, for example, the AWS S3 bucket service; however, in order to increase the security level, the decryption key(s) are typically stored through a storage bucket service provided by one or more other cloud providers, for example, the Google Cloud Engine. The campaign manager 216 may be used to create an S3 storage decoy that may store data that is set to attract the attacker. Deception data object(s) 214 of a type decryption key may be created to interact with the S3 storage decoy. The decryption key deception data object(s) 214 may be deployed using the storage mechanism of the same cloud service(s) provider providing the S3 storage decoy and/or using the storage mechanism of one or more of the other cloud service(s) providers. This deception extension that takes advantage of the S3 bucket service may seem highly realistic, valid and attractive to the potential attacker seeking to obtain the encrypted data available at the supposedly valid S3 storage decoy. - As shown at 110, the
campaign manager 216 is used to deploy the deception data objects 214 on one or more of the endpoints 220 in the protected network 235 to attract the potential attackers who attempt to OODA the protected network 235. - The deployment of the deception data objects 214 may be done using the groups' definition. For example, the deceptive data object 214 of the type “browser cookie” may be deployed using a Group Policy Login Script throughout a respective network segment comprising a subset of the
endpoints 220. As another example, the deceptive data object 214 of the type “mapped share” may be deployed using a Windows Management Instrumentation (WMI) script to a specific subset of endpoints 220 in the domain of the protected network 235. The deception data objects 214 may be deployed using automated tools, for example, provisioning and/or orchestration tools, for example, Group Policy, Puppet, Chef and/or the like. The deployment of the deception data objects 214 may also be done using local agents executed on the endpoints 220. The local agents may be pre-installed on the endpoints 220 and/or they may be volatile agents that install the deception data objects 214 and then delete themselves. The deception environment and/or the campaign manager 216 may provide custom scripts and/or commands that may be executed by the user 260 in the protected network 235 to deploy the deception data objects 214. - As discussed before, the
campaign manager 216 provides a GUI to allow the user 260 to create, configure, launch and/or deploy one or more of the deception components. The GUI may be provided by the campaign manager 216 locally when the user 260 interacts directly with the decoy server 201 and/or the decoy VM 203A. However, the campaign manager 216 may perform as a server that provides the GUI to the user 260 through one or more applications for accessing the campaign manager 216 remotely, for example, the local agent and/or the web browser executed on one or more of the endpoints 220. - Reference is now made to
FIG. 3A , FIG. 3B and FIG. 3C , which are screenshots of an exemplary configuration screen of a campaign manager for configuring a deception campaign, according to some embodiments of the present invention. Screenshots such as the screenshots 300A, 300B and 300C may be presented to the user 260 through a GUI of a campaign manager such as the campaign manager 216. The GUI allows the user 260 to create and/or launch a deception campaign by creating, configuring and launching one or more deception components such as the decoy OS(s) 210, the deception application(s) 212 and/or the deception data objects (breadcrumbs) 214. The campaign manager 216 may use pre-defined templates that may be adjusted according to the protected network 235 characteristics in order to create the deception components. - The screen shot 300A presents an interface for creating one or more images of the decoy OS(s) 210. The
user 260 may select a decoys tab 310A to create one or more images of the decoy OS(s). Once the user 260 selects the decoys tab 310A, the campaign manager 216 presents an interface for creating an image for the decoy OS 210 to allow the user 260 to select an OS template, for example, Linux, Windows, CentOS and/or the like for creating an image for the decoy OS 210. The user 260 may further assign a name designating the decoy OS 210 image and/or a host where the decoy OS 210 will be launched. As shown in the exemplary screenshot 300A, the user 260 selected a template of Linux Ubuntu to create an image for a decoy OS 210 designated “HR_Server” that is hosted by an endpoint 220 designated “hrsrv01”. - The screen shot 300B presents an interface for creating one or
more deception applications 212. The user 260 may select a services tab 310B to create one or more deception applications 212. Once the user 260 selects the services tab 310B, the campaign manager 216 presents an interface for creating one or more deception applications 212 to allow the user 260 to select a template for creating the deception application(s) 212. The user 260 may further assign a name designating the created deception application 212 and/or define a relationship (interaction) between the created deception application 212 and one or more of the decoy OSs 210. As shown in the exemplary screenshot 300B, the user 260 selected a template of an SMB service for a deception application 212 designated “Personnel_Files” that is included in a services group designated “HR_Services” and connected to the decoy OS 210 “HR_Server”. Through the interface, the user 260 may activate/deactivate the selected deception application 212. The interface may be further used to display the deception data objects that are attached (interact) to the created deception application 212. - The
screenshot 300C presents an interface for creating one or more deception data objects (breadcrumbs) 214. The user 260 may select a breadcrumbs tab 310C to create one or more deception data objects 214. Once the user 260 selects the breadcrumbs tab 310C, the campaign manager 216 presents an interface for creating one or more deception data objects 214 to allow the user 260 to select a template representing a type of a data object for creating the deception data object 214. The user 260 may further assign a name designating the created deception data object 214 and/or define a relationship (interaction) between the created deception data object 214 and one or more of the deception applications 212. As shown in the exemplary screenshot 300C, the user 260 selected a template of a Network share for a deception data object 214 designated “Personnel_Files_BC” that is included in a breadcrumbs group designated “HR_bc_group” and connected to the SMB deception application 212 “Personnel_Files” that is part of the services group “HR_Services”. - The screen shot 300D presents an interface for generating a script for deploying the created deception data object(s) 214. While the
breadcrumbs tab 310C is presented, the user 260 may select the generate button presented by the interface. The campaign manager 216 may then generate a script that, when executed by one or more of the endpoints 220, creates the deception data object 214 on the respective endpoint(s) 220. The campaign manager 216 may create a script that, once executed by the endpoint 220, deletes itself, leaving no traces on the endpoint 220.
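A minimal sketch of such a generated deployment script follows, assuming a Unix-like endpoint; the emitted script plants the breadcrumb file and then removes its own file, leaving no trace of the deployment. The template, function name and file paths are hypothetical illustrations, not part of the present disclosure:

```python
# Template for a volatile deployment script: it writes the breadcrumb
# file on the endpoint and then unlinks its own file ("$0").
SCRIPT_TEMPLATE = """\
#!/bin/sh
# Plant the deception data object, then vanish.
cat > '{path}' <<'EOF'
{body}
EOF
rm -- "$0"
"""

def generate_deploy_script(breadcrumb_path, breadcrumb_body):
    # The campaign manager would hand this text to the user (or to an
    # orchestration tool) for one-time execution on the endpoint.
    return SCRIPT_TEMPLATE.format(path=breadcrumb_path, body=breadcrumb_body)

script = generate_deploy_script("/home/jsmith/.rdp_backup.ini",
                                "host=hrsrv01.corp.local")
```

A Windows deployment (for example, via a WMI or Group Policy script) would follow the same generate-plant-vanish pattern with platform-specific commands.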
- Reference is now made to
FIG. 4 , which is a block diagram of exemplary building blocks of a deception environment for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention. A deception environment 400 created using a campaign manager such as the campaign manager 216 comprises a plurality of deception data objects 214 deployed on one or more endpoints such as the endpoints 220 in a protected network such as the protected network 235. The campaign manager 216 is used to define relationships 410 between each of the deception data objects 214 and one or more of a plurality of deception applications 212. The campaign manager 216 is also used to define relationships 412 between each of the deception applications 212 and one or more of a plurality of decoy OSs 210. The deception data objects 214, the deception applications 212 and/or the decoy OSs 210 may be arranged in one or more groups according to one or more characteristics of the protected network 235. Once deployed, operations that use data available in the deception data objects 214 interact with the deception application(s) 212 according to the defined relationships 410 that in turn interact with the decoy OS(s) 210 according to the defined relationships 412. The defined relationships 410 and/or 412 may later allow detection of one or more unauthorized operations by monitoring and analyzing the interaction between the deception data objects 214, the deception applications 212 and/or the decoy OSs 210. - Reference is now made to
FIG. 5 , which is a block diagram of an exemplary utilization of deception environment building blocks for detecting potential unauthorized operations in a protected network, according to some embodiments of the present invention. Using a campaign manager such as the campaign manager 216, an exemplary deception environment 500 is created and launched to protect a bank. The network of the bank such as the network 230 is typically divided into two segments (groups), the internal office network comprising a plurality of workstations used by employees and a network for Automatic Teller Machines (ATMs) that are available to customers. Both the workstations and the ATMs are exemplary endpoints such as the endpoint 220 and/or the client terminal 221. A potential attacker may start his lateral movement in the network 230 of the bank from either one of the two network segments. To protect the network 230 of the bank, the deception environment 500 is created to comprise two groups A and B, each directed at one of two main deception “stories”, a first story for the ATM machines network and a second story for the internal network comprising the workstations. - For the internal network, a plurality of deception data objects (breadcrumbs) such as the deception data objects 214 that are grouped in a
group 402A are deployed on each of the workstations. The deception data objects 214 deployed on the workstations may include, for example, an open share deception data object 214A for sharing and/or accessing various company documents, a browser cookie deception data object 214B for an internal company website and a hashed-credentials deception data object 214C used to access an internal company website and/or log into a faked domain. Similarly, for the ATM network, a plurality of deception data objects (breadcrumbs) such as the deception data objects 214 that are grouped in a group 402B are deployed on each of the ATMs. The deception data objects 214 deployed on the ATMs may include, for example, the hashed-credentials deception data object 214C and a configuration file deception data object 214D for a faked ATM service. - In order to support the breadcrumbs of the two
groups 402A and 402B, the deception applications 212 are created and launched. The deception applications 212 may be divided into two groups 404A and 404B. The group 404A may include, for example:
- An SMB share
deception application 212A to interact with the open share deception data object 214A. Interaction and/or relationship 410A may be defined for the interaction between the deception data object 214A and the deception application 212A. - A Location Information Server (LIS)
deception application 212B to interact with the browser cookie deception data object 214B and/or the hashed-credentials deception data object 214C. Interaction and/or relationship 410B and/or 410C may be defined for the interaction of the deception data object 214B and the deception data object 214C respectively with the deception application 212B. - A domain
controller deception application 212C providing the fake domain and interacting with the hashed-credentials deception data object 214C and/or the configuration file deception data object 214D. Interaction and/or relationship(s) may be defined for the interaction of the deception data object 214C of the group 402A, the deception data object 214C of the group 402B and the deception data object 214D respectively with the deception application 212C.
- The
group 404B may include, for example, an ATM service deception application 212D utilizing the faked ATM service and interacting with the deception data object 214C of the group 402B and the configuration file deception data object 214D. Interaction and/or relationship 410F and/or 410H may be defined for the interaction of the deception data object 214C and the deception data object 214D respectively with the deception application 212D. - The
deception applications 212A through 212D are hosted by decoy OSs such as the decoy OS 210. In the exemplary deception environment 500, the SMB share deception application 212A and the LIS server deception application 212B are hosted by a Windows Server 2003 decoy OS 210A while the domain controller deception application 212C is hosted by a Windows Server 2008R2 decoy OS 210B. To maintain the groups partitioning, the Windows Server 2003 decoy OS 210A and the Windows Server 2008R2 decoy OS 210B are grouped together in a group 406A. The ATM service deception application 212D is hosted by a Windows XP SP2 decoy OS 210C that is associated with a group 406B. Interaction and/or relationship 412A and/or 412B may be defined for the interaction of the deception application 212A and the deception application 212B respectively with the decoy OS 210A. Interaction and/or relationship 412C may be defined for the interaction of the deception application 212C with the decoy OS 210B. Interaction and/or relationship 412D may be defined for the interaction of the deception application 212D with the decoy OS 210C. - Reference is made once again to
FIG. 1 . As shown at 112, thecampaign manager 216 updates dynamically and continuously the deception environment and/or the deception data objects 214 deployed on theendpoints 220. The deception environment is constantly updated to make the deception data objects 214 seem as valid data objects to the potential attacker. As part of updating the deception environment, thecampaign manager 216 update usage indication(s), for example, footprints, traces, access residues, log records and/or the like in therespective deception applications 212 indicating usage of the deception data objects 214. Thecampaign manager 216 update usage indication(s) to create an impression (impersonate) that the deception data objects 214 are valid and/or real data objects used by users, applications, services and/or the like in the protectednetwork 235. - The
campaign manager 216 may use one or more automated tools, for example, scripts to update the deception environment and/or the deception data objects 214. Thecampaign manager 216 may be configured to continuously update the deception environment and/or the deception data objects 214 for a pre-defined time period, for example, a day, a week, a month, a year and/or for an unlimited period of time. Thecampaign manager 216 may apply a schedule for updating the deception environment. Thecampaign manager 216 may therefore detect a returning potential attacker that attempted to access the protectednetwork 235 in the past. Optionally, thecampaign manager 216 updates the deception environment according to a behavioral pattern of the potential attacker such that the deception data objects are adapted to trap the potential attacker. Thecampaign manager 216 may further adapt the deception environment and/or the deception data objects 214 according to one or more characteristics of the returning potential attacker. - As shown at 114, the
campaign manager 216 continuously monitors the protected network 235 in order to detect the potential attacker. The potential attacker may be detected by identifying one or more unauthorized operations that are initiated in the protected network 235. The unauthorized operation(s) may be initiated by a user, a process, a utility, an automated tool, an endpoint and/or the like. The unauthorized operation(s) may originate within the protected network 235 and/or from a remote location accessing the protected network 235 over the network 230 and/or the internet 240. In order to identify the unauthorized operation(s), the campaign manager 216 monitors the decoy OS(s) 210 and/or the deception applications 212 at one or more levels and/or layers, for example:
- Network monitoring, in which the campaign manager 216 monitors egress and/or ingress traffic at one or more of the endpoints 220. The campaign manager 216 may further record the monitored network traffic.
- Log monitoring, in which the campaign manager 216 monitors log records created by one or more of the deception application(s) 212.
- OS monitoring, in which the campaign manager 216 monitors interactions made by one or more of the deception applications 212 with the decoy OS(s) 210.
- Kernel level monitoring, in which the campaign manager 216 monitors and analyzes activity at the kernel level of the decoy OS(s) 210.
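The network-monitoring layer, which raises a touch event on any traffic reaching a decoy port, could be sketched roughly as follows. This is an illustrative assumption, not the disclosed implementation; the port number, event format and polling scheme are invented for the example:

```python
import socket
import threading
import time

TOUCH_EVENTS = []  # records of observed traffic on decoy ports

def monitor_port(port: int, stop: threading.Event):
    """Listen on a decoy port; any connection attempt is recorded as a touch event."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    srv.settimeout(0.2)  # poll so the stop flag is checked periodically
    while not stop.is_set():
        try:
            conn, addr = srv.accept()
        except socket.timeout:
            continue
        TOUCH_EVENTS.append({"source": addr[0], "port": port})
        conn.close()
    srv.close()

# Simulate an attacker probing the decoy port:
stop = threading.Event()
worker = threading.Thread(target=monitor_port, args=(14445, stop))
worker.start()
time.sleep(0.2)  # give the listener time to bind
probe = socket.create_connection(("127.0.0.1", 14445))
probe.close()
deadline = time.time() + 5
while not TOUCH_EVENTS and time.time() < deadline:
    time.sleep(0.05)  # wait until the touch event is recorded
stop.set()
worker.join()
print(TOUCH_EVENTS[0])
```

A production monitor would of course cover many ports and record full traffic for later forensic analysis, as the description notes.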
- As shown at 116, the
campaign manager 216 analyzes the monitored data and/or activity to detect the unauthorized operation that may indicate the potential attacker. Based on the analysis, the campaign manager 216 creates one or more of a plurality of detection events, for example, a touch event, an interaction event, a code execution event, an OS interaction event and/or a hardware interaction event. The analysis conducted by the campaign manager 216 may include false positive analysis to avoid identifying one or more operations initiated by one or more legitimate users, processes, applications and/or the like as the potential unauthorized operation.
- The touch event(s) may be created when the campaign manager 216 detects network traffic on one or more ports.
- The interaction events may be created when the campaign manager 216 detects a meaningful interaction with one or more of the deception applications 212. The campaign manager 216 may create the interaction event when detecting usage of data that is included, provided and/or available from one or more of the deception data objects 214 for accessing and/or interacting with one or more of the deception applications 212. For example, the campaign manager 216 may create an interaction event when detecting an attempt to log on to a deception application 212 of type “remote desktop service” using credentials stored in a deception data object 214 of type “hashed credentials”. In another example, the campaign manager 216 may detect a file access on an SMB share deception application 212 where the file name is available from a deception data object 214 of type “SMB mapped shares”. Additionally, the campaign manager 216 may create an interaction event when detecting interaction with the deception application(s) 212 using data that is available from valid data objects, i.e. not one of the deception data objects 214. For example, the campaign manager 216 may detect an HTTP request from an LIS deception application 212. Optionally, the campaign manager 216 may be configured to create interaction events when detecting one or more pre-defined interaction types, for example, logging on to a specific deception application 212, executing a specific command, clicking a specific button(s) and/or the like. The user 260 may further define “scripts” that comprise a plurality of the pre-defined interaction types to configure the campaign manager 216 to create an interaction event at detection of complex interactions between one or more of the deception components, i.e. the decoy OS(s) 210, the deception application(s) 212 and/or the deception data object(s) 214.
- The code execution events may be created when the campaign manager 216 detects that foreign code is executed on the underlying OS of one or more of the decoy OSs 210.
- The OS interaction event may be created when the campaign manager 216 detects that one or more applications such as the applications 222 attempt to interact with one or more of the decoy OSs 210, for example, opening a port, changing a log and/or the like.
- The hardware interaction event may be created when the campaign manager 216 detects that one or more of the decoy OSs 210 and/or the deception applications 212 attempts to access one or more hardware components of the hardware platform on which the decoy OSs 210 and/or the deception applications 212 are executed.
- Using the campaign manager 216, the user 260 may define complex sequences comprising a plurality of events to identify more complex operations and/or interactions detected with the deception components. Defining the complex sequences may further serve to avoid false positive identification.
- Optionally, the
campaign manager 216 creates an activity pattern of the potential attacker by analyzing the identified unauthorized operation(s). Using the activity pattern, the campaign manager 216 may gather useful forensic data on the operations of the potential attacker and may classify the potential attacker in order to estimate a course of action and/or intentions of the potential attacker. The campaign manager 216 may then adapt the deception environment to tackle the estimated course of action and/or intentions of the potential attacker.
- Optionally, the campaign manager 216 employs one or more machine learning processes, methods, algorithms and/or techniques on the identified activity pattern. The machine learning may serve to increase the accuracy of classifying the potential attacker based on the activity pattern. The machine learning may further be used by the campaign manager 216 to adjust future deception environments and deception components to adapt to the learned activity pattern(s) of a plurality of potential attackers.
- As shown at 118, the campaign manager 216 generates one or more alerts following the detection event indicating the potential unauthorized operation. The user 260 may configure the campaign manager 216 to set an alert policy defining one or more of the events and/or combinations of events that trigger the alert(s). The campaign manager 216 may be configured during the creation of the deception campaign and/or at any time after the deception campaign is launched. The alert may be delivered to the user 260 monitoring the campaign manager 216 and/or through any other method, for example, an email message, a text message, an alert in a mobile application and/or the like.
- The campaign manager 216 and/or the deception environment may be further configured to take one or more additional actions following the alert. One action may be pushing a log of the potential unauthorized operation(s) using one or more external applications and/or services, for example, syslog, email and/or the like. The log may be pushed with varying levels of urgency according to the policy defined for the deception campaign. The external system(s) in turn may take additional actions such as, for example, mitigating the potential threat by blocking executables detected as malware, blocking network access to compromised endpoints 220 and/or the like. Another action may be taking a snapshot of the affected decoy OSs 210 and/or deception applications 212 and turning them off in order to limit the potential attacker's ability to use the decoy OSs 210 and/or the deception applications 212 as a staging point for further action(s). The snapshot may serve for later forensic analysis to analyze the data captured before and during the attack until the turn-off time. Yet another action may be to trigger call back function(s) to one or more clients using an API supported by the deception environment. Details of the attack may be relayed to the client(s) that may be configured with user-defined procedure(s) and/or direction(s) to take further action. For example, the client(s) may use the API of the deception environment to create, launch and/or deploy one or more additional deception elements, for example, the decoy OS 210, the deception application 212 and/or the deception data object 214.
- Optionally, the campaign manager 216 presents the user(s) 260 with real time and/or previously captured status information relating to the deception campaign(s), for example, created events, detected potential attackers, attack patterns and/or the like. The campaign manager 216 may provide, for example, a dashboard GUI through the user interface 206. The campaign manager 216 may also present the status information through a remote access application, for example, a web browser and/or a local agent executed on one of the endpoints 220 and/or at a remote location accessing the campaign manager 216 remotely over the network 230 and/or the internet 240.
- Reference is now made to FIG. 6A , which is a screenshot of an exemplary first status screen of a campaign manager dashboard presenting structural information of a deception campaign, according to some embodiments of the present invention. A screenshot 600A describing a deception campaign may be presented to one or more users such as the user 260 through a GUI of a campaign manager such as the campaign manager 216. The user 260 may select a campaign tab 610A to show an overall view of the deception campaign launched in the protected network 235. Once the user 260 selects the campaign tab 610A, the campaign manager 216 presents status information on the deception campaign. The campaign manager 216 may present a structural diagram of the deception campaign including, for example, the deception components used during the deception campaign and/or the relationships (interactions) defined for each of the deception components. Furthermore, through the provided interface, the user 260 may define the type of events that may trigger alerts.
- Reference is also made to FIG. 6B , which is a screenshot of an exemplary second status screen of a campaign manager dashboard for investigating potential threats detected during a deception campaign, according to some embodiments of the present invention. The user 260 may select an investigation tab 610B to show potential threats, for example, unauthorized operation(s), suspected interactions and/or the like that may indicate a potential attacker operating within the protected network 235. Once the user 260 selects the investigation tab 610B, the campaign manager 216 presents status information on potential threats. Each entry may present one or more potential threats and the user 260 may select any one of the entries to investigate further the nature of the potential threat.
- According to some embodiments of the present invention, there are provided methods, systems and software program products for containing a malicious attack by directing the malicious attack to a deception environment created and/or updated dynamically in a protected network in response to detection of the potential attacker. The deception environment may be created and/or updated in response, for example, to an attempt of a potential attacker to access the protected network using false access information of a certain user of the protected network. The deception environment may be further updated in response to one or more operations the potential attacker may apply as part of an attack vector. As described before, the potential attacker initiating the access attempt and/or the attack vector may be, for example, a human user, a process, an automated tool, a machine and/or the like.
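An alert policy of the kind described above, triggering on single detection events and on user-defined event sequences, might be sketched as follows. The event names, targets and rule format are assumptions invented for illustration, not the disclosed policy language:

```python
from collections import deque

# Single detection events that always alert, and one user-defined two-step
# sequence: a touch followed by an interaction on the same deception target.
ALERT_ON = {"code_execution", "hardware_interaction"}
SEQUENCE_RULE = ("touch", "interaction")

def process_events(events):
    """Return alerts triggered by the single-event policy or the sequence rule."""
    alerts = []
    recent = deque(maxlen=2)  # sliding window over the event stream
    for ev in events:
        if ev["type"] in ALERT_ON:
            alerts.append(f"alert: {ev['type']} on {ev['target']}")
        recent.append(ev)
        if (len(recent) == 2
                and (recent[0]["type"], recent[1]["type"]) == SEQUENCE_RULE
                and recent[0]["target"] == recent[1]["target"]):
            alerts.append(f"alert: suspicious sequence on {ev['target']}")
    return alerts

events = [
    {"type": "touch", "target": "smb_share_212A"},
    {"type": "interaction", "target": "smb_share_212A"},
    {"type": "code_execution", "target": "decoy_os_210A"},
]
print(process_events(events))
```

Requiring a sequence rather than a single touch is one way to implement the false positive analysis mentioned above: an isolated port scan by a legitimate inventory tool would not, by itself, trip the sequence rule.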
- The potential attacker may predict (“guess”) the access information of the certain user, for example, a credential, a password, a password hint question and/or the like based on public information of the certain user, for example, an email address, a phone number, a work place, a home address, a parent name, a spouse name, a child name, a birth date and/or the like. The potential attacker may obtain the public information of the certain user from one or more publicly accessible networked resources, for example, an online news website, a workplace website, an online government service, an online social network (e.g. Facebook, Google+, LinkedIn, etc.) and/or the like.
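The guessing process described here — deriving credential candidates from public information — can be sketched as a simple generator. The tokenization rules and suffix list below are assumptions for illustration only; a real attacker (or a defender simulating one) would use far richer mangling rules:

```python
import itertools

def predicted_candidates(public_info: dict, suffixes=("1", "123", "2016")):
    """Derive access-information candidates from a user's public information,
    imitating the prediction process a potential attacker might apply."""
    tokens = []
    for value in public_info.values():
        for part in str(value).replace("@", " ").replace(".", " ").split():
            if len(part) > 2:
                tokens.append(part.capitalize())
    candidates = set()
    for tok, suf in itertools.product(tokens, suffixes):
        candidates.add(tok + suf)              # e.g. a name plus a digit suffix
    for a, b in itertools.permutations(tokens, 2):
        candidates.add(a + b)                  # e.g. two names concatenated
    return sorted(candidates)

info = {"first_name": "gadi", "spouse": "dean", "residence": "shorashim"}
cands = predicted_candidates(info)
print("Shorashim1" in cands, "GadiDean" in cands)  # True True
```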
- In some scenarios, the potential attacker may assume a more active role. For example, the potential attacker may set up a fictive service and attract the certain user to open an account on the fictive service. Based on the access information the certain user used for creating the account on the fictive service, the potential attacker may predict the access information the certain user may use for accessing one or more valid (genuine) services. In another example, the potential attacker may apply one or more social engineering techniques, for example, phishing and/or the like, to get the certain user to reveal his password. During the phishing attack, the certain user is led to believe he is accessing one or more of the valid (genuine) services and may provide his real access information.
- In order to protect the certain user (or in practice, a plurality of users such as the certain user), the potential attacker may be led to believe he has entered a real processing environment of the protected network while in fact he is granted access into the deception environment. This may be done by identifying false access information used by the potential attacker while attempting to access the protected network. The false access information may be identified by predicting it from the public information of the certain user, simulating the prediction process done by the potential attacker. Additionally and/or alternatively, the false access information may be identified as false access information provided to the potential attacker by intentionally (knowingly) following the path the potential attacker lays out to lead the certain user to reveal his access information at the fictive website and/or fictive service, and providing the false access information there.
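Distinguishing the certain user's own typo from an attacker's guess, which the description later quantifies with a linguistic distance of a pre-defined number of characters (e.g. 2), can be sketched with a plain edit-distance check. Function names and the threshold are illustrative assumptions:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def classify_attempt(entered: str, real: str, predicted: list, max_dist: int = 2):
    """Label a failed login as a likely user typo or a likely attacker guess."""
    if levenshtein(entered, real) <= max_dist:
        return "likely user typo"
    if any(levenshtein(entered, c) <= max_dist for c in predicted):
        return "likely attacker guess"
    return "unknown"

print(classify_attempt("GadiDean2", "GadiDean1", ["Shorashim1"]))   # likely user typo
print(classify_attempt("Shorashim1", "GadiDean1", ["Shorashim1"]))  # likely attacker guess
```

The same distance test can also score the robustness of newly created access information against the predicted-candidate list, as described later in this section.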
- Moreover, advanced attackers, whether human users and/or automated tools, for example, malware and/or the like, may apply caution when operating in the protected network in order to avoid detection.
- In order to detonate the attack, i.e. cause the potential attacker to operate, for example, apply the attack vector, the potential attacker has to be convinced that the deception environment (also known as a “sandbox”) he unknowingly entered is a real (valid) processing environment. This may be done by dynamically updating the deception environment in real time in response to the access attempt and/or in response to one or more operations of the attack vector that may be a multi-stage attack vector.
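The real-time updating described here, where each observed stage of a multi-stage attack vector triggers a matching change in the deception environment, can be sketched as a simple reaction table. The operation names and update descriptions below are invented for illustration and are not from the disclosure:

```python
# Illustrative mapping from an observed attacker operation to the
# deception-environment update that keeps the sandbox consistent
# with that stage of the attack vector.
UPDATES = {
    "enumerate_shares": "deploy an SMB share deception application with recent file history",
    "dump_credentials": "plant hashed-credentials deception data objects",
    "scan_network": "bring up additional decoy endpoints on the decoy network",
}

def react(operation: str) -> str:
    """Choose the next real-time update for an observed attacker operation."""
    return UPDATES.get(operation, "refresh usage indications (logs, browsing history)")

for op in ("scan_network", "dump_credentials", "unseen_operation"):
    print(op, "->", react(op))
```

The default branch reflects the point made earlier in this section: even absent a recognized operation, usage indications are continuously refreshed so the sandbox never looks stale.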
- Reference is now made to
FIG. 7 , which is a flowchart of an exemplary process for containing a malicious attack within a deception environment created dynamically in a protected network, according to some embodiments of the present invention. A process 700 may be executed by a campaign manager such as the campaign manager 216 to protect a protected network such as the protected network 235 from a potential attacker attempting to access the protected network 235. The process 700 may be carried out by the campaign manager 216 in one or more of the systems described before; it is described for the system 200 for brevity. - As shown at 702, the
process 700 starts with the campaign manager 216 detecting an attempt of the potential attacker to access the protected network 235. The campaign manager 216 may detect the attempted access by identifying that the potential attacker uses false access information, for example, a credential, a password, a password hint question and/or the like of a certain user of the protected network 235.
- The campaign manager 216 may identify the false access information the potential attacker uses by comparing the false access information to predicted access information of the certain user that the campaign manager 216 predicts itself. By predicting (“guessing”) the access information of the certain user, the campaign manager 216 may simulate methods and/or techniques that may be used by the potential attacker to predict the access information of the certain user. Often the certain user may use his (own) personal information to create his access information in order to easily remember the access information. The potential attacker may therefore use public information available for the certain user, for example, an email address, a phone number, a work place, a work place address, a residence address, a parent name, a spouse name, a child name, a birth date and/or the like to predict (“guess”) the access information of the certain user. The potential attacker may obtain the public information of the certain user from one or more publicly accessible networked resources, for example, an online news website, a workplace website, an online government service, an online social media or network (e.g. Facebook, Google+, LinkedIn, etc.) and/or the like.
- By simulating the process that may typically be applied by the potential attacker, based on the public information of the certain user, the campaign manager 216 may create a list of predicted access information candidates the certain user may typically create for accessing one or more privileged resources on the protected network 235, for example, a service, an account, a network, a database, a file and/or the like. The campaign manager 216 may be configured to apply one or more privacy laws, for example, according to a type of information, a geographical location of the certain user and/or the like when collecting the public information of the certain user in order to avoid breaching privacy.
- According to some embodiments of the present invention, when the certain user creates (real) access information for accessing the privileged resource(s), the campaign manager 216 evaluates the robustness of the created access information by comparing the created access information to the predicted access information candidates. The comparison applied by the campaign manager 216 may not be a strict comparison in which the created access information matches the predicted access information candidate(s) exactly. The campaign manager 216 may apply the comparison to evaluate the similarity of the created access information to the predicted access information candidate(s), for example, evaluate the linguistic distance of the created access information compared to the predicted access information candidate(s). The campaign manager 216 may determine that the created access information is insufficiently robust, i.e. the created access information is similar to the predicted access information candidate(s), in case the linguistic distance (variation) between the created access information and the predicted access information candidate(s) does not exceed a pre-defined number of characters, for example, 2 characters.
- In case the campaign manager 216 identifies that the created access information is not sufficiently robust, i.e. matches one or more of the predicted access information candidates, the campaign manager 216 may take one or more actions, for example, reject the created access information, request the certain user to change the access information and/or the like. The campaign manager 216 may further offer the certain user robust access information created by the campaign manager 216.
- The list of predicted access information candidate(s) created by the campaign manager 216 may be updated according to the techniques and/or methods applied by the certain user to create his access information. Moreover, the campaign manager 216 verifies that the list of predicted access information candidate(s) does not include the actual access information created and used by the certain user in the protected network 235.
- In some embodiments of the present invention, the campaign manager 216 identifies the false access information to be false access information provided during one or more past attempts to access the protected network 235. During the (past) attempts, the potential attacker may apply, for example, a social engineering attack such as a phishing attack embedded, for example, in an email message to divert the certain user to a fictive website emulating a real (valid) website. In another example, the past attack may include luring the certain user to register to a fictive service created by the potential attacker. The objective of the (past) attempt(s) and/or attacks is to predict the access information used by the certain user to access one or more real (valid) services, accounts, networks, privileged resources and/or the like.
- The campaign manager 216 may intentionally (knowingly) “fall” into one or more traps laid out for the certain user by the potential attacker to lure the certain user to reveal his access information. For example, in case the potential attacker applies a social engineering technique, for example, a phishing attack, the campaign manager 216 may detect the phishing attack using one or more techniques as known in the art. For example, the campaign manager 216 may detect a suspected email message that may be identified to be a phishing attack. While typically such a phishing attack may be blocked, reported and/or discarded, the campaign manager 216 may intentionally (knowingly) follow the sequence laid out by the phishing attack and provide the potential attacker with the false access information. In another example, in case the potential attacker lures the certain user to register to a fictive website and/or a fictive service, the campaign manager 216 may intentionally (knowingly) follow the registration sequence in the fictive website/service, providing the false access information. The campaign manager 216 may be configured to inform the certain user, other users and/or systems of the (past) attempt(s) and/or attack(s). Optionally, the (past) attempt(s) and/or attack(s) are not reported to the certain user, hence the certain user is unaware of the (past) attempt(s) and/or attack(s) made by the potential attacker.
- The false access information provided by the campaign manager 216 may be very similar to probable (predicted) access information that the certain user may use, in order to lead the potential attacker to believe the false access information is in fact real (genuine). Optionally, one or more of the predicted access information candidates are used as the false access information provided to the potential attacker as part of the registration process.
- Based on the predicted access information candidates and/or the false access information provided to the potential attacker during the past attempt(s) and/or attacks, the
campaign manager 216 may classify the access information used during the access attempt into several access information categories:
- Correct access information.
- Access information similar to the correct access information.
- Predicted access information candidates from the list created by the campaign manager 216.
- False access information provided by the campaign manager 216 during the past attempts and/or attacks.
- Other access information.
- The
campaign manager 216 may therefore detect the attempted access of the potential attacker into the protected network 235 by evaluating the access information used by the potential attacker against the access information categories.
- In case during the (current) access attempt the potential attacker uses the false access information provided by the campaign manager 216 during the past attempt(s) and/or attack(s), the campaign manager 216 may easily identify the attempt as made by the potential attacker.
- Similarly, since the campaign manager 216 is aware of the actual access information of the certain user, the campaign manager 216 may determine whether wrong access information is entered by the certain user or by the potential attacker during the access attempt. The campaign manager 216 may also apply the linguistic distance comparison with the pre-defined number of characters to determine whether the wrong access information is likely to have been entered by the certain user or by the potential attacker. For example, assume a real password of the certain user is GadiDean1, selected based on the names of founders of a certain company using the protected network 235. While the certain user may be reasonably expected to make mistakes such as, for example, typing a password GadiDean or GadiDean2 when logging into the privileged resource(s), the certain user is less likely to make mistakes such as, for example, typing a password Shorashim1, selected based on a residence address of the certain user. Typically, assuming the residence address of the certain user is publicly available, for example, on the Internet, the password Shorashim1 is likely to be in the list of the predicted access information candidates. The campaign manager 216 may therefore identify the first incident (GadiDean or GadiDean2) to be an access attempt of the certain user, while the second incident (Shorashim1) may be an attempted access of the potential attacker.
- The campaign manager 216 may be configured to inform the certain user, other users and/or systems of the access attempt in case the access attempt is determined to be initiated by the potential attacker. Optionally, the access attempt is not reported to the certain user, hence the certain user is unaware of the access attempt by the potential attacker.
- As shown at 704, the
campaign manager 216 creates and/or updates the deception environment in real time in response to the detected attempt of the potential attacker to access the protected network 235. Based on the detected false access information, the campaign manager 216 may collect information on the certain user whose access information is used by the potential attacker in order to generate a false identity of the certain user, for example, an account, a working environment and/or the like as part of the deception environment.
- In order to convince the potential attacker that the deception environment is the real (valid) processing environment and/or part thereof, the campaign manager 216 may construct the false identity according to the public information of the certain user that may typically be available to the potential attacker. By exposing the real (public) information of the certain user to the potential attacker, the false identity may seem consistent and legitimate to the potential attacker. For example, the campaign manager 216 may create a false account, for example, a Facebook account of the certain user that includes the same public information that is publicly available to other Facebook users from the real (genuine) Facebook account of the certain user. Specifically, the public information of the certain user is publicly available with no need for specific access permission(s). In another example, the campaign manager 216 may create a fake company account for the certain user in the deception environment in the protected network 235. The fake company account may include information specific to the role and/or job title of the certain user within the company, for example, a programmer, an accountant, an IT person and/or the like.
- Optionally, one or more generic fake identity templates may be used to create the false identity of the certain user. Each of the generic fake identity templates may be configured to include information typical, for example, to a role in the company, a job title holder in the company and/or the like. The campaign manager 216 may further combine one or more of the generic fake identity templates with the public information of the certain user to create the false identity associated with the certain user.
- Optionally, the campaign manager 216 uses one or more of the generic fake identity templates in case the access attempt is not identified to be associated with any user such as the certain user of the protected network 235.
- Optionally, the
campaign manager 216 adds additional information to the false identity to make it more attractive for the potential attacker to hack. - The
campaign manager 216 may create the fake identity to be consistent with information of the certain user as used during one or more of the past attempts and/or attacks. For example, assuming that based on the public information of the certain user the potential attacker identified that the certain user is attending dance classes and launched a past phishing attack in which a phishing e-mail message targeting dancers, for example a dancing event. During the current access attempt of the potential attacker, thecampaign manager 216 may include in the fake identity, for example, information of dancing habits of the certain user. This may make the false identity more consistent and legitimate looking to the potential attacker. Moreover, assuming that the past phishing attack initiated by the potential attacker included information that is not publicly available for the certain user and/or was illegally obtained by the potential attacker, thecampaign manager 216 may include related information on the certain user that is not publicly available. For example, assuming the phishing attack was directed towards hunting interests of the certain user, thecampaign manager 216 may include false hunting information of the certain user in the fake identity. - The deception environment created by the
campaign manager 216 may include one or more decoy endpoints such as the decoy endpoint discussed before (physical endpoints and/or virtual endpoints) that may execute decoy OSs such as thedecoy OSs 210 and/or deception application such as thedeception application 212. Thecampaign manager 216 may further create the deception environment to include a decoy network comprising a plurality of decoy endpoint networked together to further make the deception environment seem convincing to the potential attacker that is lead to believe the deception environment is a real (valid) processing environment. - The
campaign manager 216 creates and/or updates one or more of the decoy endpoints and/or the decoy network to comply with the fake identity created for the certain user in order to verify consistency of the deception environment as viewed by the potential attacker. For example, assuming the certain user is a programmer, the campaign manager 216 may create the decoy endpoint to include a typical programming environment consistent with the programming area of the certain user, for example, relevant programming tool(s), build tool(s) and/or programs that are appropriate for the programming area of the certain user and/or the company that he works for. In another example, assuming the certain user works for company X, the campaign manager 216 may create the decoy network for the company X to include publicly available known data about the company X. The campaign manager 216 may use this publicly available data to create a believable deception environment and deception story. The created decoy network may include common network services that exist in every network, for example, file shares, an exchange server, and/or the like. - In order to make the deception environment seem real to the potential attacker, the
campaign manager 216 may simulate real activity in the fake identity, the decoy endpoint(s) and/or the decoy network. For example, the campaign manager 216 may create and/or maintain (update dynamically) a plurality of usage indications, for example, a browsing history, a file edit history and/or the like, as may typically be produced by real users in the real (valid) processing environment of the protected network 235. The real activity simulation may be done automatically by the campaign manager 216, manually by one or more users of the protected network 235 and/or in a combination of the automatic and manual simulations. Optionally, when simulated manually, updating one or more of the usage indications may be done automatically to make the usage indications appear as if dynamically changing over time. - The
campaign manager 216 may further use the real processing environment of the protected network 235, and/or part thereof, as the deception environment or as part of it. Doing so may be beneficial when useful elements of the real processing environment, for example, a file with a password, a file with associated credentials and/or the like, may be properly detected to serve, for example, the fake identity, the fake account and/or the like. The campaign manager 216 may use the real processing environment in which one or more of the detected payloads are modified to trap the potential attacker while the rest of the processing environment is maintained unaltered. The campaign manager 216 may need to exercise caution when employing such an approach since the potential attacker, in particular a skilled attacker, may take advantage of one or more aspects of the real processing environment, for example, the identity, the account and/or the like, that are left unchanged. - As shown at 706, the
campaign manager 216 grants the potential attacker access into the deception environment. When accessing the deception environment, the potential attacker may be convinced that he is actually entering the real (valid) processing environment of the protected network 235. - As shown at 708, the
campaign manager 216 analyzes the attack vector applied by the potential attacker in order to identify one or more intentions of the potential attacker. - As shown at 710, based on the analysis of the attack vector applied by the potential attacker, the
campaign manager 216 may take one or more actions in response to the attack vector action(s). For example, the campaign manager 216 may alert one or more authorized persons and/or systems, for example, a user such as the user 260, an information technology (IT) person, a security system, security software and/or the like. - The main purpose of the actions taken by the
campaign manager 216 is to detonate the attack vector. Detonating the attack means allowing and/or encouraging the potential attacker to operate, for example, to apply the attack vector, in the deception environment, which is regarded as a safe "sandbox", so as to make the potential attacker detectable by the campaign manager 216. This may be achieved by dynamically adjusting the deception environment and/or by responding to the action(s) applied through the attack vector in an authentic manner in order to convince the potential attacker that he actually entered the real (valid) processing environment of the protected network 235. - The
campaign manager 216 may update the deception environment as described in step 704 to adapt according to the action(s) made by the potential attacker. Since the attack vector may be a multi-stage attack vector comprising a plurality of actions, the campaign manager 216 may continuously respond to the attack vector action(s) by constantly updating the deception environment, for example, adjusting the fake identity, adding/removing and/or adjusting one or more of the decoy endpoints and/or the like. For example, assuming the campaign manager 216 identifies that the potential attacker tries to access another endpoint on the decoy network, the campaign manager 216 may create in real time one or more additional decoy endpoints that may be added to the decoy network. In another example, assuming the potential attacker is a malware, the campaign manager 216 may intentionally (knowingly) install the malware in the deception environment and initiate actions expected by the malware. For example, in case the malware is a Word file, the campaign manager 216 may open the Word file in the deception environment, for example, on the decoy endpoint, using the typical tools for opening a Word file. In another example, in which the malware is a suspected browser tool, the campaign manager 216 may download the malware into the deception environment and launch the malware on the decoy endpoint for browsing the network(s). The campaign manager 216 may follow additional instructions initiated by the malware. However, the execution of the malware is contained within the deception environment. - By detonating the attack vector, the attack vector and hence the potential attacker may be detected by the
campaign manager 216. This may allow the campaign manager 216 to further analyze the attack vector as done in step 708 and take additional actions in response to the attack vector based on the analysis. - The
campaign manager 216 may be configured to continuously update the deception environment for as long as defined, for example, a day, a week, a month, a year and/or for an unlimited period of time. This may allow the campaign manager 216 to identify one or more potential attackers that return to attempt to gain access into the protected network 235. The campaign manager 216 may identify the returning attacker(s) by analyzing one or more Indicators of Compromise (IOC), for example, an attribute, an operational parameter and/or a behavioral characteristic of the returning attacker(s). For example, an originating IP of the attacker, a common attack tool used by the attacker, a common filename used by the attacker and/or the like may be detected to identify the potential attacker as a returning attacker. The campaign manager 216 may take additional measures on detection of the returning potential attacker, for example, restoring the deception environment so that it is adapted according to characteristics of the returning potential attacker and/or the attack vector(s) used by the returning potential attacker during previous access attempts into the protected network 235. For example, assume the campaign manager 216 identified, during a past attempted access of the potential attacker, that the attack vector of the potential attacker was directed towards obtaining technology aspects of one or more products of the company the certain user works for. On the current attempted access of the returning potential attacker, the campaign manager 216 may therefore create and/or update the deception environment to include, for example, fabricated information leading to an account and/or a decoy endpoint of a technology research leader that may be attractive to the returning potential attacker. 
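The IOC matching described above can be illustrated with a minimal sketch (the field names, the sample records and the matching threshold below are hypothetical illustrations, not part of the disclosed method):

```python
# Hypothetical sketch: recognize a returning attacker by comparing the
# Indicators of Compromise (IOC) of the current access attempt, e.g. the
# originating IP, a common attack tool or a common filename, against
# records kept from previous access attempts into the protected network.
PAST_ATTACKS = [
    {"id": "attacker-A", "ioc": {"src_ip": "203.0.113.7", "tool": "mimikatz"}},
]

def match_returning_attacker(observed_ioc, past_attacks, threshold=2):
    """Return the id of a previously seen attacker when enough IOC
    fields of the current attempt match a stored record, else None."""
    for record in past_attacks:
        matches = sum(1 for key, value in record["ioc"].items()
                      if observed_ioc.get(key) == value)
        if matches >= threshold:
            return record["id"]
    return None

# Two observed IOC fields match a stored record: a returning attacker.
hit = match_returning_attacker(
    {"src_ip": "203.0.113.7", "tool": "mimikatz", "filename": "a.exe"},
    PAST_ATTACKS)
```

On a match, the deception environment could then be restored and adapted to that attacker's behavior during previous access attempts, as described above.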
By adapting the deception environment according to the characteristic(s) of the returning potential attacker, the returning potential attacker may be further convinced that the deception environment is the real (valid) processing environment of the protected network 235. For example, suppose that during a first access attempt the returning potential attacker looked to access a restricted financial file directory, and the campaign manager 216 adjusted the deception environment to include a decoy endpoint designated with a finance-oriented title, for example, a desktop of a secretary of the Chief Financial Officer (CFO). In case the campaign manager 216 detects the same potential attacker returning for another access attempt, the campaign manager 216 may extend the deception environment to include a decoy endpoint designated, for example, "CFO Laptop" to attract the returning potential attacker to attempt to access the decoy endpoint. - Optionally, based on the analysis of the attack vector applied by the potential attacker, the
campaign manager 216 identifies one or more activity patterns of the potential attacker. Using the activity pattern(s), the campaign manager 216 may gather useful forensic data on the operations of the potential attacker and may classify the potential attacker in order to estimate a course of action and/or the intention(s) of the potential attacker. The campaign manager 216 may then further adapt the deception environment to tackle the estimated course of action and/or intention(s) of the potential attacker. This may allow learning the attack vector and applying protection means to real user accounts to protect them against future attack vector(s), and/or part thereof, as detected by the campaign manager 216 applying the process 700. This may further allow the campaign manager 216 to characterize the potential attacker into one or more attacker types and adapt the deception environment according to typical characteristics of the attacker type. For example, assuming the campaign manager 216 identifies that the potential attacker's attack vector is directed towards obtaining financial records, the campaign manager 216 may characterize the potential attacker as a financial-information-seeking attacker. The campaign manager 216 may then update the deception environment to include, for example, fabricated information leading to an account and/or a decoy endpoint of a financial person that may be attractive to the potential attacker. - The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. 
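The attacker-type classification described above, in connection with step 710, can be sketched as follows (the keyword lists, lure descriptions and scoring rule are hypothetical illustrations, not part of the disclosed method):

```python
# Hypothetical sketch: characterize the attacker from the activity pattern
# (here, the paths the attacker tried to access) and pick a matching lure
# with which to extend the deception environment.
TYPE_KEYWORDS = {
    "financial": ["invoice", "payroll", "cfo"],
    "technology": ["schematic", "research", "firmware"],
}
LURES = {
    "financial": "decoy endpoint of a financial person",
    "technology": "decoy endpoint of a technology research leader",
}

def classify_attacker(accessed_paths):
    """Score each attacker type by how many accessed paths contain one of
    its keywords and return the best-scoring type."""
    scores = {atype: sum(any(kw in path.lower() for kw in keywords)
                         for path in accessed_paths)
              for atype, keywords in TYPE_KEYWORDS.items()}
    return max(scores, key=scores.get)

def pick_lure(accessed_paths):
    # The chosen lure would then be planted in the deception environment.
    return LURES[classify_attacker(accessed_paths)]

lure = pick_lure(["/shares/invoice_2016.xlsx", "/users/cfo/notes.txt"])
```

A real deployment would derive the keyword lists and lures from richer forensic data; the point of the sketch is only the mapping from an observed activity pattern to an attacker type and a matching lure.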
The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
- It is expected that during the life of a patent maturing from this application many relevant systems, methods and computer programs will be developed, and the scope of the terms endpoint and virtual machine is intended to include all such new technologies a priori.
- As used herein the term “about” refers to ±10%.
- The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. These terms encompass the terms “consisting of” and “consisting essentially of”.
- The phrase “consisting essentially of” means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
- As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- The word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
- The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments”. Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict.
- It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
- Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
- All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Claims (39)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/414,850 US20170134423A1 (en) | 2015-07-21 | 2017-01-25 | Decoy and deceptive data object technology |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562194863P | 2015-07-21 | 2015-07-21 | |
PCT/IB2016/054306 WO2017013589A1 (en) | 2015-07-21 | 2016-07-20 | Decoy and deceptive data object technology |
US15/414,850 US20170134423A1 (en) | 2015-07-21 | 2017-01-25 | Decoy and deceptive data object technology |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2016/054306 Continuation-In-Part WO2017013589A1 (en) | 2015-07-21 | 2016-07-20 | Decoy and deceptive data object technology |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170134423A1 true US20170134423A1 (en) | 2017-05-11 |
Family
ID=57833916
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/746,411 Expired - Fee Related US10270807B2 (en) | 2015-07-21 | 2016-07-20 | Decoy and deceptive data object technology |
US15/414,850 Abandoned US20170134423A1 (en) | 2015-07-21 | 2017-01-25 | Decoy and deceptive data object technology |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/746,411 Expired - Fee Related US10270807B2 (en) | 2015-07-21 | 2016-07-20 | Decoy and deceptive data object technology |
Country Status (2)
Country | Link |
---|---|
US (2) | US10270807B2 (en) |
WO (1) | WO2017013589A1 (en) |
Cited By (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170289107A1 (en) * | 2016-03-30 | 2017-10-05 | Oracle International Corporation | Enforcing data security in a cleanroom data processing envrionment |
US20170310704A1 (en) * | 2016-04-26 | 2017-10-26 | Acalvio Technologies, Inc. | Threat engagement and deception escalation |
US20170324774A1 (en) * | 2016-05-05 | 2017-11-09 | Javelin Networks, Inc. | Adding supplemental data to a security-related query |
US9912695B1 (en) * | 2017-04-06 | 2018-03-06 | Qualcomm Incorporated | Techniques for using a honeypot to protect a server |
US20180324213A1 (en) * | 2017-05-02 | 2018-11-08 | International Business Machines Corporation | Methods and systems for cyber-hacking detection |
US20180373868A1 (en) * | 2017-06-25 | 2018-12-27 | ITsMine Ltd. | Utilization of deceptive decoy elements to identify data leakage processes invoked by suspicious entities |
US20190028488A1 (en) * | 2016-08-08 | 2019-01-24 | Namusoft Co., Ltd. | Method and system for blocking phishing or ransomware attack |
US10270807B2 (en) * | 2015-07-21 | 2019-04-23 | Cymmetria, Inc. | Decoy and deceptive data object technology |
US10382483B1 (en) * | 2018-08-02 | 2019-08-13 | Illusive Networks Ltd. | User-customized deceptions and their deployment in networks |
US10404747B1 (en) * | 2018-07-24 | 2019-09-03 | Illusive Networks Ltd. | Detecting malicious activity by using endemic network hosts as decoys |
US10419480B1 (en) | 2017-08-24 | 2019-09-17 | Amdocs Development Limited | System, method, and computer program for real-time cyber intrusion detection and intruder identity analysis |
US10432665B1 (en) * | 2018-09-03 | 2019-10-01 | Illusive Networks Ltd. | Creating, managing and deploying deceptions on mobile devices |
US20190379694A1 (en) * | 2018-06-07 | 2019-12-12 | Intsights Cyber Intelligence Ltd. | System and method for detection of malicious interactions in a computer network |
US10515187B2 (en) | 2016-06-29 | 2019-12-24 | Symantec Corporation | Artificial intelligence (AI) techniques for learning and modeling internal networks |
EP3594841A1 (en) * | 2018-07-09 | 2020-01-15 | Juniper Networks, Inc. | Real-time signatureless malware detection |
US10574698B1 (en) * | 2017-09-01 | 2020-02-25 | Amazon Technologies, Inc. | Configuration and deployment of decoy content over a network |
US10587652B2 (en) * | 2017-11-29 | 2020-03-10 | International Business Machines Corporation | Generating false data for suspicious users |
US10601868B2 (en) | 2018-08-09 | 2020-03-24 | Microsoft Technology Licensing, Llc | Enhanced techniques for generating and deploying dynamic false user accounts |
US10637864B2 (en) | 2016-05-05 | 2020-04-28 | Ca, Inc. | Creation of fictitious identities to obfuscate hacking of internal networks |
US10733297B2 (en) | 2018-07-09 | 2020-08-04 | Juniper Networks, Inc. | Real-time signatureless malware detection |
CN111917691A (en) * | 2019-05-10 | 2020-11-10 | 张长河 | WEB dynamic self-adaptive defense system and method based on false response |
US10887346B2 (en) * | 2017-08-31 | 2021-01-05 | International Business Machines Corporation | Application-level sandboxing |
US11019076B1 (en) | 2017-04-26 | 2021-05-25 | Agari Data, Inc. | Message security assessment using sender identity profiles |
US11044267B2 (en) | 2016-11-30 | 2021-06-22 | Agari Data, Inc. | Using a measure of influence of sender in determining a security risk associated with an electronic message |
US11050769B2 (en) * | 2018-02-05 | 2021-06-29 | Bank Of America Corporation | Controlling dynamic user interface functionality using a machine learning control engine |
US11057429B1 (en) * | 2019-03-29 | 2021-07-06 | Rapid7, Inc. | Honeytoken tracker |
US11075931B1 (en) * | 2018-12-31 | 2021-07-27 | Stealthbits Technologies Llc | Systems and methods for detecting malicious network activity |
US11086991B2 (en) * | 2019-08-07 | 2021-08-10 | Advanced New Technologies Co., Ltd. | Method and system for active risk control based on intelligent interaction |
US11102244B1 (en) * | 2017-06-07 | 2021-08-24 | Agari Data, Inc. | Automated intelligence gathering |
US11102245B2 (en) * | 2016-12-15 | 2021-08-24 | Inierwise Ltd. | Deception using screen capture |
CN113422779A (en) * | 2021-07-02 | 2021-09-21 | 南京联成科技发展股份有限公司 | Active security defense system based on centralized management and control |
US11159567B2 (en) * | 2018-08-11 | 2021-10-26 | Microsoft Technology Licensing, Llc | Malicious cloud-based resource allocation detection |
WO2021225650A1 (en) * | 2020-05-05 | 2021-11-11 | Tigera, Inc. | Detecting malicious activity in a cluster |
US11212312B2 (en) | 2018-08-09 | 2021-12-28 | Microsoft Technology Licensing, Llc | Systems and methods for polluting phishing campaign responses |
CN113965409A (en) * | 2021-11-15 | 2022-01-21 | 北京天融信网络安全技术有限公司 | Network trapping method and device, electronic equipment and storage medium |
US20220109692A1 (en) * | 2020-10-05 | 2022-04-07 | Sap Se | Automatic generation of deceptive api endpoints |
US20220158992A1 (en) * | 2020-11-13 | 2022-05-19 | Cyberark Software Ltd. | Native remote access to target resources using secretless connections |
US11374971B2 (en) | 2018-08-24 | 2022-06-28 | Micro Focus Llc | Deception server deployment |
US20220222356A1 (en) * | 2021-01-14 | 2022-07-14 | Bank Of America Corporation | Generating and disseminating mock data for circumventing data security breaches |
US20220232020A1 (en) * | 2021-01-20 | 2022-07-21 | Vmware, Inc. | Application security enforcement |
US11429711B2 (en) * | 2019-11-25 | 2022-08-30 | Dell Products L.P. | Method and system for user induced password scrambling |
US11483318B2 (en) * | 2020-01-07 | 2022-10-25 | International Business Machines Corporation | Providing network security through autonomous simulated environments |
US11595354B2 (en) | 2016-09-26 | 2023-02-28 | Agari Data, Inc. | Mitigating communication risk by detecting similarity to a trusted message contact |
US11645383B2 (en) | 2017-01-11 | 2023-05-09 | Morphisec Information Security 2014 Ltd. | Early runtime detection and prevention of ransomware |
US20230164184A1 (en) * | 2021-11-23 | 2023-05-25 | Zscaler, Inc. | Cloud-based deception technology with auto-decoy and breadcrumb creation |
US11722513B2 (en) | 2016-11-30 | 2023-08-08 | Agari Data, Inc. | Using a measure of influence of sender in determining a security risk associated with an electronic message |
US11757914B1 (en) * | 2017-06-07 | 2023-09-12 | Agari Data, Inc. | Automated responsive message to determine a security risk of a message sender |
EP4123488A4 (en) * | 2020-04-28 | 2023-12-13 | Siemens Aktiengesellschaft | Malicious intrusion detection method, apparatus, and system, computing device, medium, and program |
US11856132B2 (en) | 2013-11-07 | 2023-12-26 | Rightquestion, Llc | Validating automatic number identification data |
US11936604B2 (en) | 2016-09-26 | 2024-03-19 | Agari Data, Inc. | Multi-level security analysis and intermediate delivery of an electronic message |
US11934948B1 (en) | 2019-07-16 | 2024-03-19 | The Government Of The United States As Represented By The Director, National Security Agency | Adaptive deception system |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10581914B2 (en) * | 2016-06-03 | 2020-03-03 | Ciena Corporation | Method and system of mitigating network attacks |
WO2020069741A1 (en) * | 2018-10-04 | 2020-04-09 | Cybertrap Software Gmbh | Network surveillance system |
US11165732B2 (en) * | 2020-03-20 | 2021-11-02 | International Business Machines Corporation | System and method to detect and define activity and patterns on a large relationship data network |
EP4385185A1 (en) * | 2021-08-09 | 2024-06-19 | Gorgon IP Pty. Ltd. | Computer network security device |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6567808B1 (en) * | 2000-03-31 | 2003-05-20 | Networks Associates, Inc. | System and process for brokering a plurality of security applications using a modular framework in a distributed computing environment |
US8140694B2 (en) * | 2004-03-15 | 2012-03-20 | Hewlett-Packard Development Company, L.P. | Method and apparatus for effecting secure communications |
US8375444B2 (en) * | 2006-04-20 | 2013-02-12 | Fireeye, Inc. | Dynamic signature creation and enforcement |
US7587537B1 (en) * | 2007-11-30 | 2009-09-08 | Altera Corporation | Serializer-deserializer circuits formed from input-output circuit registers |
US8429746B2 (en) * | 2006-05-22 | 2013-04-23 | Neuraliq, Inc. | Decoy network technology with automatic signature generation for intrusion detection and intrusion prevention systems |
US20140373144A9 (en) * | 2006-05-22 | 2014-12-18 | Alen Capalik | System and method for analyzing unauthorized intrusion into a computer network |
US9009829B2 (en) * | 2007-06-12 | 2015-04-14 | The Trustees Of Columbia University In The City Of New York | Methods, systems, and media for baiting inside attackers |
US8181250B2 (en) * | 2008-06-30 | 2012-05-15 | Microsoft Corporation | Personalized honeypot for detecting information leaks and security breaches |
US8813227B2 (en) * | 2011-03-29 | 2014-08-19 | Mcafee, Inc. | System and method for below-operating system regulation and control of self-modifying code |
US8955143B1 (en) * | 2012-09-04 | 2015-02-10 | Amazon Technologies, Inc. | Use of decoy data in a data store |
US20140096229A1 (en) * | 2012-09-28 | 2014-04-03 | Juniper Networks, Inc. | Virtual honeypot |
US9485276B2 (en) * | 2012-09-28 | 2016-11-01 | Juniper Networks, Inc. | Dynamic service handling using a honeypot |
US9621568B2 (en) * | 2014-02-11 | 2017-04-11 | Varmour Networks, Inc. | Systems and methods for distributed threat detection in a computer network |
US9356969B2 (en) * | 2014-09-23 | 2016-05-31 | Intel Corporation | Technologies for multi-factor security analysis and runtime control |
WO2017013589A1 (en) | 2015-07-21 | 2017-01-26 | Cymmetria, Inc. | Decoy and deceptive data object technology |
-
2016
- 2016-07-20 WO PCT/IB2016/054306 patent/WO2017013589A1/en active Application Filing
- 2016-07-20 US US15/746,411 patent/US10270807B2/en not_active Expired - Fee Related
-
2017
- 2017-01-25 US US15/414,850 patent/US20170134423A1/en not_active Abandoned
Cited By (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11856132B2 (en) | 2013-11-07 | 2023-12-26 | Rightquestion, Llc | Validating automatic number identification data |
US10270807B2 (en) * | 2015-07-21 | 2019-04-23 | Cymmetria, Inc. | Decoy and deceptive data object technology |
US10212169B2 (en) * | 2016-03-30 | 2019-02-19 | Oracle International Corporation | Enforcing data security in a cleanroom data processing environment |
US10491597B2 (en) | 2016-03-30 | 2019-11-26 | Oracle International Corporation | Enforcing data security in a cleanroom data processing environment |
US20170289107A1 (en) * | 2016-03-30 | 2017-10-05 | Oracle International Corporation | Enforcing data security in a cleanroom data processing envrionment |
US10225259B2 (en) | 2016-03-30 | 2019-03-05 | Oracle International Corporation | Establishing a cleanroom data processing environment |
US10033762B2 (en) * | 2016-04-26 | 2018-07-24 | Acalvio Technologies, Inc. | Threat engagement and deception escalation |
US10348763B2 (en) | 2016-04-26 | 2019-07-09 | Acalvio Technologies, Inc. | Responsive deception mechanisms |
US20170310704A1 (en) * | 2016-04-26 | 2017-10-26 | Acalvio Technologies, Inc. | Threat engagement and deception escalation |
US10637864B2 (en) | 2016-05-05 | 2020-04-28 | Ca, Inc. | Creation of fictitious identities to obfuscate hacking of internal networks |
US20170324774A1 (en) * | 2016-05-05 | 2017-11-09 | Javelin Networks, Inc. | Adding supplemental data to a security-related query |
US10515187B2 (en) | 2016-06-29 | 2019-12-24 | Symantec Corporation | Artificial intelligence (AI) techniques for learning and modeling internal networks |
US20190028488A1 (en) * | 2016-08-08 | 2019-01-24 | Namusoft Co., Ltd. | Method and system for blocking phishing or ransomware attack |
US10979450B2 (en) * | 2016-08-08 | 2021-04-13 | Namusoft Co., Ltd. | Method and system for blocking phishing or ransomware attack |
US11936604B2 (en) | 2016-09-26 | 2024-03-19 | Agari Data, Inc. | Multi-level security analysis and intermediate delivery of an electronic message |
US11595354B2 (en) | 2016-09-26 | 2023-02-28 | Agari Data, Inc. | Mitigating communication risk by detecting similarity to a trusted message contact |
US11044267B2 (en) | 2016-11-30 | 2021-06-22 | Agari Data, Inc. | Using a measure of influence of sender in determining a security risk associated with an electronic message |
US11722513B2 (en) | 2016-11-30 | 2023-08-08 | Agari Data, Inc. | Using a measure of influence of sender in determining a security risk associated with an electronic message |
US11102245B2 (en) * | 2016-12-15 | 2021-08-24 | Inierwise Ltd. | Deception using screen capture |
US11645383B2 (en) | 2017-01-11 | 2023-05-09 | Morphisec Information Security 2014 Ltd. | Early runtime detection and prevention of ransomware |
US9912695B1 (en) * | 2017-04-06 | 2018-03-06 | Qualcomm Incorporated | Techniques for using a honeypot to protect a server |
US11722497B2 (en) | 2017-04-26 | 2023-08-08 | Agari Data, Inc. | Message security assessment using sender identity profiles |
US11019076B1 (en) | 2017-04-26 | 2021-05-25 | Agari Data, Inc. | Message security assessment using sender identity profiles |
US11271967B2 (en) * | 2017-05-02 | 2022-03-08 | International Business Machines Corporation | Methods and systems for cyber-hacking detection |
US20180324213A1 (en) * | 2017-05-02 | 2018-11-08 | International Business Machines Corporation | Methods and systems for cyber-hacking detection |
US11102244B1 (en) * | 2017-06-07 | 2021-08-24 | Agari Data, Inc. | Automated intelligence gathering |
US20240089285A1 (en) * | 2017-06-07 | 2024-03-14 | Agari Data, Inc. | Automated responsive message to determine a security risk of a message sender |
US11757914B1 (en) * | 2017-06-07 | 2023-09-12 | Agari Data, Inc. | Automated responsive message to determine a security risk of a message sender |
US20180373868A1 (en) * | 2017-06-25 | 2018-12-27 | ITsMine Ltd. | Utilization of deceptive decoy elements to identify data leakage processes invoked by suspicious entities |
US11687650B2 (en) | 2017-06-25 | 2023-06-27 | ITsMine Ltd. | Utilization of deceptive decoy elements to identify data leakage processes invoked by suspicious entities |
US11093611B2 (en) * | 2017-06-25 | 2021-08-17 | ITsMine Ltd. | Utilization of deceptive decoy elements to identify data leakage processes invoked by suspicious entities |
US10419480B1 (en) | 2017-08-24 | 2019-09-17 | Amdocs Development Limited | System, method, and computer program for real-time cyber intrusion detection and intruder identity analysis |
US10887346B2 (en) * | 2017-08-31 | 2021-01-05 | International Business Machines Corporation | Application-level sandboxing |
US10574698B1 (en) * | 2017-09-01 | 2020-02-25 | Amazon Technologies, Inc. | Configuration and deployment of decoy content over a network |
US10587652B2 (en) * | 2017-11-29 | 2020-03-10 | International Business Machines Corporation | Generating false data for suspicious users |
US11750652B2 (en) | 2017-11-29 | 2023-09-05 | International Business Machines Corporation | Generating false data for suspicious users |
US10958687B2 (en) | 2017-11-29 | 2021-03-23 | International Business Machines Corporation | Generating false data for suspicious users |
US11050769B2 (en) * | 2018-02-05 | 2021-06-29 | Bank Of America Corporation | Controlling dynamic user interface functionality using a machine learning control engine |
US11785044B2 (en) | 2018-06-07 | 2023-10-10 | Intsights Cyber Intelligence Ltd. | System and method for detection of malicious interactions in a computer network |
US20190379694A1 (en) * | 2018-06-07 | 2019-12-12 | Intsights Cyber Intelligence Ltd. | System and method for detection of malicious interactions in a computer network |
US11611583B2 (en) * | 2018-06-07 | 2023-03-21 | Intsights Cyber Intelligence Ltd. | System and method for detection of malicious interactions in a computer network |
EP3594841A1 (en) * | 2018-07-09 | 2020-01-15 | Juniper Networks, Inc. | Real-time signatureless malware detection |
US10733297B2 (en) | 2018-07-09 | 2020-08-04 | Juniper Networks, Inc. | Real-time signatureless malware detection |
US10404747B1 (en) * | 2018-07-24 | 2019-09-03 | Illusive Networks Ltd. | Detecting malicious activity by using endemic network hosts as decoys |
US10382483B1 (en) * | 2018-08-02 | 2019-08-13 | Illusive Networks Ltd. | User-customized deceptions and their deployment in networks |
US10601868B2 (en) | 2018-08-09 | 2020-03-24 | Microsoft Technology Licensing, Llc | Enhanced techniques for generating and deploying dynamic false user accounts |
US11212312B2 (en) | 2018-08-09 | 2021-12-28 | Microsoft Technology Licensing, Llc | Systems and methods for polluting phishing campaign responses |
US11159567B2 (en) * | 2018-08-11 | 2021-10-26 | Microsoft Technology Licensing, Llc | Malicious cloud-based resource allocation detection |
US11374971B2 (en) | 2018-08-24 | 2022-06-28 | Micro Focus Llc | Deception server deployment |
US10432665B1 (en) * | 2018-09-03 | 2019-10-01 | Illusive Networks Ltd. | Creating, managing and deploying deceptions on mobile devices |
US11075931B1 (en) * | 2018-12-31 | 2021-07-27 | Stealthbits Technologies Llc | Systems and methods for detecting malicious network activity |
US11057428B1 (en) * | 2019-03-28 | 2021-07-06 | Rapid7, Inc. | Honeytoken tracker |
US11057429B1 (en) * | 2019-03-29 | 2021-07-06 | Rapid7, Inc. | Honeytoken tracker |
CN111917691A (en) * | 2019-05-10 | 2020-11-10 | 张长河 | WEB dynamic self-adaptive defense system and method based on false response |
US11934948B1 (en) | 2019-07-16 | 2024-03-19 | The Government Of The United States As Represented By The Director, National Security Agency | Adaptive deception system |
US11086991B2 (en) * | 2019-08-07 | 2021-08-10 | Advanced New Technologies Co., Ltd. | Method and system for active risk control based on intelligent interaction |
US11429711B2 (en) * | 2019-11-25 | 2022-08-30 | Dell Products L.P. | Method and system for user induced password scrambling |
US11483318B2 (en) * | 2020-01-07 | 2022-10-25 | International Business Machines Corporation | Providing network security through autonomous simulated environments |
EP4123488A4 (en) * | 2020-04-28 | 2023-12-13 | Siemens Aktiengesellschaft | Malicious intrusion detection method, apparatus, and system, computing device, medium, and program |
WO2021225650A1 (en) * | 2020-05-05 | 2021-11-11 | Tigera, Inc. | Detecting malicious activity in a cluster |
US20220109692A1 (en) * | 2020-10-05 | 2022-04-07 | Sap Se | Automatic generation of deceptive api endpoints |
US11729213B2 (en) * | 2020-10-05 | 2023-08-15 | Sap Se | Automatic generation of deceptive API endpoints |
US20220158992A1 (en) * | 2020-11-13 | 2022-05-19 | Cyberark Software Ltd. | Native remote access to target resources using secretless connections |
US11552943B2 (en) * | 2020-11-13 | 2023-01-10 | Cyberark Software Ltd. | Native remote access to target resources using secretless connections |
US20220222356A1 (en) * | 2021-01-14 | 2022-07-14 | Bank Of America Corporation | Generating and disseminating mock data for circumventing data security breaches |
US11880472B2 (en) * | 2021-01-14 | 2024-01-23 | Bank Of America Corporation | Generating and disseminating mock data for circumventing data security breaches |
US11824874B2 (en) * | 2021-01-20 | 2023-11-21 | Vmware, Inc. | Application security enforcement |
US20220232020A1 (en) * | 2021-01-20 | 2022-07-21 | Vmware, Inc. | Application security enforcement |
CN113422779A (en) * | 2021-07-02 | 2021-09-21 | 南京联成科技发展股份有限公司 | Active security defense system based on centralized management and control |
CN113965409A (en) * | 2021-11-15 | 2022-01-21 | 北京天融信网络安全技术有限公司 | Network trapping method and device, electronic equipment and storage medium |
US20230164184A1 (en) * | 2021-11-23 | 2023-05-25 | Zscaler, Inc. | Cloud-based deception technology with auto-decoy and breadcrumb creation |
Also Published As
Publication number | Publication date |
---|---|
US10270807B2 (en) | 2019-04-23 |
US20180212995A1 (en) | 2018-07-26 |
WO2017013589A1 (en) | 2017-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170134423A1 (en) | Decoy and deceptive data object technology | |
US10834108B2 (en) | Data protection in a networked computing environment | |
US10567432B2 (en) | Systems and methods for incubating malware in a virtual organization | |
US20180309787A1 (en) | Deploying deception campaigns using communication breadcrumbs | |
Pandeeswari et al. | Anomaly detection system in cloud environment using fuzzy clustering based ANN | |
US9294442B1 (en) | System and method for threat-driven security policy controls | |
US11163878B2 (en) | Integrity, theft protection and cyber deception using a deception-based filesystem | |
US11228612B2 (en) | Identifying cyber adversary behavior | |
US20180191779A1 (en) | Flexible Deception Architecture | |
Sharma et al. | Advanced persistent threats (apt): evolution, anatomy, attribution and countermeasures | |
US20170359376A1 (en) | Automated threat validation for improved incident response | |
Stytz et al. | Toward attaining cyber dominance | |
US11750634B1 (en) | Threat detection model development for network-based systems | |
Al-Mohannadi et al. | Analysis of adversary activities using cloud-based web services to enhance cyber threat intelligence | |
Alsmadi | Cyber threat analysis | |
Pitropakis et al. | If you want to know about a hunter, study his prey: detection of network based attacks on KVM based cloud environments | |
Gupta et al. | System cum program-wide lightweight malicious program execution detection scheme for cloud | |
Buzzio-Garcia | Creation of a high-interaction honeypot system based-on docker containers | |
Gnatyuk et al. | Cloud-Based Cyber Incidents Response System and Software Tools | |
Wahid et al. | Anti-theft cloud apps for android operating system | |
Chaplinska | A purple team approach to attack automation in the cloud native environment | |
Jolkkonen | Cloud Asset Identification Strategy | |
WO2017187379A1 (en) | Supply chain cyber-deception | |
Mwendwa | A Honeypot based malware analysis tool for SACCOs in Kenya | |
Oginga | A model for detecting information technology infrastructure policy violations in a cloud environment
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CYMMETRIA, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SYSMAN, DEAN;EVRON, GADI;GOLDBERG, IMRI;AND OTHERS;REEL/FRAME:041208/0267 Effective date: 20170125 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
AS | Assignment |
Owner name: CHUKAR, LLC, COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYMMETRIA, INC.;REEL/FRAME:051255/0951 Effective date: 20191107 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |