WO2017187379A1 - Supply chain cyber-deception - Google Patents
- Publication number
- WO2017187379A1 (PCT/IB2017/052439)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- deception
- protected network
- external
- data objects
- access
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
- H04L63/1491—Countermeasures against malicious traffic using deception as countermeasure, e.g. honeypots, honeynets, decoys or entrapment
Definitions
- the present invention, in some embodiments thereof, relates to detecting potential security risks in one or more external endpoints communicating with a protected network, and, more specifically, but not exclusively, to detecting such risks by monitoring interaction between deception data objects deployed in an access client used to access the protected network and deception applications executed at the protected network.
- advanced attackers may gain control over other network(s) communicating with the certain network and use communication and/or interaction means between the networks to access and penetrate the certain network.
- the advanced attackers may typically operate in a staged manner: first collecting intelligence about the target organizations, networks, services and/or systems, then initiating an initial penetration of the target, performing lateral movement and escalation within the target network and/or services, taking actions on detected objectives and finally leaving the target while covering their tracks.
- Each of the staged approach steps involves tactical iterations through what is known in the art as the observe, orient, decide, act (OODA) loop. This tactic may be most useful for attackers who face an unknown environment and therefore begin by observing their surroundings, orienting themselves, then deciding on a course of action and carrying it out.
- SUMMARY
- a computer implemented method of detecting unauthorized access to a protected network from other networks comprising:
- One or more deception resources created in the protected network map one or more of the plurality of resources.
- Detecting usage of data contained in one or more of a plurality of deception data objects deployed in the one or more access clients by monitoring an interaction triggered by one or more of the deception data objects with the one or more deception resources when used.
- Detecting potential malicious activity at the external endpoint(s) may facilitate an expansion of the protection provided and/or supported by the protected network to the external endpoints and/or external network(s). While the protected network may be highly protected, the external endpoint(s) may typically be less protected; as part of the communication the protected network holds with the external endpoint(s), the protected network may take advantage of the protection means at its disposal to detect security risks, threats and/or potential malicious activity at the external endpoint(s). Monitoring the interaction triggered by the deception data objects deployed in the access client may allow detection of the security threats to which the external endpoint(s) may be exposed even when the external endpoint(s) are not controlled from the protected network, i.e. when installing protection means in the external endpoint(s) is not possible.
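- As a minimal illustration of this monitoring concept (not the claimed implementation), the sketch below assumes a decoy-side registry of planted secrets and treats any access attempt that presents one of them as a potential unauthorized operation; all names (DECEPTION_TOKENS, AccessAttempt, report_potential_unauthorized_operation) are hypothetical.

```python
# Sketch only: decoy-side check that treats any use of a planted deception secret
# as a potential unauthorized operation. Names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

# Deception data objects deployed in access clients, keyed by the secret they contain.
DECEPTION_TOKENS = {
    "Xy7!decoy-pass": {"campaign": "supply-chain-A", "planted_in": "vendor-rdp-client"},
}

@dataclass
class AccessAttempt:
    source_ip: str
    username: str
    secret: str
    target_resource: str

def report_potential_unauthorized_operation(attempt: AccessAttempt, token: dict) -> None:
    # In practice this would feed the campaign manager / alerting pipeline.
    print(f"[{datetime.now(timezone.utc).isoformat()}] potential unauthorized operation: "
          f"{attempt.username}@{attempt.source_ip} used a breadcrumb planted in "
          f"{token['planted_in']} against {attempt.target_resource}")

def inspect_attempt(attempt: AccessAttempt) -> bool:
    """Return True if the attempt used a deception data object."""
    token = DECEPTION_TOKENS.get(attempt.secret)
    if token is None:
        return False  # not a planted secret; handled by normal access control instead
    report_potential_unauthorized_operation(attempt, token)
    return True
```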
- a system for detecting unauthorized access to a protected network from other networks comprising a program store storing a code and one or more processors of an endpoint of a protected network, coupled to the program store for executing the stored code, the code comprising:
- One or more deception resources created in the protected network maps one or more of the plurality of resources.
- Code instructions to identify one or more potential unauthorized operations based on analysis of the detection are stored in a computer readable media.
- a computer implemented method of creating in a protected network a deception environment for accesses from external endpoints comprising:
- one or more deception resources created in the protected network maps one or more of the plurality of resources.
- the plurality of deception data objects are configured to trigger an interaction with the one or more deception resources when used.
- Adapting the deception environment emulating the real processing environment of the protected network, in particular the deception data objects, according to the communication and/or interaction the other network(s) hold with the protected network may establish a genuine view of the deception environment for the potential attacker(s). This may lead the potential attacker(s) to believe the deception environment is in fact the real (genuine) processing environment.
- Deploying the created deception data object(s) in the access client may allow overcoming the inability to directly and/or actively install software modules in general and protection means in particular in the external endpoint(s) environment.
- a software product comprising:
- One or more deception resources created in the protected network maps one or more of the plurality of resources.
- Second program instructions to create a plurality of deception data objects according to the monitored communication, wherein the plurality of deception data objects are configured to trigger an interaction with the one or more deception resources when used.
- the interaction between one or more of the plurality of deception data objects and the one or more deception resources is indicative of one or more potential unauthorized operations.
- first, second and third program instructions are executed by one or more processors of the external endpoint from the non-transitory computer readable storage medium.
- Creating and/or deploying the deception data objects at one or more of the external endpoints using a proprietary software module may significantly increase scalability of the deception environment as the deception data objects may be created locally for each endpoint, each access client and/or each user.
- the deception data object(s) creation and/or deployment capabilities may typically be integrated within a proprietary access client provided to the external user(s) accessing the protected network from the external endpoint(s).
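- A hedged sketch of such an integrated capability is shown below, assuming the proprietary access client exposes a start-up hook; the file name, fields and paths are illustrative only.

```python
# Sketch only: a module integrated in a proprietary access client drops a deception
# data object (a fake saved-connection file) locally at the external endpoint.
import json
from pathlib import Path

def deploy_local_breadcrumb(client_config_dir: str, decoy_host: str, fake_user: str) -> Path:
    """Create a deception credentials file inside the access client's own profile."""
    target = Path(client_config_dir).expanduser() / "saved_connections.json"
    breadcrumb = {
        "host": decoy_host,            # points at a deception resource, not a real one
        "username": fake_user,
        "password": "Xy7!decoy-pass",  # matches a token registered at the protected network
        "last_used": "2017-01-15T09:12:00Z",
    }
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(breadcrumb, indent=2))
    return target

# Example: called by the access client at start-up.
# deploy_local_breadcrumb("~/.vendor_client", "erp-decoy.example.internal", "svc_backup")
```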
- a computer implemented method comprising:
- the one or more access clients is executed by the external endpoint for accessing one or more of a plurality of resources of the protected network.
- One or more deception resources created in the protected network maps one or more of the plurality of resources.
- the plurality of deception data objects are configured to trigger an interaction with the one or more deception resources when used.
- the interaction between one or more of the plurality of deception data objects and the one or more deception resources is indicative of one or more potential unauthorized operations.
- each of the plurality of resources is a member of a group consisting of: an endpoint, a data resource, an application, a tool, a service and/or a website.
- each resource is a local resource and/or a cloud resource.
- the deception environment may be adapted to emulate various types, deployments, structures and/or features of the protected network.
- the access of the one or more external endpoints to the one or more resources includes one or more of: retrieving information and manipulating information.
- the external endpoint(s) interact with the protected network for accessing one or more resources provided by the protected network.
- the external endpoint(s) may have various privileges, access rights and/or data manipulation rights such as read only, read and write, retrieve and/or the like.
- one or more of the external endpoints is used by a supplier of an organization utilizing the protected network.
- the external endpoint(s) are used by one or more suppliers (part of a supply chain), vendors, 3rd party service providers and/or the like that work with an organization utilizing the protected network.
- the one or more deception resources is provided by at least one decoy endpoint which is a member selected from a group consisting of: a physical device comprising one or more processors and a virtual machine.
- the virtual machine is hosted by one or more members of a group consisting of: a local endpoint, a cloud service and a vendor service.
- the deception environment created to protect the protected network may be based on physical endpoints, virtual endpoints and/or a combination thereof.
- the deployment of the deception environment may be highly flexible allowing usage of local resources as well as cloud resources and/or a combination thereof.
- one or more of the plurality of deception data objects deployed in the one or more access clients is created at the protected network according to the monitored communication.
- Adapting the deception environment emulating the real processing environment of the protected network, in particular the deception data objects, according to the communication and/or interaction the other network(s) hold with the protected network may establish a genuine view of the deception environment for the potential attacker(s). This may lead the potential attacker(s) to believe the deception environment is in fact the real (genuine) processing environment.
- one or more of the plurality of deception data objects deployed in the one or more access clients is created at the external endpoint according to the monitored communication.
- when the access client is a proprietary tool provided to the external endpoint(s) by, for example, the vendor and/or the owner of the protected network, deception creation capabilities may be installed in the proprietary access client. This may allow creating one or more of the deception data objects locally for each of the external endpoints, each of the access clients and/or each of the external users.
- each of the plurality of deception data objects emulates a valid data object used for interacting with one or more of the plurality of resources.
- the deception data objects are adapted to emulate real (valid) data objects to efficiently and effectively impersonate the real processing environment.
- each of the plurality of deception data objects is a member of a group consisting of: a browser cookie, a history log record, an account, a credentials object, a configuration file for remote desktop authentication credentials, a JavaScript and a deception file.
- the deception data objects may be adapted to emulate a variety of data objects typically used during interaction of the access clients with the protected network resources, services and/or applications. This may allow further flexibility in deploying the deception data objects in the access client(s).
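- For illustration only, the sketch below generates simplified examples of three of the listed object types (a browser cookie, a history log record and a remote-desktop configuration file); the formats are deliberately simplified and the decoy host names are assumptions.

```python
# Illustration only: simplified generators for three deception data object types.
from datetime import datetime, timezone

def make_cookie(decoy_host: str) -> str:
    # A Set-Cookie style line an access client could keep in its cookie store.
    return f"session_id=deadbeef42; Domain={decoy_host}; Path=/; Secure; HttpOnly"

def make_history_record(decoy_url: str) -> dict:
    # A browsing-history style record pointing at a deception website.
    return {"url": decoy_url, "title": "Finance portal",
            "visited": datetime.now(timezone.utc).isoformat()}

def make_rdp_file(decoy_host: str, fake_user: str) -> str:
    # Minimal remote-desktop style configuration pointing at a decoy endpoint.
    return (f"full address:s:{decoy_host}\n"
            f"username:s:{fake_user}\n"
            "prompt for credentials:i:0\n")

print(make_cookie("crm-decoy.example.internal"))
print(make_history_record("https://erp-decoy.example.internal/login"))
print(make_rdp_file("rdp-decoy.example.internal", "svc_backup"))
```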
- the one or more access clients is a member of a group consisting of: a web browser, a remote access agent and a proprietary client provided by an organization utilizing the protected network.
- the deception environment may be adapted according to the type of the access client used to interact with the protected network from the external endpoint(s).
- the one or more access clients accesses the protected network using an account allocated by the protected network to one or more external users using the one or more access clients to access the protected network from the one or more external endpoints, the account is accessible with credentials assigned to the one or more external users.
- accesses to the protected network are done through accounts allocated to user(s) of the protected network, either local users and/or external users accessing the protected network from the external endpoint(s).
- the account may define access rights, operation privileges and/or the like for each of the users.
- the deception environment may therefore be adapted to use the accounts for efficiently emulating the real processing environment as well as controlling deception campaigns according to account characteristic.
- the one or more external users are members of a group consisting of: a human user and an automated tool.
- the deception environment may be adapted to identify any external user accessing the protected network, either human users and/or automated tools.
- the account allocated to the one or more external users is marked differently from the account allocated to one or more internal users of the protected network. This may allow for easily distinguishing between local (internal) users of the protected network and external users accessing the protected network from the external endpoint(s). This may facilitate launching effective deception campaign(s) that may target and/or focus on the external users.
- a plurality of external users using the one or more access clients to access the protected network from the one or more external endpoints are divided into groups according to one or more activity characteristics.
- the one or more activity characteristics are identified by analyzing the communication and are members of a group consisting of: an operation initiated by the external endpoint and a type of the external endpoint. This may further increase the effectiveness of the deception campaign(s) that may target and/or focus on specific types of external users and/or on external users exhibiting certain activity characteristics.
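- A minimal sketch of such grouping, assuming session records that already carry the endpoint type and the initiated operation as fields (the field names are assumptions):

```python
# Sketch: divide external users into groups by (endpoint type, initiated operation).
from collections import defaultdict

sessions = [
    {"user": "vendor_a", "endpoint_type": "workstation",  "operation": "read"},
    {"user": "vendor_b", "endpoint_type": "build-server", "operation": "write"},
    {"user": "vendor_c", "endpoint_type": "workstation",  "operation": "read"},
]

def group_external_users(sessions):
    groups = defaultdict(set)
    for s in sessions:
        groups[(s["endpoint_type"], s["operation"])].add(s["user"])
    return groups

for key, users in group_external_users(sessions).items():
    print(key, sorted(users))  # e.g. ('workstation', 'read') ['vendor_a', 'vendor_c']
```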
- the one or more potential unauthorized operations are identified by detecting usage of data contained in one or more of the deception data objects for accessing one or more of the resources.
- the unauthorized operation(s) may be detected when usage of certain deception data objects (data) is identified even when used for accessing a resource other than the one for which the certain deception data objects (data) were originally created. This means that the usage of the deception data object(s) may be detected even when not used in the context for which they were created. This may further expand the threat detection capabilities of the protected network.
- an alert is generated at detection of the one or more potential unauthorized operations. This may allow alerting one or more parties of the detected unauthorized operation(s) that may be indicative of one or more potential threats, security risks and/or of an exposure to potential malicious attack and/or attacker.
- an alert is generated to the one or more external endpoints at detection of the one or more potential unauthorized operations. Expanding the alerted parties to the external endpoint(s) may allow one or more users (e.g. a system administrator, an IT person, etc.) and/or automated tools (e.g. a security system, etc.) of the external endpoint(s) to take one or more actions in response to the detected security threat.
- an alert is generated at detection of a combination of a plurality of potential unauthorized operations to detect a complex sequence of the interaction. This may support escalation of the alert and/or security threat notification when more complex sequences of unauthorized operations are detected.
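- The escalation logic might resemble the following sketch, which correlates detections from the same external endpoint within a sliding time window; the window length and threshold are illustrative assumptions.

```python
# Sketch: escalate only when several detections from the same external endpoint
# occur within a sliding window, i.e. a complex sequence rather than a single event.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # illustrative
THRESHOLD = 3                    # illustrative

_recent = defaultdict(deque)     # source endpoint -> timestamps of detections

def record_detection(source: str, ts: datetime) -> str:
    q = _recent[source]
    q.append(ts)
    while q and ts - q[0] > WINDOW:
        q.popleft()
    if len(q) >= THRESHOLD:
        return f"ESCALATED alert: {len(q)} potential unauthorized operations from {source}"
    return f"alert: potential unauthorized operation from {source}"
```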
- the one or more potential unauthorized operations are analyzed to identify an activity pattern. Detecting the activity pattern may allow for identifying one or more intentions of the potential attacker.
- a learning process is applied to the activity pattern to classify the activity pattern in order to detect and classify one or more future potential unauthorized operations. Classifying the activity pattern(s) may allow characterizing potential attacker(s) detected in subsequent detection events and estimate their intentions at an early stage of their penetration sequence.
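- As one possible instantiation of such a learning process (a specific learner is not prescribed here), a generic supervised classifier could be fitted on feature vectors derived from observed activity patterns; the features and labels below are invented for illustration.

```python
# Sketch: fit a generic supervised classifier on feature vectors extracted from
# observed activity patterns; scikit-learn is used purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each pattern: [distinct resources touched, breadcrumbs used, off-hours accesses]
X = [[1, 1, 0], [8, 3, 1], [2, 1, 0], [12, 5, 1]]
y = ["reconnaissance", "lateral_movement", "reconnaissance", "lateral_movement"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[9, 4, 1]]))  # classify a newly detected activity pattern
```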
- the software product is integrated in the one or more access clients. This may allow at least some level of control over the access client executed at the external endpoint(s) to access the protected network.
- the supported control may allow locally creating and/or deploying the deception data objects in the access client at the external endpoint(s). This may further support scalability as the creation and/or deployment of the deception data objects is distributed among the external endpoint(s) rather than by the protected network itself.
- Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
- a data processor such as a computing platform for executing a plurality of instructions.
- the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
- a network connection is provided as well.
- a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
- FIG. 1 is a flowchart of an exemplary process for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention
- FIG. 2A is a schematic illustration of an exemplary first embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention
- FIG. 2B is a schematic illustration of an exemplary second embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention
- FIG. 2C is a schematic illustration of an exemplary third embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention
- FIG. 2D is a schematic illustration of an exemplary fourth embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention
- FIG. 3 is a block diagram of exemplary building blocks of a deception environment for detecting potential security risks in one or more external endpoints communicating with a protected network, according to some embodiments of the present invention.
- FIG. 4 is a flowchart of an exemplary process for creating deception data objects deployed in an access client accessing a protected network from an external endpoint, according to some embodiments of the present invention.
- the present invention in some embodiments thereof, relates to detecting potential security risks in one or more external endpoints communicating with a protected network, and, more specifically, but not exclusively, to detecting potential security risks in one or more external endpoints communicating with a protected network by monitoring interaction between deception data objects deployed in an access client used to access the protected network and deception applications executed at the protected network.
- an external endpoint, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, and/or any network connected device may be part of a network external to (not part of) the protected network.
- the protected network may be, for example, an organization network, an institution network and/or the like while the external endpoint(s) may be used by, for example, a supply chain vendor (supplier), a 3rd party, a services provider and/or the like.
- the protected network may be a network of a first organization, typically a mature and well protected organization while the external endpoint(s) may be part of another network of a second organization (company, firm, etc.) acquired by the first organization.
- the second organization which may maintain an independent operations structure, at least until fully assimilated within the first organization, may be less protected than the first organization. In such case, while maintaining inter-operation between the two networks, the protection provided by the protected network (the first organization) may be expanded to the external endpoint(s) of the other network (the second organization).
- Detection of potential security risk(s) in the external endpoint(s) and/or of a potential attacker(s) trying to access the protected network from the external endpoint(s) is based on creating a deception environment within the protected network.
- the deception environment is created, maintained and monitored through one or more deception campaigns and comprises a plurality of deception components such as deception resources, for example, decoy operating system(s) (OS) and/or deception applications created and launched in the protected network to emulate the resources, for example, endpoints, services, applications, data, websites and/or the like provided by the protected network.
- the deception environment co-exists with a real (valid) processing environment of the protected network while separated from the real processing environment.
- the security risk(s) detection is based on motivating the potential attacker(s) to access the deception environment by creating and deploying deception data objects (breadcrumbs), in access client(s) executed at the external endpoint(s) to access, connect, communicate and/or interact with one or more of the resources of the protected network.
- deception data objects may be created and deployed at the protected network, at the external endpoint executing the access client and/or a combination thereof.
- the deception data objects may be created according to one or more activity characteristics identified for the user(s) using the access client(s) by monitoring and analyzing the communication between the access client(s) and the protected network resources.
- the deception data objects, for example, credential files, password files, "cookies", history log entries, access protocols, accounts, archive files and/or the like are configured to interact with the deception resources emulating the accessed resources of the protected network. While adapted to emulate valid corresponding data objects, the deception data objects are configured, when used, to interact with the deception resource(s) instead of interacting with the real resources provided by the protected network.
- the deception data objects may be further configured to appeal to the potential attacker accessing the protected network from the external endpoint(s) while being significantly transparent to legitimate users.
- the interaction between the deception data objects and the deception applications is continuously monitored. Detection of such interaction may typically result from attempted unauthorized operation(s) in the protected network and may therefore indicate a potential attacker, i.e. that the external endpoint may be compromised and subject to security risk(s).
- Usage of data contained in the deception data objects may be further monitored during interaction with other resources of the protected network, i.e. resources of the protected network that the deception data objects were not originally created to interact with. For example, a fake password deception data object may be created to emulate a password for a first service of the protected network. However, usage of the created fake password deception data object may be detected for accessing a second service of the protected network.
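- A minimal sketch of this context-independent detection: the registry records which service each deception credential was created for, yet flags its use against any service and reports the mismatch (all values are illustrative).

```python
# Sketch: a planted secret is registered with the service it was created for,
# yet any use of it -- against any service -- is flagged and the mismatch reported.
PLANTED = {"Xy7!decoy-pass": "first-service.example.internal"}

def check_usage(secret: str, accessed_service: str):
    created_for = PLANTED.get(secret)
    if created_for is None:
        return None  # not a deception credential
    note = "" if accessed_service == created_for else f" (created for {created_for})"
    return f"deception credential used against {accessed_service}{note}"

print(check_usage("Xy7!decoy-pass", "second-service.example.internal"))
```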
- one or more of the deception data objects may be updated periodically and/or dynamically to improve emulation and impersonation of the deception environment as the real processing environment.
- an organizational network that may be a protected network may be accessed by one or more external endpoints, for example, the supply chain vendor (the supplier), the 3rd party, the services provider and/or the like.
- the protected network may be the network of the mature, well protected organization while the external endpoint may be part of the network of the acquired organization.
- access means for example, accounts, passwords, protocols and/or the like may be granted to external user(s) (human users and/or automated users) using the external endpoint(s).
- the external endpoint(s) may not be protected and may therefore be subject to security risks.
- since the external endpoint(s) are not part of the protected network and are typically not controlled from the protected network, installing security mechanisms, detection means and/or the like in the external endpoint(s) may be impossible.
- By enabling the protected network to detect the security risk(s) in the external endpoint(s), such security risks and/or potential penetration of attacker(s) to the external endpoint(s) may be detected without actively and/or directly installing the security mechanisms and/or detection means in the external endpoint(s). This may increase the security circle of the organization itself (the protected network) as well as serve to alert the supply chain vendors (the external endpoint) of the potential security threat(s) they may be exposed to.
- the deception data objects may better emulate the real processing environment thus may appear genuine even to advanced and sophisticated attackers.
- Periodically and/or dynamically updating the data deception objects may further increase the genuine appearance of the deception data objects.
- Dividing the external user(s) to groups that may be targeted with different deception policies and/or parameters may further increase the genuine appearance of the deception data objects on one hand while allowing for improved detection on the other hand.
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non- exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
- FIG. 1 is a flowchart of an exemplary process for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention.
- a process 100 is executed at a protected network to monitor and analyze communication of the protected network with one or more external endpoints in order to detect potential exposure of the external endpoint(s) to unauthorized operation(s) and/or malicious attack(s).
- the protected network may be, for example, an organization network, an institution network and/or the like while the external endpoint(s) may be used by, for example, a supply chain vendor (supplier), a 3rd party, a services provider and/or the like.
- the external endpoint(s) may further be part of one or more other networks external to (not part of) the protected network.
- the protected network is owned by the organization, i.e. is endemic to the organization.
- the external endpoint(s) may communicate with the protected network for accessing one or more resources, for example, endpoints, services, websites, data resources, applications and/or the like of the protected network.
- the external endpoint(s) may not be controlled from the protected network and hence may not be protected since protection components may not be actively and/or directly installed by the protected network in the external endpoint(s).
- the process 100 allows initiating deception campaigns to extend protection of the protected network to the external endpoint(s) by creating a deception environment that emulates a real processing environment of the protected network while co-existing with the real processing environment.
- the deception campaigns may be launched to create, maintain and monitor the deception environment.
- One or more external users, either human users and/or automated tools, may use one or more access clients executed by the external endpoint(s) to access, connect, communicate and/or interact with one or more of the resources of the protected network.
- the external user(s) using the access client(s) may use one or more (user) accounts created for them within the protected network to allow the external user(s) to access the protected network.
- the deception environment comprises several deception components, for example, one or more deception resources such as, for example, a decoy OS, a deception application, a deception service, a deception website, a deception database and/or the like adapted according to the characteristics of resources of the protected network, for example, OS(s), applications, services, data resources, websites and/or the like.
- the deception resources may be launched on one or more physical and/or virtual decoy endpoints.
- the deception components further comprise a plurality of deception data objects (breadcrumbs) which may be configured to interact with the deception resources.
- one or more deception data objects may be deployed in the access client(s) used by the external users from the external endpoint(s).
- the deception data objects are typically of the same type(s) as valid data objects used to interact with the real resources available at the protected network such that the deception environment efficiently emulates and/or impersonates the real processing environment of the protected network and/or a part thereof.
- When used, instead of interacting with the real resources, the deception data objects may interact with the deception resource(s) respectively. Therefore, analyzing the interaction of the deception data object(s) may reveal the potential unauthorized operation(s) and/or malicious attack(s) since the use of the deception data object(s) may be indicative of a potential attacker. This may allow expanding the deception environment to the external endpoint(s) and detecting potential security risks in the external endpoint(s) which may typically be uncontrolled and/or unprotected.
- the deception data objects may be updated periodically to avoid stagnancy and to genuinely mimic a real and dynamic environment with the deception data objects appearing as valid data objects such that the potential attacker believes the emulated deception environment is a real (valid) one.
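- A possible refresh routine is sketched below: it rotates the planted secret and bumps a "last used" timestamp while keeping the protected-network registry in sync; the interval and field names are assumptions.

```python
# Sketch: periodic refresh of a deployed deception data object to avoid stagnancy.
import secrets
from datetime import datetime, timezone

def refresh_breadcrumb(breadcrumb: dict, registry: dict) -> dict:
    old_secret = breadcrumb["password"]
    new_secret = "decoy-" + secrets.token_urlsafe(8)
    breadcrumb["password"] = new_secret
    breadcrumb["last_used"] = datetime.now(timezone.utc).isoformat()
    # Keep the protected-network registry in sync so the rotated value still detects.
    registry[new_secret] = registry.pop(old_secret, {"campaign": "unknown"})
    return breadcrumb

# In practice this would run on a schedule (e.g. nightly) for every deployed object.
```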
- the deception campaign(s) may target one or more groups of the external users according to one or more typical activity characteristics allowed for the external users using the access client(s) within the protected network, for example, type of the external endpoint, operations allowed for users of the external endpoint and/or the like.
- the deception data objects may be adapted for one or more of the external users accessing the protected network from the external endpoint(s) according to their activity characteristics.
- FIG. 2A, FIG. 2B, FIG. 2C and FIG. 2D are exemplary embodiments of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in external endpoint(s), according to some embodiments of the present invention.
- One or more exemplary systems 200A, 200B, 200C and/or 200D may be used to execute a process such as the process 100 to launch one or more deception campaigns for creating the deception environment in a protected network 235 in order to detect and/or alert of potential security risks in external endpoint(s) 251 by monitoring and analyzing the communication and/or interaction of the external endpoint(s) 251 with the deception environment.
- One or more of the external endpoints 251 may be part of one or more external networks 250, i.e. networks which are not part of the protected network 235. While co-existing with the real processing environment of the protected network 235, the deception environment is separated from the real processing environment to maintain partitioning between the deception environment and the real processing environment.
- the systems 200A, 200B, 200C and/or 200D include the protected network 235 that comprises a plurality of endpoints 220 connected to a network 230 facilitated through one or more network infrastructures, for example, a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a metropolitan area network (MAN) and/or the internet 240.
- the protected network 235 may be a physical protected network that may be a centralized single location network where all the endpoints 220 are on premises or a distributed network in which the endpoints 220 may be located at multiple physical and/or geographical locations or sites.
- the endpoint 220 may be a physical device, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, a modem, a hub, a bridge, a switch, a router, a printer and/or any network connected device having one or more processors.
- the endpoint 220 may further be a virtual device, for example, a virtual machine (VM) executed by one or more of the physical devices.
- the virtual device may provide an abstracted and platform-dependent and/or independent program execution environment. The virtual device may imitate operation of the dedicated hardware components, operate in a physical system environment and/or operate in a virtualized system environment.
- the virtual devices may be utilized as system VMs, process VMs, application VMs and/or other virtualized implementations.
- Each of the endpoints 220 may provide one or more (real) resources 222, for example, an OS, an application, a service, a website, a utility, a tool, a process, an agent, a data resource, a data record, a storage resource and/or the like.
- the virtual endpoints 220 may also be instantiated through one or more cloud services 245, for example, Amazon Web Service (AWS), Google Cloud, Microsoft Azure and/or the like.
- the virtual endpoints 220 may also be provided as a service through one or more hosted services available by the cloud service(s) 245, for example, software as a service (SaaS), platform as a service (PaaS), Network as a Service (Naas) and/or the like.
- the protected network 235 may further be a virtual protected network hosted by one or more cloud services 245.
- the protected network 235 may also be a combination of the physical protected network and the virtual protected network.
- the physical protected network 235 as implemented in the system 200A further includes one or more decoy servers 201, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, an endpoint and/or the like serving as a decoy endpoint. Additionally and/or alternatively, the decoy endpoint is utilized through one or more of the endpoints 220.
- the decoy server 201 as well as each of the endpoints 220 comprises a processor(s), a program store and a network interface for connecting to the network 230.
- the decoy server 201 and/or the endpoint(s) 220 include a user interface for interacting with one or more users 260, for example, an information technology (IT) person, a system administrator and/or the like.
- the processor(s), homogenous or heterogeneous, may include one or more processing nodes arranged for parallel processing, as clusters and/or as one or more multi core processor(s).
- the user interface may include one or more human-machine interfaces, for example, a text interface, a pointing devices interface, a display, a touchscreen, an audio interface and/or the like.
- the program store may include one or more non-transitory persistent storage devices, for example, a hard drive, a Flash array and/or the like.
- the program store may further comprise one or more network storage devices, for example, a storage server, a network accessible storage (NAS), a network drive, and/or the like.
- the program store may also include one or more volatile devices, for example, a Random Access Memory (RAM) component and/or the like.
- the program store may be used for storing one or more software modules each comprising a plurality of program instructions that may be executed by the processor(s) from the program store.
- the software modules may include, for example, one or more deception resources 210, for example, a decoy OS, a deception application, a deception service, a deception website, a deception database and/or the like that may be created, configured and/or executed by the processor(s) to form a deception environment emulating a (real) processing environment within the protected network 235.
- the deception resources 210 may be executed by the decoy server 201 in a naive implementation as shown for the system 200A and/or over one or more nested decoy VMs 203 serving as the decoy endpoint(s) hosted by the decoy endpoint 220A as shown for the system 200B.
- the decoy VM(s) 203 serving as the decoy endpoint(s) may be instantiated through a virtualization infrastructure over one or more hosting endpoints such as the decoy server 201 and/or endpoint 220A.
- the virtualization infrastructure may utilize, for example, Elastic Sky X (ESXi), XEN, Kernel-based Virtual Machine (KVM) and/or the like.
- the user 260 may interact with the campaign manager 216 and/or the deception resources 210 through the user interface of the hosting endpoint(s).
- the user 260 may use one or more applications, for example the local agent, the web browser and/or the like executed on one or more of the endpoints 220 to interact remotely over the network 230 with the campaign manager 216 executed by the hosting endpoint(s).
- one or more of the other endpoints 220 executes the campaign manager 216 that interacts over the network 230 with the hosting endpoint(s) 220A which host the deception resources 210.
- the deception environment in particular, the decoy resources may be executed and/or provided through computing resources available from the cloud service(s) 245 serving as the decoy endpoint(s).
- the deception resources 210 may be utilized as one or more decoy VMs 205 instantiated using the cloud service(s) 245 and/or through one or more hosted services 207, for example, SaaS, PaaS, Naas and/or the like that may be provided by the cloud service(s) 245.
- the protected network 235 and/or part thereof is a virtual protected network that may be hosted and/or provided through the cloud service(s) 245.
- As a growing trend, many organizations may transfer and/or set their infrastructure comprising one or more of the resources 222, for example, a webserver, a database, an internal mail server, an internal web application and/or the like, to the cloud, for example, through the cloud service(s) 245.
- the virtual protected network may be provided through the cloud service(s) 245 as one or more, for example, private networks, virtual private clouds (VPCs), private domains and/or the like.
- Each of the private cloud(s), private network(s) and/or private domain(s) may include one or more virtual endpoints 220 that may be, for example, instantiated through the cloud service(s) 245, provided as the hosted service 207 and/or the like, where each of the virtual endpoints 220 may execute one or more of the deception applications 212.
- the deception resource(s) 210 for example, the decoy OS(s) may be executed as independent instance(s) deployed directly to the cloud service(s) 245 using an account for the cloud service 245, for example, AWS VPC, provided by AWS for the organizational infrastructure.
- users of the virtual protected network 235 may remotely access, communicate and/or interact with the applications 212 by using one or more access applications 225, for example, a local agent, a local service and/or a web browser executed on one or more of the endpoints 220 and/or one or more client terminals 221.
- the client terminal(s) 221 may include, for example, a computer, a workstation, a server, a processing node, a network node, a Smartphone, a tablet, an endpoint such as the endpoint 220 and/or the like.
- the protected network 235 may be a combination of the physical network as seen in the systems 200A, 200B and/or 200C and the virtual protected network 235 as seen in the system 200D.
- the protected network 235 which may be distributed to two or more subnetworks, physical and/or virtual may form a single logical protected network 235.
- a campaign manager 216 may be executed by one or more of the endpoints 220, the decoy server 201 and/or one or more of the decoy VMs 203. Additionally and/or alternatively the campaign manager 216 may be provided by the cloud services 245. The campaign manager 216 may be used to create and/or control one or more deception campaigns to create the deception environment and monitor the interaction between the external endpoint(s) 251 and the deception environment.
- One or more users 260 for example, a system administrator, an IT person and/or the like using the campaign manager 216 may create, adjust, configure and/or launch one or more of the deception resources 210 on one or more of the decoy endpoints.
- the campaign manager 216 provides a Graphical User Interface (GUI) to allow the user(s) 260 to create, configure, launch the deception campaign(s).
- the GUI is described in detail in PCT Application No. IB2016/054306 titled "Decoy and Deceptive Data Object Technology" filed Jul. 20, 2016, the contents of which are incorporated herein by reference in their entirety.
- the user(s) 260 may interact with the campaign manager 216 according to the deployment implementation.
- the user(s) 260 may interact with the campaign manager 216 directly through the user interface, for example, a GUI utilized through one or more of the human-machine interface(s) of the decoy server 201.
- the user 260 interacts with the campaign manager 216 remotely over the network 230 using one or more access applications such as the access application 225 executed on one or more of the endpoints 220.
- the user(s) 260 may interact with the campaign manager 216 from a remote location over the internet 240 using one or more client terminals such as the client terminals 221.
- the campaign manager 216 is executed on one or more of the endpoints 220.
- the user(s) 260 may interact with the campaign manager 216 directly through the user interface of the endpoint(s) 220 executing the campaign manager 216 or remotely using the access application 225 from the endpoints 220 and/or the client terminals 221.
- when the campaign manager is provided by the cloud service(s) 245, as shown for the systems 200C and/or 200D, the user(s) may interact with the campaign manager 216 remotely using the access application 225 from the endpoints 220 and/or the client terminals 221.
- the campaign manager 216 is not executed by the same platform executing and/or providing the deception environment.
- the deception environment may be executed by the decoy server 201 and/or the decoy endpoint 220A while the campaign manager 216 is executed by one or more of the other endpoints 220. In such case the campaign manager 216 controls the deception environment over the network 230.
- the deception environment is provided by the cloud service(s) 245 as shown for the system 200C while the campaign manager 216 is executed by one or more of the endpoints 220. In such case the campaign manager 216 controls the deception environment remotely through the network 230 and/or the internet 240.
- the external endpoint(s) 251 may be for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, and/or any network connected device.
- the external endpoint 251 may further be utilized as a remote service provided by one or more of the cloud services 245 and accessed by the external user(s) from one or more client terminals such as the client terminal 221.
- One or more of the external users for example, a human user and/or an automated user may use one or more access clients 255 such as the access application 225 executed by the external endpoint(s) 251 to access the protected network 235 over the network 230, and typically through the internet 240.
- the access client 255 may further include one or more remote access tools, protocols, software packages and/or the like for remotely accessing the resources of the protected network 235, for example, Remote Desktop (RDP), Virtual Network Computing (VNC) and/or the like.
- the access client 255 is a proprietary access client 255 provided to the external user(s) by the organization utilizing the protected network 235, i.e. the "owner" and/or the vendor of the protected network 235.
- One or more of the external users accessing the protected network 235 from the external endpoint(s) 251 may be allocated (given) one or more accounts created for them for accessing one or more of the resources 222 of the protected network 235.
- the account may be a collection of data associated with a particular user of a multiuser computer system.
- Each account may typically comprise credentials, i.e. a user name and (almost always) a password, and defines one or more access privileges for the respective user, for example, a security access level, a disk storage space and/or the like.
- one or more users 260 in the organization for example, the system administrator, the IT person and/or the like are responsible for setting up and overseeing the accounts.
- the user(s) 260 may further use the campaign manager 216 to create, deploy and/or update a plurality of deception data objects 214 (breadcrumbs) deployed in one or more of the access clients 255.
- the deployed deception data objects 214 are configured to interact with respective one or more of the deception applications 212.
- the deception data objects 214 are deployed in the access client(s) 255 to tempt the potential attacker(s) attempting to access resource(s) of the protected network 235 to use the deception data objects 214.
- the deception data objects 214 are configured to emulate valid data objects that are typically used for interacting with the resource(s) 222. As discussed before, the process 100 may be executed to launch one or more deception campaigns.
- Each deception campaign may include creating, updating and monitoring the deception environment in the protected network 235 in order to detect and/or alert of potential attackers trying to penetrate the protected network 235 through access means granted to the external users accessing the protected network 235 from the external endpoint(s) 251.
- Each deception campaign may be defined according to a required deception scope and may be constructed according to one or more activity characteristics of the external user(s).
- the process 100 starts with the campaign manager 216 monitoring and analyzing communication between the access client 255 executed by the external endpoint 251 and the protected network 235.
- the campaign manager 216 monitors and analyzes the communication between the access client(s) 255 and the resource(s) 222, for example, an Enterprise Resource Planning (ERP) system, a development platform, a Human Resources (HR) system, a sales system, a finance system, an IT service, a Customer relationship management (CRM) system, a database service and/or the like.
- one or more of the external users may use an RDP application initiated from the external endpoint 251 and serving as the access client 255 to access one or more of the endpoints 220 within the protected network 235.
- the external user(s) may further access one or more of the endpoints 220 using other remote access protocols, for example, VNC such as for example, RealVNC that may be used for an open source project development.
- the access client 255 may be an access tool provided by user(s) 260 of the protected network 235, for example, the system administrator, the IT person and/or the like such that the access tool is provided by the organization which utilizes the protected network 235, i.e. the "owner" of the protected network 235.
- one or more of the external users may use the access client 255 provided by the organization utilizing the protected network 235, i.e. the "owner" of the protected network 235.
- the external users using the access client(s) 255 may interact with the resource(s) 222 using one or more accounts allocated (given) to the external user(s), whether human users and/or automated users.
- the account may be a collection of data associated with a particular user of a multiuser computer system.
- Each account may comprise credentials, i.e. a user name and (typically) a password, and defines one or more access privileges for the respective user, for example, a security access level, a disk storage space and/or the like.
- one or more of the users 260 of the protected network 235 for example, the system administrator, the IT person and/or the like create, set up and/or maintain the accounts.
- the accounts are created with one or more different attributes to differentiate between groups of users in order to create an efficient deception environment that may allow better classification of the potential security risk to the external endpoint(s) 251.
- the groups may be defined according to one or more activity characteristics of the plurality of users in the protected network 235.
- the accounts created for a group of external user(s) may be marked differently from accounts allocated to user(s) of the protected network 235.
- the accounts may be marked to differentiate between groups of external users accessing the resource(s) 210 from different external endpoints 251.
- the accounts may be marked according to action group(s) defined by the type of operation(s) allowed for the resource 222, for example, view data, manipulate data, retrieve data and/or the like.
- the accounts may be marked according to the type of department of the protected network 235 that may be accessed, for example, finance, production, development, IT, HR and/or the like. This may allow targeting the deception campaigns more effectively by adjusting the deception environment according to the account used to access the protected network 235. This may further allow the campaign manager 216 to more effectively monitor the communication with and/or within the protected network 235, as the campaign manager 216 may focus on user(s) of the external endpoint(s) 251 rather than on (internal) user(s) of the protected network 235. The campaign manager 216 may also concentrate on monitoring the communication with the external endpoint(s) 251 in which accounts that were previously suspected as compromised are used.
- the campaign manager 216 monitors the interaction of the access client(s) 255 with the respective resource(s) 222 to identify one or more activity characteristics in the communication with the external endpoint(s) 251, for example, the type of the external endpoint 251, the type of the external user, the type of interaction, the type of resource 222, the type of operation(s) executed by the external user and/or the like.
- the campaign manager 216 may further divide the external users to groups according to one or more of the activity characteristics. For example, the campaign manager 216 may create a group of users using a certain one of the external endpoints 251. In another example, the campaign manager 216 may create a group of users which access a certain one of the resources 222. In another example, the campaign manager 216 may create a group of users according to their interaction privileges, for example, allowed to perform a certain operation in the protected network 235.
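- The following is a minimal sketch of such grouping, assuming hypothetical session records that capture a few of the activity characteristics mentioned above; the field names and values are illustrative only and are not part of the described embodiments.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record of a monitored access-client session; the field names
# are illustrative assumptions, not names used by the described system.
@dataclass(frozen=True)
class SessionRecord:
    user: str
    endpoint_id: str      # external endpoint the session originates from
    resource: str         # protected-network resource accessed (e.g. "ERP")
    operation: str        # e.g. "view", "manipulate", "retrieve"

def group_users(sessions, key):
    """Divide external users into groups by a chosen activity characteristic."""
    groups = defaultdict(set)
    for s in sessions:
        groups[getattr(s, key)].add(s.user)
    return dict(groups)

if __name__ == "__main__":
    sessions = [
        SessionRecord("alice", "ep-1", "ERP", "view"),
        SessionRecord("bob", "ep-1", "CRM", "manipulate"),
        SessionRecord("carol", "ep-2", "ERP", "retrieve"),
    ]
    print(group_users(sessions, "endpoint_id"))  # users per external endpoint
    print(group_users(sessions, "resource"))     # users per accessed resource
```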
- the campaign manager 216 creates the deception data objects 214 and defines the interaction with one or more of the deception applications 212 by declaring the relationship(s) of each of the deception data objects 214.
- the campaign manager 216 creates the deception data objects 214 according to the activity characteristic(s) detected while monitoring the communication and/or interaction between the access client(s) 255 and the resources 222.
- the deception data objects 214 are created to emulate valid data objects used to interact with the resource(s) 222. Creation, initiation and launching of the deception resources 210 of the deception environment, in particular the decoy OS(s) and the deception applications, are out of the scope of the present invention and are described in detail in PCT Application No. PCT/IB2016/054306 titled "Decoy and Deceptive Data Object Technology" filed Jul. 20, 2016, the contents of which are incorporated herein by reference in their entirety.
- the campaign manager 216 may create the deception data objects 214 according to the activity characteristic(s) and/or in response to operation(s) and/or action(s) performed by the external user(s). Typically, the campaign manager 216 creates the deception data objects 214 automatically. However, the user(s) 260 may interact with the campaign manager 216 to define a policy, scope, parameter(s), activity characteristic(s) and/or the like for the deception campaign(s). Optionally, the user(s) 260 may interact with the campaign manager 216 to specifically create one or more of the deception data objects 214. The campaign manager 216 may create the deception data objects 214 in addition to the normal response typically taken by the accessed resource 222 or instead of the normal response.
- the resource 222 responds with an outcome Y.
- the campaign manager 216 may therefore create one or more deception objects Y' when detecting the certain operation.
- the access client 255 and/or the external endpoint 251 executing the access client 255 may be updated with Y and Y', or possibly only with Y'.
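- A minimal sketch of the Y / Y' idea described above is shown below: when a monitored operation is detected, the reply sent back to the access client may carry the genuine outcome together with a deception counterpart, or only the deception counterpart. All names and payloads here are illustrative assumptions, not the described implementation.

```python
def build_reply(genuine_outcome, deception_outcome, include_genuine=True):
    """Return the payload(s) with which the access client is updated."""
    reply = []
    if include_genuine:
        reply.append(genuine_outcome)          # Y  - the normal response
    reply.append(deception_outcome)            # Y' - the deception data object
    return reply

# Example: a credentials listing returned to the client also carries a fake entry.
genuine = {"type": "credentials", "host": "erp.internal", "user": "supplier01"}
decoy = {"type": "credentials", "host": "erp-backup.internal", "user": "svc_admin"}
print(build_reply(genuine, decoy))          # [Y, Y']
print(build_reply(genuine, decoy, False))   # [Y'] only
```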
- the deception data objects 214 may include, for example, one or more of the following:
- Cookies and/or history log entries: In case the access client 255 is browsing, for example, to a supplier-facing website (an exemplary resource 222) of the protected network 235, typically, a cookie is added to the access client 255 and the website's address is added to the history log of the access client 255.
- the campaign manager 216 may, for example, open an iframe to a deception website (an exemplary deception resource 210 corresponding to the exemplary resource 222) in which different cookie(s) and/or different history log entry(s) may be added to the access client 255.
- the different cookie(s) and/or history log entry(s) may point to the deception environment, for example, a false website, an identical system that is not used by real users (internal and/or external), a proxy that redirects to a real system within the protected network 235, a different service, such as, for example, email, SharePoint and/or the like.
- the campaign manager 216 may, for example, initiate a JavaScript, a pop-up window and/or a pop-under window to direct the access client 255 to the deception website in which the different cookie(s) and/or the different history log entry(s) may be added to the access client 255.
- the JavaScripts, the pop-up windows and/or the pop-under windows may maintain visual similarity with the supplier-facing website such that they do not significantly alter the browsing experience of the external user(s) using the access client(s) 255.
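- The following is a minimal sketch of planting such browser-side breadcrumbs, assuming the page served to the access client sets a fake cookie and embeds a hidden iframe pointing at a deception website so that the browser also records a history entry for it. The URL, cookie name and session value are illustrative assumptions.

```python
from http import cookies

# Hypothetical deception website (a deception resource); illustrative URL only.
DECEPTION_SITE = "https://suppliers-archive.example.internal/login"

def deception_cookie_header(session_id: str) -> str:
    """Build a Set-Cookie header carrying a fake session token."""
    c = cookies.SimpleCookie()
    c["SUPPLIER_PORTAL_SESSION"] = session_id
    c["SUPPLIER_PORTAL_SESSION"]["path"] = "/"
    return c.output(header="Set-Cookie:")

def hidden_iframe_snippet() -> str:
    # Keeps the visible page unchanged; only the browser state (cookies,
    # history) is affected by loading the deception website in the background.
    return f'<iframe src="{DECEPTION_SITE}" width="0" height="0" style="display:none"></iframe>'

print(deception_cookie_header("c2f1d9e4"))
print(hidden_iframe_snippet())
```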
- the campaign manager 216 may install additional fake credentials on the external endpoint 251 from which the RDP is initiated.
- the campaign manager 216 may install a fake remote access configuration file on the external endpoint 251 from which the RDP is initiated.
- the campaign manager 216 may install additional fake credentials, files, password(s) and/or the like on the external endpoint 251 from which the VNC is initiated.
- the access client 255 may be offered automatic password completion when accessing a resource, a service and/or an application at the remote network.
- the campaign manager 216 may manipulate the automatic password completion and provide fake password(s) to the access client 255 accessing the protected network 235.
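- A minimal sketch of one such remote-access breadcrumb is shown below: a fake .rdp configuration file pointing at a decoy host, written where an attacker scanning the external endpoint might expect to find one. The host name, user name and file name are illustrative assumptions; the field names follow the commonly used .rdp file format.

```python
from pathlib import Path

def write_fake_rdp(directory: Path, decoy_host: str, decoy_user: str) -> Path:
    """Write a fake remote desktop configuration file pointing at a decoy host."""
    content = "\n".join([
        f"full address:s:{decoy_host}",
        f"username:s:{decoy_user}",
        "prompt for credentials:i:0",
        "administrative session:i:1",
    ])
    target = directory / "prod-jumphost.rdp"   # illustrative breadcrumb name
    target.write_text(content, encoding="utf-8")
    return target

if __name__ == "__main__":
    path = write_fake_rdp(Path("."), "10.9.8.7", "CORP\\backup_admin")
    print(f"deployed breadcrumb: {path}")
```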
- Archive files, for example, zip, rar, tar.gz and/or the like.
- the campaign manager 216 may insert one or more deception data objects 214 into one or more of the created archive files.
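- The following is a minimal sketch of inserting deception data objects into an archive file, assuming a zip created for (or on) the access client that carries a fake credentials entry alongside ordinary content; file names and contents are illustrative assumptions.

```python
import io
import zipfile

def build_archive_with_breadcrumbs(real_files: dict, breadcrumbs: dict) -> bytes:
    """Build a zip archive holding ordinary files plus deception entries."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in {**real_files, **breadcrumbs}.items():
            zf.writestr(name, data)
    return buf.getvalue()

archive = build_archive_with_breadcrumbs(
    real_files={"report_q3.txt": "quarterly supplier report"},
    breadcrumbs={"db_passwords.txt": "erp-db.internal admin:S3cr3t!"},  # fake
)
print(f"archive size: {len(archive)} bytes")
```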
- the campaign manager 216 may provide the external user(s) one or more false accounts for accessing one or more of the deception applications 212.
- the campaign manager 216 may configure each of the deception data objects 214 to interact with one or more of the deception resources 210.
- the campaign manager 216 may configure the deception data objects 214 and define their relationships according to a deception policy and/or methods defined for the deception campaign.
- the campaign manager 216 creates and configures the deception data objects 214 according to the resource(s) 222 accessed by the access client(s) 255.
- the campaign manager 216 also defines the interaction with the deception resource(s) 210 which map the accessed resource(s) 222.
- the deceptive data object 214 of type "browser cookie” may be created to interact with one or more deception resources 210, for example, a fake website and/or a deception application launched using, for example, a deception resource 210 of type "browser” created during the deception campaign.
- a deceptive data object 214 of type "compressed file” may be created for external user(s) using a certain one of the external endpoints 251.
- a deceptive data object 214 of type "credentials” may be created for users accessing a certain application 212 of type "ERP".
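- A minimal sketch of declaring deception data objects together with the deception resource each one is configured to interact with, keyed by the activity characteristic or group that triggers its creation, is shown below. All identifiers and type names are illustrative assumptions, not names used by the described system.

```python
from dataclasses import dataclass

@dataclass
class DeceptionDataObject:
    kind: str              # e.g. "browser cookie", "credentials", "compressed file"
    target_resource: str   # deception resource it interacts with when used
    scope: str             # group / resource / endpoint it was created for

# Illustrative mapping from data-object type to the deception resource it points at.
KIND_TO_DECEPTION_RESOURCE = {
    "browser cookie": "deception-website",
    "credentials": "deception-ERP",
    "compressed file": "deception-fileshare",
}

def create_for_activity(kind: str, activity: dict) -> DeceptionDataObject:
    """Create a deception data object matching a detected activity characteristic."""
    return DeceptionDataObject(kind, KIND_TO_DECEPTION_RESOURCE[kind], activity["scope"])

obj = create_for_activity("credentials", {"scope": "users-of-ERP"})
print(obj)
```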
- the campaign manager 216 periodically and/or dynamically updates one or more of the deception data objects 214 to impersonate an active real (valid) processing environment such that the deception data objects 214 appear to be valid data objects to lead the potential attacker to believe the emulated deception environment is a real one.
- the proprietary access client 255 may itself create the deception data objects 214.
- the proprietary access client 255 may monitor the communication with the resource(s) 222 and create the deception data objects 214 according to the detected activity characteristic(s). Additionally and/or alternatively, the proprietary access client 255 may create the deception data objects 214 according to instructions received from the campaign manager that monitors the communication of the proprietary access client 255 with the resource(s) 222.
- the campaign manager 216 is used to deploy the deception data objects 214 in the access client 255 and/or the external endpoint 251 executing the access client 255.
- the deception data objects 214 are directed (once deployed) to attract the potential attackers who may have gained access and/or control of the external endpoint 251 and trying to penetrate the protected network 235.
- the deception data objects 214 may be created with one or more attributes that may be attractive to the potential attacker, for example, a name, a type and/or the like.
- the deception data objects 214 may be created to attract the attention of the attacker using an attacker stack, i.e. tools, utilities, services, application and/or the like that are typically used by the attacker. As such, the deception data objects 214 may not be visible to users using a user stack, i.e. tools, utilities, services, application and/or the like that are typically used by a legitimate user.
- Taking this approach may allow creating the deception campaign in a manner in which a legitimate user would need to go out of his way and perform unnatural operations and/or actions to detect, find and/or use the deception data objects 214, while doing so may be the most natural course of action or method of operation for the attacker.
- browser cookies are rarely accessed and/or reviewed by the legitimate user(s). At most, the cookies may be cleared en masse.
- one of the main methods for the attacker(s) to obtain website credentials and/or discover internal websites visited by the legitimate user(s) is to look for cookies and/or history log entries and analyze them.
- deception data objects 214 are created according to the groups of external users using the access client(s) 255 to access the resource(s) 222.
- FIG. 3 is a block diagram of exemplary building blocks of a deception environment for detecting potential security risks in external endpoint(s) communicating with a protected network, according to some embodiments of the present invention.
- a deception environment 300 created using a campaign manager such as the campaign manager 216 comprises a plurality of deception data objects 214 deployed in one or more access client such as the access client 255 accessing a protected network such as the protected network 235 from one or more external endpoints such as the external endpoint 251.
- the campaign manager 216 is used to define relationships 320 between each of the deception data objects 214 and one or more of a plurality of deception resources such as the deception resources 210, for example, deception applications 310.
- the campaign manager 216 is also used to define relationships 322 between each of the deception applications 310 and one or more of a plurality of other deception resources 210, for example, decoy OSs 312.
- the deception data objects 214, the deception applications 310 and/or the decoy OSs 312 may be arranged in one or more groups 302, 304 and/or 306 respectively according to one or more of the activity characteristics of the external user(s).
- operations that use data included in the deception data objects 214 interact with the deception application(s) 310 according to the defined relationships 320 that in turn interact with the decoy OS(s) 312 according to the defined relationships 322.
- the defined relationships 320 and/or 322 may later allow detection of one or more unauthorized operations by monitoring and analyzing the interaction between the deception data objects 214, the deception applications 310 and/or the decoy OSs 312.
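- A minimal sketch of the two layers of relationships described for FIG. 3 is shown below: each deception data object maps to one or more deception applications (relationships 320), and each deception application maps to one or more decoy OSs (relationships 322). The identifiers are illustrative assumptions only.

```python
relationships_320 = {  # deception data object -> deception applications
    "cookie:supplier-portal": ["deception-website"],
    "credentials:erp": ["deception-erp-app"],
}
relationships_322 = {  # deception application -> decoy OSs
    "deception-website": ["decoy-os-web"],
    "deception-erp-app": ["decoy-os-erp"],
}

def resolve_chain(data_object: str):
    """Return every (deception application, decoy OS) pair a data object leads to."""
    return [
        (app, decoy)
        for app in relationships_320.get(data_object, [])
        for decoy in relationships_322.get(app, [])
    ]

print(resolve_chain("credentials:erp"))  # [('deception-erp-app', 'decoy-os-erp')]
```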
- the campaign manager 216 continuously monitors the interaction between the deception data objects 214 and the deception resource(s) 212 in order to detect a potential security risk in the external endpoint(s) 251 and/or a potential attacker trying to penetrate the protected network 235.
- the potential attacker may be detected by identifying one or more unauthorized operations that are initiated in the protected network 235 through the access client(s) 255 using data retrieved from the deception data object(s) 214.
- the campaign manager 216 may detect usage of fake password(s) provided previously to the access client 255, for example, the fake credentials, the fake automatic password completion and/or the like.
- the campaign manager 216 may monitor the deception resource(s) 210 at one or more levels and/or layers, for example:
- Network monitoring in which the campaign manager 216 monitors egress and/or ingress traffic at one or more of the endpoints 220.
- the campaign manager 216 may further record the monitored network traffic.
- OS monitoring in which the campaign manager 216 monitors interaction made by one or more of the deception applications 310 with the deception resource(s) 210, for example, the decoy OS(s) 312.
- Kernel level monitoring in which the campaign manager 216 monitors and analyzes activity at the kernel level of the decoy OS(s) 312.
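- The following is a minimal sketch of monitoring the deception resources at the several layers listed above and funnelling the observations into a single event stream. The layer callbacks are placeholders; in practice they would wrap network capture, decoy-OS audit logs and kernel-level tracing, respectively, and all details here are assumptions for illustration.

```python
from typing import Callable, Dict, List

def network_layer() -> List[dict]:
    # Placeholder for egress/ingress traffic monitoring at the decoy endpoints.
    return [{"layer": "network", "detail": "ingress to decoy-os-erp:3389"}]

def os_layer() -> List[dict]:
    # Placeholder for interactions made by deception applications with the decoy OS.
    return [{"layer": "os", "detail": "logon attempt on decoy-os-erp"}]

def kernel_layer() -> List[dict]:
    # Placeholder for kernel-level activity analysis of the decoy OS.
    return []  # nothing observed this cycle

MONITORS: Dict[str, Callable[[], List[dict]]] = {
    "network": network_layer,
    "os": os_layer,
    "kernel": kernel_layer,
}

def poll_all() -> List[dict]:
    """Collect observations from every monitoring layer into one list."""
    events = []
    for monitor in MONITORS.values():
        events.extend(monitor())
    return events

print(poll_all())
```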
- the campaign manager 216 may further detect usage of data contained in certain deception data object(s) 214 for accessing resources of the protected network 235 which are different from the resources for which the certain deception data object(s) 214 were originally created. For example, assume a certain fake password deception data object 214 is created using the campaign manager 216 to emulate a password for accessing a first resource 222A and is configured to interact with a first deception resource 210A. The campaign manager 216 may detect usage of the created fake password for accessing a second resource 222B, i.e. not the (first) resource 222A the fake password was originally created for. Moreover, the campaign manager 216 may detect interaction of the fake password with one or more other deception resources 210.
- the campaign manager 216 may also detect usage of data contained in certain deception data object(s) 214 by an external user, using an external endpoint 251A, who is not the external user for whom the certain deception data object(s) 214 were originally created. For example, assume a certain deception data object 214A is created using the campaign manager 216 and deployed in an access client 255A of a first external user. The campaign manager 216 may detect usage of data contained in the deception data object 214A even when used by a second external user, whether using the same external endpoint 251 as the first external user and/or using a different external endpoint 251.
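- A minimal sketch of this cross-context detection is shown below: any use of data taken from a deception data object is suspicious, and uses against a different resource or by a different user than the one the object was created for can be annotated as such. The identifiers and the fake secret are illustrative assumptions.

```python
DEPLOYED = {
    # fake secret -> (resource it was created for, user it was deployed to)
    "P@ssw0rd-erp-fake": ("resource-222A", "external-user-1"),
}

def classify_usage(secret: str, accessed_resource: str, acting_user: str):
    """Flag any observed use of a deployed deception secret and note context shifts."""
    if secret not in DEPLOYED:
        return None  # not one of our breadcrumbs
    created_for_resource, created_for_user = DEPLOYED[secret]
    return {
        "event": "deception-data-used",
        "cross_resource": accessed_resource != created_for_resource,
        "cross_user": acting_user != created_for_user,
    }

print(classify_usage("P@ssw0rd-erp-fake", "resource-222B", "external-user-2"))
# {'event': 'deception-data-used', 'cross_resource': True, 'cross_user': True}
```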
- the campaign manager 216 analyzes the data and/or activity detected during the interaction monitoring in order to identify the unauthorized operation that may indicate that the external endpoint 251 is compromised and/or that a potential attacker is trying to access the protected network 235.
- the campaign manager 216 may analyze the interaction to identify usage of data included, provided and/or available from one or more of the deception data objects 214. Based on the analysis, the campaign manager 216 may create one or more interaction events.
- the analysis conducted by the campaign manager 216 may include false positive analysis to avoid false identification of one or more operations initiated by one or more legitimate users, processes, applications and/or the like as potential unauthorized operations.
- the interaction events may be created when the campaign manager 216 detects a meaningful interaction with one or more of the deception resources 210.
- the campaign manager 216 may create the interaction event when detecting usage of data that is included, provided and/or available from one or more of the deception data objects 214 for accessing and/or interacting with one or more of the deception resources 210.
- the campaign manager 216 may create an interaction event when detecting an attempt to logon to a deception application 310 of type "remote desktop service” using fake credentials stored in a deception data object 214 of type "credentials".
- the campaign manager 216 may detect an access to a deception application 310 of type "false website" using the data retrieved from a deception data object 214 of type "cookie".
- the campaign manager 216 may be configured to create interaction events when detecting one or more pre-defined interaction types, for example, logging on to a specific deception application 310, executing a specific command, clicking a specific button(s) and/or the like.
- the user(s) 260 may further define "scripts" that comprise a plurality of the pre-defined interaction types to configure the campaign manager 216 to create an interaction event at detection of complex interactions between one or more of the deception components, i.e. the deception resource(s) 210 and/or the deception data object(s) 214.
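- The following is a minimal sketch of such a user-defined "script", assuming an interaction event is raised only when a whole sequence of pre-defined interaction types is observed in order; the interaction type names are illustrative assumptions.

```python
def script_matches(script, observed):
    """True if every step of the script appears in the observed stream, in order."""
    it = iter(observed)
    return all(any(step == o for o in it) for step in script)

script = ["logon:remote-desktop-decoy", "execute:net view", "click:export-button"]
observed = [
    "logon:remote-desktop-decoy",
    "browse:deception-website",
    "execute:net view",
    "click:export-button",
]
if script_matches(script, observed):
    print("interaction event: complex sequence detected")
```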
- the campaign manager 216 creates an activity pattern of the potential attacker by analyzing the identified unauthorized operation(s). Using the activity pattern, the campaign manager 216 may gather useful forensic data on the operations of the potential attacker and may classify the potential attacker in order to estimate a course of action and/or intentions of the potential attacker. The campaign manager 216 may then adapt the deception environment to tackle the estimated course of action and/or intentions of the potential attacker.
- the campaign manager 216 employs one or more machine learning processes, methods, algorithms and/or techniques on the identified activity pattern.
- the machine learning may serve to increase the accuracy of classifying the potential attacker based on the activity pattern.
- the machine learning may further be used by the campaign manager 216 to adjust future deception environments and deception components to adapt to the learned activity pattern(s) of a plurality of potential attacker(s).
- classifying the activity pattern may allow the campaign manager 216 to characterize potential attacker(s) detected in subsequent detection events and estimate their intentions at an early stage of the penetration sequence.
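- A minimal sketch of such classification is shown below, assuming activity patterns are reduced to simple numeric feature vectors (e.g. counts of decoy logons, lateral-move attempts, data pulls). A nearest-neighbour lookup stands in for the machine learning step; the labels and feature values are illustrative assumptions only.

```python
import math

# (decoy_logons, lateral_move_attempts, data_pulls) -> illustrative label
LABELLED_PATTERNS = [
    ((5, 0, 1), "credential-harvesting"),
    ((1, 4, 0), "lateral-movement"),
    ((0, 1, 6), "data-exfiltration"),
]

def classify(pattern):
    """Return the label of the closest previously learned activity pattern."""
    return min(LABELLED_PATTERNS, key=lambda item: math.dist(item[0], pattern))[1]

print(classify((4, 1, 0)))  # -> 'credential-harvesting'
```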
- the campaign manager 216 generates one or more alerts following the detection event indicating the potential unauthorized operation.
- the user(s) 260 may configure the campaign manager 216 to set an alert policy defining one or more of the events and/or combination of events that trigger the alert(s).
- the campaign manager 216 may be configured during the creation of the deception campaign and/or at any time after the deception campaign is launched.
- the alert may be delivered to the user(s) 260 monitoring the campaign manager 216 and/or through any other method, for example, an email message, a text message, an alert in a mobile application and/or the like.
- the campaign manager 216 generates one or more alerts to the external endpoint 251 from which the unauthorized operation is initiated.
- the campaign manager 216 may alert, for example, an external user, a system administrator, an IT person and/or the like of the external endpoint(s) 251 that are suspected to be compromised and be exposed to a security risk.
- the campaign manager 216 may also alert an automated tool of the external endpoint 251, for example, a security system to inform of the potential security risk.
- the campaign manager 216 and/or the deception environment may be further configured to take one or more additional actions following the alert.
- One action may be pushing a log of potential unauthorized operation(s) using one or more external applications and/or services, for example, syslog, email and/or the like.
- the log may be pushed with varying levels of urgency according to the policy defined for the deception campaign.
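- The following is a minimal sketch of pushing such a log entry over syslog with an urgency level derived from the campaign policy; the syslog destination and the mapping from event severity to log level are assumptions for illustration.

```python
import logging
import logging.handlers

logger = logging.getLogger("deception-campaign")
logger.setLevel(logging.INFO)
# Assumed local syslog destination; in practice this would follow the campaign policy.
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

SEVERITY_TO_LEVEL = {"low": logging.INFO, "medium": logging.WARNING, "high": logging.CRITICAL}

def push_alert(event: dict, severity: str = "medium"):
    """Push a potential-unauthorized-operation record with the requested urgency."""
    logger.log(SEVERITY_TO_LEVEL[severity],
               "potential unauthorized operation: %s", event)

push_alert({"type": "decoy-logon", "endpoint": "ep-251"}, severity="high")
```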
- the campaign manager 216 and/or the deception environment may be further configured to contain the unauthorized operation(s) which may typically be part of an attack vector of the potential attacker within the deception environment.
- the campaign manager 216 may adjust, adapt and/or reconfigure the deception environment, for example, create, adjust and/or remove one or more deception resources 210, create, adjust and/or remove one or more deception data objects 214 and/or the like. This may allow isolating the potential attacker from the real processing environment of the protected network 235 while learning the activity pattern(s) of the attack vector and/or the potential attacker.
- the campaign manager 216 presents the user(s) 260 with real time and/or previously captured status information relating to the deception campaign(s), for example, created events, detected potential attackers, attack patterns and/or the like.
- the campaign manager 216 may provide, for example, a dashboard provided through the GUI of the campaign manager 216.
- the campaign manager 216 may also present the status information through a remote access application, for example, a web browser and/or a local agent executed on one or more of the endpoints 220 and/or by one or more of the client terminals 221 accessing the campaign manager 216 remotely over the network 230 and/or the internet 240.
- FIG. 4 is a flowchart of an exemplary process for creating deception data objects deployed in an access client accessing a protected network from an external endpoint, according to some embodiments of the present invention.
- An exemplary process 400 may be executed to create deception environment components deployed in an access client such as the access client 255 used to access resource(s) such as the resource 210 of a protected network such as the protected network 235.
- the process 400 may be executed by an external endpoint such as the external endpoint 251 in a system such as, for example, the system 200A, 200B, 200C and/or 200D.
- the process 400 may be executed by one or more software modules executing on the external endpoint 251.
- the software module(s) implementing the process 400 are integrated within the access client 255 which in such case may be a proprietary access client provided to the external users by the vendor and/or owner of the protected network 235.
- the communication of the access client 255 with the protected network is monitored.
- the access client 255 communicates with the protected network 235 in order to access one or more resources of the protected network 235 such as the resources 210.
- the monitoring of the communication is done as described in step 102 of the process 100.
- one or more data deception objects such as the deception data objects 214 are created according to one or more of the activity characteristic(s) detected while monitoring the communication between the access client(s) 255 and the resources 210. Creation of the deception data object(s) 214 is done as described in step
- the created deception data object(s) 214 are deployed in the access client 255 as described in step 106 of the process 100.
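- A minimal sketch of this client-side flow, as it might be embedded in a proprietary access client, is shown below: observe which resource is being accessed, derive a matching breadcrumb, and drop it locally. Resource names, file locations and breadcrumb contents are illustrative assumptions, not the described implementation.

```python
from pathlib import Path

def observe_access() -> str:
    """Placeholder for the monitoring of the access client's communication."""
    return "ERP"

def create_breadcrumb(resource: str) -> dict:
    """Derive a fake credentials file matching the accessed resource."""
    return {"filename": f"{resource.lower()}_service_account.txt",
            "content": f"{resource.lower()}-decoy.internal svc_user:Fake#Pass1"}

def deploy_breadcrumb(breadcrumb: dict, directory: Path) -> Path:
    """Write the breadcrumb where an attacker scanning the endpoint may find it."""
    target = directory / breadcrumb["filename"]
    target.write_text(breadcrumb["content"], encoding="utf-8")
    return target

resource = observe_access()
print(deploy_breadcrumb(create_breadcrumb(resource), Path(".")))
```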
- composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
- singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise.
- the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
- a computer implemented method of detecting unauthorized access to a protected network from external endpoints comprising:
- monitoring, at a protected network, communication with at least one external endpoint using at least one access client to access at least one of a plurality of resources of the protected network, at least one deception resource created in the protected network maps at least one of the plurality of resources;
- each of the plurality of resources is a member of a group consisting of: an endpoint, a data resource, an application, a tool, a website and a service,
- each resource is at least one of a local resource and a cloud resource.
- the at least one deception resource is provided by at least one decoy endpoint which is a member selected from a group consisting of: a physical device comprising at least one processor and a virtual machine,
Abstract
A computer implemented method of detecting unauthorized access to a protected network from external endpoints, comprising monitoring, at a protected network, communication with one or more external endpoints using one or more access clients to access one or more of a plurality of resources of the protected network, where one or more deception resources created in the protected network map one or more of the plurality of resources, detecting usage of data contained in one or more of a plurality of deception data objects deployed in the one or more access clients by monitoring an interaction triggered by one or more of the deception data objects with the one or more deception resources when used, and identifying one or more potential unauthorized operations based on analysis of the detection.
Description
SUPPLY CHAIN CYBER-DECEPTION
RELATED APPLICATIONS
PCT Application No. PCT/IB2016/054306 titled "Decoy and Deceptive Data Object Technology" filed Jul. 20, 2016, the contents of which are incorporated herein by reference in their entirety.
BACKGROUND
The present invention, in some embodiments thereof, relates to detecting potential security risks in one or more external endpoints communicating with a protected network, and, more specifically, but not exclusively, to detecting potential security risks in one or more external endpoints communicating with a protected network by monitoring interaction between deception data objects deployed in an access client used to access the protected network and deception applications executed at the protected network.
Organizations of all sizes and types face the threat of being attacked by advanced attackers who may be characterized as having substantial resources of time and tools, and are therefore able to carry out complicated and technologically advanced operations against targets to achieve specific goals, for example, retrieve sensitive data, damage infrastructure and/or the like.
Frequently, in order to penetrate a certain network, advanced attackers may gain control over other network(s) communicating with the certain network and use communication and/or interaction means between the networks to access and penetrate the certain network, the advanced attackers may typically operate in a staged manner, first collecting intelligence about the target organizations, networks, services and/or systems, initiate an initial penetration of the target, perform lateral movement and escalation within the target network and/or services, take actions on detected objectives and leave the target while covering the tracks. Each of the staged approach steps involves tactical iterations through what is known in the art as observe, orient, decide, act (OODA) loop. This tactic may present itself as most useful for the attackers who may face an unknown environment and therefore begin by observing their surroundings, orienting themselves, then deciding on a course of action and carrying it out.
SUMMARY
According to a first aspect of the present invention there is provided a computer implemented method of detecting unauthorized access to a protected network from other networks, comprising:
- Monitoring, at a protected network, communication with one or more external endpoints using one or more access clients to access one or more of a plurality of resources of the protected networked. One or more deception resources created in the protected network map one or more of the plurality of resources.
Detecting usage of data contained in one or more of a plurality of deception data objects deployed in the one or more access clients by monitoring an interaction triggered by one or more of the deception data objects with the one or more deception resources when used.
Identifying one or more potential unauthorized operations based on analysis of the detection.
Detecting potential malicious activity at the external endpoint(s) may facilitate an expansion of the protection provided and/or supported by the protected network to the external endpoints and/or external network(s). While the protected network may be highly protected, the external endpoint(s) may typically be less protected and, as part of the communication the protected network holds with the external endpoint(s), the protected network may take advantage of the protection means at its disposal to detect security risks, threats and/or potential malicious activity at the external endpoint(s). Monitoring the interaction between the deception data objects deployed in the access client and the deception resources may allow detection of the security threats the external endpoint(s) may be exposed to even when the external endpoint(s) are not controlled from the protected network, i.e. installing protection means in the external endpoint(s) is not possible.
According to a second aspect of the present invention there is provided a system for detecting unauthorized access to a protected network from other networks, comprising a program store storing a code and one or more processors of an endpoint of a protected network, coupled to the program store for executing the stored code, the code comprising:
Code instructions to monitor, at the protected network, communication with one or more external endpoints using one or more access clients to access one or more of a
plurality of resources of the protected network. One or more deception resources created in the protected network maps one or more of the plurality of resources. Code instructions to detect usage of data contained in one or more of a plurality of deception data objects deployed in the one or more access clients by monitoring an interaction triggered by one or more of the deception data objects with the one or more deception resource when used.
Code instructions to identify one or more potential unauthorized operations based on analysis of the detection.
According to a third aspect of the present invention there is provided a computer implemented method of creating in a protected network a deception environment for accesses from external endpoints, comprising:
Monitoring, at a protected network, communication with one or more external endpoints using one or more access clients to access one or more of a plurality of resources of the protected network, one or more deception resources created in the protected network map one or more of the plurality of resources.
Creating a plurality of deception data objects according to the monitored communication, the plurality of deception data objects are configured to trigger an interaction with the one or more deception resources when used.
Deploying the plurality of deception data objects in the one or more access clients; The interaction between one or more of the plurality of deception data objects and the one or more deception resources is indicative of one or more potential unauthorized operations.
Adapting the deception environment emulating the real processing environment of the protected network, in particular the deception data objects, according to the communication and/or interaction the other network(s) hold with the protected network may establish a genuine view of the deception environment for the potential attacker(s). This may lead the potential attacker(s) to believe the deception environment is in fact the real (genuine) processing environment. Deploying the created deception data object(s) in the access client may allow overcoming the inability to directly and/or actively install software modules in general and protection means in particular in the external endpoint(s) environment.
According to a fourth aspect of the present invention there is provided a software product, comprising:
A non-transitory computer readable storage medium;
First program instructions to monitor, at an external endpoint, communication of one or more access clients with a protected network, the one or more access clients is executed by the external endpoint for accessing one or more of a plurality of resources of the protected network. One or more deception resources created in the protected network maps one or more of the plurality of resources.
Second program instructions to create a plurality of deception data objects according to the monitored communication, the plurality of deception data objects are configured to trigger an interaction with the one or more deception resources when used.
Third program instructions to deploy the plurality of deception data objects in the one or more access clients.
The interaction between one or more of the plurality of deception data objects and the one or more deception resource is indicative of one or more potential unauthorized operations.
Wherein the first, second and third program instructions are executed by one or more processor of the external endpoint from the non-transitory computer readable storage medium.
Creating and/or deploying the deception data objects at one or more of the external endpoints using a proprietary software module may significantly increase scalability of the deception environment as the deception data objects may be created locally for each endpoint, each access client and/or each user. The deception data object(s) creation and/or deployment capabilities may typically be integrated within a proprietary access client provided to the external user(s) accessing the protected network from the external endpoint(s).
According to a fifth aspect of the present invention there is provided a computer implemented method, comprising:
- Monitoring, at an external endpoint, communication of one or more access clients with a protected network, the one or more access clients is executed by the external endpoint for accessing one or more of a plurality of resources of the protected
network. One or more deception resources created in the protected network maps one or more of the plurality of resources.
Creating a plurality of deception data objects according to the monitored communication, the plurality of deception data objects are configured to trigger an interaction with the one or more deception resources when used.
Deploying the plurality of deception data objects in the one or more access clients. The interaction between one or more of the plurality of deception data objects and the one or more deception resources is indicative of one or more potential unauthorized operations.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, each of the plurality of resources is a member of a group consisting of: an endpoint, a data resource, an application, a tool, a service and/or a website. Wherein each resource is a local resource and/or a cloud resource. The deception environment may be adapted to emulate various types, deployments, structures and/or features of the protected network.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, the access of the one or more external endpoints to the one or more resources includes one or more of: retrieving information and manipulating information. The external endpoint(s) interact with the protected network for accessing one or more resources provided by the protected network. The external endpoint(s) may have various privileges, access rights and/or data manipulation rights such as read only, read and write, retrieve and/or the like.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, one or more of the external endpoints is used by a supplier of an organization utilizing the protected network. In some embodiments of the present invention, the external endpoint(s) are used by one or more suppliers (part of a supply chain), vendors, 3rd party service providers and/or the like that work with an organization utilizing the protected network.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, the one or more deception resources is provided by at least one decoy endpoint which is a member selected from a group consisting of: a physical device comprising one or more processors and a virtual machine. Wherein the virtual machine is hosted by
one or more members of a group consisting of: a local endpoint, a cloud service and a vendor service. The deception environment created to protect the protected network may be based on physical endpoints, virtual endpoints and/or a combination thereof. The deployment of the deception environment may be highly flexible allowing usage of local resources as well as cloud resources and/or a combination thereof.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, one or more of the plurality of deception data objects deployed in the one or more access clients is created at the protected network according to the monitored communication. Adapting the deception environment emulating the real processing environment of the protected network, in particular the deception data objects, according to the communication and/or interaction the other network(s) hold with the protected network may establish a genuine view of the deception environment for the potential attacker(s). This may lead the potential attacker(s) to believe the deception environment is in fact the real (genuine) processing environment.
In a further implementation form of the first and/or second aspects, one or more of the plurality of deception data objects deployed in the one or more access clients is created at the external endpoint according to the monitored communication. In case the access client is a proprietary tool provided to the external endpoint(s) by, for example, the vendor and/or the owner of the protected network, deception creation capabilities may be installed in the proprietary access client. This may allow creating one or more of the deception data objects locally for each of the external endpoints, each of the access clients and/or each of the external users.
In a further implementation form of the first and/or second, third, fourth and/or fifth aspects, each of the plurality of deception data objects emulates a valid data object used for interacting with one or more of the plurality of resources. In order for the deception environment to emulate the real (genuine) processing environment of the protected network, the deception data objects are adapted to emulate real (valid) data objects to efficiently and effectively impersonate the real processing environment.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, each of the plurality of deception data objects is a member of a group consisting of: a browser cookie, a history log record, an account, a credentials object, a configuration file for remote desktop authentication credentials, a JavaScript and a
deception file. The deception data objects may be adapted to emulate a variety of data objects typically used during interaction of the access clients with the protected network resources, services and/or applications. This may allow further flexibility in deploying the deception data objects in the access client(s).
In a further implementation form of the first, second, third, fourth and/or fifth aspects, the one or more access clients is a member of a group consisting of: a web browser, a remote access agent and a proprietary client provided by an organization utilizing the protected network. The deception environment may be adapted according to the type of the access client used to interact with the protected network from the external endpoint(s).
In a further implementation form of the first, second, third, fourth and/or fifth aspects, the one or more access clients accesses the protected network using an account allocated by the protected network to one or more external users using the one or more access clients to access the protected network from the one or more external endpoints, the account is accessible with credentials assigned to the one or more external users. Typically, accesses to the protected network are done through accounts allocated to user(s) of the protected network, either local users and/or external users accessing the protected network from the external endpoint(s). The account may define access rights, operation privileges and/or the like for each of the users. The deception environment may therefore be adapted to use the accounts for efficiently emulating the real processing environment as well as controlling deception campaigns according to account characteristic.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, the one or more external users are members of a group consisting of: a human user and an automated tool. The deception environment may be adapted to identify any external user accessing the protected network, either human users and/or automated tools.
In a further implementation form of the first, second, third, fourth and/or fifth aspects, the account allocated to the one or more external users is marked differently from the account allocated to one or more internal users of the protected network. This may allow for easily distinguishing between local (internal) users of the protected network and external users accessing the protected network from the external
endpoint(s). This may facilitate launching effective deception campaign(s) that may target and/or focus on the external users.
In an optional implementation form of the first, second, third, fourth and/or fifth aspects, a plurality of external users using the one or more access clients to access the protected network from the one or more external endpoints are divided into groups according to one or more activity characteristics. The one or more activity characteristics are identified by analyzing the communication and are members of a group consisting of: an operation initiated by the external endpoint and a type of the external endpoint. This may further increase effectiveness of the deception campaign(s) that may target and/or focus on specific types of external users and/or on external users exhibiting certain activity characteristics.
In an optional implementation form of the first and/or second aspects, the one or more potential unauthorized operations are identified by detecting usage of data contained in one or more of the deception data objects for accessing one or more of the resources. The unauthorized operation(s) may be detected when usage of certain deception data objects (data) is identified even when used for accessing a resource other than the one for which the certain deception data objects (data) were originally created. This means that the usage of the deception data object(s) may be detected even when not used in the context for which they were created. This may further expand the threat detection capabilities of the protected network.
In an optional implementation form of the first and/or second aspects, an alert is generated at detection of the one or more potential unauthorized operations. This may allow alerting one or more parties of the detected unauthorized operation(s) that may be indicative of one or more potential threats, security risks and/or of an exposure to potential malicious attack and/or attacker.
In an optional implementation form of the first and/or second aspects, an alert is generated to the one or more external endpoints at detection of the one or more potential unauthorized operations. Expanding the alerted parties to the external endpoint(s) may allow one or more users (e.g. a system administrator, an IT person, etc.) and/or automated tools (e.g. a security system, etc.) of the external endpoint(s) to take one or more actions in response to the detected security threat.
In an optional implementation form of the first and/or second aspects, an alert is generated at detection of a combination of a plurality of potential unauthorized operations to detect a complex sequence of the interaction. This may support escalation of the alert and/or security threat notification when more complex sequences of unauthorized operations are detected.
In an optional implementation form of the first and/or second aspects, the one or more potential unauthorized operations are analyzed to identify an activity pattern. Detecting the activity pattern may allow for identifying one or more intentions of the potential attacker.
In an optional implementation form of the first and/or second aspects, a learning process is applied to the activity pattern to classify the activity pattern in order to detect and classify one or more future potential unauthorized operations. Classifying the activity pattern(s) may allow characterizing potential attacker(s) detected in subsequent detection events and estimate their intentions at an early stage of their penetration sequence.
In a further implementation form of the fourth aspect, the software product is integrated in the one or more access clients. This may allow at least some level of control over the access client executed at the external endpoint(s) to access the protected network. The supported control may allow locally creating and/or deploying the deception data objects in the access client at the external endpoint(s). This may further support scalability as the creation and/or deployment of the deception data objects is distributed among the external endpoint(s) rather than performed by the protected network itself.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of
embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non- volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced
In the drawings:
FIG. 1 is a flowchart of an exemplary process for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention;
FIG. 2A is a schematic illustration of an exemplary first embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention;
FIG. 2B is a schematic illustration of an exemplary second embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention;
FIG. 2C is a schematic illustration of an exemplary third embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention;
FIG. 2D is a schematic illustration of an exemplary fourth embodiment of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention;
FIG. 3 is a block diagram of exemplary building blocks of a deception environment for detecting potential security risks in one or more external endpoints communicating with a protected network, according to some embodiments of the present invention; and
FIG. 4 is a flowchart of an exemplary process for creating deception data objects deployed in an access client accessing a protected network from an external endpoint, according to some embodiments of the present invention.
DETAILED DESCRIPTION
The present invention, in some embodiments thereof, relates to detecting potential security risks in one or more external endpoints communicating with a protected network, and, more specifically, but not exclusively, to detecting potential security risks in one or more external endpoints communicating with a protected network by monitoring interaction between deception data objects deployed in an access client used to access the protected network and deception applications executed at the protected network.
According to some embodiments of the present invention, there are provided methods, systems and computer program products for creating an emulated deception environment in a protected network for detecting potential unauthorized operations initiated by users accessing the protected network from external endpoints. This may serve to expand the detection border of the protected network and allow detection of potential security risks in the external endpoint(s), which are not part of the protected
network and may therefore not be controlled by the protected network and may thus be compromised and subject to security risk(s). The external endpoint(s), for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, and/or any network connected device may be part of a network external to (not part of) the protected network. For example, the protected network may utilize, for example, an organization network, an institution network and/or the like while the external endpoint(s) may be used by, for example, a supply chain vendor (supplier), a 3rd party, a services provider and/or the like. In another example, the protected network may be a network of a first organization, typically a mature and well protected organization, while the external endpoint(s) may be part of another network of a second organization (company, firm, etc.) acquired by the first organization. The second organization, which may maintain an independent operations structure, at least until fully assimilated within the first organization, may be less protected than the first organization. In such case, while maintaining inter-operation between the two networks, the protection provided by the protected network (the first organization) may be expanded to the external endpoint(s) of the other network (the second organization).
Detection of potential security risk(s) in the external endpoint(s) and/or of a potential attacker(s) trying to access the protected network from the external endpoint(s) is based on creating a deception environment within the protected network. The deception environment is created, maintained and monitored through one or more deception campaigns and comprises a plurality of deception components such as deception resources, for example, decoy operating system(s) (OS) and/or deception applications created and launched in the protected network to emulate the resources, for example, endpoints, services, applications, data, websites and/or the like provided by the protected network. The deception environment co-exists with a real (valid) processing environment of the protected network while separated from the real processing environment.
The security risk(s) detection is based on motivating the potential attacker(s) to access the deception environment by creating and deploying deception data objects (breadcrumbs), in access client(s) executed at the external endpoint(s) to access, connect, communicate and/or interact with one or more of the resources of the protected
network. The deception data objects may be created and deployed at the protected network, at the external endpoint executing the access client and/or a combination thereof.
The deception data objects may be created according to one or more activity characteristics identified for the user(s) using the access client(s) by monitoring and analyzing the communication between the access client(s) and the protected network resources. The deception data objects, for example, credential files, password files, "cookies", history log entries, access protocols, accounts, archive files and/or the like are configured to interact with the deception resources emulating the accessed resources of the protected network. While adapted to emulate valid corresponding data objects, the deception data objects are configured, when used, to interact with the deception resource(s) instead of interacting with the real resources provided by the protected network. The deception data objects may be further configured to appeal to the potential attacker accessing the protected network from the external endpoint(s) while being significantly transparent to legitimate users.
The interaction between the deception data objects and the deception applications is continuously monitored. Detection of such interaction may typically result from attempted unauthorized operation(s) in the protected network and may therefore be indicative of a potential attacker, and thus that the external endpoint may be compromised and subject to security risk(s). Usage of data contained in the deception data objects may be further monitored during interaction with other resources of the protected network, i.e. resources of the protected network that the deception data objects were not originally created to interact with. For example, a fake password deception data object may be created to emulate a password for a first service of the protected network. However, usage of the created fake password deception data object may be detected for accessing a second service of the protected network.
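To make this concrete, the following is a minimal, non-limiting sketch (all names such as DeceptionDataObject, fake_value and is_suspicious are hypothetical and not mandated by this description) of how a deception data object may bind bait data to the deception resource it is configured to interact with, so that any later use of that data can be flagged:

    # Minimal sketch, hypothetical names: a deception data object binds fake data
    # to the deception resource it is configured to interact with, so that any
    # later use of that data can be traced back to the deploying campaign.
    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class DeceptionDataObject:
        object_type: str          # e.g. "credentials", "cookie", "archive"
        fake_value: str           # the bait data itself (e.g. a fake password)
        emulated_resource: str    # real resource the object pretends to belong to
        deception_resource: str   # deception resource it actually interacts with

    # A fake password created to emulate a password for a first service; any use
    # of it, even against a second service, indicates a potential attacker.
    breadcrumb = DeceptionDataObject(
        object_type="credentials",
        fake_value="Svc1-P@ssw0rd-2017",
        emulated_resource="erp.protected.example",
        deception_resource="decoy-erp.deception.example",
    )

    def is_suspicious(used_value: str, deployed: List[DeceptionDataObject]) -> bool:
        # Data contained in a deployed deception data object should never appear
        # in legitimate traffic, so a single match is already meaningful.
        return any(used_value == d.fake_value for d in deployed)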
Optionally, one or more of the deception data objects may be updated periodically and/or dynamically so that the deception environment better emulates and impersonates the real processing environment.
Optionally, the users of the external endpoint(s) are divided into groups according to one or more of the detected activity characteristics.
Expanding the detection capabilities of the protected network to the external endpoint(s) may present significant advantages. Typically, an organizational network that may be a protected network may be accessed by one or more external endpoints, for example, the supply chain vendor (the supplier), the 3rd party, the services provider and/or the like. In another example, the protected network may be the network of the mature, well protected organization while the external endpoint may be part of the network of the acquired organization. In order to allow the external endpoint(s) to access the protected network, access means, for example, accounts, passwords, protocols and/or the like may be granted to external user(s) (human users and/or automated users) using the external endpoint(s). However, the external endpoint(s) may not be protected and may therefore be subject to security risks. As the external endpoint(s) are not part of the protected network and typically not controlled from the protected network, installing security mechanisms, detection means and/or the like in the external endpoint(s) may be impossible. By enabling the protected network to detect the security risk(s) in the external endpoint(s), such security risks and/or potential penetration of attacker(s) into the external endpoint(s) may be detected without actively and/or directly installing the security mechanisms, detection means and/or the like in the external endpoint(s). This may increase the security circle of the organization itself (the protected network) as well as serve to alert the supply chain vendors (the external endpoint) of the potential security threat(s) they may be exposed to.
Moreover, by adapting the deception data objects according to the observed communication and/or activity characteristics of the external user(s) using the access client(s) from the external endpoint(s), the deception data objects may better emulate the real processing environment and thus may appear genuine even to advanced and sophisticated attackers. Periodically and/or dynamically updating the deception data objects may further increase the genuine appearance of the deception data objects. Dividing the external user(s) into groups that may be targeted with different deception policies and/or parameters may further increase the genuine appearance of the deception data objects on one hand while allowing for improved detection on the other hand.
Furthermore, expanding the detection capabilities of the protected network to the external endpoint(s) may allow for high scalability over large organizations, networks and/or systems.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless
transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Referring now to the drawings, FIG. 1 is a flowchart of an exemplary process for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in the external endpoint(s), according to some embodiments of the present invention. A process 100 is executed at a protected network to monitor and analyze communication of a protected network with one or more external endpoints in order to detect potential exposure of the external endpoint(s) to unauthorized operation(s) and/or malicious attack(s). The protected network may be, for example, an organization network, an institution network and/or the like while the external endpoint(s) may be used by, for example, a supply chain vendor (supplier), a 3rd party, a services provider and/or the like. The external endpoint(s) may further be part of one or more other networks external to (not part of) the protected network. The protected network is owned by an organization, i.e. it is endemic to the organization. The external endpoint(s) may communicate with the protected network for accessing one or more resources, for example, endpoints, services, websites, data resources, applications and/or the like of the protected network. In particular, the external endpoint(s) may not be controlled from the protected network and hence may not be protected since protection components may not be actively and/or directly installed by the protected network in the external endpoint(s).
The process 100 allows initiating deception campaigns to extend protection of the protected network to the external endpoint(s) by creating a deception environment that emulates a real processing environment of the protected network while co-existing with the real processing environment. The deception campaigns may be launched to create, maintain and monitor the deception environment. One or more external users, whether human users and/or automated tools, may use one or more access clients executed by the external endpoint(s) to access, connect, communicate and/or interact with one or more of the resources of the protected network. In particular, the external user(s) using the access client(s) may use one or more (user) accounts created for them within the protected network to allow the external user(s) to access the protected network.
The deception environment comprises several deception components, for example, one or more deception resources such as, for example, a decoy OS, a deception application, a deception service, a deception website, a deception database and/or the like adapted according to the characteristics of resources of the protected network, for example, OS(s), applications, services, data resources, websites and/or the like. The deception resources may be launched on one or more physical and/or virtual decoy endpoints. The deception components further comprise a plurality of deception data objects (breadcrumbs) which may be configured to interact with the deception resources. During communication with the protected network, one or more deception data objects may be deployed in the access client(s) used by the external users from the external endpoint(s). This is done to attract potential attacker(s) to use the deception data objects when accessing the protected network and trying to exploit the protected network. The deception data objects are typically of the same type(s) as valid data objects used to interact with the real resources available at the protected network such that the deception environment efficiently emulates and/or impersonates the real processing environment of the protected network and/or a part thereof.
When used, instead of interacting with the real resources, the deception data objects may interact with the deception resource(s) respectively. Therefore, analyzing the interaction of the deception data object(s) may reveal the potential unauthorized operation(s) and/or malicious attack(s) since the use of the deception data object(s) may be indicative of a potential attacker. This may allow expanding the deception
environment to the external endpoint(s) and detecting potential security risks in the external endpoint(s) which may typically be uncontrolled and/or unprotected.
The deception data objects may be updated periodically to avoid stagnancy and to genuinely mimic a real and dynamic environment with the deception data objects appearing as valid data objects such that the potential attacker believes the emulated deception environment is a real (valid) one. Optionally, the deception campaign(s) may target one or more groups of the external users according to one or more typical activity characteristics of the external users using the access client(s) within the protected network, for example, the type of the external endpoint, the operations allowed for users of the external endpoint and/or the like. As such, the deception data objects may be adapted for one or more of the external users accessing the protected network from the external endpoint(s) according to their activity characteristics.
Reference is also made to FIG. 2A, FIG. 2B, FIG. 2C and FIG. 2D, which are exemplary embodiments of a system for monitoring interaction of one or more external endpoints with a deception environment of a protected network in order to detect potential security risks in external endpoint(s), according to some embodiments of the present invention. One or more exemplary systems 200A, 200B, 200C and/or 200D may be used to execute a process such as the process 100 to launch one or more deception campaigns for creating the deception environment in a protected network 235 in order to detect and/or alert of potential security risks in external endpoint(s) 251 by monitoring and analyzing the communication and/or interaction of the external endpoint(s) 251 with the deception environment. One or more of the external endpoints 251 may be part of one or more external networks 250, i.e. networks which are not part of the protected network 235. While co-existing with the real processing environment of the protected network 235, the deception environment is separated from the real processing environment to maintain partitioning between the deception environment and the real processing environment.
The systems 200A, 200B, 200C and/or 200D include the protected network 235 that comprises a plurality of endpoints 220 connected to a network 230 facilitated through one or more network infrastructures, for example, a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a metropolitan area network (MAN) and/or the internet 240.
The protected network 235 may be a physical protected network that may be a centralized single location network where all the endpoints 220 are on premises or a distributed network in which the endpoints 220 may be located at multiple physical and/or geographical locations or sites. The endpoint 220 may be a physical device, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, a modem, a hub, a bridge, a switch, a router, a printer and/or any network connected device having one or more processors. The endpoint 220 may further be a virtual device, for example, a virtual machine (VM) executed by one or more of the physical devices. The virtual device may provide an abstracted and platform-dependent and/or independent program execution environment. The virtual device may imitate operation of dedicated hardware components, operate in a physical system environment and/or operate in a virtualized system environment. The virtual devices may be utilized as system VMs, process VMs, application VMs and/or other virtualized implementations. Each of the endpoints 220 may provide one or more (real) resources 222, for example, an OS, an application, a service, a website, a utility, a tool, a process, an agent, a data resource, a data record, a storage resource and/or the like. The virtual endpoints 220 may also be instantiated through one or more cloud services 245, for example, Amazon Web Services (AWS), Google Cloud, Microsoft Azure and/or the like. The virtual endpoints 220 may also be provided as a service through one or more hosted services available from the cloud service(s) 245, for example, software as a service (SaaS), platform as a service (PaaS), Network as a Service (NaaS) and/or the like.
The protected network 235 may further be a virtual protected network hosted by one or more cloud services 245. The protected network 235 may also be a combination of the physical protected network and the virtual protected network.
The physical protected network 235 as implemented in the system 200A further includes one or more decoy servers 201, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, an endpoint and/or the like serving as a decoy endpoint. Additionally and/or alternatively, the decoy endpoint is utilized through one or more of the endpoints 220. The decoy server 201 as well as each of the endpoints 220 comprises a processor(s), a program store and a network interface for connecting to the network 230. Optionally, the decoy server 201
and/or the endpoint(s) 220 include a user interface for interacting with one or more users 260, for example, an information technology (IT) person, a system administrator and/or the like.
The processor(s), homogenous or heterogeneous, may include one or more processing nodes arranged for parallel processing, as clusters and/or as one or more multi-core processor(s). The user interface may include one or more human-machine interfaces, for example, a text interface, a pointing device interface, a display, a touchscreen, an audio interface and/or the like. The program store may include one or more non-transitory persistent storage devices, for example, a hard drive, a Flash array and/or the like. The program store may further comprise one or more network storage devices, for example, a storage server, a network accessible storage (NAS), a network drive, and/or the like. The program store may also include one or more volatile devices, for example, a Random Access Memory (RAM) component and/or the like.
The program store may be used for storing one or more software modules each comprising a plurality of program instructions that may be executed by the processor(s) from the program store. The software modules may include, for example, one or more deception resources 210, for example, a decoy OS, a deception application, a deception service, a deception website, a deception database and/or the like that may be created, configured and/or executed by the processor(s) to form a deception environment emulating a (real) processing environment within the protected network 235. The deception resources 210 may be executed by the decoy server 201 in a naive implementation as shown for the system 200A and/or over one or more nested decoy VMs 203 serving as the decoy endpoint(s) hosted by the decoy endpoint 220A as shown for the system 200B.
In the system 200B in which the deception resource(s) 210 are executed by the decoy VM(s) 203, the decoy VM(s) 203 serving as the decoy endpoint(s) may be instantiated through a virtualization infrastructure over one or more hosting endpoints such as the decoy server 201 and/or endpoint 220A. The virtualization infrastructure may utilize, for example, Elastic Sky X (ESXi), XEN, Kernel-based Virtual Machine (KVM) and/or the like. The user 260 may interact with the campaign manager 216 and/or the deception resources 210 through the user interface of the hosting endpoint(s). Additionally and/or alternatively, the user 260 may use one or more applications, for
example, the local agent, the web browser and/or the like executed on one or more of the endpoints 220 to interact remotely over the network 230 with the campaign manager 216 executed by the hosting endpoint(s). Optionally, one or more of the other endpoints 220 executes the campaign manager 216 that interacts over the network 230 with the hosting endpoint(s) 220A which host the deception resources 210.
In some embodiments of the present invention, as shown for the system 200C, the deception environment, in particular, the decoy resources may be executed and/or provided through computing resources available from the cloud service(s) 245 serving as the decoy endpoint(s). The deception resources 210 may be utilized as one or more decoy VMs 205 instantiated using the cloud service(s) 245 and/or through one or more hosted services 207, for example, SaaS, PaaS, NaaS and/or the like that may be provided by the cloud service(s) 245.
In some embodiments of the present invention, as shown for the system 200D, the protected network 235 and/or part thereof is a virtual protected network that may be hosted and/or provided through the cloud service(s) 245. As a growing trend, many organizations may transfer and/or set up their infrastructure, comprising one or more of the resources 222, for example, a webserver, a database, an internal mail server, an internal web application and/or the like, in the cloud, for example, through the cloud service(s) 245. The virtual protected network may be provided through the cloud service(s) 245 as, for example, one or more private networks, virtual private clouds (VPCs), private domains and/or the like. Each of the private cloud(s), private network(s) and/or private domain(s) may include one or more virtual endpoints 220 that may be, for example, instantiated through the cloud service(s) 245, provided as the hosted service 207 and/or the like, where each of the virtual endpoints 220 may execute one or more of the deception applications 212. In such deployment(s) the deception resource(s) 210, for example, the decoy OS(s) may be executed as independent instance(s) deployed directly to the cloud service(s) 245 using an account for the cloud service 245, for example, AWS VPC, provided by AWS for the organizational infrastructure. Typically, users of the virtual protected network 235 may remotely access, communicate and/or interact with the applications 212 by using one or more access applications 225, for example, a local agent, a local service and/or a web browser executed on one or more of the endpoints 220 and/or one or more client terminals 221. The client terminal(s) 221 may
include, for example, a computer, a workstation, a server, a processing node, a network node, a Smartphone, a tablet, an endpoint such as the endpoint 220 and/or the like.
As discussed before, the protected network 235 may be a combination of the physical network as seen in the systems 200A, 200B and/or 200C and the virtual protected network 235 as seen in the system 200D. The protected network 235, which may be distributed over two or more subnetworks, physical and/or virtual, may form a single logical protected network 235.
One or more additional software modules, for example, a campaign manager 216 may be executed by one or more of the endpoints 220, the decoy server 201 and/or one or more of the decoy VMs 203. Additionally and/or alternatively the campaign manager 216 may be provided by the cloud services 245. The campaign manager 216 may be used to create and/or control one or more deception campaigns to create the deception environment and monitor the interaction between the external endpoint(s) 251 and the deception environment. One or more users 260, for example, a system administrator, an IT person and/or the like using the campaign manager 216 may create, adjust, configure and/or launch one or more of the deception resources 210 on one or more of the decoy endpoints.
The campaign manager 216 provides a Graphical User Interface (GUI) to allow the user(s) 260 to create, configure and/or launch the deception campaign(s). The GUI is described in detail in PCT Application No. IB2016/054306 titled "Decoy and Deceptive Data Object Technology" filed Jul. 20, 2016, the contents of which are incorporated herein by reference in their entirety.
The user(s) 260 may interact with the campaign manager 216 according to the deployment implementation. For example, as shown for the systems 200A and/or 200B, the user(s) 260 may interact with the campaign manager 216 directly through the user interface, for example, a GUI utilized through one or more of the human-machine interface(s) of the decoy server 201. Optionally, the user 260 interacts with the campaign manager 216 remotely over the network 230 using one or more access applications such as the access application 225 executed on one or more of the endpoints 220. Additionally and/or alternatively, the user(s) 260 may interact with the campaign manager 216 from a remote location over the internet 240 using one or more client terminals such as the client terminals 221. In other embodiments of the present
invention the campaign manager 216 is executed on one or more of the endpoints 220. In such case the user(s) 260 may interact with the campaign manager 216 directly through the user interface of the endpoint(s) 220 executing the campaign manager 216 or remotely using the access application 225 from the endpoints 220 and/or the client terminals 221. Similarly, when the campaign manager is provided by the cloud service(s) 245, as shown for the systems 200C and/or 200D, the user(s) may interact with the campaign manager 216 remotely using the access application 225 from the endpoints 220 and/or the client terminals 221.
Optionally, the campaign manager 216 is not executed by the same platform executing and/or providing the deception environment. For example, as shown for the systems 200A and/or 200B, the deception environment may be executed by the decoy server 201 and/or the decoy endpoint 220A while the campaign manager 216 is executed by one or more of the other endpoints 220. In such case the campaign manager 216 controls the deception environment over the network 230. In another example, the deception environment is provided by the cloud service(s) 245 as shown for the system 200C while the campaign manager 216 is executed by one or more of the endpoints 220. In such case the campaign manager 216 controls the deception environment remotely through the network 230 and/or the internet 240.
For brevity, only several implementation deployments of the protected network 235 and the deception environment are presented; however, as will be appreciated by a person skilled in the art, a plurality of other combinations is feasible. These combinations may include the combinations of the protected network comprising the physical network and/or the virtual network, combinations of the deception environment, in particular the deception resources executed locally and/or by the cloud services 245 and any combination of the two. The implementation deployments are described in more detail in PCT Application No. IB2016/054306 titled "Decoy and Deceptive Data Object Technology" filed Jul. 20, 2016, the contents of which are incorporated herein by reference in their entirety.
The external endpoint(s) 251 may be, for example, a computer, a workstation, a server, a processing node, a cluster of processing nodes, a network node, a Smartphone, a tablet, and/or any network connected device. The external endpoint 251 may further be utilized as a remote service provided by one or more of the cloud services 245 and
accessed by the external user(s) from one or more client terminals such as the client terminal 221. One or more of the external users, for example, a human user and/or an automated user may use one or more access clients 255 such as the access application 225 executed by the external endpoint(s) 251 to access the protected network 235 over the network 230, and typically through the internet 240. Using the access client(s) 255, the external users may access, connect, communicate and/or interact with one or more resources of the protected network 235. The access client 255 may further include one or more remote access tools, protocols, software packages and/or the like for remotely accessing the resources of the protected network 235, for example, Remote Desktop (RDP), Virtual Network Computing (VNC) and/or the like.
Optionally, the access client 255 is a proprietary access client 255 provided to the external user(s) by the organization utilizing the protected network 235, i.e. the "owner" and/or the vendor of the protected network 235.
One or more of the external users accessing the protected network 235 from the external endpoint(s) 251 may be allocated (given) one or more accounts created for them for accessing one or more of the resources 222 of the protected network 235. The account may be a collection of data associated with a particular user of a multiuser computer system. Each account may typically comprise credentials, i.e. a user name and (almost always) a password, and defines one or more access privileges for the respective user, for example, a security access level, a disk storage space and/or the like. Usually, one or more users 260 in the organization (the protected network 235), for example, the system administrator, the IT person and/or the like are responsible for setting up and overseeing the accounts.
The user(s) 260 may further use the campaign manager 216 to create, deploy and/or update a plurality of deception data objects 214 (breadcrumbs) deployed in one or more of the access clients 255. The deployed deception data objects 214 are configured to interact with respective one or more of the deception applications 212. The deception data objects 214 are deployed in the access client(s) 255 to tempt the potential attacker(s) attempting to access resource(s) of the protected network 235 to use the deception data objects 214. The deception data objects 214 are configured to emulate valid data objects that are typically used for interacting with the resource(s) 222.
As discussed before, the process 100 may be executed to launch one or more deception campaigns. Each deception campaign may include creating, updating and monitoring the deception environment in the protected network 235 in order to detect and/or alert of potential attackers trying to penetrate the protected network 235 through access means granted to the external users accessing the protected network 235 from the external endpoint(s) 251. Each deception campaign may be defined according to a required deception scope and may be constructed according to one or more activity characteristics of the external user(s).
As shown at 102, the process 100 starts with the campaign manager 216 monitoring and analyzing communication between the access client 255 executed by the external endpoint 251 and the protected network 235. In particular, the campaign manager 216 monitors and analyzes the communication between the access client(s) 255 and the resource(s) 222, for example, an Enterprise Resource Planning (ERP) system, a development platform, a Human Resources (HR) system, a sales system, a finance system, an IT service, a Customer relationship management (CRM) system, a database service and/or the like.
Optionally, one or more of the external users may use an RDP application initiated from the external endpoint 251 and serving as the access client 255 to access one or more of the endpoints 220 within the protected network 235. The external user(s) may further access one or more of the endpoints 220 using other remote access protocols, for example, VNC, such as RealVNC, which may be used for open source project development. In such cases the access client 255 may be an access tool provided by user(s) 260 of the protected network 235, for example, the system administrator, the IT person and/or the like such that the access tool is provided by the organization which utilizes the protected network 235, i.e. the "owner" of the protected network 235.
Optionally, as discussed before, one or more of the external users may use the access client 255 provided by the organization utilizing the protected network 235, i.e. the "owner" of the protected network 235.
Typically, the external users using the access client(s) 255 may interact with the resource(s) 222 using one or more accounts allocated (given) to the external user(s), whether human users and/or automated users. The account may be a collection of data associated with a particular user of a multiuser computer system. Each account may
comprise credentials, i.e. a user name and (typically) a password, and defines one or more access privileges for the respective user, for example, a security access level, a disk storage space and/or the like. Usually, one or more of the users 260 of the protected network 235, for example, the system administrator, the IT person and/or the like create, set up and/or maintain the accounts.
Optionally, the accounts are created with one or more different attributes to differentiate between groups of users in order to create an efficient deception environment that may allow better classification of the potential security risk to the external endpoint(s) 251. The groups may be defined according to one or more activity characteristics of the plurality of users in the protected network 235. For example, the accounts created for a group of external user(s) may be marked differently from accounts allocated to user(s) of the protected network 235. In another example, the accounts may be marked to differentiate between groups of external users accessing the resource(s) 222 from different external endpoints 251. In another example, the accounts may mark action group(s) according to the type of operation(s) allowed for the resource 222, for example, view data, manipulate data, retrieve data, and/or the like. In another example, the accounts may be marked according to the type of department of the protected network 235 that may be accessed, for example, finance, production, development, IT, HR and/or the like. This may allow targeting the deception campaigns more effectively by adjusting the deception environment according to the account used to access the protected network 235. This may further allow the campaign manager 216 to more effectively monitor the communication with and/or within the protected network 235 as the campaign manager 216 may focus on user(s) of the external endpoint(s) 251 rather than on (internal) user(s) of the protected network 235. The campaign manager 216 may also concentrate on monitoring the communication with the external endpoint(s) 251 in which accounts that were previously suspected as compromised are used.
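As a purely illustrative sketch (account fields, group names and values below are hypothetical), such marking attributes may be represented as simple key/value pairs on each account, which a campaign manager may then filter on when targeting a campaign at a particular group:

    # Sketch: hypothetical account records with marking attributes that
    # differentiate external-user groups from internal users of the protected
    # network, and a helper to select the accounts of a single group.
    external_accounts = [
        {"user": "vendor-a.builder", "origin": "supply-chain", "endpoint": "251A",
         "allowed_ops": ["view"], "department": "production"},
        {"user": "acq-co.finance", "origin": "acquired-network", "endpoint": "251B",
         "allowed_ops": ["view", "retrieve"], "department": "finance"},
    ]

    def accounts_in_group(accounts, **criteria):
        # e.g. all supply-chain accounts limited to the production department
        return [a for a in accounts if all(a.get(k) == v for k, v in criteria.items())]

    target_group = accounts_in_group(external_accounts,
                                     origin="supply-chain", department="production")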
The campaign manager 216 monitors the interaction of the access client(s) 255 with the respective resource(s) 222 to identify one or more activity characteristics in the communication with the external endpoint(s) 251, for example, the type of the external endpoint 251, the type of the external user, the type of interaction, the type of resource 222, the type of operation(s) executed by the external user and/or the like.
The campaign manager 216 may further divide the external users into groups according to one or more of the activity characteristics. For example, the campaign manager 216 may create a group of users using a certain one of the external endpoints 251. In another example, the campaign manager 216 may create a group of users which access a certain one of the resources 222. In another example, the campaign manager 216 may create a group of users according to their interaction privileges, for example, allowed to perform a certain operation in the protected network 235.
As shown at 104, the campaign manager 216 creates the deception data objects 214 and defines the interaction with one or more of the deception applications 212 by declaring the relationship(s) of each of the deception data objects 214. The campaign manager 216 creates the deception data objects 214 according to the activity characteristic(s) detected while monitoring the communication and/or interaction between the access client(s) 255 and the resources 222. The deception data objects 214 are created to emulate valid data objects used to interact with the resource(s) 222. Creation, initiation and launching of the deception resources 210 of the deception environment, in particular the decoy OS(s) and the deception applications, is out of scope of the present invention and is described in detail in PCT Application No. IB2016/054306 titled "Decoy and Deceptive Data Object Technology" filed Jul. 20, 2016, the contents of which are incorporated herein by reference in their entirety.
The campaign manager 216 may create the deception data objects 214 according to the activity characteristic(s) and/or in response to operation(s) and/or action(s) performed by the external user(s). Typically, the campaign manager 216 creates the deception data objects 214 automatically. However, the user(s) 260 may interact with the campaign manager 216 to define a policy, scope, parameter(s), activity characteristic(s) and/or the like for the deception campaign(s). Optionally, the user(s) 260 may interact with the campaign manager 216 to specifically create one or more of the deception data objects 214. The campaign manager 216 may create the deception data objects 214 in addition to the normal response typically taken by the accessed resource 222 or instead of the normal response. To illustrate this, assume that in response to a certain operation initiated by the external user through the access client 255, the resource 222 responds with an outcome Y. The campaign manager 216 may therefore create one or more deception objects Y' when detecting the certain operation. As a result, once the resource 222 is accessed, the access client 255 and/or the external endpoint 251 executing the access client 255 may be updated with Y and Y', or possibly only with Y'.
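A minimal sketch of this behavior (the function and variable names below are hypothetical) is:

    # Sketch: in response to a monitored operation whose normal outcome is Y, one
    # or more deception objects Y' are returned alongside Y, or instead of it.
    def build_response(real_outcome, deception_outcomes, replace_real=False):
        """real_outcome is Y; deception_outcomes are the created Y' objects."""
        return list(deception_outcomes) if replace_real else [real_outcome, *deception_outcomes]

    # The access client / external endpoint is then updated with Y and Y',
    # or possibly only with Y'.
    payload = build_response("supplier-report-2017.pdf", ["supplier-report-archive.pdf"])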
The deception data objects 214 may include, for example, one or more of the following:
Cookies and/or history log entries. In case the access client 255 is browsing, for example, to a supplier-facing website (an exemplary resource 222) of the protected network 235, typically, a cookie is added to the access client 255 and the website's address is added to the history log of the access client 255. In such a case, the campaign manager 216 may, for example, open an iframe to a deception website (an exemplary deception resource 210 corresponding to the exemplary resource 222) in which different cookie(s) and/or different history log entry(s) may be added to the access client 255. The different cookie(s) and/or history log entry(s) may point to the deception environment, for example, a false website, an identical system that is not used by real users (internal and/or external), a proxy that redirects to a real system within the protected network 235, a different service, such as, for example, email, SharePoint and/or the like.
JavaScripts, pop-up windows, pop-under windows and/or any other browsing technique used to browse to another website(s). In case the access client 255 is browsing, for example, to the supplier-facing website, the campaign manager 216 may, for example, initiate a JavaScript, a pop-up window and/or a pop-under window to direct the access client 255 to the deception website in which the different cookie(s) and/or the different history log entry(s) may be added to the access client 255. The JavaScripts, the pop-up windows and/or the pop-under windows may maintain a visibility similarity with the supplier-facing website such that they do not significantly alter the browsing experience of the external user(s) using the access client(s) 255.
Credential object. In case the external user(s) use the RDP serving as the access client 255, upon login of the RDP, the campaign manager 216 may install additional fake credentials on the external endpoint 251 from which the RDP is initiated.
Configuration file. In case the external user(s) use the RDP serving as the access client 255, upon login of the RDP, the campaign manager 216 may install a fake
remote access configuration file on the external endpoint 251 from which the RDP is initiated.
Deception file, password and/or the like. In case the external user(s) use the VNC serving as the access client 255, the campaign manager 216 may install additional fake credentials, files, password(s) and/or the like on the external endpoint 251 from which the VNC is initiated.
In some cases, the access client 255 may be offered automatic password completion when accessing a resource, a service and/or an application at the remote network. In such case(s), the campaign manager 216 may manipulate the automatic password completion and provide fake password(s) to the access client 255 accessing the protected network 235.
Archive files, for example, zip, rar, tar.gz and/or the like. In case the external user(s) using the access client 255 perform a backup and/or retrieve data from the protected network 235, the campaign manager 216 may insert one or more deception data objects 214 into one or more of the created archive files.
An account. The campaign manager 216 may provide the external user(s) one or more false accounts for accessing one or more of the deception applications 212. The campaign manager 216 may configure each of the deception data objects 214 to interact with one or more of the deception resources 210. The campaign manager 216 may configure the deception data objects 214 and define their relationships according to a deception policy and/or methods defined for the deception campaign. Naturally, the campaign manager 216 creates and configures the deception data objects 214 according to the resource(s) 222 accessed by the access client(s) 255. The campaign manager 216 also defines the interaction with the deception resource(s) 210 which map the accessed resource(s) 222. For example, a deceptive data object 214 of type "browser cookie" may be created to interact with one or more deception resources 210, for example, a fake website and/or a deception application launched using, for example, a deception resource 210 of type "browser" created during the deception campaign. As another example, a deceptive data object 214 of type "compressed file" may be created for external user(s) using a certain one of the external endpoints 251. As another example, a deceptive data object 214 of type "credentials" may be created for users accessing a certain application 212 of type "ERP".
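By way of hedged illustration only (host names, file names and values below are hypothetical and not part of any specific implementation), the following sketch shows two of the object types listed above: a deception cookie and history log entry pointing at a deception website, and a deception file inserted into an archive retrieved by the external user:

    # Sketch 1: a deception cookie and history log entry that point at a
    # deception website rather than the real supplier-facing website; in practice
    # these might be set through an iframe served alongside the real response.
    from http import cookies

    deception_cookie = cookies.SimpleCookie()
    deception_cookie["portal_session"] = "d3c0y-7f31a"                 # bait value
    deception_cookie["portal_session"]["domain"] = "suppliers-int.example.com"
    deception_cookie["portal_session"]["path"] = "/"
    deception_history_entry = {"url": "https://suppliers-int.example.com/login",
                               "title": "Supplier Portal - Sign in"}
    print(deception_cookie.output())   # the Set-Cookie header the access client would receive

    # Sketch 2: inserting a deception file into a zip archive that the external
    # user retrieves or backs up from the protected network.
    import io
    import zipfile

    def add_breadcrumb_to_zip(original_zip_bytes, breadcrumb_name, breadcrumb_data):
        buf = io.BytesIO(original_zip_bytes)
        with zipfile.ZipFile(buf, mode="a") as zf:     # append to the existing archive
            zf.writestr(breadcrumb_name, breadcrumb_data)
        return buf.getvalue()

    empty = io.BytesIO()
    zipfile.ZipFile(empty, mode="w").close()           # an empty archive for the example
    baited = add_breadcrumb_to_zip(empty.getvalue(),
                                   "backup/vpn_credentials.txt",
                                   "host=decoy-vpn.example.com user=svc_backup pass=Zz9!aQ")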
Optionally, the campaign manager 216 periodically and/or dynamically updates one or more of the deception data objects 214 to impersonate an active real (valid) processing environment such that the deception data objects 214 appear to be valid data objects to lead the potential attacker to believe the emulated deception environment is a real one.
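A minimal sketch of such a periodic refresh (the field names, value format and rotation policy are illustrative assumptions) might look as follows:

    # Sketch: rotate the bait value and refresh its timestamp on a schedule so
    # the deception data object keeps resembling an actively used, valid object.
    import secrets
    import time

    def refresh_breadcrumb(breadcrumb):
        breadcrumb["fake_value"] = "Pw-" + secrets.token_urlsafe(8)
        breadcrumb["last_modified"] = time.time()
        return breadcrumb

    cred = {"object_type": "credentials", "fake_value": "Pw-initial", "last_modified": 0.0}
    refresh_breadcrumb(cred)   # would be invoked periodically by the campaign manager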
Optionally, in case the external user(s) use the proprietary access client 255 provided by the organization which is the "owner" of the protected network 235, the proprietary access client 255 may itself create the deception data objects 214. The proprietary access client 255 may monitor the communication with the resource(s) 222 and create the deception data objects 214 according to the detected activity characteristic(s). Additionally and/or alternatively, the proprietary access client 255 may create the deception data objects 214 according to instructions received from the campaign manager 216 that monitors the communication of the proprietary access client 255 with the resource(s) 222.
As shown at 106, the campaign manager 216 is used to deploy the deception data objects 214 in the access client 255 and/or the external endpoint 251 executing the access client 255.
The deception data objects 214 are intended (once deployed) to attract the potential attackers who may have gained access and/or control of the external endpoint 251 and may be trying to penetrate the protected network 235. To create an efficiently deceptive campaign, the deception data objects 214 may be created with one or more attributes that may be attractive to the potential attacker, for example, a name, a type and/or the like. The deception data objects 214 may be created to attract the attention of the attacker using an attacker stack, i.e. tools, utilities, services, applications and/or the like that are typically used by the attacker. As such, the deception data objects 214 may not be visible to users using a user stack, i.e. tools, utilities, services, applications and/or the like that are typically used by a legitimate user. Taking this approach may allow creating the deception campaign in a manner such that the legitimate user would need to go out of his way and perform unnatural operations and/or actions to detect, find and/or use the deception data objects 214, while it may be the most natural course of action or method of operation for the attacker. For example, browser cookies are rarely accessed and/or reviewed by the legitimate user(s). At most, the cookies may be cleared en-masse.
However, one of the main methods for the attacker(s) to obtain website credentials and/or discover internal websites visited by the legitimate user(s) is to look for cookies and/or history log entries and analyze them.
Optionally, the deception data objects 214 are created according to the groups of external users using the access client(s) 255 to access the resource(s) 222.
Reference is now made to FIG. 3, which is a block diagram of exemplary building blocks of a deception environment for detecting potential security risks in external endpoint(s) communicating with a protected network, according to some embodiments of the present invention. A deception environment 300 created using a campaign manager such as the campaign manager 216 comprises a plurality of deception data objects 214 deployed in one or more access clients such as the access client 255 accessing a protected network such as the protected network 235 from one or more external endpoints such as the external endpoint 251. The campaign manager 216 is used to define relationships 320 between each of the deception data objects 214 and one or more of a plurality of deception resources such as the deception resources 210, for example, deception applications 310. The campaign manager 216 is also used to define relationships 322 between each of the deception applications 310 and one or more of a plurality of other deception resources 210, for example, decoy OSs 312. The deception data objects 214, the deception applications 310 and/or the decoy OSs 312 may be arranged in one or more groups 302, 304 and/or 306 respectively according to one or more of the activity characteristics of the external user(s). Once deployed, operations that use data included in the deception data objects 214 interact with the deception application(s) 310 according to the defined relationships 320 that in turn interact with the decoy OS(s) 312 according to the defined relationships 322. The defined relationships 320 and/or 322 may later allow detection of one or more unauthorized operations by monitoring and analyzing the interaction between the deception data objects 214, the deception applications 310 and/or the decoy OSs 312.
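As an illustrative sketch only (the identifiers below are invented), the relationships 320 and 322 may be held as simple mappings that also let the monitoring step check whether an observed interaction follows a declared deception path:

    # Sketch: relationships 320 map deception data objects to deception
    # applications; relationships 322 map deception applications to decoy OSs.
    relationships_320 = {
        "cookie:supplier-portal": "deception-app:false-website",
        "credentials:erp-login": "deception-app:remote-desktop-service",
    }
    relationships_322 = {
        "deception-app:false-website": "decoy-os:web-decoy-01",
        "deception-app:remote-desktop-service": "decoy-os:win-decoy-02",
    }

    def declared_path(data_object_id):
        app = relationships_320.get(data_object_id)
        return (data_object_id, app, relationships_322.get(app))

    # ('credentials:erp-login', 'deception-app:remote-desktop-service', 'decoy-os:win-decoy-02')
    print(declared_path("credentials:erp-login"))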
Reference is made once again to FIG. 1.
As shown at 108, the campaign manager 216 continuously monitors the interaction between the deception data objects 214 and the deception resource(s) 210 in order to detect a potential security risk in the external endpoint(s) 251 and/or a potential attacker trying to penetrate the protected network 235. The potential attacker may be detected by
identifying one or more unauthorized operations that are initiated in the protected network 235 through the access client(s) 255 using data retrieved from the deception data object(s) 214. For example, the campaign manager 216 may detect usage of fake password(s) provided previously to the access client 255, for example, the fake credentials, the fake automatic password completion and/or the like.
In order to identify the unauthorized operation(s), the campaign manager 216 may monitor the deception resource(s) 210 at one or more levels and/or layers, for example (a minimal log monitoring sketch is given after the list):
Network monitoring in which the campaign manager 216 monitors egress and/or ingress traffic at one or more of the endpoints 220. The campaign manager 216 may further record the monitored network traffic.
Log monitoring in which the campaign manager 216 monitors log records created by one or more of the deception resource(s) 210, for example, the deception application(s) 310.
OS monitoring in which the campaign manager 216 monitors interaction made by one or more of the deception applications 310 with the deception resource(s) 210, for example, the decoy OS(s) 312.
Kernel level monitoring in which the campaign manager 216 monitors and analyzes activity at the kernel level of the decoy OS(s) 312.
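A minimal log monitoring sketch follows; the log format, planted values and field separators are illustrative assumptions rather than a prescribed implementation:

    # Sketch: scan log records produced by the deception resources for any value
    # that was planted in a deployed deception data object, e.g. a fake user name
    # or fake password, and report a hit as a potential unauthorized operation.
    import re

    PLANTED_VALUES = {"svc_backup", "Svc1-P@ssw0rd-2017", "d3c0y-7f31a"}

    def scan_log_line(line):
        tokens = set(re.split(r"[\s,;=]+", line))   # loose tokenization of the record
        return tokens & PLANTED_VALUES

    hits = scan_log_line("2017-04-27 10:02:11 LOGIN user=svc_backup src=203.0.113.7 result=ok")
    if hits:
        print("potential unauthorized operation, planted data used:", hits)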
The campaign manager 216 may further detect usage of data contained in certain deception data object(s) 214 for accessing resources of the protected network 235 which are different from the resources for which the certain deception data object(s) 214 were originally created. For example, assume a certain fake password deception data object 214 is created using the campaign manager 216 to emulate a password for accessing a first resource 222A and is configured to interact with a first deception resource 210A. The campaign manager 216 may detect usage of the created fake password for accessing a second resource 222B, i.e. not the (first) resource 222A the fake password was originally created for. Moreover, the campaign manager 216 may detect interaction of the fake password with one or more other deception resources 210.
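In sketch form (the mapping, names and values are hypothetical), this cross-resource check may be as simple as recording which resource each planted value was declared for and comparing it with the resource actually accessed:

    # Sketch: the fake password was declared for resource 222A only, so its use
    # against any other resource is reported as well.
    declared_for = {"Svc1-P@ssw0rd-2017": "resource-222A"}

    def check_usage(password, accessed_resource):
        origin = declared_for.get(password)
        if origin is None:
            return None                                   # not a planted value
        if accessed_resource != origin:
            return "planted password for %s reused against %s" % (origin, accessed_resource)
        return "planted password used against %s" % origin

    print(check_usage("Svc1-P@ssw0rd-2017", "resource-222B"))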
The campaign manager 216 may also detect usage of data contained in certain deception data object(s) 214 used by an external user using an external endpoint 251A who is not the same external user for whom the certain deception data object(s) 214 was originally created. For example, assume a certain deception data object 214A
is created using the campaign manager 216 and deployed in an access client 255A of a first user. The campaign manager 216 may detect usage of data contained in the deception data object 214A even when used by a second external user, whether using the same external endpoint 251 as the first external user and/or using a different external endpoint 251.
As shown at 110, the campaign manager 216 analyzes the data and/or activity detected during the interaction monitoring in order to identify the unauthorized operation that may indicate that the external endpoint 251 is compromised and/or that a potential attacker is trying to access the protected network 235. The campaign manager 216 may analyze the interaction to identify usage of data included, provided and/or available from one or more of the deception data objects 214. Based on the analysis, the campaign manager 216 may create one or more interaction events. The analysis conducted by the campaign manager 216 may include false positive analysis to avoid falsely identifying one or more operations initiated by one or more legitimate users, processes, applications and/or the like as potential unauthorized operations.
The interaction events may be created when the campaign manager 216 detects a meaningful interaction with one or more of the deception resources 210. The campaign manager 216 may create the interaction event when detecting usage of data that is included, provided and/or available from one or more of the deception data objects 214 for accessing and/or interacting with one or more of the deception resources 210. For example, the campaign manager 216 may create an interaction event when detecting an attempt to logon to a deception application 310 of type "remote desktop service" using fake credentials stored in a deception data object 214 of type "credentials". In another example, the campaign manager 216 may detect an access to a deception application 310 of type "false website" using the data retrieved from a deception data object 214 of type "cookie".
Optionally, the campaign manager 216 may be configured to create interaction events when detecting one or more pre-defined interaction types, for example, logging on to a specific deception application 310, executing a specific command, clicking specific button(s) and/or the like. The user(s) 260 may further define "scripts" that comprise a plurality of the pre-defined interaction types to configure the campaign
manager 216 to create an interaction event at detection of complex interactions between one or more of the deception components, i.e. the deception resource(s) 210 and/or the deception data object(s) 214.
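As a non-limiting sketch (the event identifiers below are hypothetical), such a "script" may be checked as an ordered subsequence of the monitored interaction types:

    # Sketch: a "script" is an ordered list of pre-defined interaction types; an
    # interaction event is raised when the monitored sequence contains the
    # script's steps in order (not necessarily consecutively).
    def script_matched(observed, script):
        it = iter(observed)
        return all(step in it for step in script)   # ordered-subsequence test

    script = ["logon:remote-desktop-service", "command:whoami", "click:export-button"]
    observed = ["logon:remote-desktop-service", "command:hostname",
                "command:whoami", "click:export-button"]
    if script_matched(observed, script):
        print("interaction event: monitored interactions matched a configured script")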
Optionally, the campaign manager 216 creates an activity pattern of the potential attacker by analyzing the identified unauthorized operation(s). Using the activity pattern, the campaign manager 216 may gather useful forensic data on the operations of the potential attacker and may classify the potential attacker in order to estimate a course of action and/or intentions of the potential attacker. The campaign manager 216 may then adapt the deception environment to tackle the estimated course of action and/or intentions of the potential attacker.
Optionally, the campaign manager 216 employs one or more machine learning processes, methods, algorithms and/or techniques on the identified activity pattern. The machine learning may serve to increase the accuracy of classifying the potential attacker based on the activity pattern. The machine learning may further be used by the campaign manager 216 to adjust future deception environments and deception components to adapt to the learned activity pattern(s) of a plurality of potential attacker(s). In addition, classifying the activity pattern may allow the campaign manager 216 to characterize potential attacker(s) detected in subsequent detection events and estimate their intentions at an early stage of the penetration sequence.
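Purely as an illustration of one possible approach (not a required implementation), a simple classifier over hand-crafted numeric features of the activity pattern could be trained with a generic machine learning library such as scikit-learn, assuming it is available; the features, labels and values below are invented for the example:

    # Sketch: classify attacker activity patterns from simple numeric features
    # extracted from the identified unauthorized operations (illustrative data).
    from sklearn.tree import DecisionTreeClassifier

    # features per observed pattern: [distinct deception resources touched,
    #                                 planted credentials used,
    #                                 kernel-level events observed]
    X = [[1, 1, 0], [4, 3, 2], [2, 1, 0], [6, 5, 3]]
    y = ["opportunistic", "targeted", "opportunistic", "targeted"]

    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(clf.predict([[5, 2, 1]]))   # estimated class of a newly observed pattern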
As shown at 112, the campaign manager 216 generates one or more alerts following the detection event indicating the potential unauthorized operation. The user(s) 260 may configure the campaign manager 216 to set an alert policy defining one or more of the events and/or combinations of events that trigger the alert(s). The campaign manager 216 may be configured during the creation of the deception campaign and/or at any time after the deception campaign is launched. The alert may be delivered to the user(s) 260 monitoring the campaign manager 216 and/or through any other method, for example, an email message, a text message, an alert in a mobile application and/or the like. Optionally, the campaign manager 216 generates one or more alerts to the external endpoint 251 from which the unauthorized operation is initiated. The campaign manager 216 may alert, for example, an external user, a system administrator, an IT person and/or the like of the external endpoint(s) 251 that are suspected to be compromised and exposed to a security risk. The campaign manager 216 may also alert an automated tool of the external endpoint 251, for example, a security system, to inform of the potential security risk.
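For illustration, a minimal sketch of an alert policy and its dispatch logic is given below; the event types, channel names and delivery mechanism are assumptions made for the example.

```python
# Minimal sketch (hypothetical policy format): route alerts to channels according
# to a user-defined alert policy, including notification of the external endpoint.
ALERT_POLICY = {
    "planted-credential-logon": ["email", "dashboard"],
    "complex-script-match": ["email", "text-message", "external-endpoint-notify"],
}

def dispatch_alert(event_type: str, detail: str) -> list[str]:
    """Return the channels the alert was (conceptually) delivered to for this event."""
    channels = ALERT_POLICY.get(event_type, ["dashboard"])
    for channel in channels:
        # Actual delivery (SMTP, SMS gateway, webhook to the external endpoint's
        # security system, etc.) would be wired in here.
        print(f"[{channel}] ALERT {event_type}: {detail}")
    return channels

dispatch_alert("complex-script-match", "scripted interaction from 203.0.113.7")
```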
The campaign manager 216 and/or the deception environment may be further configured to take one or more additional actions following the alert. One action may be pushing a log of potential unauthorized operation(s) using one or more external applications and/or services, for example, syslog, email and/or the like. The log may be pushed with varying levels of urgency according to the policy defined for the deception campaign.
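As one hedged example of such a push, using the Python standard library's syslog handler (the collector address and the urgency-to-severity mapping are assumptions of the example):

```python
# Minimal sketch: push a log of potential unauthorized operations over syslog,
# with the severity taken from the urgency defined in the campaign's policy.
import logging
import logging.handlers

logger = logging.getLogger("deception-campaign")
logger.setLevel(logging.INFO)
# The collector address is an assumption; point it at the organization's syslog server.
logger.addHandler(logging.handlers.SysLogHandler(address=("syslog.example.org", 514)))

URGENCY_TO_LEVEL = {"low": logging.INFO, "medium": logging.WARNING, "high": logging.CRITICAL}

def push_unauthorized_operation(description: str, urgency: str = "medium") -> None:
    """Forward one potential unauthorized operation to the external log service."""
    logger.log(URGENCY_TO_LEVEL.get(urgency, logging.WARNING),
               "unauthorized operation: %s", description)
```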
Moreover, the campaign manager 216 and/or the deception environment may be further configured to contain, within the deception environment, the unauthorized operation(s), which may typically be part of an attack vector of the potential attacker. As part of containing the attack, the campaign manager 216 may adjust, adapt and/or reconfigure the deception environment, for example, create, adjust and/or remove one or more deception resources 210, create, adjust and/or remove one or more deception data objects 214 and/or the like. This may allow isolating the potential attacker from the real processing environment of the protected network 235 while learning the activity pattern(s) of the attack vector and/or the potential attacker.
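A containment step of this kind could be sketched as follows; the campaign object and its methods are hypothetical placeholders for whatever orchestration interface the deception environment exposes.

```python
# Minimal sketch (hypothetical orchestration API): keep a detected attacker busy
# inside the deception environment by expanding the decoys that match the
# attacker's observed interests and retiring decoys that are being ignored.
def contain_attacker(campaign, attacker_profile):
    """Reconfigure the deception environment around an attacker's observed targets."""
    if "file-share" in attacker_profile.targets:
        campaign.create_deception_resource(kind="smb-share", name="finance-archive")
    if "remote-desktop" in attacker_profile.targets:
        campaign.create_deception_resource(kind="rdp-host", name="build-server-02")
    # Remove decoys the attacker ignored, to keep the environment believable
    # while the real processing environment stays isolated.
    for decoy in campaign.idle_decoys(older_than_days=7):
        campaign.remove_deception_resource(decoy)
```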
Optionally, the campaign manager 216 presents the user(s) 260 with real time and/or previously captured status information relating to the deception campaign(s), for example, created events, detected potential attackers, attack patterns and/or the like. The campaign manager 216 may present the status information, for example, through a dashboard provided in the GUI of the campaign manager 216. The campaign manager 216 may also present the status information through a remote access application, for example, a web browser and/or a local agent executed on one or more of the endpoints 220 and/or one or more of the client terminals 221 accessing the campaign manager 216 remotely over the network 230 and/or the internet 240.
Reference is now made to FIG. 4, which is a flowchart of an exemplary process for creating deception data objects deployed in an access client accessing a protected network from an external endpoint, according to some embodiments of the present invention. An exemplary process 400 may be executed to create deception environment components deployed in an access client such as the access client 255 used to access resource(s) such as the resource 210 of a protected network such as the protected network 235. The process 400 may be executed by an external endpoint such as the external endpoint 251 in a system such as, for example, the system 200A, 200B, 200C and/or 200D. The process 400 may be executed by one or more software modules executing on the external endpoint 251. Typically, the software module(s) implementing the process 400 are integrated within the access client 255, which in such case may be a proprietary access client provided to the external users by the vendor and/or owner of the protected network 235.
As shown at 402, the communication of the access client 255 with the protected network is monitored. The access client 255 communicates with the protected network 235 in order to access one or more resources of the protected network 235 such as the resources 210. The monitoring of the communication is done as described in step 102 of the process 100.
As shown at 404, one or more deception data objects such as the deception data objects 214 are created according to one or more of the activity characteristic(s) detected while monitoring the communication between the access client(s) 255 and the resources 210. Creation of the deception data object(s) 214 is done as described in step
104 of the process 100.
As shown at 406, the created deception data object(s) 214 are deployed in the access client 255 as described in step 106 of the process 100.
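To illustrate steps 402-406 at the external endpoint, a minimal client-side sketch is given below; the SQLite schemas, host naming and cookie fields are assumptions of the example rather than details of the process 400.

```python
# Minimal sketch (hypothetical schemas and names): derive a deception cookie from
# hosts the access client actually contacted and plant it in a simple cookie
# store, so any later use of the cookie reaches the decoy web application.
import sqlite3
import uuid

def observed_hosts(history_db: str) -> list[str]:
    """Hosts seen while monitoring the access client's communication (assumed table layout)."""
    with sqlite3.connect(history_db) as db:
        return [row[0] for row in db.execute("SELECT DISTINCT host FROM visits")]

def create_deception_cookie(host: str) -> tuple[str, str, str]:
    """Build a cookie that looks plausible for the monitored host but maps to a decoy session."""
    return (f"portal.{host}", "SESSIONID", uuid.uuid4().hex)

def deploy_cookie(cookie_store_db: str, cookie: tuple[str, str, str]) -> None:
    """Plant the deception cookie in the access client's cookie store."""
    with sqlite3.connect(cookie_store_db) as db:
        db.execute("CREATE TABLE IF NOT EXISTS cookies(host TEXT, name TEXT, value TEXT)")
        db.execute("INSERT INTO cookies VALUES (?, ?, ?)", cookie)
```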
It is expected that during the life of a patent maturing from this application many relevant systems, methods and computer programs will be developed and the scope of the terms endpoint and virtual machine is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %.
The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".
The phrase "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicate number and a second indicate number and "ranging/ranges from" a first indicate number "to" a second indicate number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
The word "exemplary" is used herein to mean "serving as an example, an instance or an illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various
embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
WHAT IS CLAIMED IS:
1. A computer implemented method of detecting unauthorized access to a protected network from external endpoints, comprising:
monitoring, at a protected network, communication with at least one external endpoint using at least one access client to access at least one of a plurality of resources of the protected network, at least one deception resource created in the protected network maps at least one of the plurality of resources;
detecting usage of data contained in at least one of a plurality of deception data objects deployed in the at least one access client by monitoring an interaction triggered by the at least one deception data object with the at least one deception resource when used; and
identifying at least one potential unauthorized operation based on analysis of the detection.
2. The computer implemented method of claim 1, wherein each of the plurality of resources is a member of a group consisting of: an endpoint, a data resource, an application, a tool, a website and a service,
wherein the each resource is at least one of a local resource and a cloud resource.
3. The computer implemented method of claim 1, wherein the access of the at least one external endpoint to the at least one resource includes at least one of: retrieving information and manipulating information.
4. The computer implemented method of claim 1, wherein the at least one external endpoint is used by a supplier of an organization utilizing the protected network.
5. The computer implemented method of claim 1, wherein the at least one deception resource is provided by at least one decoy endpoint which is a member selected from a group consisting of: a physical device comprising at least one processor and a virtual machine,
wherein the virtual machine is hosted by at least one member of a group consisting of: a local endpoint, a cloud service and a vendor service.
6. The computer implemented method of claim 1, wherein at least one of the plurality of deception data objects deployed in the at least one access client is created at the protected network according to the monitored communication.
7. The computer implemented method of claim 1, wherein at least one of the plurality of deception data objects deployed in the at least one access client is created at the external endpoint according to the monitored communication.
8. The computer implemented method of claim 1, wherein each of the plurality of deception data objects emulates a valid data object used for interacting with at least one of the plurality of resources.
9. The computer implemented method of claim 1, wherein each of the plurality of deception data objects is a member of a group consisting of: a browser cookie, a history log record, an account, a credentials object, a configuration file for remote desktop authentication credentials, a JavaScript and a deception file.
10. The computer implemented method of claim 1, wherein the at least one access client is a member of a group consisting of: a web browser, a remote access agent and a proprietary client provided by an organization utilizing the protected network.
11. The computer implemented method of claim 1, wherein the at least one access client accesses the protected network using an account allocated by the protected network to at least one external user using the at least one access client to access the protected network from the at least one external endpoint, the account is accessible with credentials assigned to the at least one external user.
12. The computer implemented method of claim 11, wherein the at least one external user is a member of a group consisting of: a human user and an automated tool.
13. The computer implemented method of claim 11, wherein the account allocated to the at least one external user is marked differently from the account allocated to at least one internal user of the protected network.
14. The computer implemented method of claim 1, further comprising dividing a plurality of external users using the at least one access client to access the protected network from the at least one external endpoint into groups according to at least one activity characteristic, the at least one activity characteristic is identified by analyzing the communication and is a member of a group consisting of: an operation initiated by the at least one external endpoint and a type of the at least one external endpoint.
15. The computer implemented method of claim 1, further comprising identifying the at least one potential unauthorized operation by detecting usage of data contained in the at least one deception data object for accessing the at least one resource.
16. The computer implemented method of claim 1, further comprising generating an alert at detection of the at least one potential unauthorized operation.
17. The computer implemented method of claim 1, further comprising generating an alert to the at least one external endpoint at detection of the at least one potential unauthorized operation.
18. The computer implemented method of claim 1, further comprising generating an alert at detection of a combination of a plurality of potential unauthorized operations to detect a complex sequence of the interaction.
19. The computer implemented method of claim 1, further comprising analyzing the at least one potential unauthorized operation to identify an activity pattern.
20. The computer implemented method of claim 19, further comprising applying a learning process to the activity pattern to classify the activity pattern in order to detect and classify at least one future potential unauthorized operation.
21. A system for detecting unauthorized access to a protected network from external endpoints, comprising:
a program store storing a code; and
at least one processor of an endpoint of a protected network, coupled to the program store for executing the stored code, the code comprising:
code instructions to monitor, at the protected network, communication with at least one external endpoint using at least one access client to access at least one of a plurality of resources of the protected network, at least one deception resource created in the protected network maps at least one of the plurality of resources;
code instructions to detect usage of data contained in at least one of a plurality of deception data objects deployed in the at least one access client by monitoring an interaction triggered by the at least one deception data object with the at least one deception resource when used; and
code instructions to identify at least one potential unauthorized operation based on analysis of the detection.
22. A computer implemented method of creating in a protected network a deception environment for accesses from external endpoints, comprising:
monitoring, at a protected network, communication with at least one external endpoint using at least one access client to access at least one of a plurality of resources of the protected network, at least one deception resource created in the protected network maps at least one of the plurality of resources;
creating a plurality of deception data objects according to the monitored communication, the plurality of deception data objects are configured to trigger an interaction with the at least one deception resource when used; and
deploying the plurality of deception data objects in the at least one access client;
wherein the interaction between at least one of the plurality of deception data objects and the at least one deception resource is indicative of at least one potential unauthorized operation.
23. A software product, comprising:
a non-transitory computer readable storage medium;
first program instructions to monitor, at an external endpoint, communication of at least one access client with a protected network, the at least one access client is executed by the external endpoint for accessing at least one of a plurality of resources of the protected network, at least one deception resource created in the protected network maps at least one of the plurality of resources;
second program instructions to create a plurality of deception data objects according to the monitored communication, the plurality of deception data objects are configured to trigger an interaction with the at least one deception resource when used; and
third program instructions to deploy the plurality of deception data objects in the at least one access client;
wherein the interaction between at least one of the plurality of deception data objects and the at least one deception resource is indicative of at least one potential unauthorized operation, and
wherein the first, second and third program instructions are executed by at least one processor of the external endpoint from the non-transitory computer readable storage medium.
24. The software product of claim 23, wherein the software product is integrated in the at least one access client.
25. A computer implemented method, comprising:
monitoring, at an external endpoint, communication of at least one access client with a protected network, the at least one access client is executed by the external endpoint for accessing at least one of a plurality of resources of the protected network, at least one deception resource created in the protected network maps at least one of the plurality of resources;
creating a plurality of deception data objects according to the monitored communication, the plurality of deception data objects are configured to trigger an interaction with the at least one deception resource when used; and
deploying the plurality of deception data objects in the at least one access client;
wherein the interaction between at least one of the plurality of deception data objects and the at least one deception resource is indicative of at least one potential unauthorized operation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662328063P | 2016-04-27 | 2016-04-27 | |
US62/328,063 | 2016-04-27 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017187379A1 true WO2017187379A1 (en) | 2017-11-02 |
Family
ID=60160177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2017/052439 WO2017187379A1 (en) | 2016-04-27 | 2017-04-27 | Supply chain cyber-deception |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017187379A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060020813A1 (en) * | 2004-06-30 | 2006-01-26 | International Business Machines Corporation | Dynamic cache lookup based on dynamic data |
US20070094265A1 (en) * | 2005-06-07 | 2007-04-26 | Varonis Systems Ltd. | Automatic detection of abnormal data access activities |
US20090199296A1 (en) * | 2008-02-04 | 2009-08-06 | Samsung Electronics Co., Ltd. | Detecting unauthorized use of computing devices based on behavioral patterns |
US20100077483A1 (en) * | 2007-06-12 | 2010-03-25 | Stolfo Salvatore J | Methods, systems, and media for baiting inside attackers |
US20160072838A1 (en) * | 2014-09-05 | 2016-03-10 | Topspin Security Ltd. | System and a Method for Identifying the Presence of Malware Using Mini-Traps Set At Network Endpoints |
US9298925B1 (en) * | 2013-03-08 | 2016-03-29 | Ca, Inc. | Supply chain cyber security auditing systems, methods and computer program products |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17788922; Country of ref document: EP; Kind code of ref document: A1 |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 17788922; Country of ref document: EP; Kind code of ref document: A1 |