WO2017151135A1 - Data disappearance conditions - Google Patents

Data disappearance conditions

Info

Publication number
WO2017151135A1
Authority
WO
WIPO (PCT)
Prior art keywords
security information
community
security
data disappearance
data
Application number
PCT/US2016/020676
Other languages
French (fr)
Inventor
Brian Frederik Hosea Che HEIN
Tomas Sander
Peter C. WITT, JR.
Original Assignee
Hewlett Packard Enterprise Development Lp
Application filed by Hewlett Packard Enterprise Development LP
Priority to PCT/US2016/020676
Publication of WO2017151135A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 — Protecting data
    • G06F 21/62 — Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/50 — Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/57 — Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F 21/577 — Assessing vulnerabilities and evaluating computer system security

Definitions

  • a security information sharing platform shares security indicators and/or other security-related information (e.g., mitigation strategies, attackers, attack campaigns and trends, threat intelligence information, etc.) with other users in an effort to advise the other users of any security threats, or to gain information related to security threats from other users.
  • FIG. 1 is a block diagram depicting an example environment in which various examples may be implemented for maintaining community-based security information.
  • FIG. 2 is a block diagram depicting an example system that renders unusable at least some community-based security information.
  • FIG. 3 is a block diagram depicting a machine readable medium encoded with example instructions for sanitizing community-based security information.
  • FIG. 4 is a flow diagram depicting an example method for protecting security information.
  • FIG. 5 is a flow diagram depicting another example method for protecting security information.
  • a security information sharing platform may enable users to share security indicators and/or other security information (e.g., mitigation strategies, attackers, attack campaigns and trends, threat intelligence information, etc.) with other users in an effort to advise the other users of any security threats, or to gain information related to security threats from other users.
  • security indicator may include a detection guidance for a security threat and/or vulnerability.
  • the other users with whom the security information is shared typically belong to a community that is selected by the user for sharing, or to the same community as the user. In some cases, the other users of such communities may further share the security information with further users and/or communities.
  • a user may include an individual, organization, or any entity that may send, receive, and/or share the security information.
  • a community may include a plurality of users, and users of a community may also be referred to as members of that community.
  • a community may include a plurality of individuals in a particular area of interest.
  • a community may include a global community where any user may join, for example, via subscription.
  • a community may also be a vertical-based community.
  • a vertical-based community may be a healthcare or a financial community.
  • a community may also be a private community with a limited number of selected users.
  • a community and/or users thereof may want to protect sensitive community-based security information.
  • community-based security information may refer to any security indicators, contextual information (related to those security indicators, the community, etc.), or any other information originated from (and/or submitted to) the community by a user of that community.
  • a user such as a particular financial institution may want to share a security indicator with the security information sharing platform, and more particularly, with a community to which the user belongs, for the purposes of analysis and investigation.
  • the security indicator may relate to a threat or occurrence of a cyber-attack or security breach suffered by the user.
  • the user may prefer that the security indicator and other related security information generated by the community be safeguarded against leakage outside the community and security information sharing platform, which may lead to negative exposure for the user.
  • Examples disclosed herein may be useful for protecting sensitive information in a security information sharing platform.
  • an implementation may receive security information from a user of a security information sharing platform, include the security information in community-based security information, detect whether a data disappearance condition associated with the security information is met, and render unusable at least some of the community-based security information that pertains to the security information in response to detecting that the data disappearance condition has been met.
  • users of a security information sharing platform may share sensitive security information with a community, while reducing the risk that such sensitive information leaks outside of the community.
  • FIG. 1 is an example environment 100 in which various examples described herein may be implemented as a system 120.
  • the system 120 may be useful for realizing a security information sharing platform.
  • Environment 100 may include various components including server computing device 130 and client computing devices 140A, 140B, 140N (collectively referred to as client computing devices 140). Each client computing device 140A, 140B, ... , 140N may communicate requests to and/or receive responses from server computing device 130. Server computing device 130 may receive and/or respond to requests from client computing devices 140. Client computing devices 140 may be any type of computing device providing a user interface through which a user can interact with a software application.
  • client computing devices 140 may include a laptop computing device, a desktop computing device, an all-in-one computing device, a tablet computing device, a mobile phone, an electronic book reader, a network-enabled appliance such as a "Smart" television, and/or other electronic device suitable for displaying a user interface and processing user interactions with the displayed interface.
  • server computing device 130 is depicted as a single computing device, server computing device 130 may include any number of integrated or distributed computing devices serving at least one software application for consumption by client computing devices 140.
  • Network 50 may comprise any infrastructure or combination of infrastructures that enable electronic communication between the components of environment 100.
  • network 50 may include at least one of the internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a SAN (Storage Area Network), a MAN (Metropolitan Area Network), a wireless network, a cellular network, and/or another suitable communication network.
  • system 120 and the various components described herein may be implemented in hardware and/or a combination of hardware and programming that configures hardware.
  • system 120 may be implemented on the server computing device 130, may be implemented on one or more of the client computing devices 140, or may be implemented on a combination of the server computing device 130 and client computing devices 140.
  • In FIG. 1 and other Figures described herein, different numbers of components or entities than those depicted may be used.
  • System 120 may be a community-based security information sharing system.
  • System 120 may comprise a community generator 121, a security information coordinator 122, a policy enforcer 127, and a data remover 128.
  • Each of the components 121, 122, 127, 128 of the system 120 may include a combination of hardware and programming that performs a designated function.
  • the hardware may include one or both of a processing resource and a machine readable medium, while the programming includes instructions or code stored on the machine readable medium and executable by the processing resource to perform the designated function.
  • a processing resource may be a microcontroller, a microprocessor, central processing unit (CPU) core(s), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and/or other hardware device suitable for retrieval and/or execution of instructions.
  • the machine readable medium may be random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, a hard disk drive, etc.
  • the community generator 121 may generate a community on a security information sharing platform.
  • the security information sharing platform may enable sharing of security information among a plurality of communities.
  • the community may comprise a plurality of users.
  • the generation of the community may be user-initiated or system- initiated.
  • a user may create the community by providing a list of users to be included in the community. In another example, the security information sharing platform may automatically identify and/or invite users who might be interested in joining the community based on information that has been collected about users of the platform (e.g., the platform may automatically identify and/or invite users who have been under similar security threats in the past). In some cases, users may operate client computing devices 140.
  • the security information coordinator 122 may maintain (123) community-based security information associated with the community (e.g., the community of the security information sharing platform, as generated by community generator 121 as discussed herein).
  • the security information coordinator 122 may maintain (i.e., store) the community-based security information in the security information sharing platform, and more particularly, in a data storage device or system of the security information sharing platform, such as data storage 129 to be described further below.
  • Community-based security information may be maintained in a database or other data structure.
  • Community-based security information may be in the form of a document (e.g., a PDF formatted document).
  • Community-based security information may include security information from the community as a whole.
  • community-based security information may include a plurality of security indicators from users of the community, and also may include security information related to each of the security indicators.
  • a security indicator may refer to a detection guidance for a security threat and/or vulnerability.
  • the security indicator may specify what to detect or look for (e.g., an observable) and/or what it means if detected.
  • the security indicator may specify a certain Internet Protocol (IP) address to look for in the network traffic.
  • the security indicator may include the information that the detection of that IP address in the network traffic can indicate a certain malicious security threat such as a Trojan virus.
  • the security indicator may comprise at least one observable.
  • An observable may refer to an event pertinent to the operation of computers and networks (e.g., an event occurring in networks, servers, applications, databases, and/or various components of any computer system). Examples of an observable may include but not be limited to: an IP address, a domain name, an e-mail address, a Uniform Resource Locator (URL), and a software file hash.
  • a security indicator may comprise a single observable (e.g., "a new file is created by an executable") or a plurality of observables (e.g., "a new file is created by an executable" and "the executable connects to domain X").
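  • As a concrete illustration, an indicator comprising one or more observables might be modeled as below. This is a minimal sketch: the class and field names are hypothetical assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Observable:
    """An event pertinent to the operation of computers and networks."""
    kind: str   # e.g., "ip_address", "domain_name", "url", "file_hash"
    value: str

@dataclass
class SecurityIndicator:
    """Detection guidance: what to detect and what a detection means."""
    description: str
    observables: list = field(default_factory=list)

# A security indicator with a single observable...
single = SecurityIndicator(
    description="Detection of this IP in network traffic can indicate a Trojan",
    observables=[Observable("ip_address", "203.0.113.7")],
)

# ...or with a plurality of observables.
multi = SecurityIndicator(
    description="Executable creates a file and connects to domain X",
    observables=[
        Observable("event", "a new file is created by an executable"),
        Observable("domain_name", "the executable connects to domain X"),
    ],
)
```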
  • a security indicator may be created by and/or originated from at least one of a plurality of source entities.
  • the plurality of source entities may include a user.
  • a security indicator may be manually created and/or added to the security information sharing platform by the user. In another example, the plurality of source entities may include a threat intelligence provider that provides threat intelligence feeds.
  • a security indicator that is found in the threat intelligence feeds, for example, may be created and/or added to the security information sharing platform.
  • the threat intelligence feeds may be provided by independent third parties such as security service providers. These providers and/or sources may supply the threat intelligence feeds that provide information about threats the providers have identified.
  • Most threat intelligence feeds, for example, include lists of domain names, IP addresses, and URLs that various providers have classified as malicious or at least suspicious according to different methods and criteria.
  • security information may be related to a security indicator.
  • security information in the form of notes and comments, for example, may be provided by other users of the community based on analysis of a previously provided security indicator.
  • Other non-limiting examples of security information include the following.
  • the security information may include contextual information, which may include information about: an investigation result provided by a user of the community, an indicator score for a security indicator, a source entity for a security indicator, a threat actor (e.g., attacker) identity for a security indicator, a level of confidence (e.g., a level of confidence that an indicator is actually malicious), a level of severity (e.g., a level of severity or likely impact that an indicator may pose), a sighting of an observable of a security indicator, and/or other information related to a security indicator.
  • the contextual information may include information specific to the community itself, such as an industry sector (e.g., the industry sector that the community is in), a geography (e.g., a geographical region where the community is located), a common interest area, and/or other information related to the community.
  • security information may include tactics, techniques, and procedures (TTP) information, which may include threat actor behaviors (e.g., attack patterns, malware, exploits), resources leveraged (e.g., infrastructure, tools), characteristics of targeted victims (e.g., who, what, and/or where), vulnerabilities or weaknesses targeted, intended effects, etc.
  • security information may include mitigation strategies and courses of action, including corrective or preventative measures to address vulnerabilities and/or responsive actions for countering or mitigating potential negative effects of incidences of a security indicator.
  • security information may include campaign information, which may describe an intent pursued by a threat actor as deduced or inferred from multiple security indicators and/or TTPs.
  • Campaign information may include a suspected intended effect of the threat actor, related TTPs (or references thereto), related security indicators (or references thereto), and the like.
  • the security information coordinator 122 may receive (124) security information from a user of the community.
  • the received security information may be, for example, any of the above-described security information including a security indicator, contextual information, a TTP, a mitigation strategy, a course of action, campaign information, or the like.
  • the security information may be included and/or stored (125) in or as part of the community-based security information.
  • the security information coordinator 122 may share the received security information with the community (e.g., users of the community may receive the shared security information via the security information sharing platform) to obtain additional insight about the shared security information from the users of the community, where insight may be a same or a different type of security information as the shared security information.
  • a user of the community may create a new security indicator using the security information sharing platform and/or submit it to the community so that other users of the community may collaboratively investigate the security indicator and provide their input. In response to the new security indicator, another user of the community may investigate the security indicator being presented, assess the reliability of the source of the indicator, the level of confidence, and/or the level of severity, report a sighting of an observable (e.g., a sighting indicating that the user observed the observable), provide information about a potential threat actor (e.g., attacker) behind the security indicator, etc.
  • Any security information generated in relation to shared security information (e.g., insight based on analysis) may also be included in the community-based security information.
  • the security information coordinator 122 may receive a data disappearance condition from the user. More particularly, the user may submit the data disappearance condition at or around the time that the user creates new security information, such as a security indicator (e.g., such a time may be deemed an "initial" phase), so that the data disappearance condition is associated with the security information.
  • the data disappearance condition may, when tested, produce a binary outcome (e.g., condition met or condition not met).
  • the user also may submit (during the "initial" phase) to the security information coordinator 122 a data disappearance plan specifying in what manner the security information (and, in some cases, any other related security information) should be rendered unusable upon the data disappearance condition being met.
  • Specific details of the data disappearance condition and the data disappearance plan may be left to the discretion and preference of the user.
  • the security information coordinator 122 may receive from the user a plurality of data disappearance conditions associated with the security information, as well as directives for how to process the plurality of data disappearance conditions.
  • a directive may be a logical operator, such as an AND operator (e.g., data disappearance plan triggered if all data disappearance conditions are met) or an OR operator (e.g., data disappearance plan triggered if any of the data disappearance conditions are met).
  • the directive may be a prioritization or hierarchy of data disappearance conditions, such that a higher priority may trigger the data disappearance plan regardless of whether lower priority data disappearance conditions are met.
  • Other directives may also be utilized, such as conditional statements.
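  • A sketch of how a plurality of data disappearance conditions might be processed under such directives follows. The function names and the representation of a condition as a zero-argument callable are illustrative assumptions, not the patent's implementation.

```python
# Each data disappearance condition, when tested, produces a binary outcome.
def evaluate(conditions, directive="OR"):
    """Process a plurality of data disappearance conditions per a directive.

    "AND": trigger the data disappearance plan only if all conditions are met.
    "OR":  trigger the plan if any condition is met.
    """
    results = [cond() for cond in conditions]
    return all(results) if directive == "AND" else any(results)

def evaluate_prioritized(prioritized):
    """Prioritization directive: test conditions from highest priority down;
    a met higher-priority condition triggers the plan regardless of whether
    lower-priority conditions are met."""
    for _priority, cond in sorted(prioritized, reverse=True, key=lambda p: p[0]):
        if cond():
            return True
    return False

# Illustrative conditions.
time_elapsed = lambda: False   # predefined time interval has not yet lapsed
member_left = lambda: True     # a contributing member left the community

print(evaluate([time_elapsed, member_left], "OR"))   # True
print(evaluate([time_elapsed, member_left], "AND"))  # False
```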
  • the security information coordinator 122 may attach the data disappearance condition and/or the data disappearance plan to the security information and/or related security information, in which case the data disappearance condition and plan may be referred to as a "sticky policy".
  • a sticky policy may allow control over the security information and related security information as they are shared among the community (e.g., to client computing devices 140). Additionally or alternatively, the security information coordinator 122 may maintain the data disappearance condition and/or the data disappearance plan in a central location such as in a memory or storage of the system 120.
  • the security information coordinator 122 may obtain a data disappearance condition and/or a data disappearance plan from a channel other than the user that created or added the security information.
  • a community moderator may review security information and design a data disappearance condition and/or data disappearance plan accordingly.
  • a default or predefined data disappearance condition and/or data disappearance plan may be applied (e.g., based on community consensus, community moderator design, security information sharing platform defaults, etc.).
  • machine learning or the like may be utilized to establish a data disappearance condition and/or data disappearance plan.
  • the policy enforcer 127 may monitor for and detect occurrence of the data disappearance condition associated with the security information.
  • a data disappearance condition may relate to community membership, information access (e.g., the security information or related security information being accessed), or a situational policy.
  • for example, a situational data disappearance condition may relate to changes in a security situation, such as a breach of the security information sharing platform, an increase in a publicized threat level (e.g., by a threat intelligence provider), an increase in sensitivity level associated with the security information, etc.
  • the data remover 128 may render unusable at least some community-based security information that pertains to the security information, in response to detecting that the data disappearance condition has been met.
  • the community-based security information rendered unusable may be security information with which the data disappearance condition is associated and/or related security information (e.g., generated insights).
  • Community-based security information may be rendered unusable in an effort to prevent leakage outside the community of sensitive information identifiable or associable with the user (and/or other members of the community).
  • the at least some community-based security information rendered unusable may pertain to a security indicator received by the security information coordinator 122.
  • the at least some of the community-based security information that is rendered unusable may pertain to the security indicator itself and/or some or all security information related to the security indicator (e.g., community notes, contextual information, TTPs, mitigation strategies, campaign information, etc.).
  • the information to be rendered unusable may be deleted, selectively or partially deleted, transformed, hashed, or encrypted and the decryption key deleted, among other techniques or processes.
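  • Two of the techniques above (hashing, and encrypting then deleting the decryption key) can be sketched as follows. This is an illustrative outline under assumed data shapes, not the patent's implementation; a production system would use an authenticated cipher rather than the one-time XOR pad shown.

```python
import hashlib
import secrets

def hash_fields(record, sensitive_fields):
    """Selectively render fields unusable by replacing their values
    with one-way SHA-256 hashes."""
    out = dict(record)
    for name in sensitive_fields:
        if name in out:
            out[name] = hashlib.sha256(str(out[name]).encode()).hexdigest()
    return out

def encrypt_then_forget(plaintext: bytes) -> bytes:
    """Encrypt the data, then delete the decryption key: the stored
    ciphertext remains but is rendered unusable. (A one-time XOR pad is
    used here only to keep the sketch dependency-free.)"""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    del key  # with the key destroyed, the ciphertext cannot be decrypted
    return ciphertext

record = {"indicator": "203.0.113.7", "submitter": "Acme Bank"}
sanitized = hash_fields(record, ["submitter"])
```

  • Hashing lets equal values still be correlated without being revealed, while key destruction makes every stored copy of the ciphertext unreadable at once, without having to locate and overwrite each copy.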
  • the data remover 128 may refer to and carry out a data disappearance plan received by the security information coordinator 122 as described above.
  • the policy enforcer 127 and the data remover 128 may be distributed to devices.
  • the policy enforcer 127 and/or the data remover 128 may be distributed or installed to client computing devices 140 as part of a security information sharing platform client application, as a confirmatory prompt, as mobile code, etc.
  • Data storage 129 may represent any memory accessible to system 120 that can be used to store and retrieve data.
  • Data storage 129 and/or other database may comprise RAM, ROM, EEPROM, cache memory, floppy disks, hard disks, optical disks, tapes, solid state drives, flash drives, portable compact disks, and/or other storage media for storing computer- executable instructions and/or data.
  • System 120 may access data storage 129 locally or remotely via network 50 or other networks.
  • Data storage 129 may include a database to organize and store data, including community-based security information.
  • the database may reside in a single or multiple physical device(s) and in a single or multiple physical location(s).
  • the database may store a plurality of types of data and/or files and associated data or file description, administrative information, or any other data.
  • FIG. 2 is a block diagram depicting an example system 200.
  • System 200 may comprise a security information coordinator 202, a policy enforcer 204, and a data remover 206, which may be analogous in many respects to the security information coordinator 122, the policy enforcer 127, and the data remover 128 of FIG. 1, respectively.
  • the security information coordinator 202, the policy enforcer 204, and the data remover 206 each may include any combination of hardware (e.g., a processing resource, electronic circuitry, logic) and programming (e.g., instructions stored on a non-transitory machine readable medium) to implement their respective functionalities as described herein.
  • the security information coordinator 202 may maintain community- based security information associated with a community of a security information sharing platform, receive security information from a user of the community, and include the received security information (e.g., a security indicator and security information related to the security indicator) in the community-based security information.
  • the policy enforcer 204 may detect occurrence of a data disappearance condition associated with the received security information.
  • the data remover 206 may render unusable at least some community-based security information that pertains to the received security information, upon detection by the policy enforcer.
  • FIG. 3 is a block diagram depicting a system 300 that includes a processing resource 302 coupled to a non-transitory machine readable medium 304 encoded with (or storing) example instructions for sanitizing community-based security information.
  • the system 300 may serve as or form part of the system 120 of FIG. 1.
  • the processing resource 302 may be a microcontroller, a microprocessor, CPU core(s), an ASIC, an FPGA, and/or other hardware device suitable for retrieval and/or execution of instructions stored on the machine readable medium 304. Additionally or alternatively, the processing resource 302 may include one or more hardware devices, including electronic circuitry, for implementing functionality described herein.
  • the machine readable medium 304 may be any medium suitable for storing executable instructions, such as RAM, ROM, EEPROM, flash memory, a hard disk drive, an optical disc, or the like. In some example implementations, the machine readable medium 304 may be a tangible, non-transitory medium. The machine readable medium 304 may be disposed within the system 300, as shown in FIG. 3, in which case the executable instructions may be deemed installed or embedded on the system 300.
  • the machine readable medium 304 may be a portable (e.g., external) storage medium, and may be part of an installation package.
  • the machine readable medium 304 may be encoded with a set of executable instructions 306, 308, 310, 312. It should be understood that part or all of the executable instructions and/or electronic circuits included within one box may, in alternate implementations, be included in a different box shown in the figures or in a different box not shown.
  • instructions 310, when executed, cause the processing resource 302 to monitor for occurrence of a data disappearance condition associated with the security indicator (e.g., received by instructions 308).
  • execution of instructions 312 may be triggered. Instructions 312, when executed, cause the processing resource 302 to sanitize (i.e., completely or partially render unusable or inaccessible) at least some of the community-based security information that pertains to the security indicator.
  • For example, an IP address may be sanitized by deleting its last octet.
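  • The last-octet sanitization mentioned above can be sketched as a small helper. The function name and the "x" placeholder are illustrative choices, not prescribed by the patent.

```python
def sanitize_ipv4(ip: str) -> str:
    """Partially sanitize an IPv4 address by deleting its last octet,
    leaving the /24 network visible but the individual host masked."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError(f"not a dotted-quad IPv4 address: {ip!r}")
    return ".".join(octets[:3]) + ".x"

print(sanitize_ipv4("203.0.113.7"))  # 203.0.113.x
```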
  • the data disappearance condition may relate to a time-based policy, a community membership policy, a community action policy, a security indicator access policy, or a security situation policy.
  • FIG. 4 is a flow diagram depicting an example method 400 for protecting security information.
  • Method 400 may be performed by a system that includes a physical processing resource implementing or executing machine readable instructions stored on a machine readable medium.
  • the system performing method 400 may include electronic circuitry.
  • at least some portions of method 400 may be performed by system 120, 200, or 300.
  • the blocks of method 400 may be executed substantially concurrently, may be ongoing, and/or may repeat.
  • method 400 may include more or fewer blocks than are shown in FIG. 4.
  • Method 400 begins at block 402, and continues to block 404, where method 400 includes receiving security information provided by a user of a community of a security information sharing platform.
  • method 400 includes detecting that a data disappearance condition associated with the security information is met.
  • Various example data disappearance conditions and the detection thereof will now be described. It should be understood that other data disappearance conditions also may be suitable. In some cases, combinations of data disappearance conditions may be utilized (e.g., using logical operators, conditional statements, a prioritization, etc.).
  • Data disappearance conditions may relate to a time-based policy, a community membership policy, a community action policy, a security indicator access policy, or a security situation policy.
  • block 408 may include detecting that the security information has been accessed by or shared with a predefined set of users of the community (i.e., an access policy).
  • the predefined set may be all users of the community or a specified subset of users of the community. For example, after the security information and/or related security information has been accessed or shared by the community, the user may prefer that the accessed or shared information be rendered unusable to reduce the chance of data leakage.
  • block 408 may include detecting a lapse of a predefined time interval for availability of security information (i.e., a time-based policy). For example, the user may prefer that their security information be rendered unusable after the lapse of the predefined time interval (e.g., on the order of hours, days, or weeks, etc.).
  • block 408 may include detecting changes in membership of the community (i.e., a community membership policy). For example, a user may prefer that their security information and/or related security information be rendered unusable if new users join the community. In other examples, any user of the community may prefer that any security information they contributed be rendered unusable when the user leaves the community.
  • block 408 may include detecting confirmation from the community that security information is no longer needed. For example, a member of the community may finish reviewing the security information (e.g., scanning their own environment for similar threats, etc.) and then may indicate to the security information sharing platform that the member no longer needs the security information.
  • block 408 may include detecting a security information recall request, from the user that submitted the security information for example.
  • a recall request may be useful if the user determines that security information is erroneous.
  • the foregoing conditions may be referred to as community action policies.
  • block 408 may include detecting a change in a sensitivity level property related to the security information (i.e., a security situation policy). For example, some security information may initially be assessed as low sensitivity, but upon analysis among the community or cross-correlation within the security information sharing platform, the security information may be upgraded to high sensitivity (by virtue of being, e.g., part of a campaign, related to certain threat actors, etc.). In such an example, an increase in sensitivity may trigger the creation of other data disappearance conditions, such as a time-based policy or an access policy.
  • block 408 may include detecting increased risk of exposure of information outside the community (i.e., a security situation policy). For example, an increased risk of exposure may accompany a past, present, or imminent security breach of the security information sharing platform (e.g., a breach of the database storing the community-based security information).
  • a third party request may cause security information to be preserved (e.g., if the third party is an authority, a threat intelligence provider, or other trusted entity).
  • block 408 may include detecting a change in security situation external to the security information sharing platform. For example, an increase in a security or threat level publicized by an authority, a threat intelligence provider, or another trusted or independent entity, may cause community-based security information to be rendered unusable, at least temporarily.
  • method 400 includes rendering unusable at least some of the security information, in response to detecting the data disappearance condition at block 408.
  • method 400 may end.
  • FIG. 5 is a flowchart of an example method 500 for protecting security information.
  • method 500 may be performed by a system that includes a physical processing resource implementing or executing machine readable instructions stored on a machine readable medium or includes electronic circuitry. In particular, at least some portions of method 500 may be performed by system 120, 200, or 300.
  • one or more blocks of method 500 may be executed substantially concurrently or in a different order than shown in FIG. 5.
  • Some of the blocks of method 500 may, at times, be ongoing and/or may repeat. In some implementations of the present disclosure, method 500 may include more or fewer blocks than are shown in FIG. 5.
  • Method 500 begins at block 502, and continues to block 504, where method 500 includes receiving security information provided by a user of a community of a security information sharing platform (also referred to herein as received security information, for convenience).
  • the security information may be received via the security information sharing platform by a server computing device or a client computing device of the security information sharing platform.
  • the received security information may be a security indicator.
  • the received security information may be another type of security information, such as contextual information, TTPs, a mitigation strategy, campaign information, etc.
  • Blocks 404 and 504 may be analogous in many respects.
  • method 500 includes receiving a data disappearance condition (e.g., from the user).
  • the data disappearance condition is associated with the security information received at block 504. Examples of the data disappearance condition may be analogous to the data disappearance conditions described above, such as those described with respect to block 408.
  • multiple data disappearance conditions may be received at block 506, as well as a directive for combining the data disappearance conditions (e.g., logical operators, conditional statements, a prioritization, etc.).
  • block 506 also may include receiving a data disappearance plan (e.g., from the user).
  • the data disappearance plan may specify whether rendering security information unusable includes deleting security information, transforming security information (e.g., via hash function), deleting a decryption key for encrypted security information, or other techniques.
  • the data disappearance plan may also indicate whether security information is to be selectively rendered unusable.
  • the data disappearance plan may specify that portions of the received security information (and/or related security information) attributable or traceable back to a user are to be rendered unusable, but may allow other portions of the received security information (and/or related security information) to remain available to the community for further analysis and investigation.
  • the data disappearance plan may specify which pieces of related information are rendered unusable, such as any one or combination of the security indicator, community notes and comments, contextual information, TTPs, mitigation strategies, campaign information, etc. Blocks 504 and 506 together may be deemed an "initial phase".
  • method 500 includes maintaining community-based security information that includes the security information received at block 504.
  • maintaining may include operating a database of community-based security information and storing the security information received at block 504 in such a database.
  • Maintaining also may include storing community-provided security information related to the received security information (e.g., community generated insights of any security information type described above) in such a database of community-based security information. Relations between the received security information and related security information may be documented in the database.
  • maintaining may include storing security information in a cache or in data storage, at a client computing device for example.
  • block 508 may include attaching data disappearance conditions to security information to create a sticky policy.
  • method 500 includes monitoring whether a data disappearance condition associated with the security information has been met. For example, various sources of data are monitored, such as the date and time, membership in the community, security information sharing platform integrity status, correspondence and pronouncements from trusted third parties (e.g., related to security/threat levels), etc.
  • method 500 includes determining from the monitoring at block 510 whether a data disappearance condition has been met. If a data disappearance condition has not been met ("NO" at block 512), method 500 returns to block 510. If a data disappearance condition has been met ("YES" at block 512), method 500 proceeds to block 514.
  • method 500 includes determining whether selective data disappearance was specified by the user in the data disappearance plan. If yes ("YES" at block 514), method 500 proceeds to block 516, which includes rendering unusable select portions of the received security information or select portions of community-based security information related to the received security information (e.g., select portions identified in the data disappearance plan), including portions attributable to the user.
  • method 500 includes maintaining (i.e., not rendering unusable) other portions of the received security information or portions of community-based security information related to the received security information. Blocks 516 and 518 may be performed in accordance with specifics detailed in the data disappearance plan.
  • method 500 proceeds to block 520, which includes rendering unusable all of the received security information and community-based security information related to the received security information. For example, all of a security indicator submitted by a user and related security information generated by the community may be securely deleted. After blocks 518 and 520, method 500 may end.
  • security information submitted to and maintained by a security information sharing platform can be rendered unusable upon fulfillment of certain data disappearance conditions. Accordingly, users of a security information sharing platform may share security information with a community, such as an incident report of a cyber-attack or a security breach, while reducing the risk that such sensitive information leaks outside of the community.
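The flow sketched in blocks 504–520 above can be pictured end to end: receive security information and a plan, monitor for a condition, then render unusable either selectively or entirely. The following Python sketch is illustrative only; all names and the plan's shape are assumptions, not part of this disclosure.

```python
def apply_disappearance(community_info, condition_met, plan):
    """Render community-based security information unusable per a plan.

    community_info: dict mapping piece-name -> content
    plan: {"selective": bool, "remove": [piece names]} (illustrative shape)
    """
    if not condition_met:
        return community_info                      # "NO" branch: keep everything
    if plan.get("selective"):
        # Blocks 516/518: remove only the selected portions, keep the rest.
        return {k: v for k, v in community_info.items()
                if k not in plan["remove"]}
    return {}                                      # Block 520: remove all

info = {"indicator": "IP 203.0.113.7",
        "community_notes": "seen in finance sector",
        "mitigation": "block at firewall"}

kept = apply_disappearance(info, condition_met=True,
                           plan={"selective": True, "remove": ["indicator"]})
# Portions traceable to the submitter disappear; community analysis remains.
```

A usage note: with `condition_met=False` the same call would return the information unchanged, mirroring the loop between blocks 510 and 512.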


Abstract

Example implementations relate to security information. For example, in an implementation, security information is received from a user of a community of a security information sharing platform. Upon detection of a data disappearance condition associated with the security information, at least some of the security information is rendered unusable.

Description

DATA DISAPPEARANCE CONDITIONS BACKGROUND
[0001] Users of a security information sharing platform share security indicators and/or other security-related information (e.g., mitigation strategies, attackers, attack campaigns and trends, threat intelligence information, etc.) with other users in an effort to advise the other users of any security threats, or to gain information related to security threats from other users.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Various examples will be described below with reference to the following figures.
[0003] FIG. 1 is a block diagram depicting an example environment in which various examples may be implemented for maintaining community-based security information.
[0004] FIG. 2 is a block diagram depicting an example system that renders unusable at least some community-based security information.
[0005] FIG. 3 is a block diagram depicting a machine readable medium encoded with example instructions for sanitizing community-based security information.
[0006] FIG. 4 is a flow diagram depicting an example method for protecting security information.
[0007] FIG. 5 is a flow diagram depicting another example method for protecting security information.
DETAILED DESCRIPTION
[0008] A security information sharing platform may enable users to share security indicators and/or other security information (e.g., mitigation strategies, attackers, attack campaigns and trends, threat intelligence information, etc.) with other users in an effort to advise the other users of any security threats, or to gain information related to security threats from other users. For example, a security indicator may include a detection guidance for a security threat and/or vulnerability. The other users with whom the security information is shared typically belong to a community that is selected by the user for sharing, or to the same community as the user. In some cases, the other users of such communities may further share the security information with further users and/or communities.
[0009] A user may include an individual, organization, or any entity that may send, receive, and/or share the security information. A community may include a plurality of users, and users of a community may also be referred to as members of that community. For example, a community may include a plurality of individuals in a particular area of interest. A community may include a global community where any user may join, for example, via subscription. A community may also be a vertical-based community. For example, a vertical-based community may be a healthcare or a financial community. A community may also be a private community with a limited number of selected users.
[0010] In some instances, a community and/or users thereof may want to protect sensitive community-based security information. As will be further described below, community-based security information may refer to any security indicators, contextual information (related to those security indicators, the community, etc.), or any other information originated from and/or submitted to the community by a user of that community. For example, a user such as a particular financial institution may want to share a security indicator with the security information sharing platform, and more particularly, with a community to which the user belongs, for the purposes of analysis and investigation. The security indicator may relate to a threat or occurrence of a cyber-attack or security breach suffered by the user. However, the user may prefer that the security indicator and other related security information generated by the community be safeguarded against leakage outside the community and security information sharing platform, which may lead to negative exposure for the user.
[0011] Examples disclosed herein may be useful for protecting sensitive information in a security information sharing platform. For example, an implementation may receive security information from a user of a security information sharing platform, include the security information in community-based security information, detect whether a data disappearance condition associated with the security information is met, and render unusable at least some of the community-based security information that pertains to the security information in response to detecting that the data disappearance condition has been met. By virtue of the foregoing, users of a security information sharing platform may share sensitive security information with a community, while reducing the risk that such sensitive information leaks outside of the community.
[0012] FIG. 1 is an example environment 100 in which various examples described herein may be implemented as a system 120. The system 120 may be useful for realizing a security information sharing platform.
Environment 100 may include various components including server computing device 130 and client computing devices 140A, 140B, 140N (collectively referred to as client computing devices 140). Each client computing device 140A, 140B, ... , 140N may communicate requests to and/or receive responses from server computing device 130. Server computing device 130 may receive and/or respond to requests from client computing devices 140. Client computing devices 140 may be any type of computing device providing a user interface through which a user can interact with a software application. For example, client computing devices 140 may include a laptop computing device, a desktop computing device, an all-in-one computing device, a tablet computing device, a mobile phone, an electronic book reader, a network-enabled appliance such as a "Smart" television, and/or other electronic device suitable for displaying a user interface and processing user interactions with the displayed interface. While server computing device 130 is depicted as a single computing device, server computing device 130 may include any number of integrated or distributed computing devices serving at least one software application for consumption by client computing devices 140.
[0013] The various components (e.g., components 129, 130, and/or 140) depicted in FIG. 1 may be coupled to at least one other component via a network 50. Network 50 may comprise any infrastructure or combination of infrastructures that enable electronic communication between the components. For example, network 50 may include at least one of the internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a SAN (Storage Area Network), a MAN (Metropolitan Area Network), a wireless network, a cellular communications network, a Public Switched Telephone Network, and/or other network.
[0014] According to various implementations, system 120 and the various components described herein may be implemented in hardware and/or a combination of hardware and programming that configures hardware. In various implementations, system 120 may be implemented on the server computing device 130, may be implemented on one or more of the client computing devices 140, or may be implemented on a combination of the server computing device 130 and client computing devices 140. Furthermore, in FIG. 1 and other Figures described herein, different numbers of components or entities than depicted may be used.
[0015] System 120 may be a community-based security information sharing system. System 120 may comprise a community generator 121, a security information coordinator 122, a policy enforcer 127, and a data remover 128. Each of the components 121, 122, 127, 128 of the system 120 may include a combination of hardware and programming that performs a designated function. For example, the hardware may include one or both of a processing resource and a machine readable medium, while the programming includes instructions or code stored on the machine readable medium and executable by the processor to perform the designated function. A processing resource may be a microcontroller, a microprocessor, central processing unit (CPU) core(s), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or other hardware device suitable for retrieval and/or execution of instructions from the machine readable medium, and the machine readable medium may be random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, a hard disk drive, etc.
[0016] The community generator 121 may generate a community on a security information sharing platform. The security information sharing platform, as discussed above, may enable sharing of security information among a plurality of communities. The community may comprise a plurality of users. The generation of the community may be user-initiated or system-initiated. For example, a user may create the community by providing a list of users to be included in the community. In another example, the security information sharing platform may automatically identify and/or invite users who might be interested in joining the community based on information that has been collected about users of the platform (e.g., the platform may automatically identify and/or invite users who have been under similar security threats in the past). In some cases, users may operate client computing devices 140.
[0017] The security information coordinator 122 may maintain (123) community-based security information associated with the community (e.g., the community of the security information sharing platform, as generated by community generator 121 as discussed herein). The security information coordinator 122 may maintain (i.e., store) the community-based security information in the security information sharing platform, and more particularly, in a data storage device or system of the security information sharing platform, such as data storage 129 to be described further below.
Community-based security information may be maintained in a database or other data structure. Community-based security information may be in the form of a document (e.g., a PDF formatted document).
[0018] Community-based security information may include security information from the community as a whole. For example, community-based security information may include a plurality of security indicators from users of the community, and also may include security information related to each of the security indicators.
[0019] A security indicator may refer to a detection guidance for a security threat and/or vulnerability. In other words, the security indicator may specify what to detect or look for (e.g., an observable) and/or what it means if detected. For example, the security indicator may specify a certain Internet Protocol (IP) address to look for in the network traffic. The security indicator may include the information that the detection of that IP address in the network traffic can indicate a certain malicious security threat such as a Trojan virus.
[0020] The security indicator may comprise at least one observable. An observable may refer to an event pertinent to the operation of computers and networks (e.g., an event occurring in network, servers, applications, databases, and/or various components of any computer system). Examples of an observable may include but not be limited to: an IP address, a domain name, an e-mail address, a Uniform Resource Locator (URL), and a software file hash. A security indicator may comprise a single observable (e.g., "a new file is created by an executable") or a plurality of observables (e.g., "a new file is created by an executable" and "the executable connects to domain X").
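By way of a hypothetical illustration only (the class and field names below are assumptions, not part of this disclosure), a security indicator of this kind could be modeled as one or more observables paired with the meaning of a match:

```python
from dataclasses import dataclass

@dataclass
class Observable:
    """A single event of interest, e.g. an IP address or file hash."""
    kind: str    # e.g. "ip", "domain", "email", "url", "file_hash"
    value: str

@dataclass
class SecurityIndicator:
    """Detection guidance: what to look for, and what a match means."""
    observables: list
    meaning: str

# An indicator may hold a single observable or a plurality of observables.
indicator = SecurityIndicator(
    observables=[Observable("ip", "203.0.113.7"),
                 Observable("domain", "example.test")],
    meaning="connections to this host may indicate a Trojan",
)
```

The same record could then carry contextual information (scores, sightings, confidence levels) as additional fields in an actual platform.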
[0021] A security indicator may be created by and/or originated from at least one of a plurality of source entities. For example, the plurality of source entities may include a user. A security indicator may be manually created and/or added to the security information sharing platform by the user. In another example, the plurality of source entities may include a threat intelligence provider that provides threat intelligence feeds. A security indicator that is found in the intelligence feeds, for example, may be created and/or added to the security information sharing platform. There exist a number of entities that may provide threat intelligence feeds. The threat intelligence feeds may be provided by independent third parties such as security service providers. These providers and/or sources may supply the threat intelligence feeds that provide information about threats the providers have identified. Most threat intelligence feeds, for example, include lists of domain names, IP addresses, and URLs that various providers have classified as malicious or at least suspicious according to different methods and criteria.
[0022] Similarly, users may also create or add other types of security information to the security information sharing platform, to be included in and maintained with the community-based sharing information. In some cases, security information may be related to a security indicator. To illustrate, security information, in the form of notes and comments for example, may be provided by other users of the community based on analysis of a previously provided security indicator. Other non-limiting examples of security information include the following.
[0023] In some cases, the security information may include contextual information, which may include information about: an investigation result provided by a user of the community, an indicator score for a security indicator, a source entity for a security indicator, a threat actor (e.g., attacker) identity for a security indicator, a level of confidence (e.g., a level of confidence that an indicator is actually malicious), a level of severity (e.g., a level of severity or likely impact that an indicator may pose), a sighting of an observable of a security indicator, and/or other information related to a security indicator. In some implementations, the contextual information may include information specific to the community itself such as an industry sector (e.g., the industry sector that the community is in), a geography (e.g., a geographical region where the community is located in), a common interest area, and/or other information related to the community.
[0024] Additionally or alternatively, security information may include tactics, techniques, and procedures information (TTP), which may include threat actor behaviors (e.g., attack patterns, malware, exploits), resources leveraged (e.g., infrastructure, tools), characteristics of targeted victims (e.g., who, what, and/or where), vulnerabilities or weaknesses targeted, intended effects, etc. Additionally or alternatively, security information may include mitigation strategies and courses of action, including corrective or preventative measures to address vulnerabilities and/or responsive actions for countering or mitigating potential negative effects of incidences of a security indicator. Additionally or alternatively, security information may include campaign information, which may describe an intent pursued by a threat actor as deduced or inferred from multiple security indicators and/or TTPs. Campaign information may include a suspected intended effect of the threat actor, related TTPs (or references thereto), related security indicators (or references thereto), a threat actor identity, a confidence level regarding the intent, source of the campaign information, action taken in response to the campaign, etc.
[0025] The security information coordinator 122 may receive (124) security information from a user of the community. The received security information may be, for example, any of the above-described security information including a security indicator, contextual information, a TTP, a mitigation strategy, a course of action, campaign information, or the like. The security information may be included and/or stored (125) in or as part of the community-based security information of that community.
[0026] In some implementations, the security information coordinator 122 may share the received security information with the community (e.g., users of the community may receive the shared security information via the security information sharing platform) to obtain additional insight about the shared security information from the users of the community, where insight may be a same or a different type of security information as the shared security information. For example, a user of the community may create a new security indicator using the security information sharing platform and/or submit it to the community so that other users of the community may collaboratively investigate the security indicator and provide their input. In response to the new security indicator, another user of the community may investigate the security indicator being presented, assess the reliability of the source of the indicator, the level of confidence, and/or the level of severity, report a sighting of an observable (e.g., a sighting indicating that the user observed the observable), provide information about a potential threat actor (e.g., attacker) behind the security indicator, etc. Any security information generated in relation to shared security information (e.g., insight based on analysis) may be deemed related security information and may also be included and/or stored (125) in or as part of the community-based security information of that community.
[0027] In some implementations, the security information coordinator 122 may receive a data disappearance condition from the user. More particularly, the user may submit the data disappearance condition at or around the time that the user creates new security information, such as a security indicator (e.g., such a time may be deemed an "initial" phase), so that the data disappearance condition is associated with the security information. The data disappearance condition may, when tested, produce a binary outcome (e.g., condition met or condition not met). Additionally, the user also may submit (during the "initial" phase) to the security information coordinator 122 a data disappearance plan specifying in what manner the security information (and in some implementations, any other related security information) should be rendered unusable upon the data disappearance condition being met. Specific details of the data disappearance condition and the data disappearance plan may be left to the discretion and preference of the user.
[0028] In some implementations, the security information coordinator 122 may receive from the user a plurality of data disappearance conditions associated with the security information, as well as directives for how to process the plurality of data disappearance conditions. For example, a directive may be a logical operator, such as an AND operator (e.g., data disappearance plan triggered if all data disappearance conditions are met) or an OR operator (i.e., data disappearance plan triggered if any of the data disappearance conditions are met). In another example, the directive may be a prioritization or hierarchy of data disappearance conditions, such that a higher priority may trigger the data disappearance plan regardless of whether lower priority data disappearance conditions are met. Other directives may also be utilized, such as conditional statements.
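The directives described in [0028] (AND, OR, prioritization) can be sketched as follows, with each data disappearance condition represented as a zero-argument callable returning True when met. This is a hedged illustration only; the function names are assumptions, not part of this disclosure.

```python
def all_met(conditions):
    # AND directive: the plan triggers only if every condition is met.
    return all(cond() for cond in conditions)

def any_met(conditions):
    # OR directive: the plan triggers if any one condition is met.
    return any(cond() for cond in conditions)

def priority_met(prioritized):
    # Prioritization directive: conditions are (priority, condition) pairs,
    # evaluated from highest priority (lowest number) downward; the first
    # met condition triggers the plan regardless of lower-priority ones.
    for _, cond in sorted(prioritized, key=lambda pc: pc[0]):
        if cond():
            return True
    return False

time_lapsed = lambda: True    # hypothetical: retention window expired
member_left = lambda: False   # hypothetical: no membership change yet

assert any_met([time_lapsed, member_left])      # OR: triggered
assert not all_met([time_lapsed, member_left])  # AND: not triggered
```

Conditional-statement directives could be composed similarly from these primitives.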
[0029] The security information coordinator 122 may attach the data disappearance condition and/or the data disappearance plan to the security information and/or related security information, in which case the data disappearance condition and plan may be referred to as a "sticky policy". A sticky policy may allow control over the security information and related security information as they are shared among the community (e.g., to client computing devices 140). Additionally or alternatively, the security information coordinator 122 may maintain the data disappearance condition and/or the data disappearance plan in a central location, such as in a memory or storage of the system 120.
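The attachment described in [0029] might be modeled as a wrapper that keeps the condition and plan bound to the security information wherever copies travel. The class and field names below are hypothetical, offered only to illustrate the "sticky" property:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StickyPolicy:
    condition: Callable[[], bool]   # data disappearance condition
    plan: str                       # e.g. "delete", "hash", "drop_key"

@dataclass
class SharedSecurityInfo:
    payload: str
    policy: StickyPolicy            # travels with the data when shared

    def share_copy(self):
        # A copy handed to another client carries the same sticky policy,
        # allowing control over the data even after it leaves the server.
        return SharedSecurityInfo(self.payload, self.policy)

info = SharedSecurityInfo("indicator: suspicious IP 203.0.113.7",
                          StickyPolicy(condition=lambda: False, plan="delete"))
copy_at_client = info.share_copy()
```

In the alternative described in [0029], the condition and plan would instead be kept in a central store keyed by an identifier of the security information.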
[0030] In some implementations, the security information coordinator 122 may obtain a data disappearance condition and/or a data disappearance plan from a channel other than the user that created or added the security information to the security information sharing platform. For example, a community moderator may review security information and design a data disappearance condition and/or data disappearance plan accordingly. In another example, a default or predefined data disappearance condition and/or data disappearance plan may be applied (e.g., based on community consensus, community moderator design, security information sharing platform defaults, etc.). In yet another example, machine learning or the like may be utilized to establish a data disappearance condition and/or data disappearance plan.
[0031 ] The policy enforcer 127 may monitor for and detect occurrence of the data disappearance condition associated with the security information. A data disappearance condition may relate to community membership, information access (e.g., the security information or related security information being accessed), or a situational policy. In some implementations, the data
disappearance condition may relate to changes in a security situation, such as a breach of the security information sharing platform, an increase in a publicized threat level (e.g., by a threat intelligence provider), an increase in sensitivity level associated with the security information, etc. Some additional example data disappearance conditions will be described below, with respect to FIG. 4, for example.
[0032] The data remover 128 may render unusable at least some community-based security information that pertains to the security
information, upon detection by the policy enforcer of an occurrence of the data disappearance condition associated with the security information. The community-based security information rendered unusable may be security information with which the data disappearance condition is associated and/or related security information (e.g., generated insights). Community-based security information may be rendered unusable in an effort to prevent leakage outside the community of sensitive information identifiable or associable with the user (and/or other members of the community).
[0033] In some examples, the at least some community-based security information rendered unusable may pertain to a security indicator received by the security information coordinator 122. The at least some of the community- based security information that is rendered unusable may pertain to the security indicator itself and/or some or all security information related to the security indicator (e.g., community notes, contextual information, TTPs, mitigation strategies, campaign information, etc.).
[0034] Various techniques may be used to render unusable the at least some community-based security information, as will be described below with respect to FIG. 5, for example. In some implementations, the information to be rendered unusable may be deleted, selectively or partially deleted, transformed, hashed, or encrypted and the decryption key deleted, among other techniques or processes. In some implementations, the data remover 128 may refer to and carry out a data disappearance plan received by the security information coordinator 122 as described above.
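Three of the techniques named above (one-way transformation, selective deletion, and deleting a decryption key) can be sketched as follows. The function names are hypothetical, and the XOR "cipher" is only a toy stand-in for a real cipher, used to show that discarding the key renders the ciphertext unusable.

```python
import hashlib

def transform_by_hash(value: str) -> str:
    # One-way transformation: the original value is unrecoverable,
    # but equal values still produce equal digests (useful for matching).
    return hashlib.sha256(value.encode()).hexdigest()

def partial_delete(record: dict, sensitive_fields: set) -> dict:
    # Selective deletion: strip only the fields named as sensitive.
    return {k: v for k, v in record.items() if k not in sensitive_fields}

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher; applying it twice with the same key recovers
    # the plaintext, so discarding the key "crypto-shreds" the data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

A production system would use a vetted cipher rather than XOR; the point is only that encrypted data plus a deleted key is equivalent to unusable data.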
[0035] In cases where the data disappearance condition and/or data disappearance plan are a sticky policy, at least some functionality of the policy enforcer 127 and the data remover 128 may be distributed to devices. For example, the policy enforcer 127 and/or the data remover 128 may be distributed or installed to client computing devices 140 as part of a security information sharing platform client application, as a confirmatory prompt, as mobile code, etc.
[0036] In performing their respective functions, the community generator 121, the security information coordinator 122, the policy enforcer 127, and/or the data remover 128 may access data storage 129 and/or other suitable database(s). Data storage 129 may represent any memory accessible to system 120 that can be used to store and retrieve data. Data storage 129 and/or other database may comprise RAM, ROM, EEPROM, cache memory, floppy disks, hard disks, optical disks, tapes, solid state drives, flash drives, portable compact disks, and/or other storage media for storing computer-executable instructions and/or data. System 120 may access data storage 129 locally or remotely via network 50 or other networks.
[0037] Data storage 129 may include a database to organize and store data, including community-based security information. The database may reside in a single or multiple physical device(s) and in a single or multiple physical location(s). The database may store a plurality of types of data and/or files and associated data or file description, administrative information, or any other data.
[0038] FIG. 2 is a block diagram depicting an example system 200.
System 200 may comprise a security information coordinator 202, a policy enforcer 204, and a data remover 206, which may be analogous in many respects to the security information coordinator 122, the policy enforcer 127, and the data remover 128 of FIG. 1, respectively. For example, the security information coordinator 202, the policy enforcer 204, and the data remover 206 each may include any combination of hardware (e.g., a processing resource, electronic circuitry, logic) and programming (e.g., instructions stored on a non-transitory machine readable medium) to implement their respective functionalities as described herein. [0039] The security information coordinator 202 may maintain community-based security information associated with a community of a security information sharing platform, receive security information from a user of the community, and include the received security information (e.g., a security indicator and security information related to the security indicator) in the community-based security information. The policy enforcer 204 may detect occurrence of a data disappearance condition associated with the received security information. The data remover 206 may render unusable at least some community-based security information that pertains to the received security information, upon detection by the policy enforcer.
[0040] FIG. 3 is a block diagram depicting a system 300 that includes a processing resource 302 coupled to a non-transitory machine readable medium 304 encoded with (or storing) example instructions for sanitizing community-based security information. The system 300 may serve as or form part of the system 120 of FIG. 1.
[0041 ] In some implementations, the processing resource 302 may be a microcontroller, a microprocessor, CPU core(s), an ASIC, an FPGA, and/or other hardware device suitable for retrieval and/or execution of instructions stored on the machine readable medium 304. Additionally or alternatively, the processing resource 302 may include one or more hardware devices, including electronic circuitry, for implementing functionality described herein.
[0042] The machine readable medium 304 may be any medium suitable for storing executable instructions, such as RAM, ROM, EEPROM, flash memory, a hard disk drive, an optical disc, or the like. In some example implementations, the machine readable medium 304 may be a tangible, non-transitory medium. The machine readable medium 304 may be disposed within the system 300, as shown in FIG. 3, in which case the executable instructions may be deemed installed or embedded on the system 300.
Alternatively, the machine readable medium 304 may be a portable (e.g., external) storage medium, and may be part of an installation package. [0043] As described further herein below, the machine readable medium 304 may be encoded with a set of executable instructions 306, 308, 310, 312. It should be understood that part or all of the executable instructions and/or electronic circuits included within one box may, in alternate implementations, be included in a different box shown in the figures or in a different box not shown.
[0044] Instructions 306, when executed, cause the processing resource 302 to receive a security indicator from a user of a community of a security information sharing platform. Instructions 308, when executed, cause the processing resource 302 to include the security indicator (e.g., received by instructions 306) in community-based security information. Instructions 310, when executed, cause the processing resource 302 to monitor for occurrence of a data disappearance condition associated with the security indicator.
Upon occurrence of the data disappearance condition as detected by monitoring instructions 310, execution of instructions 312 may be triggered. Instructions 312, when executed, cause the processing resource 302 to sanitize (i.e., completely or partially render unusable, inaccessible,
indecipherable, etc.) community-based security information that is associated with the security indicator and that is attributable to the user. In an illustration, an IP address may be sanitized by deleting the last octet. In some
implementations, the data disappearance condition may relate to a time-based policy, a community membership policy, a community action policy, a security indicator access policy, or a security situation policy.
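The octet-deletion illustration above can be sketched in a few lines: dropping the last octet of an IPv4 address removes the host identity while keeping the /24 network available for community analysis. The function name and the "x" placeholder are illustrative choices, not part of the disclosure.

```python
def sanitize_ipv4(address: str) -> str:
    # Delete the last octet so the specific host is no longer
    # identifiable, keeping only the /24 network portion.
    octets = address.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(octets[:3]) + ".x"
```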
[0045] FIG. 4 is a flow diagram depicting an example method 400 for protecting security information. Method 400 may be performed by a system that includes a physical processing resource implementing or executing machine readable instructions stored on a machine readable medium.
Additionally or alternatively, the system performing method 400 may include electronic circuitry. For example, at least some portions of method 400 may be performed by system 120, 200, or 300. In some implementations of the present disclosure, the blocks of method 400 may be executed substantially concurrently, may be ongoing, and/or may repeat. In some implementations of the present disclosure, method 400 may include more or fewer blocks than are shown in FIG. 4.
[0046] Method 400 begins at block 402, and continues to block 404, where method 400 includes receiving security information provided by a user of a community of a security information sharing platform. At block 408, method 400 includes detecting that a data disappearance condition associated with the security information is met. Various example data disappearance conditions and the detection thereof will now be described. It should be understood that other data disappearance conditions also may be suitable. In some cases, combinations of data disappearance conditions may be utilized (e.g., using logical operators, conditional statements, a prioritization, etc.). Data disappearance conditions may relate to a time-based policy, a community membership policy, a community action policy, an access policy, or a security situation policy.
[0047] In some implementations, block 408 may include detecting that the security information has been accessed by or shared with a predefined set of users of the community (i.e., an access policy). The predefined set may be all users of the community or a specified subset of users of the community. For example, after the security information and/or related security information has been accessed or shared by the community, the user may prefer that the accessed or shared information be rendered unusable to reduce the chance of data leakage.
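The access policy above can be sketched as a simple set check: the condition is met once every user in the predefined set has accessed (or been shared) the information. The function name and arguments are hypothetical.

```python
def access_policy_met(accessed_by: set, predefined_set: set) -> bool:
    # Met once all users in the predefined set (which may be the whole
    # community or a specified subset) have accessed the information.
    return predefined_set.issubset(accessed_by)
```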
[0048] In some implementations, block 408 may include detecting a lapse of a predefined time interval for availability of the security information (i.e., a time-based policy). For example, the user may prefer that their security information and/or related security information be available for no longer than the predefined time interval (e.g., on the order of hours, days, or weeks).
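The time-based policy above amounts to a single comparison against the submission timestamp. This sketch is illustrative; passing the current time as an argument keeps the check deterministic and testable.

```python
from datetime import datetime, timedelta

def time_policy_met(submitted_at: datetime, interval: timedelta,
                    now: datetime) -> bool:
    # Met once the predefined availability interval has lapsed
    # since the security information was submitted.
    return now - submitted_at >= interval
```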
[0049] In some implementations, block 408 may include detecting changes in membership of the community (i.e., a community membership policy). For example, a user may prefer that their security information and/or related security information be rendered unusable if new users join the community. In other examples, any user of the community may prefer that any security information they contributed be rendered unusable when the user leaves the community.
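The two membership examples above (new users joining, or the contributing user leaving) can be sketched as one predicate. Names are hypothetical.

```python
def membership_policy_met(members_now: set, members_at_submission: set,
                          contributor: str) -> bool:
    # Met if any new user has joined the community since submission,
    # or if the contributing user has since left.
    new_users_joined = bool(members_now - members_at_submission)
    contributor_left = contributor not in members_now
    return new_users_joined or contributor_left
```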
[0050] In some implementations, block 408 may include detecting confirmation from the community that security information is no longer needed. For example, a member of the community may finish reviewing the security information (e.g., scanning their own environment for similar threats) and then may indicate to the security information sharing platform that the member no longer needs the security information. In some implementations, block 408 may include detecting a security information recall request (e.g., from the user that submitted the security information). A recall request may be useful if the user determines that the security information is erroneous. The foregoing conditions may be referred to as community action policies.
[0051] In some implementations, block 408 may include detecting a change in a sensitivity level property related to the security information (i.e., a security situation policy). For example, some security information may initially be assessed as low sensitivity, but upon analysis among the community or cross-correlation within the security information sharing platform, the security information may be upgraded to high sensitivity (by virtue of being, e.g., part of a campaign, related to certain threat actors, etc.). In such an example, an increase in sensitivity may trigger the creation of other data disappearance conditions, such as a time-based policy or an access policy.
[0052] In some implementations, block 408 may include detecting increased risk of exposure of information outside the community (i.e., a security situation policy). For example, an increased risk of exposure may accompany a past, present, or imminent security breach of the security information sharing platform (e.g., a breach of the database storing
community-based security information). As another example, an increased risk of exposure may accompany a third party request for access to community-based security information, which may trigger sanitizing the security information, depending on the identity of the third party. However, in other implementations, a third party request may cause security information to be preserved (e.g., if the third party is an authority, a threat intelligence provider, or other trusted entity).
[0053] In some implementations, block 408 may include detecting a change in security situation external to the security information sharing platform. For example, an increase in a security or threat level publicized by an authority, a threat intelligence provider, or another trusted or independent entity, may cause community-based security information to be rendered unusable, at least temporarily.
[0054] At block 410, method 400 includes rendering unusable at least some of the security information, in response to the detecting the data disappearance condition at block 408. At block 412, method 400 may end.
[0055] FIG. 5 is a flowchart of an example method 500 for protecting the security information. As with method 400, method 500 may be performed by a system that includes a physical processing resource implementing or executing machine readable instructions stored on a machine readable medium or includes electronic circuitry. In particular, at least some portions of method 500 may be performed by system 120, 200, or 300. In some implementations of the present disclosure, one or more blocks of method 500 may be executed substantially concurrently or in a different order than shown in FIG. 5. Some of the blocks of method 500 may, at times, be ongoing and/or may repeat. In some implementations of the present disclosure, method 500 may include more or fewer blocks than are shown in FIG. 5.
[0056] Method 500 begins at block 502, and continues to block 504, where method 500 includes receiving security information provided by a user of a community of a security information sharing platform (also referred to herein as received security information, for convenience). For example, the security information may be received via the security information sharing platform by a server computing device or a client computing device of the security information sharing platform. In some cases, the received security information may be a security indicator. In other cases, the received security information may be another type of security information, such as contextual information, TTPs, a mitigation strategy, campaign information, etc. Blocks 404 and 504 may be analogous in many respects.
[0057] At block 506, method 500 includes receiving a data disappearance condition (e.g., from the user). The data disappearance condition is associated with the security information received at block 504. Examples of the data disappearance condition may be analogous to the data
disappearance conditions described above, such as those described with respect to block 408. In some examples, multiple data disappearance conditions may be received at block 506, as well as a directive for combining the data disappearance conditions (e.g., a logical operator, conditional statements, a prioritization, etc.).
[0058] In some implementations, block 506 also may include receiving a data disappearance plan (e.g., from the user). For example, the data disappearance plan may specify whether rendering security information unusable includes deleting security information, transforming security information (e.g., via hash function), deleting a decryption key for encrypted security information, or other techniques. The data disappearance plan may also indicate whether security information is to be selectively rendered unusable. For example, the data disappearance plan may specify that portions of the received security information (and/or related security information) attributable or traceable back to a user are to be rendered unusable, but may allow other portions of the received security information (and/or related security information) to remain available to the community for further analysis and investigation. Additionally, the data disappearance plan may specify which pieces of related information are rendered unusable, such as any one or combination of the security indicator, community notes and comments, contextual information, TTPs, mitigation strategies, campaign information, etc. Blocks 504 and 506 together may be deemed an "initial phase".
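A data disappearance plan like the one described above could be represented as a declarative structure naming the technique, whether removal is selective, and which related pieces it targets. All field names here are hypothetical, not taken from the disclosure.

```python
import hashlib

def apply_plan(record: dict, plan: dict) -> dict:
    """Apply a disappearance plan to a record of security information.

    plan example: {"technique": "delete", "selective": True,
                   "targets": ["community_notes"]}
    """
    # Selective plans name their targets; non-selective plans cover
    # every field in the record.
    targets = plan["targets"] if plan.get("selective") else list(record)
    out = dict(record)
    for field in targets:
        if field not in out:
            continue
        if plan["technique"] == "delete":
            del out[field]
        elif plan["technique"] == "hash":
            # One-way transform instead of outright deletion.
            out[field] = hashlib.sha256(str(out[field]).encode()).hexdigest()
    return out
```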
[0059] At block 508, method 500 includes maintaining community-based security information that includes the security information received at block 504. For example, maintaining may include operating a database of community-based security information and storing the security information received at block 504 in such a database. Maintaining also may include storing community-provided security information related to the received security information (e.g., community generated insights of any security information type described above) in such a database of community-based security information. Relations between the received security information and related security information may be documented in the database. In some implementations, maintaining may include storing security information in a cache or in data storage, at a client computing device for example. In some implementations, block 508 may include attaching data disappearance conditions to security information to create a sticky policy.
[0060] At block 510, method 500 includes monitoring whether a data disappearance condition associated with the security information has been met. For example, various sources of data are monitored, such as the date and time, membership in the community, security information sharing platform integrity status, correspondence and pronouncements from trusted third parties (e.g., related to security/threat levels), etc.
[0061] At block 512, method 500 includes determining from the monitoring at block 510 whether a data disappearance condition has been met. If a data disappearance condition has not been met ("NO" at block 512), method 500 returns to block 510. If a data disappearance condition has been met ("YES" at block 512), method 500 proceeds to block 514.
[0062] At block 514, method 500 includes determining whether selective data disappearance was specified by the user in the data disappearance plan. If yes ("YES" at block 514), method 500 proceeds to block 516, which includes rendering unusable select portions of the received security information or select portions of community-based security information related to the received security information (e.g., select portions identified in the data disappearance plan), including portions attributable to the user. At block 518, method 500 includes maintaining (i.e., not rendering unusable) other portions of the received security information or portions of community-based security information related to the received security information. Blocks 516 and 518 may be performed in accordance with specifics detailed in the data disappearance plan.
[0063] Referring again to block 514, if selective data disappearance was not specified by the user ("NO" at block 514), method 500 proceeds to block 520, which includes rendering unusable all of the received security information and community-based security information related to the received security information. For example, all of a security indicator submitted by a user and related security information generated by the community may be securely deleted. After block 518 or block 520, method 500 may end.
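The branch at blocks 514-520 can be sketched as a single function: either select portions (e.g., those attributable to the user) are rendered unusable while the remainder is maintained, or everything related is rendered unusable. Names are hypothetical.

```python
def render_unusable(security_info: dict, selective: bool,
                    select_portions: set) -> dict:
    if selective:
        # Blocks 516/518: remove the select portions, keep the rest
        # available to the community for further analysis.
        return {k: v for k, v in security_info.items()
                if k not in select_portions}
    # Block 520: remove all received and related security information.
    return {}
```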
[0064] In view of the foregoing description, it can be appreciated that security information submitted to and maintained by a security information sharing platform can be rendered unusable upon fulfillment of certain data disappearance conditions. Accordingly, users of a security information sharing platform may share security information with a community, such as an incident report of a cyber-attack or a security breach, while reducing the risk that such sensitive information leaks outside of the community.
[0065] In the foregoing description, numerous details are set forth to provide an understanding of the subject matter disclosed herein. However, implementation may be practiced without some or all of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the following claims cover such modifications and variations.

Claims

What is claimed:
1. A method for protecting security information by a security information sharing platform that includes a physical processing resource implementing machine readable instructions, the method comprising:
receiving, via the security information sharing platform, security information provided by a user of a community of the security information sharing platform;
detecting that a data disappearance condition associated with the security information is met; and
in response to the detecting, rendering unusable at least some of the security information.
2. The method of claim 1, wherein the detecting that a data
disappearance condition associated with the security information is met includes detecting that the security information has been accessed by or shared with a predefined set of users of the community.
3. The method of claim 1, wherein the detecting that a data
disappearance condition associated with the security information is met includes detecting a lapse of a predefined time interval for availability of the security information.
4. The method of claim 1, wherein the detecting that a data
disappearance condition associated with the security information is met includes detecting changes in membership of the community.
5. The method of claim 1, wherein the detecting that a data
disappearance condition associated with the security information is met includes detecting confirmation from the community that the security information is no longer needed or detecting a security information recall request.
6. The method of claim 1, wherein the detecting that a data
disappearance condition associated with the security information is met includes detecting a change in a sensitivity level property related to the security information.
7. The method of claim 1, wherein the detecting that a data
disappearance condition associated with the security information is met includes detecting increased risk of exposure of the security information outside the community.
8. The method of claim 1, wherein the rendering unusable at least some of the security information includes:
rendering unusable select portions of the security information, including portions attributable to the user, and
maintaining other portions of the security information.
9. The method of claim 1, further comprising receiving the data disappearance condition from the user during an initial phase that includes the receiving the security information.
10. A system comprising:
a security information coordinator to:
maintain community-based security information associated with a community of a security information sharing platform,
receive security information from a user of the community, and include the security information in the community-based security information;
a policy enforcer to detect occurrence of a data disappearance condition associated with the security information; and a data remover to render unusable at least some community-based security information that pertains to the security information, upon detection by the policy enforcer.
11. The system of claim 10, wherein the data disappearance condition relates to a community membership, the security information being accessed, or a situational policy.
12. The system of claim 10, wherein the data disappearance condition relates to changes in a security situation.
13. The system of claim 10, wherein the data disappearance condition is triggered by an action within the community.
14. A non-transitory machine readable medium storing instructions executable by a processing resource of a system involved in sharing of community-based security information, the non-transitory machine readable medium comprising:
instructions to receive a security indicator from a user of a community of a security information sharing platform;
instructions to include the security indicator in community-based security information;
instructions to monitor for occurrence of a data disappearance condition associated with the security indicator; and
instructions to, upon occurrence of the data disappearance condition, sanitize community-based security information that is associated with the security indicator and that is attributable to the user.
15. The non-transitory machine readable medium of claim 14, wherein the data disappearance condition relates to a time-based policy, a community membership policy, a community action policy, a security indicator access policy, or a security situation policy.
PCT/US2016/020676 2016-03-03 2016-03-03 Data disappearance conditions WO2017151135A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2016/020676 WO2017151135A1 (en) 2016-03-03 2016-03-03 Data disappearance conditions


Publications (1)

Publication Number Publication Date
WO2017151135A1 true WO2017151135A1 (en) 2017-09-08

Family

ID=59744218


Country Status (1)

Country Link
WO (1) WO2017151135A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110302634A1 (en) * 2009-01-16 2011-12-08 Jeyhan Karaoguz Providing secure communication and/or sharing of personal data via a broadband gateway
US8538020B1 (en) * 2010-12-29 2013-09-17 Amazon Technologies, Inc. Hybrid client-server cryptography for network applications
US8552833B2 (en) * 2010-06-10 2013-10-08 Ricoh Company, Ltd. Security system for managing information on mobile wireless devices
US20150169893A1 (en) * 2013-12-12 2015-06-18 Citrix Systems, Inc. Securing Sensitive Data on a Mobile Device
US20150373040A1 (en) * 2013-01-31 2015-12-24 Hewlett-Packard Development Company, L.P. Sharing information



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16892882

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16892882

Country of ref document: EP

Kind code of ref document: A1