US20210092142A1 - Techniques for targeted botnet protection - Google Patents
- Publication number
- US20210092142A1 (U.S. application Ser. No. 17/114,522)
- Authority
- US
- United States
- Prior art keywords
- identifiers
- botnet
- tmm
- traffic
- request message
- Prior art date
- Legal status
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
- H04L63/145—Countermeasures against malicious traffic the attack involving the propagation of malware through the network, e.g. viruses, trojans or worms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2463/00—Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00
- H04L2463/144—Detection or countermeasures against botnets
-
- H04L61/2007—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L61/00—Network arrangements, protocols or services for addressing or naming
- H04L61/50—Address allocation
- H04L61/5007—Internet protocol [IP] addresses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
Definitions
- Embodiments relate to the field of computer networking; and more specifically, to techniques for botnet identification and targeted botnet protection.
- a botnet is a group of Internet-connected computing devices communicating with other similar machines in an effort to complete repetitive tasks and objectives.
- Botnets can include computers whose security defenses have been breached and control conceded to a third party.
- Each such compromised device, known as a “bot,” may be created when a computer is penetrated by software from a malware (i.e., malicious software) distribution.
- the controller of a botnet is able to direct the activities of these compromised computers through communication channels formed by standards-based network protocols such as Internet Relay Chat (IRC), Hypertext Transfer Protocol (HTTP), etc.
- Computers can be co-opted into a botnet when they execute malicious software. This can be accomplished by luring users into making a drive-by download, exploiting web browser vulnerabilities, or by tricking the user into running a Trojan horse program, which could come from an email attachment.
- This malware typically installs modules that allow the computer to be commanded and controlled by the botnet's operator. After the software is executed, it may “call home” to the host computer. When the re-connection is made, depending on how it is written, a Trojan may then delete itself, or may remain present to update and maintain the modules. Many computer users are unaware that their computer is infected with bots.
- Botnets can include many different computers (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, or more) and the membership of a botnet can change over time.
- Botnets are commonly used to carry out attacks such as distributed denial-of-service (DDoS) attacks.
- FIG. 1 is a block diagram illustrating botnet member identification and targeted botnet protection according to some embodiments.
- FIG. 2 is a flow diagram illustrating operations for botnet member identification and targeted botnet protection according to some embodiments.
- FIG. 3 is a flow diagram illustrating exemplary operations for botnet member identification according to some embodiments.
- FIG. 4 is a block diagram illustrating botnet member identification according to some embodiments.
- FIG. 5 is a flow diagram illustrating exemplary operations for targeted botnet protection according to some embodiments.
- FIG. 6 is a block diagram illustrating an exemplary on premise deployment environment for a traffic monitoring module and/or botnet identification module according to some embodiments.
- FIG. 7 is a block diagram illustrating an exemplary cloud-based deployment environment for a traffic monitoring module and/or botnet identification module according to some embodiments.
- FIGS. 8-11 illustrate operations for botnet member identification according to some embodiments, in which:
- FIG. 8 is a block diagram illustrating exemplary operations including malicious event identification for botnet member identification according to some embodiments.
- FIG. 9 is a block diagram illustrating exemplary operations including traffic source similarity determination for botnet member identification, which can be performed after the operations of FIG. 8 , according to some embodiments.
- FIG. 10 is a block diagram illustrating exemplary operations including periodic-source cluster generation for botnet member identification, which can be performed after the operations of FIG. 9 , according to some embodiments.
- FIG. 11 is a block diagram illustrating exemplary operations including attacking-clusters graph generation and botnet identification for botnet member identification, which can be performed after the operations of FIG. 10 , according to some embodiments.
- FIG. 12 is a flow diagram illustrating exemplary operations for providing targeted botnet protection according to some embodiments.
- FIG. 13 is a flow diagram illustrating exemplary operations for providing targeted botnet protection according to some embodiments.
- FIG. 14 is a flow diagram illustrating exemplary operations for identifying a subset of a plurality of end stations that collectively act as a suspected botnet according to some embodiments.
- Bracketed text and blocks with dashed borders are used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
- references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
- “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other.
- the techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network device).
- electronic devices which are also referred to as computing devices, store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks; optical disks; random access memory (RAM); read only memory (ROM); flash memory devices; phase-change memory) and transitory computer-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals, such as carrier waves, infrared signals, digital signals).
- such electronic devices include hardware, such as a set of one or more processors coupled to one or more other components, e.g., one or more non-transitory machine-readable storage media to store code and/or data, and a set of one or more wired or wireless network interfaces allowing the electronic device to transmit data to and receive data from other computing devices, typically across one or more networks (e.g., Local Area Networks (LANs), the Internet).
- the coupling of the set of processors and other components is typically through one or more interconnects within the electronic device, (e.g., busses and possibly bridges).
- non-transitory machine-readable storage media of a given electronic device typically stores code (i.e., instructions) for execution on the set of one or more processors of that electronic device.
- one or more parts of various embodiments may be implemented using different combinations of software, firmware, and/or hardware.
- Embodiments described herein provide for methods, systems, non-transitory computer readable media, and apparatuses for botnet identification and for targeted botnet protection.
- members of a botnet can be identified by observing incoming traffic to a set of servers (e.g., web application servers) over time, finding behavioral similarity between the sources of the traffic that is malicious by examining the traffic broken into time periods, and finding associations between sets of sources that have many members and persist over a long time duration.
- sets of botnet sources can thus be identified, where each set of botnet sources includes multiple source identifiers (e.g., Internet Protocol (IP) addresses utilized by the end stations participating in a particular botnet).
- sets of botnet sources can be utilized to enable targeted botnet protection.
- a traffic monitoring module, upon detecting incoming traffic destined to a server that is deemed malicious according to a security rule, can identify whether the source of that malicious traffic is identified within any of the sets of botnet sources.
- if so, security measures can be activated, for a period of time, against any further traffic received from any of the sources in that matched set of botnet sources.
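The lookup-and-activate flow just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the IP addresses, the five-minute default window, and all function names are assumptions.

```python
import time

# Hypothetical sets of botnet sources, as would be provided by the BIM
# (the IP addresses are illustrative only).
BOTNET_SETS = [
    {"1.1.1.1", "2.2.2.2", "5.5.5.5"},
    {"2.2.2.2", "4.4.4.4", "6.6.6.6"},
]

# Active protections: index of a botnet set -> expiry timestamp.
active_protections = {}

def on_malicious_traffic(source_ip, now=None, duration=300.0):
    """Upon detecting malicious traffic, activate protection for every
    botnet set containing the source, for `duration` seconds."""
    now = time.time() if now is None else now
    for idx, members in enumerate(BOTNET_SETS):
        if source_ip in members:
            active_protections[idx] = now + duration

def should_block(source_ip, now=None):
    """Fast-path check: block traffic from any member of a botnet whose
    protection window is currently active."""
    now = time.time() if now is None else now
    for idx, expiry in list(active_protections.items()):
        if now >= expiry:
            del active_protections[idx]  # protection window expired
        elif source_ip in BOTNET_SETS[idx]:
            return True
    return False
```

Note that `should_block` can run before any deeper security analysis, matching the "activate first, analyze later" behavior the description attributes to the TMM.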
- FIG. 1 is a block diagram illustrating a system 100 providing botnet member identification and targeted botnet protection according to some embodiments.
- although “botnet member identification” and “targeted botnet protection” may be described together in this figure and throughout this description, it is to be understood that each of these techniques may also be utilized separately, without the other.
- a set of one or more server end stations 110 execute or otherwise implement one or more servers 111 providing access to data.
- the server end stations 110 implement a web application server 116 , though in other embodiments the set of server end stations 110 can enable other types of servers, including but not limited to database servers, file servers, print servers, mail servers, gaming servers, application servers, Domain Name System (DNS) servers, etc.
- the set of server end stations 110 may be “protected” by a security gateway 102 .
- Security gateways 102 , such as firewalls, database firewalls, file system firewalls, and web application firewalls (WAFs), are network security systems that protect software applications (e.g., web application server(s) 116 or servers 111 ) executing on electronic devices (e.g., server end station(s) 110 ) within a network by controlling the flow of network traffic passing through the security gateway 102 .
- the security gateway 102 can prevent malicious traffic from reaching a protected server, modify the malicious traffic, and/or create an alert to trigger another responsive event or notify a user of the detection of the malicious traffic.
- the security gateway 102 is communicatively coupled between one or more client end stations 120 A- 120 N and the server end stations 110 , such that all traffic destined to the server end stations 110 is first passed through (or made available to) the security gateway 102 for analysis.
- the security gateway 102 executes as part of a separate server end station or a (dedicated or shared) network device; but in other embodiments, the security gateway 102 operates as part of server end station(s) 110 (for example, as a software module), or is implemented using another type of electronic device, and can be software, hardware, or a combination of both. Further detail regarding security gateways will be described later herein with regard to FIGS. 6 and 7 .
- Client end stations 120 A- 120 N are computing devices operable to execute applications 126 that, among other functions, can access the content/services provided by other computing devices, such as the server end stations 110 .
- the client end stations 120 include one (or possibly multiple) malware 122 modules that cause the client end stations 120 to participate in one (or more) botnets 104 A- 104 N.
- Notably, a client end station (e.g., 120 A) may act as part of a botnet (e.g., receive commands from a controller, begin transmitting malicious traffic, etc.) and perform malicious actions without the knowledge of its user (e.g., owner/operator), perhaps even concurrently while the user is actively utilizing the client end station.
- a traffic monitoring module 104 or “TMM” (which may be implemented within a security gateway 102 ) can be configured to receive network traffic from client end stations 120 that is directed to one or more server(s) 111 , and provide traffic data 108 (e.g., raw traffic/captures of packets, “event” data structures that represent certain traffic, etc.) to a botnet identification module (“BIM”) 106 at circle ‘1’.
- the BIM 106 may receive such traffic data 108 from only one deployment 112 A (e.g., one or more traffic monitoring modules 104 of a particular enterprise, network, etc.), though in other embodiments the BIM 106 may receive such traffic data 108 from multiple deployments 112 A- 112 N.
- the BIM 106 performs an analysis of this traffic data 108 at circle ‘2’ to determine one or more sets of botnet sources.
- Each set of botnet sources from the one or more sets of botnet sources can be specific to a particular botnet, and include source identifiers for each client end station determined to belong to that botnet (or, determined as being highly likely as belonging to a botnet).
- each source identifier comprises an IP address, such as an IPv4 address or IPv6 address. Further detail describing exemplary techniques for botnet membership determinations is presented later herein with regard to FIGS. 3 and 4 .
- the determined sets of botnet sources 114 can be provided from the BIM 106 to one or more TMMs 104 , which may be located at one or more deployments 112 A- 112 N.
- the TMM 104 can utilize the sets of botnet sources 114 to protect the server(s) 111 (and/or other infrastructure) against the identified botnets.
- a partial representation of sets of botnet sources 114 is illustrated herein as including (at least) three sets of botnet sources.
- a first set of botnet sources is illustrated as including at least three sources: 1.1.1.1, 2.2.2.2, and 5.5.5.5.
- a second set of botnet sources is illustrated as including at least three sources: 2.2.2.2, 4.4.4.4, and 6.6.6.6.
- a third set of botnet sources is illustrated as including at least three sources: 7.7.7.7, 8.8.8.8, and 9.9.9.9.
- a single source identifier (e.g., 2.2.2.2) may exist in multiple different ones of the sets (such as when a single client end station using that IP address has been infected with multiple malware types and thus is a member of multiple botnets), though in other scenarios this may not be the case.
- the TMM 104 may receive malicious traffic 130 A from a source “6.6.6.6” (from client end station 120 A), and analyze the traffic 130 A (e.g., according to security rules) to determine that it is in fact malicious.
- the TMM 104 may be configured to perform a protective action at circle ‘5’ in response, such as blocking the traffic 130 A (e.g., dropping the packets so that they are not sent to the destination server), downgrading the priority of the traffic in being processed/forwarded, modifying the traffic, reporting the incident to another device or human user, etc.
- the TMM 104 can also consult the sets of botnet sources 114 by performing a lookup using a source identifier from the malicious traffic 130 A.
- a source IP address from an IP header of the traffic may serve as a source identifier, though other source identifiers can also be used, including but not limited to a value from a User-Agent header field of an HTTP request message (which may be able to uniquely identify a source), an IP address from an X-Forwarded-For (XFF) header field of an HTTP request message (which may identify an originating source of the message when the traffic is being passed via an intermediate proxy or load balancer), etc.
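The source-identifier selection just described can be sketched as below. The helper name, the lowercase-normalized header keys, and the fallback order (XFF first, then the peer IP) are illustrative assumptions; a production parser would need to handle spoofed or malformed headers.

```python
def extract_source_identifier(headers, peer_ip):
    """Choose a source identifier for an HTTP request.

    Prefers the first address in the X-Forwarded-For chain (set by
    intermediate proxies or load balancers to identify the originating
    client), falling back to the peer IP address. Header names are
    assumed to be normalized to lowercase."""
    xff = headers.get("x-forwarded-for")
    if xff:
        # XFF may carry a comma-separated chain of addresses; the first
        # entry is conventionally the originating source.
        return xff.split(",")[0].strip()
    return peer_ip
```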
- in this example, the source IP address of the traffic 130 A (i.e., 6.6.6.6) is found to exist within the second set of botnet sources. Accordingly, it is likely that a botnet attack from that botnet (e.g., botnet 104 A) is beginning.
- the TMM 104 can “activate” one or more protection measures for any additional traffic that may be received from any of the source identifiers in that second set—i.e., 2.2.2.2, 4.4.4.4, 6.6.6.6, and so on.
- the TMM 104 can immediately perform the activated protection measure for the traffic 130 B (at circle ‘8’, such as “dropping” the traffic by not forwarding it on to its intended destination), potentially even before performing any other security analysis of the traffic.
- a potentially large amount of traffic that may arrive from a botnet in a potentially short amount of time can be quickly managed (e.g., dispensed with), thus enabling the TMM 104 to more easily accommodate the additional load from the attack and likely preserve resources for continuing to process non-malicious traffic destined for the server(s) 111 —whether they are targeted by the botnet or not.
- when a source identifier exists within multiple sets of botnet sources, embodiments may “activate” protection measures for all of these botnets.
- these protection measures may be activated for different amounts of time based upon the characteristics of the particular involved botnets, such as the average or maximum length of time observed for attacks from those particular botnets.
- the activated protection measures can be disabled at the expiration of a time period.
- the time period can be a constant value (e.g., 1 minute, 5 minutes, 10 minutes, 20 minutes, 1 hour, etc.), which may be different based upon the particular botnet.
- the length of the time period can be based upon previously-detected activity of that botnet, and could be based upon an average attack length of time observed from that botnet, a maximum attack length of time observed from that botnet, etc.
- the time period may continue indefinitely until a condition is satisfied.
- a condition may be configured such that the condition is met when no traffic is observed from any of the sources within the particular botnet for a threshold period of time (e.g., 5 minutes, 15 minutes, 1 hour, etc.).
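The inactivity-based disabling condition above can be sketched as a small helper; the class name and the explicit timestamps are illustrative, not part of the patent.

```python
class InactivityWindow:
    """Keeps a protection measure enabled until no traffic has been
    observed from any member of the botnet for `threshold` seconds.
    Timestamps are passed in explicitly for clarity/testability."""

    def __init__(self, threshold, now):
        self.threshold = threshold
        self.last_seen = now  # treat activation time as the last event

    def record_traffic(self, now):
        """Call whenever traffic from any botnet member is observed."""
        self.last_seen = now

    def is_active(self, now):
        """The measure stays active while the silence is shorter than
        the configured threshold."""
        return (now - self.last_seen) < self.threshold
```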
- the TMM 104 may receive malicious traffic 130 A and analyze the traffic 130 A to determine that it is malicious. In some embodiments, this analysis may be based upon one or more packets arriving from one particular client end station (e.g., 120 A).
- the analysis can be expanded to be based upon traffic from multiple client end stations.
- a security rule could be configured to watch for a number of occurrences of a particular pattern within traffic within an amount of time (e.g., when X packets are received within Y seconds that include a particular character Z).
- this security rule could be applied to one particular source, and thus trigger when one source, for example, sends X packets within Y seconds that include a particular character Z.
- the TMM 104 can activate protection measures against all sources within a shared set of botnet sources that includes the transmitting entity.
- the security rule could also or alternatively be triggered when the condition is met due to traffic from multiple sources within a same set of botnet sources (i.e., within a same botnet).
- the condition can be satisfied when two or more sources within a set of botnet sources—but potentially not any of them individually—satisfies the condition.
- the rule could be triggered (and the protection measures activated for the botnet), when a rule requires that 10 particular packets are received, and a first client (in the botnet) sends 5 such packets and a second client (also in the same botnet) sends 5 such packets.
- the security rule can look at traffic over time from multiple members of a botnet, which can make this protection scheme more granular than watching an entire botnet, but more comprehensive than just watching one particular source.
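A sketch of such an aggregated rule follows: it triggers when a threshold count of matching packets arrives within a time window from members of the same set of botnet sources, even if no single source meets the count alone. The class, its parameters, and the sliding-window bookkeeping are assumptions for illustration.

```python
from collections import deque

class BotnetAggregateRule:
    """Trigger when `count` matching packets arrive within `window`
    seconds from members of one set of botnet sources, counted
    collectively across all members of the set."""

    def __init__(self, botnet_members, count, window):
        self.members = set(botnet_members)
        self.count = count
        self.window = window
        self.times = deque()  # timestamps of matching packets

    def observe(self, source_ip, matched, now):
        """Record one packet; returns True when the rule triggers."""
        if source_ip not in self.members or not matched:
            return False
        self.times.append(now)
        # Evict timestamps that have fallen out of the sliding window.
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.count
```

This mirrors the example in the text: five matching packets from one botnet member plus five from another can together satisfy a ten-packet rule.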
- FIG. 2 is a flow diagram illustrating operations 200 for botnet member identification and targeted botnet protection according to some embodiments.
- the operations 200 include, at block 205 , identifying members of one or more botnets based upon network traffic observed over time, which includes identifying a plurality of source identifiers (e.g., IP addresses) of end stations acting as part of each botnet.
- the operations also include, at block 210 , when traffic identified as malicious is received from one of the source identifiers of one of the botnets, blocking all traffic from all members of that botnet for an amount of time.
- FIG. 3 is a flow diagram illustrating exemplary operations 300 for botnet member identification according to some embodiments. These operations 300 can be a part of block 205 shown in FIG. 2 .
- the operations 300 include at block 305 observing incoming requests to one or more web applications at one or more sites for an amount of time, and at block 310 , identifying a subset of the observed incoming requests as being part of a malicious attack.
- the operations 300 can also include at block 315 , creating, for each of the identified malicious requests, an event data structure.
- the event data structure can include a unique identifier of the request's source, a unique identifier of the request's destination, and an identifier of a type of the malicious attack of the request.
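The event data structure might be modeled as below. The field names are illustrative; the timestamp field is an assumption added here because the events are later split into time-period groupings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MaliciousEvent:
    """One identified malicious request, carrying the three identifiers
    named above plus a timestamp for time-period grouping."""
    source_id: str    # unique source identifier, e.g. an IP address
    target_id: str    # unique identifier of the request's destination
    attack_type: str  # type of malicious attack, e.g. "SQLi" or "XSS"
    timestamp: float  # epoch seconds (assumed; used for grouping later)
```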
- blocks 305 , 310 , and 315 may be performed by the TMM 104 , though one or more of these blocks could also be performed by the BIM 106 .
- FIG. 4 is a block diagram 400 illustrating botnet member identification concepts according to some embodiments.
- the events are split into separate time period groupings based upon the time of each event.
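The time-period split can be sketched as simple bucketing by timestamp. The period length is an assumed parameter; the patent does not fix one.

```python
from collections import defaultdict

def split_into_periods(events, period_seconds):
    """Group (timestamp, event) pairs into consecutive time-period
    buckets keyed by period index (timestamp // period_seconds)."""
    buckets = defaultdict(list)
    for ts, event in events:
        buckets[int(ts // period_seconds)].append(event)
    return dict(buckets)
```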
- the operations include identifying behavior similarity between traffic sources of the malicious requests (using the event data structures) at block 335 .
- block 335 includes blocks 340 - 370 .
- at block 340 , events of a same attack type are selected, and at block 345 a similarity score is calculated between each pair of traffic sources.
- calculating a similarity score includes representing the selected events as a graph of sources connected to targets ( 350 ) and applying a similarity algorithm to compute the similarity of the graph's vertices ( 355 ). As an example, see illustration 410 of FIG. 4 .
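One plausible instantiation of this scoring step is below: events of a single attack type form a bipartite source-to-target graph, and each pair of sources is scored by the Jaccard similarity of their target sets. The patent does not name a specific similarity algorithm, so Jaccard is purely an assumption for illustration.

```python
def jaccard_source_similarity(events):
    """Given (source, target) event pairs of one attack type, return a
    dict mapping each ordered source pair to the Jaccard similarity of
    the targets they attacked."""
    targets_by_source = {}
    for source, target in events:
        targets_by_source.setdefault(source, set()).add(target)
    scores = {}
    sources = sorted(targets_by_source)
    for i, a in enumerate(sources):
        for b in sources[i + 1:]:
            inter = targets_by_source[a] & targets_by_source[b]
            union = targets_by_source[a] | targets_by_source[b]
            scores[(a, b)] = len(inter) / len(union)
    return scores
```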
- at block 360 , the sources can be clustered based upon the similarity scores.
- Block 360 in some embodiments includes assigning each source to its own cluster ( 365 ), and iteratively merging clusters of the next top-similarity pair of sources until the similarity of the next-examined top-similarity pair is below a threshold value ( 370 ). As an example, see illustration 415 of FIG. 4 .
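The iterative-merging step of blocks 365 / 370 can be sketched as the agglomerative procedure below; the data shapes (a score dict from source pairs to similarities) are assumptions carried over from the scoring sketch.

```python
def cluster_sources(sources, scores, threshold):
    """Start with each source in its own cluster (block 365), then
    repeatedly merge the clusters of the most-similar remaining pair
    until the best pair's similarity falls below `threshold`
    (block 370)."""
    cluster_of = {s: {s} for s in sources}
    # Visit pairs in order of descending similarity.
    for (a, b), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break  # all remaining pairs are below the threshold
        if cluster_of[a] is not cluster_of[b]:
            merged = cluster_of[a] | cluster_of[b]
            for member in merged:
                cluster_of[member] = merged
    # Deduplicate clusters (many sources share one cluster object).
    seen, clusters = set(), []
    for c in cluster_of.values():
        if id(c) not in seen:
            seen.add(id(c))
            clusters.append(c)
    return clusters
```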
- at block 375 , associations are determined between sets of sources that have a threshold number of members (or, “many” members) and that persisted over a threshold amount of time (or, a “long” amount of time).
- block 375 includes creating an attacking-clusters graph ( 380 ) (as an example, see illustration 420 of FIG. 4 ), and finding ( 385 ) all paths that contain only edges having at least a minimal weight, that pass through vertices having at least a minimal weight, and that have a length (e.g., number of edges) larger than a threshold number of edges. This can result in each path being a botnet “candidate”—i.e., each path may be a set of botnet sources, subject to further processing.
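The path-finding of blocks 380 / 385 might be sketched as below. The graph encoding ({(u, v): weight} edges between clusters of consecutive time periods, plus per-vertex weights) and the depth-first enumeration are assumptions; the patent only specifies the weight and length constraints.

```python
def find_candidate_paths(edges, vertex_weight, min_edge_w, min_vertex_w, min_len):
    """Enumerate maximal paths in a directed attacking-clusters graph
    whose edges all weigh at least `min_edge_w`, whose vertices all
    weigh at least `min_vertex_w`, and whose edge count is at least
    `min_len`. Each surviving path is a botnet candidate."""
    ok_edges = {}
    for (u, v), w in edges.items():
        if w >= min_edge_w and vertex_weight[u] >= min_vertex_w \
                and vertex_weight[v] >= min_vertex_w:
            ok_edges.setdefault(u, []).append(v)
    paths = []

    def dfs(path):
        extended = False
        for nxt in ok_edges.get(path[-1], []):
            dfs(path + [nxt])
            extended = True
        # Record only maximal paths that meet the length threshold.
        if not extended and len(path) - 1 >= min_len:
            paths.append(path)

    # Start from vertices that no qualifying edge points into.
    starts = set(ok_edges) - {v for vs in ok_edges.values() for v in vs}
    for s in starts:
        dfs([s])
    return paths
```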
- sources can be removed from the paths that belong to a list of whitelisted sources.
- sources associated with known Content Distribution Networks (CDNs), search engines, known scanning services, etc. can be removed from the paths (lists) of botnet sources, as blocking traffic from these sources—regardless of whether some of it may be malicious—could be problematic to the operation of the protected server(s) 111 .
- blocks 320 - 395 may be performed by the BIM 106 , though in other embodiments some or all of these blocks could be performed by other modules.
- FIG. 5 is a flow diagram illustrating exemplary operations 500 for targeted botnet protection according to some embodiments.
- the operations 500 may be for block 210 of FIG. 2 , i.e., when traffic identified as malicious is received from one of the source identifiers of one of the botnets, block all traffic from all members of that botnet for an amount of time.
- some or all of these operations 500 may be performed by the TMM 104 described herein.
- the operations 500 can include, at block 505 , receiving one or more sets of botnet sources, where each set includes a plurality of source IP addresses of a botnet. Then, at block 510 , traffic destined to a server can be received from an IP address of one of the botnets (i.e., an address identified within at least one of the sets of botnet sources).
- the operations 500 can include performing a protective action with regard to the traffic, such as “blocking” it from being forwarded to the server, subjecting it to additional analysis, notifying a user or process, assigning it a lower or different processing priority value, etc.
- block 515 can include enabling a botnet security measure to be utilized against any traffic received from any of the source IP addresses belonging to the botnet (to which the source IP address of the received traffic belongs) for a period of time. This can include, in some embodiments, subjecting such traffic to the same protective action that was performed with regard to the traffic (e.g., blocking it), though it could include performing different actions.
- the “period of time” for enabling the security measure can be based upon a determined duration of attack involving that particular botnet, and could be based upon an average or maximum duration of attack involving that particular botnet.
- the operations 500 also include blocks 520 , 525 , and 530 .
- traffic is received from another IP address of the one botnet.
- the protective action is performed against the traffic, due to the traffic being originated by a member of the same botnet.
- the botnet security measure can be disabled for the botnet.
- the disabling occurs responsive to an expiration of a timer, and in some embodiments, the disabling occurs responsive to not receiving any traffic from any IP of the botnet for a threshold amount of time.
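The enable/disable lifecycle of blocks 515-530 can be sketched as follows. This is an illustrative Python sketch rather than the patent's implementation; the class and attribute names (e.g., `BotnetProtector`, `blocked_until`) are assumptions, and the botnet membership map would be supplied by the botnet identification process described herein.

```python
import time

class BotnetProtector:
    """Illustrative sketch of the per-botnet security measure of operations 500."""

    def __init__(self, botnets, block_seconds=600.0):
        self.botnets = botnets            # botnet ID -> set of member IPs
        self.block_seconds = block_seconds
        self.blocked_until = {}           # botnet ID -> expiry timestamp

    def _botnet_of(self, source_ip):
        for botnet_id, members in self.botnets.items():
            if source_ip in members:
                return botnet_id
        return None

    def handle_malicious(self, source_ip, now=None):
        """Block 515: on malicious traffic from a member, enable the
        measure against the whole botnet for a period of time."""
        now = time.time() if now is None else now
        botnet_id = self._botnet_of(source_ip)
        if botnet_id is not None:
            self.blocked_until[botnet_id] = now + self.block_seconds
        return botnet_id

    def should_block(self, source_ip, now=None):
        """Blocks 520-530: traffic from any member of an enabled botnet is
        subjected to the protective action until the timer expires."""
        now = time.time() if now is None else now
        botnet_id = self._botnet_of(source_ip)
        if botnet_id is None:
            return False
        expiry = self.blocked_until.get(botnet_id)
        if expiry is None:
            return False
        if now >= expiry:                 # timer expired: disable the measure
            del self.blocked_until[botnet_id]
            return False
        return True
```

Note that the fixed `block_seconds` could instead be derived from the average or maximum observed attack duration for that botnet, as described above.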
- FIG. 6 is a block diagram illustrating an exemplary on premise deployment environment for a TMM 104 and/or BIM 106 according to some embodiments.
- FIG. 6 illustrates the TMM 104 implemented in a security gateway 602 (which can be an enterprise security gateway) coupled between servers 111 and client end stations 120 A- 120 N.
- access to the servers 111 can be thought of as being “protected” by the security gateway 602 , as most (or all) desired interactions with any of the servers 111 will flow through the security gateway 602 .
- Security gateways, such as firewalls, database firewalls, file system firewalls, and web application firewalls (WAFs), are network security systems that protect software applications (e.g., web application servers 616 ) executing on electronic devices (e.g., server end stations 660 ) within a network (e.g., enterprise network 610 ) by controlling the flow of network traffic passing through the security gateway.
- the security gateway can prevent malicious traffic from reaching a protected server, modify the malicious traffic, and/or create an alert to trigger another responsive event or notify a user of the detection of the malicious traffic.
- the security gateway 602 is communicatively coupled between the client end stations ( 120 A- 120 N) and the server end stations 660 , such that all traffic (or a defined subset of traffic) destined to the server end stations 660 is first passed through (or made available to) the security gateway 602 for analysis. In some embodiments, part of the analysis is performed by the TMM 104 based upon one or more configured security rules 650 .
- the security gateway 602 executes as part of a separate server end station 630 B or a (dedicated or shared) network device 630 A; but in other embodiments, the security gateway 602 can operate as part of server end stations 660 (for example, as a software module), or can be implemented using another type of electronic device, and can be software, hardware, or a combination of both.
- a network device is an electronic device that is a piece of networking equipment, including hardware and software, which communicatively interconnects other equipment on the network (e.g., other network devices, end stations).
- Some network devices are “multiple services network devices” that provide support for multiple networking functions (e.g., routing, bridging, switching), and/or provide support for multiple application services (e.g., data, voice, and video).
- Security gateways are sometimes deployed as transparent inline bridges, routers, or transparent proxies.
- a security gateway deployed as a transparent inline bridge, transparent router, or transparent proxy is placed inline between clients (the originating client end station of the traffic 601 ) and servers (e.g., server(s) 111 ) and is “transparent” to both the clients and servers (the clients and the servers are not aware of the IP address of the security gateway, and thus the security gateway is not an apparent endpoint).
- packets sent between the clients and the servers will pass through the security gateway (e.g., arrive at the security gateway, be analyzed by the security gateway, and may be blocked or forwarded on to the server when the packets are deemed acceptable by the security gateway).
- security gateways can also be deployed as a reverse proxy or non-inline sniffer (which may be coupled to a switch or other network device forwarding network traffic between the client end stations ( 120 A- 120 N) and the server end stations 660 ).
- the security gateway 602 and the server end station(s) 660 are illustrated as being within an enterprise network 610 , which can include one or more LANs.
- An enterprise is a business, organization, governmental body, or other collective body utilizing or providing content and/or services.
- a set of one or more server end stations 660 execute or otherwise implement one or more servers providing the content and/or services.
- the servers 111 include a database server 612 , a file server 614 , a web application server 616 , and a mail server 620 , though in other embodiments the set of server end stations 660 implement other types of servers, including but not limited to print servers, gaming servers, application servers, etc.
- a web application server 616 is system software (running on top of an operating system) executed by server hardware (e.g., server end stations 660 ) upon which web applications (e.g., web application 618 ) run.
- Web application servers 616 may include a web server (e.g. Apache, Microsoft® Internet Information Server (IIS), nginx, lighttpd) that delivers web pages (or other content) upon the request of HTTP clients (i.e., software executing on an end station) using the HTTP protocol.
- Web application servers 616 can also include an application server that executes procedures (i.e., programs, routines, scripts) of a web application 618 .
- Web application servers 616 typically include web server connectors, computer programming language libraries, runtime libraries, database connectors, and/or the administration code needed to deploy, configure, manage, and connect these components.
- Web applications 618 are computer software applications made up of one or more files including computer code that run on top of web application servers 616 and are written in a language the web application server 616 supports.
- Web applications 618 are typically designed to interact with HTTP clients by dynamically generating HyperText Markup Language (HTML) and other content responsive to HTTP request messages sent by those HTTP clients.
- HTTP clients typically interact with web applications by transmitting HTTP request messages to web application servers 616 , which execute portions of web applications 618 and return web application data in the form of HTTP response messages back to the HTTP clients, where the web application data can be rendered using a web browser.
- HTTP functions as a request-response protocol in a client-server computing model, where the web application servers 616 typically act as the “server” and the HTTP clients typically act as the “client.”
- HTTP Resources are identified and located on a network by Uniform Resource Identifiers (URIs)—or, more specifically, Uniform Resource Locators (URLs)—using the HTTP or HTTP Secure (HTTPS) URI schemes.
- URLs are specific strings of characters that identify a particular reference available using the Internet. URLs typically contain a protocol identifier or scheme name (e.g. http/https/ftp), a colon, two slashes, and one or more of user credentials, server name, domain name, IP address, port, resource path, query string, and fragment identifier, which may be separated by periods and/or slashes.
- HTTP/0.9 and HTTP/1.0 were revised in Internet Engineering Task Force (IETF) Request for Comments (RFC) 2616 as HTTP/1.1, which is in common use today.
- HTTP/2 is based upon the SPDY protocol and improves how transmitted data is framed and transported between clients and servers.
- Database servers 612 are computer programs that provide database services to other computer programs or computers, typically adhering to the client-server model of communication.
- Many web applications 618 utilize database servers 612 (e.g., relational databases such as PostgreSQL, MySQL, and Oracle, and non-relational databases, also known as NoSQL databases, such as MongoDB, Riak, CouchDB, Apache Cassandra, and HBase) to store information received from HTTP clients and/or information to be displayed to HTTP clients.
- Other applications can also utilize database servers 612 , including but not limited to accounting software, other business software, or research software.
- Database servers 612 typically store data using one or more databases, each including one or more tables (traditionally and formally referred to as “relations”), which are ledger-style (or spreadsheet-style) data structures including columns (often deemed “attributes”, or “attribute names”) and rows (often deemed “tuples”) of data (“values” or “attribute values”) adhering to any defined data types for each column.
- a database server 612 can receive a SQL query from a client (directly from a client process or client end station using a database protocol, or indirectly via a web application server that a client is interacting with), execute the SQL query using data stored in the set of one or more database tables of one or more of the databases, and may potentially return a result (e.g., an indication of success, a value, one or more tuples, etc.).
- a file server 614 is system software (e.g., running on top of an operating system, or as part of an operating system itself) typically executed by one or more server end stations 660 (each coupled to or including one or more storage devices) that allows applications or client end stations access to a file-system and/or files (e.g., enterprise data), typically allowing for the opening of files, reading of files, writing to files, and/or closing of files over a network. Further, while some file servers 614 provide file-level access to storage, other file servers 614 may provide block-level access to storage.
- File servers 614 typically operate using any number of remote file-system access protocols, which allow client processes to access and/or manipulate remote files from across the Internet or within a same enterprise network (e.g., a corporate Intranet).
- remote file-system access protocols include, but are not limited to, Network File System (NFS), WebNFS, Server Message Block (SMB)/Common Internet File System (CIFS), File Transfer Protocol (FTP), Web Distributed Authoring and Versioning (WebDAV), Apple Filing Protocol (AFP), Remote File System (RFS), etc.
- a mail server 620 (or messaging server, message transfer agent, mail relay, etc.) is system software (running on top of an operating system) executed by server hardware (e.g., server end stations 660 ) that can transfer electronic messages (e.g., electronic mail) from one computing device to another using a client-server application architecture.
- Many mail servers 620 may implement and utilize the Simple Mail Transfer Protocol (SMTP), and may utilize the Post Office Protocol (POP3) and/or the Internet Message Access Protocol (IMAP), although many proprietary systems also exist.
- Many mail servers 620 also offer a web interface (e.g., as a web application 618 ) for reading and sending email.
- the illustrated exemplary deployment also illustrates a variety of configurations for implementing a BIM 106 .
- a first deployment possibility (BIM 106 A) is as a module of the security gateway 602 .
- Another deployment possibility (BIM 106 B) is as a module executed upon the server end station(s) 660
- yet another deployment possibility (BIM 106 C) is a module executed in a cloud computing system 664 .
- the BIM 106 is communicatively coupled with the TMM 104 , and thus can be located in a variety of locations able to provide such connectivity.
- FIG. 7 is a block diagram illustrating an exemplary cloud-based deployment environment 700 for a TMM and/or BIM according to some embodiments.
- FIG. 7 again illustrates servers 111 , a TMM 104 , various deployments of a BIM 106 , and client end station(s) 120 A- 120 N.
- the servers 111 (and possibly BIM 106 B) can be provided as cloud services 710 of one or more third-party server end stations 720 of, for example, a cloud computing system 732 .
- the TMM 104 (and possibly BIM 106 A) can be provided in a cloud security gateway 702 operating in a cloud computing system 730 , which can be different from cloud computing system 732 , or possibly the same.
- the path 725 from the client end station(s) 120 A- 120 N to the servers 111 necessarily flows through the TMM 104 , even though it may not be in a same cloud computing system 732 as the servers 111 .
- a cloud security gateway 702 is the Imperva (TM) Skyfence (TM) Cloud Gateway from Imperva, Inc.
- the TMM 104 may not lie in the path 725 between the client end stations 120 A- 120 N and the servers 111 , and instead may gain access to network traffic through a channel between the TMM 104 and the servers 111 for this purpose.
- the TMM 104 can be configured to “monitor” or “poll” the cloud service(s) 710 by transmitting requests to the third-party server end stations (or individual servers, such as web application server 616 ) as part of a monitoring scheme to obtain network traffic. This monitoring can occur according to a defined schedule, such as checking once every few minutes.
- the server(s) 111 can be configured to “report” some or all traffic (or summaries thereof, event data structures, etc.) to the TMM 104 .
- the server(s) 111 can be configured to transmit data to the TMM 104 using an Application Programming Interface (API) call, Short Message Service (SMS) message, email message, etc.
- FIG. 8 is a block diagram illustrating exemplary operations 800 including malicious event identification for botnet member identification according to some embodiments.
- the operations depicted in FIG. 8 and subsequent FIGS. 9-11 are provided to further illustrate one possible set of operations corresponding to certain operations depicted in FIG. 3 and/or FIG. 4 . In some embodiments, these operations of any or all of FIGS. 8-11 can be performed by the BIM 106 of FIG. 1, 6 , or 7 .
- a botnet member identification procedure can include obtaining a set of requests 805 originated by one or more end stations (e.g., client end stations, server end stations) and destined to one or more servers (e.g., servers 111 ). As described herein, this obtaining could include receiving traffic data 108 from one or more TMMs 104 (e.g., as each request is received, according to a schedule, on-demand, etc.), and these one or more TMMs 104 could be at a same or different site—for example, the traffic data 108 could be from one organization or from multiple organizations.
- the obtaining of the set of requests 805 could occur via obtaining, by the BIM 106 , data from access logs of the server(s) 111 , and identifying requests 805 from within this data.
- the BIM 106 could request access log data from the server(s) 111 (or server end station(s) 110 ) and then it would be transmitted back to the BIM 106 in response, or the server(s) 111 /server end station(s) 110 could otherwise provide the access log data, e.g., according to a schedule. Then, according to the type of server 111 and/or the type of logs, requests can be identified.
- This set of operations for obtaining or “collecting” the set of requests 805 can, in some embodiments, be part of block 305 of FIG. 3 —i.e., observing incoming requests to one or more web applications at one or more sites for an amount of time.
- a set of malicious requests 815 can be identified based upon applying a set of security rules to the set of collected requests 805 .
- Numerous types of security rules for detecting malicious attacks are well-known to those of skill in the art and can be used in various embodiments, and may include searching for the existence (or non-existence) of a particular character, set of characters, pattern, etc., within one or multiple portions (e.g., within headers and/or payloads) of one or multiple requests.
- a rule can be configured to detect a “malformed” Content-Length header (of an HTTP request message) that has a negative value, as one observed attack includes attackers providing negative values in this field instead of an anticipated, non-negative integer value.
- one or more of the collected requests 805 can be analyzed to determine if any include a negative integer value (or another type of non-anticipated type of value), and if such a request is found, it can be included in the set of malicious requests 815 .
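As an illustration of such a rule, a minimal check for the malformed (negative) Content-Length value might look like the following; the function name and the dict-of-headers representation are assumptions of the sketch, not the patent's rule engine.

```python
def is_malformed_content_length(headers):
    """Return True if a Content-Length header is present but is not a
    non-negative integer (e.g., the negative-value attack described above).

    `headers` is a simple dict mapping header names to raw string values.
    """
    raw = headers.get("Content-Length")
    if raw is None:
        return False                      # header absent: rule does not fire
    try:
        return int(raw.strip()) < 0
    except ValueError:
        return True                       # a non-integer value is also malformed
```

A request for which this returns True would be added to the set of malicious requests 815.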
- the number of rules and/or attack “coverage” of the set of rules can differ according to the types of server(s) involved, the amount of scrutiny required for the deployment, the amount of processing resources available, etc.
- the malicious requests 815 could include requests 810 that are part of a volumetric (or “volume-based”) attack, protocol attack, application layer attack, etc.
- this set of operations for identifying malicious requests 815 may be part of block 310 of FIG. 3 —i.e., identifying a subset of the observed incoming requests 805 as being part of a malicious attack.
- For each of the malicious requests 815 , an event structure 820 can be generated. As illustrated, each event structure 820 includes a source identifier (S 1 , S 2 , et seq.) of the source of the corresponding request message, which can comprise a complete or partial source Internet Protocol (IP) address (v4 or v6), or another identifier that uniquely corresponds to a particular source. FIG. 8 also shows each event structure 820 including a destination identifier (D 1 , D 2 , et seq.) of the destination (e.g., server) of the corresponding request message, which can comprise a complete or partial destination Internet Protocol (IP) address (v4 or v6), a hostname, or another identifier that uniquely corresponds to a particular destination.
- Each event structure 820 illustrated also includes an attack type indicator (AT 1 , AT 2 , et seq.) that uniquely identifies one (or more) attack type of the corresponding request message.
- The attack type indicator could be an identifier of the particular rule that detected the request (e.g., rule # 1 ), or of the attack itself (e.g., a Malformed Content-Length attack, which could be found via one or multiple rules, could be attack # 3 ).
- the depicted event structures 820 also include a time (TIME A, TIME B, et seq.) associated with the request—e.g., the time when the request was received/observed/logged by a TMM 104 , security gateway 102 , server end station 110 , server 111 , etc.
- This time could be in a variety of formats/representations, including a raw or converted timestamp, an identifier of a period of time (which hour, 10 minute window, etc.) of the request, etc.
- this set of operations for generating event structures 820 may be part of block 315 of FIG. 3 —i.e., creating, for each of the identified malicious requests, an event data structure.
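One possible in-memory representation of an event structure 820 is sketched below; the field names and the dict shape of the input requests are illustrative assumptions, as the patent does not prescribe a concrete layout.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """Sketch of an event structure 820."""
    source: str        # source identifier (S1, S2, ...), e.g., full/partial IP
    destination: str   # destination identifier (D1, D2, ...), e.g., IP or hostname
    attack_type: str   # attack type indicator (AT1, AT2, ...)
    time: float        # time the request was received/observed/logged

def make_events(malicious_requests):
    """Block 315: create one event structure per identified malicious request."""
    return [Event(r["src"], r["dst"], r["attack_type"], r["time"])
            for r in malicious_requests]
```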
- the event structures 820 can be divided into time periods (e.g., time period ‘A’ 825 A, time period ‘B’ 825 B, and so on).
- the size of the time period can be configured differently depending upon the preferences of the implementer and the types of attacks currently used by attackers, and can be tuned over time to adjust for best results for a particular implementation. However, in general, the size of each time period should be large enough to allow for evidence of an ongoing, continued attack to be observed in successive time periods, though the size of each time period should be small enough to avoid the processing (described below) for each time period becoming impractical.
- the size of each time period is one minute, though in other embodiments the size of each time period could be thirty seconds, two minutes, five minutes, ten minutes, thirty minutes, one hour, etc.
- This “division” of the event structures 820 into multiple time periods 825 can occur as a distinct operation, and can include placing those event structures in a same time period in a memory or storage location in adjacent physical or virtual locations.
- the division of the event structures 820 may include labeling (or updating) each event structure 820 with a time period identifier, and in some embodiments, the division may occur iteratively—e.g., the event structures can be iterated over one or more multiple times to gather time period-relevant event structures in an on-demand fashion.
- each event structure can be a record/document stored in a database, and the division can occur by querying the database for event structures having a “time” within a range of times corresponding to a particular time period.
- this set of operations for dividing event structures into time periods may be part of block 320 of FIG. 3 —i.e., the events are split into separate time period groupings based upon the time of each event.
- FIG. 9 is a block diagram illustrating exemplary operations 900 including traffic source similarity determination for botnet member identification, which can be performed after the operations of FIG. 8 , according to some embodiments.
- the operations of FIG. 9 can correspond to block 330 of FIG. 3 .
- the event structures (corresponding to “events”) of a time period having a same attack type 905 can be identified, which can correspond to block 340 of FIG. 3 —i.e., events of a same attack type are selected. This selection can, in some embodiments, be based upon the attack type indicator data of each event structure (shown here as “AT 1 ”).
- a source-target graph 910 can be constructed, which can be a graph of the sources (of the traffic) connected to the targets (or destinations of the traffic).
- the events with an attack type 905 of “AT 1 ” have three different sources—S 1 , S 2 , and S 3 —and these events have four different targets—D 1 , D 2 , D 4 , and D 5 .
- a vertex (or node) is created for each traffic source and a vertex is created for each traffic target/destination, and edges (or “arrows”, which can be directed) can be inserted between those of the sources and targets that have a corresponding event with a same attack type 905 .
- the source-target graph 910 includes three edges leading from S 1 to D 1 , D 2 , and D 4 , respectively.
- these source-target graph 910 construction operations may be part of block 350 of FIG. 3 —i.e., representing selected events as a graph of sources connected to targets.
- a similarity algorithm 915 can be applied (or executed) to determine the similarity between the source vertices of the graph.
- a variety of such similarity algorithms are known to those of skill in the art, and one such similarity algorithm is illustrated here in FIG. 9 .
- This exemplary similarity algorithm 915 includes determining a similarity score for each pair of sources in the graph 910 , and includes dividing the number of common targets (for a considered pair of sources) by the total number of different targets (for the same considered pair of sources).
- the similarity score for S 1 and S 2 can be the result of 2 (as sources S 1 and S 2 both are connected to targets D 1 and D 4 ) divided by 3 (as sources S 1 and S 2 collectively target D 1 , D 2 , and D 4 ).
- This result of 2/3 yields a similarity score of 0.66 for the (S 1 , S 2 ) pair.
- a similarity score can be computed for each such combination of sources.
- This combination of source-target graph 910 construction and similarity scoring can be part of block 345 of FIG. 3 , and can be performed for each of the different groupings of events having a same attack type 905 , and can be performed for each time period (e.g., time periods 825 A- 825 B) being considered.
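The similarity scoring described above is effectively a Jaccard index over each source's set of targets. A minimal sketch follows, under the assumption that the events of one attack type are given as (source, target) pairs.

```python
from collections import defaultdict
from itertools import combinations

def source_similarities(events):
    """Blocks 350-355 sketch: build the source-target relation and score
    each pair of sources by the ratio of common targets to total
    distinct targets (a Jaccard index)."""
    targets = defaultdict(set)
    for source, target in events:
        targets[source].add(target)
    scores = {}
    for a, b in combinations(sorted(targets), 2):
        common = targets[a] & targets[b]
        total = targets[a] | targets[b]
        scores[(a, b)] = len(common) / len(total)
    return scores
```

For the FIG. 9 example, S1 targets {D1, D2, D4} and S2 targets {D1, D4}, giving 2/3 for the (S1, S2) pair.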
- FIG. 10 is a block diagram illustrating exemplary operations 1000 including periodic-source cluster generation for botnet member identification, which can be performed after the operations of FIG. 9 , according to some embodiments.
- the sources can be clustered 360 .
- each source reflected in the example similarity scores 1005 may be initially placed into its own cluster, which can be part of block 365 of FIG. 3 —i.e., assigning each source to its own group or cluster.
- the sources can be iteratively combined (or consolidated) based upon the similarity scores, which can be part of block 370 of FIG. 3 .
- the pair of sources having a largest similarity score can be combined. This combining can occur one or more times while one or more conditions are met. For example, in some embodiments the combining occurs as long as the next-highest similarity score is greater than a threshold value (e.g., 0.5).
- the exemplary six sources in FIG. 10 may first be placed into separate clusters (which can correspond to block 365 of FIG. 3 ), and the source pair having the largest unprocessed similarity score (of 0.66, belonging to the S 1 -S 2 pair) can be merged into one cluster. This results in one cluster with S 1 and S 2 , and four other clusters corresponding to S 3 -S 6 . Next, the next source pair having the largest unprocessed similarity score (of 0.57, belonging to the S 4 -S 5 pair) can be merged into one cluster. This results in four clusters—(S 1 , S 2 ), (S 3 ), (S 6 ), and (S 4 , S 5 ).
- Next, the source pair having the largest unprocessed similarity score (of 0.5, belonging to the S 1 -S 6 pair) can be merged into one cluster.
- the source S 6 can be added into the cluster including S 1 and S 2 , and thus, three clusters remain—(S 1 , S 2 , S 6 ), (S 3 ), and (S 4 , S 5 ).
- Although the next source pair having the largest unprocessed similarity score (0.33, belonging to the S 1 -S 4 pair) could be merged, in this example we assume that a merging condition exists, which indicates that merging will only occur for similarity scores that are greater than or equal to 0.5. Thus, because 0.33 does not satisfy this criterion (it is less than 0.5), the merging stops, leaving a set of periodic-source clusters 1010 A- 1010 C.
- this illustrated scenario is exemplary.
- rules can be utilized that indicate what happens when a source (in a cluster with 1+other sources) is to be merged with another source (in a different cluster with 1+other sources), which could include merging all of these sources together into one cluster, stopping the merging process, skipping that particular merge but continuing the merging process, etc.
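The iterative merging of blocks 365-370 can be sketched as a simple agglomerative procedure with a stop threshold. The merge-into-one-cluster rule and the 0.5 threshold below are one possible choice among the options described above.

```python
def cluster_sources(scores, min_score=0.5):
    """Blocks 365-370 sketch: start with one cluster per source, then
    repeatedly merge the pair of clusters joined by the highest
    remaining similarity score, stopping once no score >= min_score
    remains. `scores` maps (source_a, source_b) pairs to similarities."""
    cluster_of = {}
    for a, b in scores:
        cluster_of.setdefault(a, frozenset([a]))
        cluster_of.setdefault(b, frozenset([b]))
    for (a, b), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < min_score:
            break                         # merging condition no longer met
        if cluster_of[a] is not cluster_of[b]:
            merged = cluster_of[a] | cluster_of[b]
            for member in merged:
                cluster_of[member] = merged
    return set(cluster_of.values())
```

Applied to the FIG. 10 scores, this reproduces the three periodic-source clusters (S1, S2, S6), (S3), and (S4, S5).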
- FIG. 11 is a block diagram illustrating exemplary operations 1100 including attacking-clusters graph generation and botnet identification for botnet member identification, which can be performed after the operations of FIG. 10 , according to some embodiments.
- an attacking-clusters graph 1110 can be generated. This generation can be part of block 380 of FIG. 3 .
- the periodic-source clusters 1010 A- 1010 C of time period “A” 825 A can be represented as a level in a graph, where each vertex represents one of the clusters and has a weight indicating the number of sources within the cluster.
- the first periodic-source cluster 1010 A includes three sources (S 1 , S 2 , and S 6 ), and thus can be represented as a vertex “C 1 ” having a weight of 3 .
- Each of the other periodic-source clusters 1010 B- 1010 C can similarly be represented as vertices C 2 and C 3 , having weights of 2 and 1 respectively.
- This attacking-clusters graph 1110 construction can continue with generating additional levels for each other time period under consideration—here, this is shown as time periods ‘B’ 825 B and ‘C’ 825 C, though of course in other embodiments there can be different numbers of time periods.
- the attacking-clusters graph 1110 construction can also include the insertion of edges between the vertices of different levels.
- an edge can be inserted between clusters of adjacent levels that share members (i.e., have one or more common sources).
- cluster C 1 having S 1 , S 2 , and S 6
- cluster C 4 of the second level which, for example, could represent sources S 1 , S 2 , S 6 , S 8 , S 9 , and S 10 ).
- an edge can be inserted between C 1 and C 4 , with a weight indicating a number of common sources (3) between the two vertices.
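The level-by-level graph construction can be sketched as follows, with vertices weighted by cluster size and edges between adjacent levels weighted by the number of common sources. The (level, index) vertex naming is an assumption of the sketch.

```python
def build_attacking_clusters_graph(levels):
    """Block 380 sketch: `levels` is a list of levels, one per time
    period, each a list of clusters (sets of source identifiers).
    Returns vertex weights and the weighted edges between clusters of
    adjacent levels that share members."""
    vertices = {}                         # (level, index) -> weight
    edges = {}                            # ((level, i), (level+1, j)) -> weight
    for level, clusters in enumerate(levels):
        for i, cluster in enumerate(clusters):
            vertices[(level, i)] = len(cluster)
    for level in range(len(levels) - 1):
        for i, upper in enumerate(levels[level]):
            for j, lower in enumerate(levels[level + 1]):
                common = len(upper & lower)
                if common > 0:            # only clusters sharing members are linked
                    edges[((level, i), (level + 1, j))] = common
    return vertices, edges
```

With C1 = {S1, S2, S6} in the first level and C4 ⊇ {S1, S2, S6} in the second, this yields the C1-C4 edge of weight 3 described above.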
- the operations 1100 can further include analyzing the attacking-clusters graph 1110 according to a condition 1115 .
- a condition 1115 comprises one or more logical tests indicating what paths through the attacking-clusters graph 1110 are to be identified.
- the condition 1115 shown herein provides that paths are to be identified that include only vertices having a weight of 2 or more, only edges having a weight of 2 or more, and have a path length (e.g., a number of traversed edges) of at least 2 edges.
- the condition 1115 is satisfied for two paths of the attacking-clusters graph 1110 : C 1 -C 4 -C 8 and C 2 -C 5 -C 8 .
- These two paths can thus indicate two botnet candidates, which are represented as botnet candidate # 1 1150 A and botnet candidate # 2 1150 B.
- Each botnet candidate 1150 A- 1150 B is shown as including identifiers of the sources of the vertices traversed in the corresponding path.
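Finding the paths that satisfy a condition 1115 can be sketched as a depth-first enumeration of maximal paths through the qualifying edges. The graph representation (vertex -> weight, (vertex, next_vertex) -> weight) is an illustrative assumption.

```python
from collections import defaultdict

def find_botnet_candidate_paths(vertices, edges,
                                min_vertex=2, min_edge=2, min_length=2):
    """Block 385 sketch: enumerate maximal paths that use only vertices
    and edges meeting the minimal weights and span >= min_length edges."""
    successors = defaultdict(list)
    has_incoming = set()
    for (u, v), weight in edges.items():
        if (weight >= min_edge and vertices[u] >= min_vertex
                and vertices[v] >= min_vertex):
            successors[u].append(v)
            has_incoming.add(v)

    paths = []
    def extend(path):
        tail = path[-1]
        if not successors[tail]:          # maximal path reached
            if len(path) - 1 >= min_length:  # length counted in edges
                paths.append(path)
            return
        for nxt in successors[tail]:
            extend(path + [nxt])

    for start in list(successors):        # start only from vertices with no
        if start not in has_incoming:     # qualifying incoming edge
            extend([start])
    return paths
```

For the FIG. 11 example this yields the two botnet candidates C1-C4-C8 and C2-C5-C8.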
- the operations 1100 may include, at circle ‘A’, removing from the botnet candidates 1150 A- 1150 B any sources (or, source identifiers) that exist in a set of identifiers of whitelisted sources 1120 (e.g., sources known to be part of CDNs, search engines, scanning services).
- sources S 8 and S 13 are in the set of identifiers of whitelisted sources, and source S 8 is also within botnet candidate # 1 1150 A.
- this source may be removed from botnet candidate # 1 1150 A whereas botnet candidate # 2 will be unchanged, resulting in the two sets of suspected botnet identifiers 1125 A- 1125 B.
- These operations for the “filtering” of whitelisted sources may be part of block 390 of FIG. 3 —i.e., sources can be removed from the paths that belong to a list of whitelisted sources. However, in some embodiments such source removal operations may not be performed, and thus the flow may alternatively continue via circle ‘B’ to yield the two sets of suspected botnet identifiers 1125 A- 1125 B, where the first botnet # 1 1125 A does include an identifier of source S 8 .
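The whitelist filtering at circle 'A' can be sketched as a simple set difference per candidate. The membership sets below are hypothetical stand-ins for the candidates' sources; only S 8 's presence in both a candidate and the whitelist follows the text:

```python
def filter_whitelisted(candidates, whitelist):
    """Remove any whitelisted source identifiers from each botnet candidate."""
    return [sources - whitelist for sources in candidates]

candidate_1 = {"S1", "S2", "S6", "S8"}   # hypothetical members; contains S8
candidate_2 = {"S3", "S5", "S7"}         # hypothetical members
whitelist = {"S8", "S13"}                # e.g., known CDN / search-engine sources
suspected = filter_whitelisted([candidate_1, candidate_2], whitelist)
# S8 is removed from the first candidate; the second is unchanged
assert suspected == [{"S1", "S2", "S6"}, {"S3", "S5", "S7"}]
```

If the filtering step is skipped (the circle 'B' flow), the candidates are passed through unchanged and S 8 remains in the first set.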
- FIG. 12 is a flow diagram illustrating exemplary operations 1200 for providing targeted botnet protection according to some embodiments.
- operations 1200 can be performed by one or more TMMs 104 disclosed herein.
- the one or more TMMs 104 can be implemented by one or more electronic devices, and each TMM 104 can be deployed in front of the one or more servers in that the TMM 104 receives all network traffic sent by a plurality of end stations that is destined for the one or more servers.
- the operations 1200 include receiving a message including a plurality of identifiers that have been determined to be used by a subset of the plurality of end stations collectively acting as a suspected botnet, where each of the plurality of identifiers is or is based upon a network address.
- the operations 1200 can also include, at block 1210 , receiving, from a first of the plurality of end stations, a request message that is destined for one of the one or more servers and that includes at least a first identifier of the plurality of identifiers.
- the operations 1200 can also include, at block 1215 , blocking the request message from being sent to the one server responsive to a determination that the request message is malicious, and at block 1220 , responsive to the determination and a different determination that the request message includes any of the plurality of identifiers in at least a set of one or more locations in the request message, activating, for an amount of time, a protection mechanism that applies to all traffic that has any of the plurality of identifiers in any of the set of locations.
- the set of locations includes one or more of: a source IP address header field; a User-Agent header field; and an X-Forwarded-For header field.
- the protection mechanism comprises dropping all traffic that has any of the plurality of identifiers in any of the set of locations regardless of whether it is separately determined to be malicious.
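The behavior of blocks 1215 and 1220, blocking a malicious request and then activating a time-limited mechanism that drops any traffic carrying a suspected-botnet identifier in one of the checked locations, can be sketched as follows. This is an illustrative sketch only; the class, the message field names (source_ip, user_agent, x_forwarded_for), and the return values are assumptions, not the patented implementation:

```python
import time

class ProtectionState:
    """Per-botnet targeted protection state (a sketch of blocks 1215/1220)."""

    def __init__(self, botnet_ids, duration_seconds):
        self.botnet_ids = set(botnet_ids)  # plurality of identifiers
        self.duration = duration_seconds   # amount of time to stay active
        self.active_until = None

    def message_identifiers(self, msg):
        # The "set of locations" checked for identifiers: source IP address,
        # User-Agent, and X-Forwarded-For (field names are assumptions).
        fields = ("source_ip", "user_agent", "x_forwarded_for")
        return {msg.get(f) for f in fields} - {None}

    def handle(self, msg, is_malicious, now=None):
        now = time.monotonic() if now is None else now
        matches = self.message_identifiers(msg) & self.botnet_ids
        # While the mechanism is active, drop all traffic carrying any of the
        # identifiers, regardless of whether it is itself malicious.
        if self.active_until is not None and now < self.active_until and matches:
            return "drop"
        if is_malicious:
            if matches:  # malicious AND from the suspected botnet: activate
                self.active_until = now + self.duration
            return "block"
        return "forward"

state = ProtectionState({"1.1.1.1", "2.2.2.2", "6.6.6.6"}, duration_seconds=300)
assert state.handle({"source_ip": "6.6.6.6"}, is_malicious=True, now=0) == "block"
# A later non-malicious request from another member of the same suspected
# botnet is dropped while the mechanism remains active:
assert state.handle({"source_ip": "1.1.1.1"}, is_malicious=False, now=10) == "drop"
assert state.handle({"source_ip": "9.9.9.9"}, is_malicious=False, now=10) == "forward"
```

Once the amount of time elapses, matching traffic is forwarded again, which corresponds to the deactivated-mechanism behavior described for the second request message.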
- the operations 1200 further include receiving a second request message that is from the first end station and that is destined to any one of the one or more servers; and allowing the second request message to be forwarded toward its destination despite the second request message including the at least one of the plurality of identifiers that have been determined to be used by the subset of end stations collectively acting as the suspected botnet due to the protection mechanism no longer being activated.
- the protection mechanism comprises increasing an amount of security analysis performed with the traffic that has any of the plurality of identifiers in any of the set of locations.
- the amount of time that the protection mechanism is activated is specific to the suspected botnet. In some embodiments, the amount of time is based upon an average or maximum attack length of time determined based upon previous activity of the suspected botnet.
- the amount of time is indefinite in that it continues until a condition is satisfied. In some embodiments, the condition is satisfied upon a determination that no request messages have been received (e.g., at the TMM 104 ) that include any of the plurality of identifiers in any of the set of locations for a threshold amount of time.
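The indefinite activation that ends once no matching request has been seen for a threshold amount of time can be sketched as a simple idle timeout. The class and method names are assumptions:

```python
class IdleTimeoutDeactivation:
    """Sketch: protection remains active until no request carrying any
    suspected-botnet identifier has been seen for `threshold` seconds."""

    def __init__(self, threshold_seconds):
        self.threshold = threshold_seconds
        self.last_match = None  # timestamp of the last matching request

    def observe(self, timestamp, has_botnet_identifier):
        if has_botnet_identifier:
            self.last_match = timestamp

    def is_active(self, now):
        # Active indefinitely; deactivates only once the idle condition holds.
        return (self.last_match is not None
                and (now - self.last_match) < self.threshold)

guard = IdleTimeoutDeactivation(threshold_seconds=60)
guard.observe(0, has_botnet_identifier=True)
assert guard.is_active(30)       # matching traffic seen recently
assert not guard.is_active(100)  # idle for more than the threshold
```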
- the operations 1200 further include receiving, from a second end station of the subset of end stations, a second request message that is destined for a second server of the one or more servers and that includes at least one of the plurality of identifiers in one of the set of locations; and blocking, as part of the protection mechanism before an end of the amount of time, the second request message from being sent to the second server due to the protection mechanism being activated.
- the received message that includes the plurality of identifiers further includes a second plurality of identifiers that have been determined to be used by a second subset of the plurality of end stations collectively acting as a second suspected botnet; the first identifier exists in both the plurality of identifiers and the second plurality of identifiers; and the activated protection mechanism further applies to all traffic that has any of the second plurality of identifiers in any of the set of locations due to the first identifier also existing in the second plurality of identifiers.
- the operations 1200 further include receiving, from a second of the plurality of end stations, a second request message that is destined to any of the one or more servers and that includes at least a second identifier that exists within a second plurality of identifiers of a second suspected botnet but not within the first plurality of identifiers of the suspected botnet; and responsive at least in part due to the second identifier not existing within the first plurality of identifiers, allowing the second request message to be forwarded toward its destination despite the protection mechanism being activated for the suspected botnet and despite the second identifier belonging to the second plurality of identifiers of the second suspected botnet.
- FIG. 13 is a flow diagram illustrating exemplary operations 1300 for providing targeted botnet protection according to some embodiments.
- operations 1300 can be performed by one or more TMMs 104 disclosed herein.
- the one or more TMMs 104 can be implemented by one or more electronic devices, and each TMM 104 can be deployed in front of the one or more servers in that the TMM 104 receives all network traffic sent by a plurality of end stations that is destined for the one or more servers.
- the operations 1300 include receiving a message including a plurality of identifiers that have been determined to be used by a subset of the plurality of end stations collectively acting as a suspected botnet, wherein each of the plurality of identifiers is or is based upon a network address.
- the operations 1300 include, at block 1310 , receiving, from a plurality of end stations of the subset of end stations collectively acting as the suspected botnet, a plurality of request messages that are destined for a set of one or more of the one or more servers, wherein each of the plurality of request messages includes at least one of the plurality of identifiers in at least a set of one or more locations in the request message.
- the operations 1300 include, at block 1315 , responsive to a determination that the plurality of request messages each include, in the set of locations, an identifier that is within the plurality of identifiers and further that these plurality of request messages collectively satisfy a security rule, activating, for an amount of time, a protection mechanism that applies to all traffic that has any of the plurality of identifiers in any of the set of locations, wherein none of the plurality of request messages individually would satisfy the security rule.
- the security rule, to be satisfied, at least requires a defined number of request messages that share a common characteristic to be received within a period of time, wherein the defined number of request messages is greater than one.
- the common characteristic is met when each of the plurality of request messages carries a payload representing an attempt to login to an application, wherein the payload includes a password, a username, or both the password and username.
- the operations 1300 further include determining that each attempt to login of each of the plurality of request messages was unsuccessful.
- said determining that each attempt to login of the plurality of request messages was unsuccessful comprises: receiving a plurality of response messages that were originated by the set of servers, wherein each response message indicates that the attempt to login of a corresponding one of the plurality of request messages was unsuccessful.
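A security rule of this kind, satisfied only collectively, can be sketched as a sliding-window counter over failed login attempts. This is an illustration; the class name and the threshold/window values are assumptions:

```python
from collections import deque

class CollectiveLoginRule:
    """Sketch: fires only when at least `threshold` failed login attempts
    (the common characteristic) arrive within a `window`-second period;
    no single request message would satisfy the rule on its own."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold   # defined number, greater than one
        self.window = window_seconds
        self.failures = deque()      # timestamps of failed login attempts

    def record_failed_login(self, timestamp):
        self.failures.append(timestamp)
        # Slide the window: forget attempts older than `window` seconds.
        while self.failures and timestamp - self.failures[0] > self.window:
            self.failures.popleft()
        return len(self.failures) >= self.threshold

rule = CollectiveLoginRule(threshold=3, window_seconds=60)
assert not rule.record_failed_login(0)   # one failure: not satisfied
assert not rule.record_failed_login(10)  # two failures: still not satisfied
assert rule.record_failed_login(20)      # three within 60s: rule satisfied
```

Whether an attempt failed would be determined from the servers' response messages, as described above; this sketch assumes that determination has already been made.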
- the set of locations includes one or more of: a source IP address header field; a User-Agent header field; and an X-Forwarded-For header field.
- the protection mechanism comprises dropping all traffic that has any of the plurality of identifiers in any of the set of locations regardless of whether it is separately determined to be malicious.
- the protection mechanism comprises increasing an amount of security analysis performed with the traffic that has any of the plurality of identifiers in any of the set of locations.
- the amount of time that the protection mechanism is activated is specific to the suspected botnet and is based upon an average or maximum attack length of time determined based upon previous activity of the suspected botnet.
- the amount of time is indefinite in that it continues until a condition is satisfied, wherein the condition is satisfied upon a determination that no request messages have been received (e.g., at the TMM) that include any of the plurality of identifiers in any of the set of locations for a threshold amount of time.
- FIG. 14 is a flow diagram illustrating exemplary operations 1400 for identifying a subset of a plurality of end stations that collectively act as a suspected botnet according to some embodiments.
- operations 1400 can be performed by a BIM 106 disclosed herein.
- the BIM 106 can be implemented by an electronic device.
- the operations 1400 include, at block 1405 , obtaining traffic data from one or more TMMs 104 implemented by one or more electronic devices, wherein the traffic data includes or is based upon a plurality of request messages that were originated by ones of the plurality of end stations and that were destined to one or more servers, wherein each of the one or more TMMs is deployed in front of at least one of the one or more servers in that the TMM receives all network traffic originated by the plurality of end stations that is destined for the at least one server;
- the operations 1400 include, at block 1410 , generating, based upon the obtained traffic data, a set of identifiers corresponding to the subset of the plurality of end stations that are determined by the BIM to be collectively acting as the suspected botnet in that they have transmitted request messages, destined for one or more of the one or more servers, that collectively or individually satisfy one or more security rules which, when satisfied, indicate a malicious attack, wherein the set of identifiers comprises a plurality of identifiers.
- the operations 1400 include, at block 1415 , transmitting the set of identifiers to the one or more TMMs to cause the one or more TMMs to utilize the set of identifiers while analyzing subsequent request messages destined to the one or more servers to detect an attack from the suspected botnet and to protect the one or more servers from the attack.
- said generating the set of identifiers comprises: identifying, from the traffic data, a subset of the plurality of request messages that are malicious; and determining, based upon the subset of request messages, that the subset of end stations have collectively performed the malicious attack for at least a threshold amount of time and that at least a threshold number of the subset of end stations have been involved in the malicious attack for each of a threshold number of time periods within the threshold amount of time.
- said determining comprises: generating a plurality of event data structures corresponding to the subset of request messages, wherein each of the plurality of event data structures includes a source identifier of a source of the corresponding request message and a destination identifier of a destination of the corresponding request message; identifying a plurality of groupings of the generated event data structures, wherein each of the plurality of groupings corresponds to a time period of a plurality of different time periods and includes those of the generated event data structures corresponding to a request message that was received at a corresponding one of the one or more TMMs within the time period; and identifying, for each of the different time periods, groupings of source identifiers of the request messages corresponding to the event data structures of the time period, wherein generating the set of identifiers is based upon analyzing the groupings of source identifiers of each of the different time periods.
- said identifying the groupings of source identifiers for each of the different time periods comprises: calculating a similarity score between each pair of source identifiers of those of the event data structures of the time period that have a same attack type, wherein the similarity score is based upon the destination identifiers of those event data structures that include one of the pair of source identifiers; and clustering, based upon the similarity scores, the source identifiers of those of the event data structures of the time period that have the same attack type.
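One way to realize this similarity-and-clustering step is sketched below, using Jaccard similarity over destination-identifier sets and single-link clustering via union-find. The description only states that the score is based upon the destination identifiers, so the specific score and the clustering method here are assumptions:

```python
from itertools import combinations

def jaccard(a, b):
    # One possible similarity score over destination-identifier sets.
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_sources(dest_by_source, threshold):
    """Single-link clustering via union-find: sources attacking similar
    destination sets (score >= threshold) join the same cluster."""
    parent = {s: s for s in dest_by_source}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for s1, s2 in combinations(dest_by_source, 2):
        if jaccard(dest_by_source[s1], dest_by_source[s2]) >= threshold:
            parent[find(s1)] = find(s2)

    clusters = {}
    for s in dest_by_source:
        clusters.setdefault(find(s), set()).add(s)
    return list(clusters.values())

# S1 and S2 attack the same destinations; S3 attacks an unrelated one.
dests = {"S1": {"D1", "D2"}, "S2": {"D1", "D2"}, "S3": {"D9"}}
clusters = cluster_sources(dests, threshold=0.5)
assert {frozenset(c) for c in clusters} == {frozenset({"S1", "S2"}),
                                            frozenset({"S3"})}
```

Per the description, this clustering would be run separately for each time period and each attack type, with the resulting per-period clusters feeding the attacking-clusters graph.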
- said generating the set of identifiers further comprises: removing, from the set of identifiers, any identifier existing within a set of whitelisted source identifiers.
- the operations 1400 further include determining, based at least in part upon the obtained traffic data, an attack duration of the suspected botnet, wherein the attack duration is an average attack duration or a maximum attack duration observed over a plurality of attacks observed from the suspected botnet; and transmitting the attack duration of the suspected botnet to the one or more TMMs to cause the one or more TMMs to utilize the attack duration while protecting the one or more servers from the suspected botnet.
- each identifier of the set of identifiers comprises an IP address.
Description
- This application is a continuation of application Ser. No. 15/442,560, filed Feb. 24, 2017, which claims priority from U.S. Provisional Application No. 62/300,069, filed on Feb. 25, 2016, the contents of which are incorporated by reference.
- Embodiments relate to the field of computer networking; and more specifically, to techniques for botnet identification and targeted botnet protection.
- A botnet is a group of Internet-connected computing devices communicating with other similar machines in an effort to complete repetitive tasks and objectives. Botnets can include computers whose security defenses have been breached and control conceded to a third party. Each such compromised device, known as a “bot,” may be created when a computer is penetrated by software from a malware (i.e., a malicious software) distribution. The controller of a botnet is able to direct the activities of these compromised computers through communication channels formed by standards-based network protocols such as Internet Relay Chat (IRC), Hypertext Transfer Protocol (HTTP), etc.
- Computers can be co-opted into a botnet when they execute malicious software. This can be accomplished by luring users into making a drive-by download, exploiting web browser vulnerabilities, or by tricking the user into running a Trojan horse program, which could come from an email attachment. This malware typically installs modules that allow the computer to be commanded and controlled by the botnet's operator. After the software is executed, it may “call home” to the host computer. When the re-connection is made, depending on how it is written, a Trojan may then delete itself, or may remain present to update and maintain the modules. Many computer users are unaware that their computer is infected with bots.
- Botnets can include many different computers (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, or more) and the membership of a botnet can change over time.
- One type of attack perpetrated by botnets is a distributed denial-of-service (DDoS) attack, in which multiple systems submit as many requests as possible to a single Internet computer or service, overloading it and preventing it from servicing legitimate requests.
- The geographic dispersal of botnets typically means that each participant must be individually identified, which limits the benefits of filtering mechanisms. Although a service provider could choose to block all traffic during a botnet attack, this negatively impacts existing users of the service. Further, a service provider could choose to allow all traffic to continue to be processed, but this can significantly affect its quality of service to its regular users, and potentially even “crash” the service altogether. Moreover, it can be tremendously difficult to determine which requests for a service are malicious and which are not, making it very challenging to attempt to selectively deal with only the malicious traffic.
- Accordingly, improved techniques for identifying botnet traffic and protecting services from botnet attacks are desired.
- The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
-
FIG. 1 is a block diagram illustrating botnet member identification and targeted botnet protection according to some embodiments. -
FIG. 2 is a flow diagram illustrating operations for botnet member identification and targeted botnet protection according to some embodiments. -
FIG. 3 is a flow diagram illustrating exemplary operations for botnet member identification according to some embodiments. -
FIG. 4 is a block diagram illustrating botnet member identification according to some embodiments. -
FIG. 5 is a flow diagram illustrating exemplary operations for targeted botnet protection according to some embodiments. -
FIG. 6 is a block diagram illustrating an exemplary on-premise deployment environment for a traffic monitoring module and/or botnet identification module according to some embodiments. -
FIG. 7 is a block diagram illustrating an exemplary cloud-based deployment environment for a traffic monitoring module and/or botnet identification module according to some embodiments. -
FIGS. 8-11 illustrate operations for botnet member identification according to some embodiments, in which: -
FIG. 8 is a block diagram illustrating exemplary operations including malicious event identification for botnet member identification according to some embodiments. -
FIG. 9 is a block diagram illustrating exemplary operations including traffic source similarity determination for botnet member identification, which can be performed after the operations of FIG. 8 , according to some embodiments. -
FIG. 10 is a block diagram illustrating exemplary operations including periodic-source cluster generation for botnet member identification, which can be performed after the operations of FIG. 9 , according to some embodiments. -
FIG. 11 is a block diagram illustrating exemplary operations including attacking-clusters graph generation and botnet identification for botnet member identification, which can be performed after the operations of FIG. 10 , according to some embodiments. -
FIG. 12 is a flow diagram illustrating exemplary operations for providing targeted botnet protection according to some embodiments. -
FIG. 13 is a flow diagram illustrating exemplary operations for providing targeted botnet protection according to some embodiments. -
FIG. 14 is a flow diagram illustrating exemplary operations for identifying a subset of a plurality of end stations that collectively act as a suspected botnet according to some embodiments. - In the following description, numerous specific details such as logic implementations, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
- Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) are used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
- References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other. Further, although a Uniform Resource Locator (URL) is one type of Uniform Resource Identifier (URI), these terms are used interchangeably herein to refer to a URI, which is a string of characters used to identify a name or a web resource.
- The techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network device). Such electronic devices, which are also referred to as computing devices, store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks; optical disks; random access memory (RAM); read only memory (ROM); flash memory devices; phase-change memory) and transitory computer-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals, such as carrier waves, infrared signals, digital signals). In addition, such electronic devices include hardware, such as a set of one or more processors coupled to one or more other components, e.g., one or more non-transitory machine-readable storage media to store code and/or data, and a set of one or more wired or wireless network interfaces allowing the electronic device to transmit data to and receive data from other computing devices, typically across one or more networks (e.g., Local Area Networks (LANs), the Internet). The coupling of the set of processors and other components is typically through one or more interconnects within the electronic device (e.g., busses and possibly bridges). Thus, the non-transitory machine-readable storage media of a given electronic device typically stores code (i.e., instructions) for execution on the set of one or more processors of that electronic device. Of course, one or more parts of various embodiments may be implemented using different combinations of software, firmware, and/or hardware.
- Embodiments described herein provide for methods, systems, non-transitory computer readable media, and apparatuses for botnet identification and for targeted botnet protection.
- In some embodiments, members of a botnet can be identified by observing incoming traffic to a set of servers (e.g., web application servers) over time, finding behavior-similarity between the sources of the traffic that is malicious by examining the traffic broken into time-periods, and finding associations between sets of sources that have many members and persist over a long time duration. In some embodiments, sets of botnet sources can thus be identified, where each set of botnet sources includes multiple source identifiers (e.g., Internet Protocol (IP) addresses utilized by the end stations participating in a particular botnet).
- In some embodiments, sets of botnet sources can be utilized to enable targeted botnet protection. A traffic monitoring module, upon detecting incoming traffic destined to a server that is deemed as malicious according to a security rule, can identify whether the source of that malicious traffic is identified within any of the sets of botnet sources. When the source of the received malicious traffic is determined to be within one of the sets of botnet sources, security measures can be activated for any further traffic received from any of the sources of that one set of botnet sources for a period of time. Accordingly, instead of following a naive approach of immediately blocking all traffic from all of the sets of botnet sources—which would likely interrupt some non-malicious traffic, as many devices acting as part of a botnet are frequently used for non-malicious purposes—some embodiments can protect against attacks from these botnets with a fine-grained approach that is limited to an attacking botnet, and can be limited to a particular time period (e.g., the length of the attack), to thereby allow subsequent non-malicious activity from those sources to be processed in an uninterrupted manner.
-
FIG. 1 is a block diagram illustrating a system 100 providing botnet member identification and targeted botnet protection according to some embodiments. Although “botnet member identification” and “targeted botnet protection” may be described together in this Figure and throughout this description, it is to be understood that these techniques may also be separately/individually utilized without the other technique. - In
FIG. 1 , a set of one or more server end stations 110 execute or otherwise implement one or more servers 111 providing access to data. In the embodiment depicted in this Figure, the server end stations 110 implement a web application server 116, though in other embodiments the set of server end stations 110 can enable other types of servers, including but not limited to database servers, file servers, print servers, mail servers, gaming servers, application servers, Domain Name System (DNS) servers, etc. - The set of
server end stations 110 may be “protected” by a security gateway 102. Security gateways 102—such as firewalls, database firewalls, file system firewalls, and web application firewalls (WAFs)—are network security systems that protect software applications (e.g., web application server(s) 116 or servers 111) executing on electronic devices (e.g., server end station(s) 110) within a network by controlling the flow of network traffic passing through the security gateway 102. By analyzing packets flowing through the security gateway 102 and determining whether those packets should be allowed to continue traveling through the network, the security gateway 102 can prevent malicious traffic from reaching a protected server, modify the malicious traffic, and/or create an alert to trigger another responsive event or notify a user of the detection of the malicious traffic. - In some embodiments, the
security gateway 102 is communicatively coupled between one or more client end stations 120A-120N and the server end stations 110, such that all traffic destined to the server end stations 110 is first passed through (or made available to) the security gateway 102 for analysis. In some embodiments, the security gateway 102 executes as part of a separate server end station or a (dedicated or shared) network device; but in other embodiments, the security gateway 102 operates as part of server end station(s) 110 (for example, as a software module), or is implemented using another type of electronic device and can be software, hardware, or a combination of both. Further detail regarding security gateways will be described later herein with regard to FIGS. 6 and 7 . -
Client end stations 120A-120N (e.g., workstations, laptops, netbooks, palm tops, mobile phones, smartphones, multimedia phones, Voice over Internet Protocol (VoIP) phones, user equipment (UE), terminals, portable media players, Global Positioning Satellite (GPS) units, gaming systems, set-top boxes, etc.) are computing devices operable to execute applications 126 that, among other functions, can access the content/services provided by other computing devices, such as the server end stations 110. - In this example, the client end stations 120 include one (or possibly multiple) malware 122 modules that cause the client end stations 120 to participate in one (or more)
botnets 104A-104N. In some cases, a client end station (e.g., 120A) may be infected with malware 122 while the user (e.g., owner/operator) of the client end station is unaware of the infection. Thus, the user may continue to operate their client end station in a non-malicious manner, and the client end station may act as part of a botnet (e.g., receive commands from a controller, begin transmitting malicious traffic, etc.) and perform malicious actions without the user's knowledge, perhaps even concurrently while the user is actively utilizing the client end station. - In some embodiments, a
traffic monitoring module 104 or “TMM” (which may be implemented within a security gateway 102) can be configured to receive network traffic from client end stations 120 that is directed to one or more server(s) 111, and provide traffic data 108 (e.g., raw traffic/captures of packets, “event” data structures that represent certain traffic, etc.) to a botnet identification module (“BIM”) 106 at circle ‘1’. In some embodiments, the BIM 106 may receive such traffic data 108 from only one deployment 112A (e.g., one or more traffic monitoring modules 104 of a particular enterprise, network, etc.), though in other embodiments the BIM 106 may receive such traffic data 108 from multiple deployments 112A-112N. - In some embodiments, the
BIM 106 performs an analysis of this traffic data 108 at circle ‘2’ to determine one or more sets of botnet sources. Each set of botnet sources from the one or more sets of botnet sources can be specific to a particular botnet, and include source identifiers for each client end station determined to belong to that botnet (or, determined as being highly likely as belonging to a botnet). In some embodiments, each source identifier comprises an IP address, such as an IPv4 address or IPv6 address. Further detail describing exemplary techniques for botnet membership determinations is presented later herein with regard to FIGS. 3 and 4 . - At circle ‘3’, the determined sets of
botnet sources 114 can be provided from the BIM 106 to one or more TMMs 104, which may be located at one or more deployments 112A-112N. - In some embodiments, the
TMM 104 can utilize the sets of botnet sources 114 to protect the server(s) 111 (and/or other infrastructure) against the identified botnets. A partial representation of sets of botnet sources 114 is illustrated herein as including (at least) three sets of botnet sources. A first set of botnet sources is illustrated as including at least three sources: 1.1.1.1, 2.2.2.2, and 5.5.5.5. A second set of botnet sources is illustrated as including at least three sources: 2.2.2.2, 4.4.4.4, and 6.6.6.6. A third set of botnet sources is illustrated as including at least three sources: 7.7.7.7, 8.8.8.8, and 9.9.9.9. Of course, more or fewer sets of botnet sources may be used, and the sets of botnet sources may include more or fewer source identifiers. Additionally, in some scenarios a single source identifier (e.g., 2.2.2.2) may exist in multiple different ones of the sets (such as when a single client end station using that IP address has been infected with multiple malware types and thus is a member of multiple botnets), though in other scenarios this may not be the case. - At circle ‘4’, the
TMM 104 may receive malicious traffic 130A from a source “6.6.6.6” (from client end station 120A), and analyze the traffic 130A (e.g., according to security rules) to determine that it is in fact malicious. The TMM 104 may be configured to perform a protective action at circle ‘5’ in response, such as blocking the traffic 130A (e.g., dropping the packets so that they are not sent to the destination server), downgrading the priority of the traffic in being processed/forwarded, modifying the traffic, reporting the incident to another device or human user, etc. - In some embodiments, the
TMM 104 can also consult the sets of botnet sources 114 by performing a lookup using a source identifier from the malicious traffic 130A. For example, a source IP address from an IP header of the traffic may serve as a source identifier, though other source identifiers can also be used, including but not limited to a value from a User-Agent header field of an HTTP request message (which may be able to uniquely identify a source), an IP address from an X-Forwarded-For (XFF) header field of an HTTP request message (which may identify an originating source of the message when the traffic is being passed via an intermediate proxy or load balancer), etc. In this example, the source IP address of the traffic 130A (i.e., 6.6.6.6) is found to exist within the second set of botnet sources. Accordingly, it is likely that a botnet attack from that botnet (e.g., botnet 104A) is beginning. - In response, at circle ‘6’, the
TMM 104 can “activate” one or more protection measures for any additional traffic that may be received from any of the source identifiers in that second set—i.e., 2.2.2.2, 4.4.4.4, 6.6.6.6, and so on. Thus, upon a receipt (e.g., a short time later) of malicious traffic 130B from client end station 120B (using a source IP of 4.4.4.4) at circle ‘7’, the TMM 104 can immediately perform the activated protection measure for the traffic 130B (at circle ‘8’, such as “dropping” the traffic by not forwarding it on to its intended destination), potentially even before performing any other security analysis of the traffic. Accordingly, in some embodiments, a potentially large amount of traffic that may arrive from a botnet in a potentially short amount of time can be quickly managed (e.g., dispensed with), thus enabling the TMM 104 to more easily accommodate the additional load from the attack and likely preserve resources for continuing to process non-malicious traffic destined for the server(s) 111—whether they are targeted by the botnet or not. - Notably, in some embodiments, when the source of the
malicious traffic 130A (in the example, 6.6.6.6) exists within multiple sets of botnet sources, embodiments may “activate” protection measures for all of these botnets. In some embodiments, these protection measures may be activated for different amounts of time based upon the characteristics of the particular involved botnets, such as the average or maximum length of time observed for attacks from those particular botnets. - In some embodiments, at the expiration of a time period, the activated protection measures can be disabled. The time period can be a constant value (e.g., 1 minute, 5 minutes, 10 minutes, 20 minutes, 1 hour, etc.), which may be different based upon the particular botnet. For example, in some embodiments the length of the time period can be based upon previously-detected activity of that botnet, and could be based upon an average attack length of time observed from that botnet, a maximum attack length of time observed from that botnet, etc.
- Moreover, in some embodiments the time period may continue indefinitely until a condition is satisfied. For example, a condition may be configured such that the condition is met when no traffic is observed from any of the sources within the particular botnet for a threshold period of time (e.g., 5 minutes, 15 minutes, 1 hour, etc.).
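As a non-limiting illustration, the lookup of circle ‘5’ and the timed activation of circle ‘6’ described above can be sketched as follows; the class and method names, and the use of Python, are assumptions for illustration only and are not part of the disclosed embodiments:

```python
import time

class BotnetProtection:
    """Sketch: a lookup over the sets of botnet sources, plus a
    per-botnet expiry so an activated protection measure lapses
    after a configurable, per-botnet duration."""

    def __init__(self, botnet_sets, durations):
        self.botnet_sets = botnet_sets  # botnet id -> set of source identifiers
        self.durations = durations      # botnet id -> seconds measure stays active
        self.expires_at = {}            # botnet id -> activation deadline

    def matching_botnets(self, source_id):
        """Every botnet whose set of sources contains this identifier
        (an IP here; a User-Agent or XFF value would work the same)."""
        return {b for b, members in self.botnet_sets.items()
                if source_id in members}

    def activate_for(self, source_id, now=None):
        """Circle '6': arm the measure for every botnet that the
        malicious source belongs to."""
        now = time.time() if now is None else now
        for b in self.matching_botnets(source_id):
            self.expires_at[b] = now + self.durations[b]

    def should_block(self, source_id, now=None):
        """Circle '8': block traffic whose source is in any botnet
        with a still-active protection measure."""
        now = time.time() if now is None else now
        return any(self.expires_at.get(b, 0) > now
                   for b in self.matching_botnets(source_id))
```

Using the second illustrated set {2.2.2.2, 4.4.4.4, 6.6.6.6}: malicious traffic from 6.6.6.6 would activate the measure, after which traffic from 4.4.4.4 is blocked immediately until the botnet's configured duration elapses.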
- As described above with regard to circle ‘4’, the
TMM 104 may receive malicious traffic 130A and analyze the traffic 130A to determine that it is malicious. In some embodiments, this analysis may be based upon one or more packets arriving from one particular client end station (e.g., 120A). - However, in some embodiments the analysis can be expanded to be based upon traffic from multiple client end stations. For example, a security rule could be configured to watch for a number of occurrences of a particular pattern within traffic within an amount of time (e.g., when X packets are received within Y seconds that include a particular character Z). As described above, this security rule could be applied to one particular source, and thus trigger when one source, for example, sends X packets within Y seconds that include a particular character Z. In that case, the
TMM 104 can activate protection measures against all sources within a shared set of botnet sources that includes the transmitting entity. However, in some embodiments the security rule could also or alternatively be triggered when the condition is met due to traffic from multiple sources within a same set of botnet sources (i.e., within a same botnet). Thus, in some embodiments, the condition can be satisfied when two or more sources within a set of botnet sources—but potentially not any of them individually—satisfy the condition. For example, the rule could be triggered (and the protection measures activated for the botnet) when a rule requires that 10 particular packets are received, and a first client (in the botnet) sends 5 such packets and a second client (also in the same botnet) sends 5 such packets. Accordingly, the security rule can look at traffic over time from multiple members of a botnet, which can make this protection scheme more granular than watching an entire botnet, but more comprehensive than just watching one particular source. - In some embodiments, both botnet member identification and targeted botnet protection techniques can be utilized together. For example,
FIG. 2 is a flow diagram illustrating operations 200 for botnet member identification and targeted botnet protection according to some embodiments. - The
operations 200 include, at block 205, identifying members of one or more botnets based upon network traffic observed over time, which includes identifying a plurality of source identifiers (e.g., IP addresses) of end stations acting as part of each botnet.
-
FIG. 3 is a flow diagram illustrating exemplary operations 300 for botnet member identification according to some embodiments. These operations 300 can be a part of block 205 shown in FIG. 2. - In some embodiments, the
operations 300 include, at block 305, observing incoming requests to one or more web applications at one or more sites for an amount of time, and, at block 310, identifying a subset of the observed incoming requests as being part of a malicious attack. The operations 300 can also include, at block 315, creating, for each of the identified malicious requests, an event data structure. The event data structure can include a unique identifier of the request's source, a unique identifier of the request's destination, and an identifier of a type of the malicious attack of the request. In some embodiments, blocks 305, 310, and 315 may be performed by the TMM 104, though one or more of these blocks could also be performed by the BIM 106. - These operations are further illustrated at 405 of
FIG. 4, which is a block diagram 400 illustrating botnet member identification concepts according to some embodiments. - At
block 320, the events are split into separate time period groupings based upon the time of each event. - Then, for each time period (330), the operations include identifying behavior similarity between traffic sources of the malicious requests (using the event data structures) at block 335.
- In some embodiments, block 335 includes blocks 340-370. At
block 340, events of a same attack type are selected, and at block 345 a similarity score is calculated between each pair of traffic sources. In some embodiments, calculating a similarity score includes representing selected events as a graph of sources connected to targets (350) and applying a similarity algorithm for vertices' similarity based upon the graph (355). As an example, see illustration 410 of FIG. 4. - At block 360, the sources can be clustered based upon the similarity scores. Block 360 in some embodiments includes assigning each source to its own cluster (365), and iteratively merging clusters of the next top-similarity pair of sources until the similarity of the next-examined top-similarity pair is below a threshold value (370). As an example, see
illustration 415 of FIG. 4. - At block 375, associations are determined between sets of sources that have a threshold amount of members (or, “many” members) that persisted over a threshold amount of time (or, a “long” amount of time). In some embodiments, block 375 includes creating an attacking-clusters graph (380) (as an example, see
illustration 420 of FIG. 4), and finding (385) all paths that contain only edges with a minimal weight, that pass through vertices with a minimal weight, and have a length (e.g., number of edges) larger than a threshold amount of edges. This can result in each path being a botnet “candidate”—i.e., each path may be a set of botnet sources, subject to further processing. - For example, optionally at
block 390, in some embodiments further processing may be performed in which sources that belong to a list of whitelisted sources can be removed from the paths. For example, in some embodiments the sources associated with known Content Distribution Networks (CDNs), search engines, known scanning services, etc., can be removed from the paths (lists) of botnet sources, as blocking traffic from these sources—regardless of whether some of it may be malicious—could be problematic to the operation of the protected server(s) 111. - In some embodiments, blocks 320-395 may be performed by the
BIM 106, though in other embodiments some or all of these blocks could be performed by other modules. -
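As a non-limiting illustration, the similarity scoring of blocks 345-355 and the clustering of blocks 360-370 can be sketched as follows. The description leaves the vertex-similarity algorithm open, so Jaccard similarity over each source's set of attacked targets is used here purely as an assumed stand-in, and the merge loop is one simple (single-linkage) reading of the iterative merging step:

```python
from itertools import combinations

def pairwise_similarity(events):
    """Blocks 345-355 (sketch): from (source, target) events of one
    attack type, build each source's set of attacked targets, then
    score each source pair by Jaccard similarity of those sets."""
    targets = {}
    for src, dst in events:
        targets.setdefault(src, set()).add(dst)
    scores = {}
    for a, b in combinations(sorted(targets), 2):
        union = targets[a] | targets[b]
        scores[(a, b)] = len(targets[a] & targets[b]) / len(union)
    return scores

def cluster_sources(scores, threshold):
    """Blocks 360-370 (sketch): start each source in its own cluster,
    then repeatedly merge the clusters of the current top-similarity
    pair, stopping once the next pair falls below `threshold`."""
    parent = {s: s for pair in scores for s in pair}

    def find(x):  # union-find representative lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break
        parent[find(a)] = find(b)

    clusters = {}
    for s in parent:
        clusters.setdefault(find(s), set()).add(s)
    return list(clusters.values())
```

Sources that attacked identical target sets score 1.0 and merge into one cluster; unrelated sources remain singletons, mirroring illustration 415.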
FIG. 5 is a flow diagram illustrating exemplary operations 500 for targeted botnet protection according to some embodiments. In some embodiments, the operations 500 may be for block 210 of FIG. 2, i.e., when traffic identified as malicious is received from one of the source identifiers of one of the botnets, block all traffic from all members of that botnet for an amount of time. In some embodiments, some or all of these operations 500 may be performed by the TMM 104 described herein. - The
operations 500 can include, at block 505, receiving one or more sets of botnet sources, where each set includes a plurality of source IP addresses of a botnet. Then, at block 510, traffic can be received from an IP address of one of the botnets (e.g., one that is identified within at least one of the sets of botnet sources) that is destined to a server. - In some embodiments, at
block 515, upon determining that the traffic is malicious, the operations 500 can include performing a protective action with regard to the traffic, such as “blocking” it from being forwarded to the server, subjecting it to additional analysis, notifying a user or process, assigning it a lower or different processing priority value, etc. Additionally, block 515 can include enabling a botnet security measure to be utilized against any traffic received from any of the source IP addresses belonging to the botnet (to which the source IP of the received traffic belongs) for a period of time. This can include, in some embodiments, subjecting such traffic to the same protective action that has been performed with regard to the traffic (e.g., blocking it), though it could include performing different actions. In some embodiments, the “period of time” for enabling the security measure can optionally be based upon a determined duration of attack involving that particular botnet, and could be based upon an average or maximum duration of attack involving that particular botnet. - Optionally, the
operations 500 also include blocks 520 and 525. At block 520, during the period of time in which the botnet security measure is activated, traffic is received from another IP address of the one botnet. At block 525, the protective action is performed against the traffic, due to the traffic being originated by a member of the same botnet. - At some point, at block 530, the botnet security measure can be disabled for the botnet. In some embodiments, the disabling occurs responsive to an expiration of a timer, and in some embodiments, the disabling occurs responsive to not receiving any traffic from any IP of the botnet for a threshold amount of time.
- Exemplary Deployment Environment
- As described herein, the various involved components can be deployed in various configurations for various purposes. For example,
FIG. 6 is a block diagram illustrating an exemplary on premise deployment environment for a TMM 104 and/or BIM 106 according to some embodiments. - Specifically,
FIG. 6 illustrates the TMM 104 implemented in a security gateway 602 (which can be an enterprise security gateway) coupled between servers 111 and client end stations 120A-120N. Thus, access to the servers 111 can be thought of as being “protected” by the security gateway 602, as most (or all) desired interactions with any of the servers 111 will flow through the security gateway 602.
- In some embodiments, the
security gateway 602 is communicatively coupled between the client end stations (120A-120N) and the server end stations 660, such that all traffic (or a defined subset of traffic) destined to the server end stations 660 is first passed through (or made available to) the security gateway 602 for analysis. In some embodiments, part of the analysis is performed by the TMM 104 based upon one or more configured security rules 650. - In some embodiments, the
security gateway 602 executes as part of a separate server end station 630B or a (dedicated or shared) network device 630A; but in other embodiments, the security gateway 602 can operate as part of server end stations 660 (for example, as a software module), or can be implemented using another type of electronic device and can be software, hardware, or a combination of both.
- Security gateways are sometimes deployed as transparent inline bridges, routers, or transparent proxies. A security gateway deployed as a transparent inline bridge, transparent router, or transparent proxy is placed inline between clients (the originating client end station of the traffic 601) and servers (e.g., server(s) 111) and is “transparent” to both the clients and servers (the clients and the servers are not aware of the IP address of the security gateway, and thus the security gateway is not an apparent endpoint). Thus, packets sent between the clients and the servers will pass through the security gateway (e.g., arrive at the security gateway, be analyzed by the security gateway, and may be blocked or forwarded on to the server when the packets are deemed acceptable by the security gateway).
- Additionally, security gateways can also be deployed as a reverse proxy or non-inline sniffer (which may be coupled to a switch or other network device forwarding network traffic between the client end stations (120A-120N) and the server end stations 660).
- In this depicted embodiment, the
security gateway 602 and the server end station(s) 660 are illustrated as being within an enterprise network 610, which can include one or more LANs. An enterprise is a business, organization, governmental body, or other collective body utilizing or providing content and/or services. - In
FIG. 6, a set of one or more server end stations 660 execute or otherwise implement one or more servers providing the content and/or services. In the embodiment depicted in this figure, the servers 111 include a database server 612, a file server 614, a web application server 616, and a mail server 620, though in other embodiments the set of server end stations 660 implement other types of servers, including but not limited to print servers, gaming servers, application servers, etc. - A
web application server 616 is system software (running on top of an operating system) executed by server hardware (e.g., server end stations 660) upon which web applications (e.g., web application 618) run. Web application servers 616 may include a web server (e.g., Apache, Microsoft® Internet Information Server (IIS), nginx, lighttpd) that delivers web pages (or other content) upon the request of HTTP clients (i.e., software executing on an end station) using the HTTP protocol. Web application servers 616 can also include an application server that executes procedures (i.e., programs, routines, scripts) of a web application 618. Web application servers 616 typically include web server connectors, computer programming language libraries, runtime libraries, database connectors, and/or the administration code needed to deploy, configure, manage, and connect these components. Web applications 618 are computer software applications made up of one or more files including computer code that run on top of web application servers 616 and are written in a language the web application server 616 supports. Web applications 618 are typically designed to interact with HTTP clients by dynamically generating HyperText Markup Language (HTML) and other content responsive to HTTP request messages sent by those HTTP clients. HTTP clients (e.g., non-illustrated software of any of client end stations 120A-120N) typically interact with web applications by transmitting HTTP request messages to web application servers 616, which execute portions of web applications 618 and return web application data in the form of HTTP response messages back to the HTTP clients, where the web application data can be rendered using a web browser.
Thus, HTTP functions as a request-response protocol in a client-server computing model, where the web application servers 616 typically act as the “server” and the HTTP clients typically act as the “client.” - HTTP Resources are identified and located on a network by Uniform Resource Identifiers (URIs)—or, more specifically, Uniform Resource Locators (URLs)—using the HTTP or HTTP Secure (HTTPS) URI schemes. URLs are specific strings of characters that identify a particular reference available using the Internet. URLs typically contain a protocol identifier or scheme name (e.g., http/https/ftp), a colon, two slashes, and one or more of user credentials, server name, domain name, IP address, port, resource path, query string, and fragment identifier, which may be separated by periods and/or slashes. The original versions of HTTP—HTTP/0.9 and HTTP/1.0—were revised in Internet Engineering Task Force (IETF) Request for Comments (RFC) 2616 as HTTP/1.1, which is in common use today. A new version of the HTTP protocol, HTTP/2, is based upon the SPDY protocol and improves how transmitted data is framed and transported between clients and servers.
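As a non-limiting illustration, the URL components enumerated above can be separated mechanically; the following sketch uses Python's standard library, and the example URL is purely hypothetical:

```python
from urllib.parse import urlsplit

def url_parts(url):
    """Split a URL into the components described above: scheme name,
    server name (host) and port, resource path, query string, and
    fragment identifier."""
    parts = urlsplit(url)
    return {
        "scheme": parts.scheme,
        "host": parts.hostname,
        "port": parts.port,
        "path": parts.path,
        "query": parts.query,
        "fragment": parts.fragment,
    }
```

For instance, a URL such as `https://example.com:8080/a/b?x=1#frag` separates into scheme `https`, host `example.com`, port `8080`, path `/a/b`, query `x=1`, and fragment `frag`.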
-
Database servers 612 are computer programs that provide database services to other computer programs or computers, typically adhering to the client-server model of communication. Many web applications 618 utilize database servers 612 (e.g., relational databases such as PostgreSQL, MySQL, and Oracle, and non-relational databases, also known as NoSQL databases, such as MongoDB, Riak, CouchDB, Apache Cassandra, and HBase) to store information received from HTTP clients and/or information to be displayed to HTTP clients. However, other non-web applications may also utilize database servers 612, including but not limited to accounting software, other business software, or research software. Further, some applications allow for users to perform ad-hoc or defined queries (often using Structured Query Language or “SQL”) using the database server 612. Database servers 612 typically store data using one or more databases, each including one or more tables (traditionally and formally referred to as “relations”), which are ledger-style (or spreadsheet-style) data structures including columns (often deemed “attributes”, or “attribute names”) and rows (often deemed “tuples”) of data (“values” or “attribute values”) adhering to any defined data types for each column. Thus, in some instances a database server 612 can receive a SQL query from a client (directly from a client process or client end station using a database protocol, or indirectly via a web application server that a client is interacting with), execute the SQL query using data stored in the set of one or more database tables of one or more of the databases, and may potentially return a result (e.g., an indication of success, a value, one or more tuples, etc.). - A
file server 614 is system software (e.g., running on top of an operating system, or as part of an operating system itself) typically executed by one or more server end stations 660 (each coupled to or including one or more storage devices) that allows applications or client end stations access to a file-system and/or files (e.g., enterprise data), typically allowing for the opening of files, reading of files, writing to files, and/or closing of files over a network. Further, while some file servers 614 provide file-level access to storage, other file servers 614 may provide block-level access to storage. File servers 614 typically operate using any number of remote file-system access protocols, which allow client processes to access and/or manipulate remote files from across the Internet or within a same enterprise network (e.g., a corporate Intranet). Examples of remote file-system access protocols include, but are not limited to, Network File System (NFS), WebNFS, Server Message Block (SMB)/Common Internet File System (CIFS), File Transfer Protocol (FTP), Web Distributed Authoring and Versioning (WebDAV), Apple Filing Protocol (AFP), Remote File System (RFS), etc. Another type of remote-file system access protocol is provided by Microsoft SharePoint (TM), which is a web application platform providing content management and document and file management. - A mail server 620 (or messaging server, message transfer agent, mail relay, etc.) is system software (running on top of an operating system) executed by server hardware (e.g., server end stations 660) that can transfer electronic messages (e.g., electronic mail) from one computing device to another using a client-server application architecture.
Many mail servers 620 may implement and utilize the Simple Mail Transfer Protocol (SMTP), and may utilize the Post Office Protocol (POP3) and/or the Internet Message Access Protocol (IMAP), although many proprietary systems also exist. Many mail servers 620 also offer a web interface (e.g., as a web application 618) for reading and sending email. - The illustrated exemplary deployment also illustrates a variety of configurations for implementing a
BIM 106. A first deployment possibility (BIM 106A) is as a module of the security gateway 602. Another deployment possibility (BIM 106B) is as a module executed upon the server end station(s) 660, while yet another deployment possibility (BIM 106C) is a module executed in a cloud computing system 664. In some embodiments, the BIM 106 is communicatively coupled with the TMM 104, and thus can be located in a variety of locations able to provide such connectivity. - Another deployment possibility is illustrated in
FIG. 7, which is a block diagram illustrating an exemplary cloud-based deployment environment 700 for a TMM and/or BIM according to some embodiments.
FIG. 7 again illustrates servers 111, a TMM 104, various deployments of a BIM 106, and client end station(s) 120A-120N. However, in this depicted embodiment, the servers 111 (and possibly BIM 106B) can be provided as cloud services 710 of one or more third-party server end stations 720 of, for example, a cloud computing system 732. - Additionally, the TMM 104 (and possibly
BIM 106A) can be provided in a cloud security gateway 702 operating in a cloud computing system 730, which can be different than cloud computing system 732 or possibly even the same. Regardless, the path 725 from the client end station(s) 120A-120N to the servers 111 necessarily flows through the TMM 104, even though it may not be in a same cloud computing system 732 as the servers 111. One example of a cloud security gateway 702 is the Imperva (TM) Skyfence (TM) Cloud Gateway from Imperva, Inc. - Alternatively, though not illustrated, the
TMM 104 may not lie in the path 725 between the client end stations 120A-120N and the servers 111, and instead may gain access to network traffic through a channel between the TMM 104 and the servers 111 for this purpose. For example, the TMM 104 can be configured to “monitor” or “poll” the cloud service(s) 710 by transmitting requests to the third-party server end stations (or individual servers, such as web application server 616) as part of a monitoring scheme to obtain network traffic. This monitoring can occur according to a defined schedule, such as checking once every few minutes. Additionally or alternatively, the server(s) 111 can be configured to “report” some or all traffic (or summaries thereof, event data structures, etc.) to the TMM 104. For example, in some embodiments the server(s) 111 can be configured to transmit data to the TMM 104 using an Application Programming Interface (API) call, Short Message Service (SMS) message, email message, etc.
FIG. 8 is a block diagram illustrating exemplary operations 800 including malicious event identification for botnet member identification according to some embodiments. The operations depicted in FIG. 8 and subsequent FIGS. 9-11 are provided to further illustrate one possible set of operations corresponding to certain operations depicted in FIG. 3 and/or FIG. 4. In some embodiments, these operations of any or all of FIGS. 8-11 can be performed by the BIM 106 of FIG. 1, 6, or 7. - In some embodiments, a botnet member identification procedure can include obtaining a set of
requests 805 originated by one or more end stations (e.g., client end stations, server end stations) and destined to one or more servers (e.g., servers 111). As described herein, this obtaining could include receiving traffic data 108 from one or more TMMs 104 (e.g., as each request is received, according to a schedule, on-demand, etc.), and these one or more TMMs 104 could be at a same or different site—for example, the traffic data 108 could be from one organization or from multiple organizations. As another example, the obtaining of the set of requests 805 could occur via obtaining, by the BIM 106, data from access logs of the server(s) 111, and identifying requests 805 from within this data. For example, the BIM 106 could request access log data from the server(s) 111 (or server end station(s) 110) and then it would be transmitted back to the BIM 106 in response, or the server(s) 111/server end station(s) 110 could otherwise provide the access log data, e.g., according to a schedule. Then, according to the type of server 111 and/or the type of logs, requests can be identified. This set of operations for obtaining or “collecting” the set of requests 805 can, in some embodiments, be part of block 305 of FIG. 3—i.e., observing incoming requests to one or more web applications at one or more sites for an amount of time. - Next, from the collected
requests 805, a set of malicious requests 815 can be identified based upon applying a set of security rules to the set of collected requests 805. Numerous types of security rules for detecting malicious attacks are well-known to those of skill in the art and can be used in various embodiments, and may include searching for the existence (or non-existence) of a particular character, set of characters, pattern, etc., within one or multiple portions (e.g., within headers and/or payloads) of one or multiple requests. - As one example, a rule can be configured to detect a “malformed” Content-Length header (of an HTTP request message) that has a negative value, as one attack that has been observed includes attackers providing negative values in this field instead of an anticipated, non-negative integer value. Thus, one or more of the collected
requests 805 can be analyzed to determine if any include a negative integer value (or another non-anticipated type of value), and if such a request is found, it can be included in the set of malicious requests 815. In various embodiments the number of rules and/or attack “coverage” of the set of rules can differ according to the types of server(s) involved, the amount of scrutiny required for the deployment, the amount of processing resources available, etc. Thus, in some embodiments, the malicious requests 815 could include requests 810 that are part of a volumetric (or “volume-based”) attack, protocol attack, application layer attack, etc. - In some embodiments, this set of operations for identifying
malicious requests 815 may be part of block 310 of FIG. 3—i.e., identifying a subset of the observed incoming requests 805 as being part of a malicious attack. - For each of the
malicious requests 815, an event structure 820 can be generated. As illustrated, each event structure 820 includes a source identifier (S1, S2, et seq.) of the source of the corresponding request message, which can comprise a complete or partial source Internet Protocol (IP) address (v4 or v6), or another identifier that uniquely corresponds to a particular source. FIG. 8 also shows each event structure 820 including a destination identifier (D1, D2, et seq.) of the destination (e.g., server) of the corresponding request message, which can comprise a complete or partial destination Internet Protocol (IP) address (v4 or v6), a hostname, or another identifier that uniquely corresponds to a particular destination. Each event structure 820 illustrated also includes an attack type indicator (AT1, AT2, et seq.) that uniquely identifies one (or more) attack type of the corresponding request message. As one simple example, upon the security rule described above (looking for a negative integer in a Content-Length header) being met, an identifier of that particular rule (e.g., rule #1) or attack (e.g., a Malformed Content-Length attack, which could be found via one or multiple rules, is attack #3) can be placed within the event structure 820. Additionally, the depicted event structures 820 also include a time (TIME A, TIME B, et seq.) associated with the request—e.g., the time when the request was received/observed/logged by a TMM 104, security gateway 102, server end station 110, server 111, etc. This time could be in a variety of formats/representations, including a raw or converted timestamp, an identifier of a period of time (which hour, 10 minute window, etc.) of the request, etc. - Although four types of data are shown within an
event structure 820 and will be used to continue the example, it is to be understood that the number(s) and type(s) of these elements can be different in different embodiments—thus, this combination is merely exemplary. - In some embodiments, this set of operations for generating
event structures 820 may be part of block 315 of FIG. 3 —i.e., creating, for each of the identified malicious requests, an event data structure. - In some embodiments, the
event structures 820 can be divided into time periods (e.g., time period ‘A’ 825A, time period ‘B’ 825B, and so on). The size of the time period can be configured differently depending upon the preferences of the implementer and the types of attacks currently used by attackers, and can be tuned over time to adjust for best results for a particular implementation. However, in general, the size of each time period should be large enough to allow for evidence of an ongoing, continued attack to be observed in successive time periods, though the size of each time period should be small enough to avoid the processing (described below) for each time period becoming impractical. For example, in some embodiments, the size of each time period is one minute, though in other embodiments the size of each time period could be thirty seconds, two minutes, five minutes, ten minutes, thirty minutes, one hour, etc. - This “division” of the
event structures 820 into multiple time periods 825 can occur as a distinct operation, and can include placing those event structures that are in a same time period in adjacent physical or virtual memory or storage locations. However, in some embodiments, the division of the event structures 820 may include labeling (or updating) each event structure 820 with a time period identifier, and in some embodiments, the division may occur iteratively—e.g., the event structures can be iterated over one or more times to gather time period-relevant event structures in an on-demand fashion. For example, in some embodiments each event structure can be a record/document stored in a database, and the division can occur by querying the database for event structures having a “time” within a range of times corresponding to a particular time period. Of course, many other techniques for dividing event structures into time periods can be utilized and discerned by those of ordinary skill in the art. - In some embodiments, this set of operations for dividing event structures into time periods may be part of
block 320 of FIG. 3 —i.e., the events are split into separate time period groupings based upon the time of each event. - We now turn to
FIG. 9, which is a block diagram illustrating exemplary operations 900 including traffic source similarity determination for botnet member identification, which can be performed after the operations of FIG. 8, according to some embodiments. - These depicted
operations 900 can be performed for each of the time periods into which the event structures have been divided, as shown in FIG. 8, which can correspond to block 330 of FIG. 3. - Thus, in some embodiments, the event structures (corresponding to “events”) of a time period having a
same attack type 905 can be identified, which can correspond to block 340 of FIG. 3 —i.e., events of a same attack type are selected. This selection can, in some embodiments, be based upon the attack type indicator data of each event structure (shown here as “AT1”). - Using these selected events having a same attack type, in some embodiments a source-
target graph 910 can be constructed, which can be a graph of the sources (of the traffic) connected to the targets (or destinations of the traffic). In this depicted example, the events with an attack type 905 of “AT1” have three different sources—S1, S2, and S3—and these events have four different targets—D1, D2, D4, and D5. Thus, in some embodiments a vertex (or node) is created for each traffic source and a vertex is created for each traffic target/destination, and edges (or “arrows”, which can be directed) can be inserted between those of the sources and targets that have a corresponding event with a same attack type 905. In this example, as there are three events of a same attack type 905 from source S1, and these events are destined to targets D1, D4, and D2, the source-target graph 910 includes three edges leading from S1 to D1, D2, and D4, respectively. In some embodiments, these source-target graph 910 construction operations may be part of block 350 of FIG. 3 —i.e., representing selected events as a graph of sources connected to targets. - With this source-
target graph 910, in some embodiments asimilarity algorithm 915 can be applied (or executed) to determine the similarity between the source vertices of the graph. A variety of such similarity algorithms are known to those of skill in the art, and one such similarly algorithm is illustrated here inFIG. 9 . Thisexemplary similarity algorithm 915 includes determining a similarity score for each pair of sources in thegraph 910, and includes dividing the number of common targets (for a considered pair of sources) and the total number of different targets (for the same considered pair of sources). Thus, in this example, the similarity score for S1 and S2 can be the result of 2 (as sources S1 and S2 both are connected to targets D1 and D4) divided by 3 (as sources S1 and S2 collectively target D1, D2, and D4). This result of ⅔ yields a similarity score of 0.66 for the (S1, S2) pair. In some embodiments, a similarity score can be computed for each such combination of sources. These operations involving applying a similarity algorithm can be part ofblock 355 ofFIG. 3 —i.e., applying a similarity algorithm for the vertices' similarity based upon the graph. - This combination of source-
target graph 910 construction and similarity scoring (e.g., using similarity algorithm 915) can be part of block 345 of FIG. 3, and can be performed for each of the different groupings of events having a same attack type 905, and can be performed for each time period (e.g., time periods 825A-825B) being considered. - Continuing on,
FIG. 10 is a block diagram illustrating exemplary operations 1000 including periodic-source cluster generation for botnet member identification, which can be performed after the operations of FIG. 9, according to some embodiments. Using the generated similarity scores (e.g., example similarity scores 1005)—for a time period for a grouping of event structures having a same attack type—the sources can be clustered (block 360). For example, in some embodiments each source reflected in the example similarity scores 1005 may be initially placed into its own cluster, which can be part of block 365 of FIG. 3 —i.e., assigning each source to its own group or cluster. - Next, in some embodiments the sources can be iteratively combined (or consolidated) based upon the similarity scores, which can be part of
block 370 of FIG. 3. For example, the pair of sources having a largest similarity score can be combined. This combining can occur one or more times while one or more conditions are met. For example, in some embodiments the combining occurs as long as the next-highest similarity score is greater than a threshold value (e.g., 0.5). - For ease of understanding, the exemplary six sources in
FIG. 10 may first be placed into separate clusters (which can correspond to block 365 of FIG. 3), and the source pair having the largest unprocessed similarity score (of 0.66, belonging to the S1-S2 pair) can be merged into one cluster. This results in one cluster with S1 and S2, and four other clusters corresponding to S3-S6. Next, the next source pair having the largest unprocessed similarity score (of 0.57, belonging to the S4-S5 pair) can be merged into one cluster. This results in four clusters—(S1, S2), (S3), (S6), and (S4, S5). Again, the next source pair having the largest unprocessed similarity score (of 0.5, belonging to the S1-S6 pair) can be merged into one cluster. In this case, the source S6 can be added into the cluster including S1 and S2, and thus, three clusters remain—(S1, S2, S6), (S3), and (S4, S5). - Although the next source pair having the largest unprocessed similarity score (0.33, of the S1-S4 pair) could be merged, in this example we assume that a merging condition exists, which indicates that merging will only occur for similarity scores that are greater than or equal to 0.5. Thus, because 0.33 does not satisfy this criterion as it is less than 0.5, the merging stops, leaving a set of periodic-
source clusters 1010A-1010C. Of course, other rules can be configured for this merging process, and thus, this illustrated scenario is exemplary. For example, rules can be utilized that indicate what happens when a source (in a cluster with one or more other sources) is to be merged with another source (in a different cluster with one or more other sources), which could include merging all of these sources together into one cluster, stopping the merging process, skipping that particular merge but continuing the merging process, etc. - We continue with
FIG. 11, which is a block diagram illustrating exemplary operations 1100 including attacking-clusters graph generation and botnet identification for botnet member identification, which can be performed after the operations of FIG. 10, according to some embodiments. - With a set of periodic-
source clusters 1010A-1010C generated in FIG. 10, shown as C1, C2, and C3) corresponding to events of a same attack type from each of the time periods under consideration, an attacking-clusters graph 1110 can be generated. This generation can be part of block 380 of FIG. 3. For example, the periodic-source clusters 1010A-1010C of time period “A” 825A can be represented as a level in a graph, where each vertex represents one of the clusters and has a weight indicating the number of sources within the cluster. As an example, the first periodic-source cluster 1010A includes three sources (S1, S2, and S6), and thus can be represented as a vertex “C1” having a weight of 3. Each of the other periodic-source clusters 1010B-1010C can similarly be represented as vertices C2 and C3, having weights of 2 and 1 respectively. This attacking-clusters graph 1110 construction can continue with generating additional levels for each other time period under consideration—here, this is shown as time periods ‘B’ 825B and ‘C’ 825C, though of course in other embodiments there can be different numbers of time periods. - The attacking-clusters graph 1110 construction can also include the insertion of edges between the vertices of different levels. For example, an edge can be inserted between clusters of adjacent levels that share members (i.e., have one or more common sources). In this example, we assume that cluster C1 (having S1, S2, and S6) shares all three of its sources with cluster C4 of the second level (which, for example, could represent sources S1, S2, S6, S8, S9, and S10). Thus, an edge can be inserted between C1 and C4, with a weight indicating a number of common sources (3) between the two vertices.
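By way of non-limiting illustration, the leveled attacking-clusters graph just described can be sketched in Python. The cluster memberships below are hypothetical values chosen to mirror the C1-C4-C8 example, and the minimum vertex weight (2) and minimum edge weight (2) are assumed thresholds; for brevity, this sketch only considers paths that span every level, which satisfies a minimum path length of two edges whenever at least three time periods are present.

```python
# Illustrative sketch (not the claimed implementation) of building the
# attacking-clusters graph levels and finding candidate paths. Each level
# holds the periodic-source clusters of one time period; a vertex's weight
# is its number of sources, and an edge's weight is the number of sources
# shared between clusters of adjacent levels.
def find_botnet_candidates(levels, min_vertex=2, min_edge=2):
    """Return (path, union-of-sources) pairs for level-spanning paths
    satisfying the vertex- and edge-weight minimums."""
    paths = [[name] for name, members in levels[0].items()
             if len(members) >= min_vertex]
    for prev, cur in zip(levels, levels[1:]):
        extended = []
        for path in paths:
            for name, members in cur.items():
                if len(members) < min_vertex:
                    continue  # vertex weight below the minimum
                if len(prev[path[-1]] & members) < min_edge:
                    continue  # edge weight below the minimum
                extended.append(path + [name])
        paths = extended
    results = []
    for path in paths:
        sources = set()
        for level, name in zip(levels, path):
            sources |= level[name]  # collect sources along the path
        results.append((path, sources))
    return results

levels = [
    {"C1": {"S1", "S2", "S6"}, "C2": {"S4", "S5"}, "C3": {"S3"}},
    {"C4": {"S1", "S2", "S6", "S8"}, "C5": {"S4", "S5", "S7"}},
    {"C8": {"S1", "S2", "S4", "S5", "S6", "S8"}},
]
candidates = find_botnet_candidates(levels)
# yields the two candidate paths C1-C4-C8 and C2-C5-C8
```

Here C3 is excluded because its weight (1) falls below the vertex minimum, matching the way low-weight vertices are filtered out by the condition discussed with FIG. 11.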
- In some embodiments, with a per-attack-type attacking-clusters graph 1110, the
operations 1100 can further include analyzing the attacking-clusters graph 1110 according to a condition 1115. In some embodiments, a condition 1115 comprises one or more logical tests indicating what paths through the attacking-clusters graph 1110 are to be identified. For example, the condition 1115 shown herein provides that paths are to be identified that include only vertices having a weight of 2 or more, only edges having a weight of 2 or more, and have a path length (e.g., a number of traversed edges) of at least 2 edges. In this case, the condition 1115 is satisfied for two paths of the attacking-clusters graph 1110: C1-C4-C8 and C2-C5-C8. These two paths can thus indicate two botnet candidates, which are represented as botnet candidate #1 1150A and botnet candidate #2 1150B. Each botnet candidate 1150A-1150B is shown as including identifiers of the sources of the vertices traversed in the corresponding path. Thus, this set of operations involving utilizing a condition 1115 with the attacking-clusters graph 1110 to generate botnet candidates can be part of block 385 of FIG. 3 —i.e., finding all paths that contain only edges with a minimal weight, that pass through vertices with a minimal weight, and have a length larger than a threshold amount of edges, resulting in each path being a botnet “candidate” (in some embodiments, subject to further processing). - For example, in some embodiments, the
operations 1100 may include, at circle ‘A’, removing from the botnet candidates 1150A-1150B any sources (or, source identifiers) that exist in a set of identifiers of whitelisted sources 1120 (e.g., sources known to be part of CDNs, search engines, scanning services). As illustrated, the sources S8 and S13 are in the set of identifiers of whitelisted sources, and source S8 is also within botnet candidate #1 1150A. Thus, this source may be removed from botnet candidate #1 1150A whereas botnet candidate #2 will be unchanged, resulting in the two sets of suspected botnet identifiers 1125A-1125B. These operations for the “filtering” of whitelisted sources may be part of block 390 of FIG. 3 —i.e., sources can be removed from the paths that belong to a list of whitelisted sources. However, in some embodiments such source removal operations may not be performed, and thus the flow may alternatively continue via circle ‘B’ to yield the two sets of suspected botnet identifiers 1125A-1125B, where the first botnet #1 1125A does include an identifier of source S8. - Exemplary Flows
-
FIG. 12 is a flow diagram illustrating exemplary operations 1200 for providing targeted botnet protection according to some embodiments. In some embodiments, operations 1200 can be performed by one or more TMMs 104 disclosed herein. In these embodiments, the one or more TMMs 104 can be implemented by one or more electronic devices, and each TMM 104 can be deployed in front of the one or more servers in that the TMM 104 receives all network traffic sent by a plurality of end stations that is destined for the one or more servers. - In some embodiments, at
block 1205 the operations 1200 include receiving a message including a plurality of identifiers that have been determined to be used by a subset of the plurality of end stations collectively acting as a suspected botnet, where each of the plurality of identifiers is or is based upon a network address. The operations 1200 can also include, at block 1210, receiving, from a first of the plurality of end stations, a request message that is destined for one of the one or more servers and that includes at least a first identifier of the plurality of identifiers. The operations 1200 can also include, at block 1215, blocking the request message from being sent to the one server responsive to a determination that the request message is malicious, and at block 1220, responsive to the determination and a different determination that the request message includes any of the plurality of identifiers in at least a set of one or more locations in the request message, activating, for an amount of time, a protection mechanism that applies to all traffic that has any of the plurality of identifiers in any of the set of locations. - In some embodiments, the set of locations includes one or more of: a source IP address header field; a User-Agent header field; and an X-Forwarded-For header field.
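For illustration only, the behavior of blocks 1205-1220 can be sketched as follows. This is a simplified, non-authoritative model: it checks a single identifier location rather than the full set of locations, and the class name, method names, and the 60-second activation window are assumptions not prescribed by the description.

```python
import time

# Hedged sketch of a TMM that blocks a malicious request and, when that
# request carries a suspected-botnet identifier, activates a time-bounded
# protection window covering the whole identifier set.
class TrafficMonitor:
    def __init__(self, botnet_ids, window_seconds=60):
        self.botnet_ids = set(botnet_ids)  # received suspected-botnet identifiers
        self.window_seconds = window_seconds
        self.protection_until = 0.0

    def handle(self, source_id, is_malicious, now=None):
        """Return 'block' or 'allow' for a request from source_id."""
        now = time.monotonic() if now is None else now
        if now < self.protection_until and source_id in self.botnet_ids:
            # protection mechanism active: drop all traffic carrying any
            # of the identifiers, regardless of individual maliciousness
            return "block"
        if is_malicious:
            if source_id in self.botnet_ids:
                self.protection_until = now + self.window_seconds
            return "block"
        return "allow"

tmm = TrafficMonitor({"S1", "S2", "S6"})
tmm.handle("S1", is_malicious=True, now=0.0)  # blocked; protection activated
```

After the activating request, an individually benign request from another member of the identifier set is blocked while the window lasts, whereas traffic from sources outside the set is unaffected.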
- In some embodiments, the protection mechanism comprises dropping all traffic that has any of the plurality of identifiers in any of the set of locations regardless of whether it is separately determined to be malicious. In some embodiments, the
operations 1200 further include receiving a second request message that is from the first end station and that is destined to any one of the one or more servers; and allowing the second request message to be forwarded toward its destination despite the second request message including the at least one of the plurality of identifiers that have been determined to be used by the subset of end stations collectively acting as the suspected botnet due to the protection mechanism no longer being activated. - In some embodiments, the protection mechanism comprises increasing an amount of security analysis performed with the traffic that has any of the plurality of identifiers in any of the set of locations.
- According to some embodiments, the amount of time that the protection mechanism is activated is specific to the suspected botnet. In some embodiments, the amount of time is based upon an average or maximum attack length of time determined based upon previous activity of the suspected botnet.
- In some embodiments, the amount of time is indefinite in that it continues until a condition is satisfied. In some embodiments, the condition is satisfied upon a determination that no request messages have been received (e.g., at the TMM 104) that include any of the plurality of identifiers in any of the set of locations for a threshold amount of time.
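The indefinite-duration variant just described can be sketched as follows; this is an illustrative model only, and the 300-second quiet threshold is an assumed value.

```python
# Hedged sketch: protection remains active until no request carrying a
# suspected-botnet identifier has been observed for a threshold time.
class IndefiniteProtection:
    def __init__(self, quiet_threshold=300.0):
        self.quiet_threshold = quiet_threshold
        self.last_seen = None  # time a botnet identifier was last observed

    def observe_botnet_identifier(self, now):
        self.last_seen = now

    def is_active(self, now):
        """The deactivation condition is satisfied once the quiet period
        reaches the threshold."""
        if self.last_seen is None:
            return False
        return (now - self.last_seen) < self.quiet_threshold

p = IndefiniteProtection()
p.observe_botnet_identifier(now=0.0)
# still active at t=100; deactivated by t=400 with no further sightings
```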
- In some embodiments, the
operations 1200 further include receiving, from a second end station of the subset of end stations, a second request message that is destined for a second server of the one or more servers and that includes at least one of the plurality of identifiers in one of the set of locations; and blocking, as part of the protection mechanism before an end of the amount of time, the second request message from being sent to the second server due to the protection mechanism being activated. - According to some embodiments, the received message that includes the plurality of identifiers further includes a second plurality of identifiers that have been determined to be used by a second subset of the plurality of end stations collectively acting as a second suspected botnet; the first identifier exists in both the plurality of identifiers and the second plurality of identifiers; and the activated protection mechanism further applies to all traffic that has any of the second plurality of identifiers in any of the set of locations due to the first identifier also existing in the second plurality of identifiers.
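The overlapping-botnet behavior described above, where a triggering identifier that appears in a second suspected botnet's identifier set extends the activated protection to that second set, can be sketched as a simple set union. The identifier values are illustrative assumptions.

```python
# Hedged sketch: the protection coverage is the union of every suspected
# botnet identifier set that contains the identifier which triggered it.
def protected_identifiers(trigger_id, botnets):
    covered = set()
    for identifier_set in botnets:
        if trigger_id in identifier_set:
            covered |= identifier_set
    return covered

botnet_1 = {"S1", "S2", "S6"}
botnet_2 = {"S1", "S9", "S10"}  # shares S1 with botnet_1
covered = protected_identifiers("S1", [botnet_1, botnet_2])
# the activated protection applies to the members of both sets
```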
- In some embodiments, the
operations 1200 further include receiving, from a second of the plurality of end stations, a second request message that is destined to any of the one or more servers and that includes at least a second identifier that exists within a second plurality of identifiers of a second suspected botnet but not within the first plurality of identifiers of the suspected botnet; and responsive at least in part due to the second identifier not existing within the first plurality of identifiers, allowing the second request message to be forwarded toward its destination despite the protection mechanism being activated for the suspected botnet and despite the second identifier belonging to the second plurality of identifiers of the second suspected botnet. -
FIG. 13 is a flow diagram illustrating exemplary operations 1300 for providing targeted botnet protection according to some embodiments. In some embodiments, operations 1300 can be performed by one or more TMMs 104 disclosed herein. In these embodiments, the one or more TMMs 104 can be implemented by one or more electronic devices, and each TMM 104 can be deployed in front of the one or more servers in that the TMM 104 receives all network traffic sent by a plurality of end stations that is destined for the one or more servers. - In some embodiments, at
block 1305 the operations 1300 include receiving a message including a plurality of identifiers that have been determined to be used by a subset of the plurality of end stations collectively acting as a suspected botnet, wherein each of the plurality of identifiers is or is based upon a network address. - In some embodiments, the
operations 1300 include, at block 1310, receiving, from a plurality of end stations of the subset of end stations collectively acting as the suspected botnet, a plurality of request messages that are destined for a set of one or more of the one or more servers, wherein each of the plurality of request messages includes at least one of the plurality of identifiers in at least a set of one or more locations in the request message. - In some embodiments, the
operations 1300 include, at block 1315, responsive to a determination that the plurality of request messages each include, in the set of locations, an identifier that is within the plurality of identifiers and further that the plurality of request messages collectively satisfy a security rule, activating, for an amount of time, a protection mechanism that applies to all traffic that has any of the plurality of identifiers in any of the set of locations, wherein none of the plurality of request messages individually would satisfy the security rule. - In some embodiments, the security rule, to be satisfied, at least requires a defined amount of request messages that share a common characteristic to be received within a period of time, wherein the defined amount of request messages is greater than one. According to some embodiments, the common characteristic is met when each of the plurality of request messages carries a payload representing an attempt to login to an application, wherein the payload includes a password, a username, or both the password and username. In some embodiments, the
operations 1300 further include determining that each attempt to login of each of the plurality of request messages was unsuccessful. In some embodiments, said determining that each attempt to login of the plurality of request messages was unsuccessful comprises: receiving a plurality of response messages that were originated by the set of servers, wherein each response message indicates that the attempt to login of a corresponding one of the plurality of request messages was unsuccessful. - According to some embodiments, the set of locations includes one or more of: a source IP address header field; a User-Agent header field; and an X-Forwarded-For header field.
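For illustration only, such a collectively-satisfied security rule can be sketched as a sliding-window counter over failed login attempts; the threshold of five attempts and the 60-second window are assumed values, not prescribed by the description, and no single observation satisfies the rule on its own.

```python
from collections import deque

# Hedged sketch of a rule that is satisfied only by a defined amount of
# request messages sharing a common characteristic within a period.
class CollectiveLoginRule:
    def __init__(self, threshold=5, window_seconds=60):
        self.threshold = threshold
        self.window = window_seconds
        self.failures = deque()  # timestamps of failed login attempts

    def observe_failed_login(self, now):
        """Record a failed login; return True when the rule is satisfied."""
        self.failures.append(now)
        # drop observations that fell out of the sliding window
        while self.failures and now - self.failures[0] > self.window:
            self.failures.popleft()
        return len(self.failures) >= self.threshold

rule = CollectiveLoginRule()
satisfied = [rule.observe_failed_login(t) for t in (0, 5, 10, 15, 20)]
# the first four observations do not satisfy the rule; the fifth does
```

Whether each attempt failed could be determined, as described above, from response messages originated by the set of servers.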
- According to some embodiments, the protection mechanism comprises dropping all traffic that has any of the plurality of identifiers in any of the set of locations regardless of whether it is separately determined to be malicious.
- According to some embodiments, the protection mechanism comprises increasing an amount of security analysis performed with the traffic that has any of the plurality of identifiers in any of the set of locations.
- According to some embodiments, the amount of time that the protection mechanism is activated is specific to the suspected botnet and is based upon an average or maximum attack length of time determined based upon previous activity of the suspected botnet.
- According to some embodiments, the amount of time is indefinite in that it continues until a condition is satisfied, wherein the condition is satisfied upon a determination that no request messages have been received (e.g., at the TMM) that include any of the plurality of identifiers in any of the set of locations for a threshold amount of time.
-
FIG. 14 is a flow diagram illustrating exemplary operations 1400 for identifying a subset of a plurality of end stations that collectively act as a suspected botnet according to some embodiments. In some embodiments, operations 1400 can be performed by a BIM 106 disclosed herein. The BIM 106 can be implemented by an electronic device. - In some embodiments, the
operations 1400 include, at block 1405, obtaining traffic data from one or more TMMs 104 implemented by one or more electronic devices, wherein the traffic data includes or is based upon a plurality of request messages that were originated by ones of the plurality of end stations and that were destined to one or more servers, wherein each of the one or more TMMs is deployed in front of at least one of the one or more servers in that the TMM receives all network traffic originated by the plurality of end stations that is destined for the at least one server; - In some embodiments, the
operations 1400 include, at block 1410, generating, based upon the obtained traffic data, a set of identifiers corresponding to the subset of the plurality of end stations that are determined by the BIM to be collectively acting as the suspected botnet in that they have transmitted request messages, destined for one or more of the one or more servers, that collectively or individually satisfy one or more security rules which, when satisfied, indicate a malicious attack, wherein the set of identifiers comprises a plurality of identifiers. - In some embodiments, the
operations 1400 include, at block 1415, transmitting the set of identifiers to the one or more TMMs to cause the one or more TMMs to utilize the set of identifiers while analyzing subsequent request messages destined to the one or more servers to detect an attack from the suspected botnet and to protect the one or more servers from the attack. - In some embodiments, said generating the set of identifiers comprises: identifying, from the traffic data, a subset of the plurality of request messages that are malicious; and determining, based upon the subset of request messages, that the subset of end stations have collectively performed the malicious attack for at least a threshold amount of time and that at least a threshold number of the subset of end stations have been involved in the malicious attack for each of a threshold number of time periods within the threshold amount of time.
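As a non-authoritative sketch of the persistence determination just described, a candidate subset can be required to have at least a threshold number of members appear among the attacking sources of each of a threshold number of time periods; the thresholds used below (two members, three periods) are illustrative assumptions.

```python
# Hedged sketch of the thresholded persistence check for a candidate
# subset of end stations, given per-period sets of attacking sources.
def is_persistent_botnet(candidate, attackers_per_period,
                         min_members=2, min_periods=3):
    qualifying = sum(
        1 for attackers in attackers_per_period
        if len(candidate & attackers) >= min_members
    )
    return qualifying >= min_periods

candidate = {"S1", "S2", "S6"}
periods = [{"S1", "S2", "S9"}, {"S1", "S6"}, {"S2", "S6", "S7"}, {"S3"}]
result = is_persistent_botnet(candidate, periods)
# at least two members appear in each of the first three periods
```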
- In some embodiments, said determining comprises: generating a plurality of event data structures corresponding to the subset of request messages, wherein each of the plurality of event data structures includes a source identifier of a source of the corresponding request message and a destination identifier of a destination of the corresponding request message; identifying a plurality of groupings of the generated event data structures, wherein each of the plurality of groupings corresponds to a time period of a plurality of different time periods and includes those of the generated event data structures corresponding to a request message that was received at a corresponding one of the one or more TMMs within the time period; and identifying, for each of the different time periods, groupings of source identifiers of the request messages corresponding to the event data structures of the time period, wherein generating the set of identifiers is based upon analyzing the groupings of source identifiers of each of the different time periods.
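By way of illustration, the event data structures and time-period groupings recapped above can be encoded as follows; the field names and the 60-second period size are assumptions, as the description leaves the concrete format open.

```python
from collections import defaultdict

# Hedged sketch: label each event with a period identifier derived from
# its time, yielding one grouping of event structures per time period.
def group_events_by_period(events, period_seconds=60):
    """Map period identifier -> list of events received in that period."""
    periods = defaultdict(list)
    for event in events:
        periods[int(event["time"] // period_seconds)].append(event)
    return dict(periods)

events = [
    {"source": "S1", "destination": "D1", "attack_type": 1, "time": 5.0},
    {"source": "S2", "destination": "D1", "attack_type": 1, "time": 59.0},
    {"source": "S1", "destination": "D4", "attack_type": 1, "time": 61.0},
]
groups = group_events_by_period(events)
# the first two events fall into period 0, the third into period 1
```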
- In some embodiments, said identifying the groupings of source identifiers for each of the different time periods comprises: calculating a similarity score between each pair of source identifiers of those of the event data structures of the time period that have a same attack type, wherein the similarity score is based upon the destination identifiers of those event data structures that include one of the pair of source identifiers; and clustering, based upon the similarity scores, the source identifiers of those of the event data structures of the time period that have the same attack type.
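The similarity scoring and clustering recapped above can be sketched together, using the measure illustrated in FIG. 9 (common targets divided by the total distinct targets of a source pair) and a greedy merge that stops below a threshold of 0.5 (an assumed value); the event tuples are illustrative.

```python
from itertools import combinations

def similarity_scores(events):
    """events: (source, destination) pairs of a same attack type."""
    targets = {}
    for src, dst in events:
        targets.setdefault(src, set()).add(dst)
    return {
        (a, b): len(targets[a] & targets[b]) / len(targets[a] | targets[b])
        for a, b in combinations(sorted(targets), 2)
    }

def cluster_sources(scores, threshold=0.5):
    # each source starts in its own cluster, then pairs are merged in
    # descending similarity order until the threshold is reached
    clusters = {s: {s} for pair in scores for s in pair}
    for (a, b), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break  # remaining pairs are below the merge threshold
        merged = clusters[a] | clusters[b]
        for member in merged:
            clusters[member] = merged
    return {frozenset(c) for c in clusters.values()}

events = [("S1", "D1"), ("S1", "D2"), ("S1", "D4"),
          ("S2", "D1"), ("S2", "D4")]
scores = similarity_scores(events)  # (S1, S2) scores 2/3, about 0.66
clusters = cluster_sources(scores)  # S1 and S2 merge into one cluster
```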
- In some embodiments, said generating the set of identifiers further comprises: removing, from the set of identifiers, any identifier existing within a set of whitelisted source identifiers.
- In some embodiments, the
operations 1400 further include determining, based at least in part upon the obtained traffic data, an attack duration of the suspected botnet, wherein the attack duration is an average attack duration or a maximum attack duration observed over a plurality of attacks observed from the suspected botnet; and transmitting the attack duration of the suspected botnet to the one or more TMMs to cause the one or more TMMs to utilize the attack duration while protecting the one or more servers from the suspected botnet. - In some embodiments, each identifier of the set of identifiers comprises an IP address.
- Alternative Embodiments
- The operations in the flow diagrams have been described with reference to the exemplary embodiments of the other diagrams. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to these other diagrams, and the embodiments of the invention discussed with reference these other diagrams can perform operations different than those discussed with reference to the flow diagrams.
- Similarly, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).
- While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/114,522 US20210092142A1 (en) | 2016-02-25 | 2020-12-08 | Techniques for targeted botnet protection |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662300069P | 2016-02-25 | 2016-02-25 | |
US15/442,560 US10911472B2 (en) | 2016-02-25 | 2017-02-24 | Techniques for targeted botnet protection |
US17/114,522 US20210092142A1 (en) | 2016-02-25 | 2020-12-08 | Techniques for targeted botnet protection |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/442,560 Continuation US10911472B2 (en) | 2016-02-25 | 2017-02-24 | Techniques for targeted botnet protection |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210092142A1 (en) | 2021-03-25
Family
ID=59678617
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/442,582 Active 2038-02-04 US10673719B2 (en) | 2016-02-25 | 2017-02-24 | Techniques for botnet detection and member identification |
US15/442,560 Active 2038-01-04 US10911472B2 (en) | 2016-02-25 | 2017-02-24 | Techniques for targeted botnet protection |
US15/442,571 Abandoned US20170251016A1 (en) | 2016-02-25 | 2017-02-24 | Techniques for targeted botnet protection using collective botnet analysis |
US17/114,522 Pending US20210092142A1 (en) | 2016-02-25 | 2020-12-08 | Techniques for targeted botnet protection |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/442,582 Active 2038-02-04 US10673719B2 (en) | 2016-02-25 | 2017-02-24 | Techniques for botnet detection and member identification |
US15/442,560 Active 2038-01-04 US10911472B2 (en) | 2016-02-25 | 2017-02-24 | Techniques for targeted botnet protection |
US15/442,571 Abandoned US20170251016A1 (en) | 2016-02-25 | 2017-02-24 | Techniques for targeted botnet protection using collective botnet analysis |
Country Status (1)
Country | Link |
---|---|
US (4) | US10673719B2 (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9648036B2 (en) * | 2014-12-29 | 2017-05-09 | Palantir Technologies Inc. | Systems for network risk assessment including processing of user access rights associated with a network of devices |
US10673719B2 (en) | 2016-02-25 | 2020-06-02 | Imperva, Inc. | Techniques for botnet detection and member identification |
US10028145B2 (en) * | 2016-04-15 | 2018-07-17 | Microsoft Technology Licensing, Llc | Blocking undesirable communications in voice over internet protocol systems |
US10243981B2 (en) * | 2016-09-09 | 2019-03-26 | Ca, Inc. | Bot detection based on divergence and variance |
CN107979581B (en) * | 2016-10-25 | 2020-10-27 | 华为技术有限公司 | Detection method and device for zombie characteristics |
WO2018148834A1 (en) * | 2017-02-17 | 2018-08-23 | Royal Bank Of Canada | Web application firewall |
US10721148B2 (en) * | 2017-10-16 | 2020-07-21 | Radware, Ltd. | System and method for botnet identification |
US11005869B2 (en) * | 2017-11-24 | 2021-05-11 | Korea Internet & Security Agency | Method for analyzing cyber threat intelligence data and apparatus thereof |
US11374945B1 (en) * | 2018-02-13 | 2022-06-28 | Akamai Technologies, Inc. | Content delivery network (CDN) edge server-based bot detection with session cookie support handling |
US11368483B1 (en) | 2018-02-13 | 2022-06-21 | Akamai Technologies, Inc. | Low touch integration of a bot detection service in association with a content delivery network |
US11372893B2 (en) | 2018-06-01 | 2022-06-28 | Ntt Security Holdings Corporation | Ensemble-based data curation pipeline for efficient label propagation |
US11252185B2 (en) * | 2019-03-28 | 2022-02-15 | NTT Security Corporation | Graph stream mining pipeline for efficient subgraph detection |
CN110049065B (en) * | 2019-05-21 | 2022-04-05 | 网易(杭州)网络有限公司 | Attack defense method, device, medium and computing equipment of security gateway |
CN110324437B (en) * | 2019-07-09 | 2020-08-21 | 中星科源(北京)信息技术有限公司 | Original address transmission method, system, storage medium and processor |
CN110730175B (en) * | 2019-10-16 | 2022-12-06 | 杭州安恒信息技术股份有限公司 | Botnet detection method and detection system based on threat information |
US11399041B1 (en) * | 2019-11-22 | 2022-07-26 | Anvilogic, Inc. | System for determining rules for detecting security threats |
US20210184977A1 (en) * | 2019-12-16 | 2021-06-17 | Citrix Systems, Inc. | Cpu and priority based early drop packet processing systems and methods |
US11159576B1 (en) * | 2021-01-30 | 2021-10-26 | Netskope, Inc. | Unified policy enforcement management in the cloud |
CN113746918A (en) * | 2021-09-03 | 2021-12-03 | 上海幻电信息科技有限公司 | Hypertext transfer protocol proxy method and system |
US20230421577A1 (en) * | 2022-06-28 | 2023-12-28 | Okta, Inc. | Techniques for malicious entity discovery |
US12010099B1 (en) | 2023-10-10 | 2024-06-11 | Uab 360 It | Identity-based distributed cloud firewall for access and network segmentation |
CN117879863A (en) * | 2023-11-30 | 2024-04-12 | 电子科技大学长三角研究院(湖州) | Multi-layer coping system for botnet based on elastic complex social network |
Citations (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007107766A1 (en) * | 2006-03-22 | 2007-09-27 | British Telecommunications Public Limited Company | Method and apparatus for automated testing software |
US20080080518A1 (en) * | 2006-09-29 | 2008-04-03 | Hoeflin David A | Method and apparatus for detecting compromised host computers |
US20080196099A1 (en) * | 2002-06-10 | 2008-08-14 | Akonix Systems, Inc. | Systems and methods for detecting and blocking malicious content in instant messages |
US20080307526A1 (en) * | 2007-06-07 | 2008-12-11 | Mi5 Networks | Method to perform botnet detection |
KR20100074470A (en) * | 2008-12-24 | 2010-07-02 | 한국인터넷진흥원 | Method for detecting irc botnet based on network |
KR20100074504A (en) * | 2008-12-24 | 2010-07-02 | 한국인터넷진흥원 | Method for analyzing behavior of irc and http botnet based on network |
WO2011012056A1 (en) * | 2009-07-29 | 2011-02-03 | 成都市华为赛门铁克科技有限公司 | Method, system and equipment for detecting botnets |
KR101025502B1 (en) * | 2008-12-24 | 2011-04-06 | 한국인터넷진흥원 | Network based detection and response system and method of irc and http botnet |
KR20110070189A (en) * | 2009-12-18 | 2011-06-24 | 한국인터넷진흥원 | Malicious traffic isolation system using botnet infomation and malicious traffic isolation method using botnet infomation |
KR101045330B1 (en) * | 2008-12-24 | 2011-06-30 | 한국인터넷진흥원 | Method for detecting http botnet based on network |
US8195750B1 (en) * | 2008-10-22 | 2012-06-05 | Kaspersky Lab, Zao | Method and system for tracking botnets |
KR20120072992A (en) * | 2010-12-24 | 2012-07-04 | 한국인터넷진흥원 | System and method for botnet detection using traffic analysis of non-ideal domain name system |
WO2012130523A1 (en) * | 2011-03-29 | 2012-10-04 | Nec Europe Ltd. | A method for providing a firewall rule and a corresponding system |
CN202652243U (en) * | 2012-06-21 | 2013-01-02 | 南京信息工程大学 | Botnet detecting system based on node |
US20130133072A1 (en) * | 2010-07-21 | 2013-05-23 | Ron Kraitsman | Network protection system and method |
CN103152442A (en) * | 2013-01-31 | 2013-06-12 | 中国科学院计算机网络信息中心 | Detection and processing method and system for botnet domain names |
US8555388B1 (en) * | 2011-05-24 | 2013-10-08 | Palo Alto Networks, Inc. | Heuristic botnet detection |
WO2013156220A1 (en) * | 2012-04-20 | 2013-10-24 | F-Secure Corporation | Discovery of suspect ip addresses |
US20140047543A1 (en) * | 2012-08-07 | 2014-02-13 | Electronics And Telecommunications Research Institute | Apparatus and method for detecting http botnet based on densities of web transactions |
US8762298B1 (en) * | 2011-01-05 | 2014-06-24 | Narus, Inc. | Machine learning based botnet detection using real-time connectivity graph based traffic features |
CN103916288A (en) * | 2013-12-27 | 2014-07-09 | 哈尔滨安天科技股份有限公司 | Botnet detection method and system on basis of gateway and local |
US20150326587A1 (en) * | 2014-05-07 | 2015-11-12 | Attivo Networks Inc. | Distributed system for bot detection |
US20150358285A1 (en) * | 2012-03-21 | 2015-12-10 | Raytheon Bbn Technologies Corp. | Destination address control to limit unauthorized communications |
US20160006755A1 (en) * | 2013-02-22 | 2016-01-07 | Adaptive Mobile Security Limited | Dynamic Traffic Steering System and Method in a Network |
US20160080395A1 (en) * | 2014-09-17 | 2016-03-17 | Cisco Technology, Inc. | Provisional Bot Activity Recognition |
US20160182542A1 (en) * | 2014-12-18 | 2016-06-23 | Stuart Staniford | Denial of service and other resource exhaustion defense and mitigation using transition tracking |
US9430646B1 (en) * | 2013-03-14 | 2016-08-30 | Fireeye, Inc. | Distributed systems and methods for automatically detecting unknown bots and botnets |
US20170063921A1 (en) * | 2015-08-28 | 2017-03-02 | Verizon Patent And Licensing Inc. | Botnet beaconing detection and mitigation |
US20170374084A1 (en) * | 2016-06-22 | 2017-12-28 | Ntt Innovation Institute, Inc. | Botnet detection system and method |
US10129288B1 (en) * | 2014-02-11 | 2018-11-13 | DataVisor Inc. | Using IP address data to detect malicious activities |
US20190215301A1 (en) * | 2015-10-30 | 2019-07-11 | Melih Abdulhayoglu | System and method of protecting a network |
US10432649B1 (en) * | 2014-03-20 | 2019-10-01 | Fireeye, Inc. | System and method for classifying an object based on an aggregated behavior results |
US10462171B2 (en) * | 2017-08-08 | 2019-10-29 | Sentinel Labs Israel Ltd. | Methods, systems, and devices for dynamically modeling and grouping endpoints for edge networking |
US20200120121A1 (en) * | 2017-08-18 | 2020-04-16 | Visa International Service Association | Remote configuration of security gateways |
US10673719B2 (en) * | 2016-02-25 | 2020-06-02 | Imperva, Inc. | Techniques for botnet detection and member identification |
US10867039B2 (en) * | 2017-10-19 | 2020-12-15 | AO Kaspersky Lab | System and method of detecting a malicious file |
Family Cites Families (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7032114B1 (en) | 2000-08-30 | 2006-04-18 | Symantec Corporation | System and method for using signatures to detect computer intrusions |
US7788718B1 (en) * | 2002-06-13 | 2010-08-31 | Mcafee, Inc. | Method and apparatus for detecting a distributed denial of service attack |
US8539582B1 (en) | 2004-04-01 | 2013-09-17 | Fireeye, Inc. | Malware containment and security analysis on connection |
US20060107324A1 (en) * | 2004-11-18 | 2006-05-18 | International Business Machines Corporation | Method to prevent denial of service attack on persistent TCP connections |
US8079083B1 (en) * | 2005-09-02 | 2011-12-13 | Symantec Corporation | Method and system for recording network traffic and predicting potential security events |
US8397284B2 (en) * | 2006-01-17 | 2013-03-12 | University Of Maryland | Detection of distributed denial of service attacks in autonomous system domains |
US8490190B1 (en) * | 2006-06-30 | 2013-07-16 | Symantec Corporation | Use of interactive messaging channels to verify endpoints |
US7870610B1 (en) | 2007-03-16 | 2011-01-11 | The Board Of Directors Of The Leland Stanford Junior University | Detection of malicious programs |
EP2130350B1 (en) * | 2007-03-28 | 2018-04-11 | British Telecommunications public limited company | Identifying abnormal network traffic |
US8272044B2 (en) | 2007-05-25 | 2012-09-18 | New Jersey Institute Of Technology | Method and system to mitigate low rate denial of service (DoS) attacks |
WO2008151321A2 (en) * | 2007-06-08 | 2008-12-11 | The Trustees Of Columbia University In The City Of New York | Systems, methods, and media for enforcing a security policy in a network including a plurality of components |
US8752169B2 (en) * | 2008-03-31 | 2014-06-10 | Intel Corporation | Botnet spam detection and filtration on the source machine |
US10027688B2 (en) | 2008-08-11 | 2018-07-17 | Damballa, Inc. | Method and system for detecting malicious and/or botnet-related domain names |
US8176173B2 (en) * | 2008-09-12 | 2012-05-08 | George Mason Intellectual Properties, Inc. | Live botmaster traceback |
US8069210B2 (en) * | 2008-10-10 | 2011-11-29 | Microsoft Corporation | Graph based bot-user detection |
US20100162399A1 (en) | 2008-12-18 | 2010-06-24 | At&T Intellectual Property I, L.P. | Methods, apparatus, and computer program products that monitor and protect home and small office networks from botnet and malware activity |
KR101010302B1 (en) | 2008-12-24 | 2011-01-25 | 한국인터넷진흥원 | Security management system and method of irc and http botnet |
US8387145B2 (en) * | 2009-06-08 | 2013-02-26 | Microsoft Corporation | Blocking malicious activity using blacklist |
US7890627B1 (en) * | 2009-09-02 | 2011-02-15 | Sophos Plc | Hierarchical statistical model of internet reputation |
US20110072516A1 (en) | 2009-09-23 | 2011-03-24 | Cohen Matthew L | Prevention of distributed denial of service attacks |
US8490189B2 (en) * | 2009-09-25 | 2013-07-16 | Intel Corporation | Using chipset-based protected firmware for host software tamper detection and protection |
KR101077135B1 (en) * | 2009-10-22 | 2011-10-26 | 한국인터넷진흥원 | Apparatus for detecting and filtering application layer DDoS Attack of web service |
US20110153811A1 (en) * | 2009-12-18 | 2011-06-23 | Hyun Cheol Jeong | System and method for modeling activity patterns of network traffic to detect botnets |
KR101038048B1 (en) | 2009-12-21 | 2011-06-01 | 한국인터넷진흥원 | Botnet malicious behavior real-time analyzing system |
US9009299B2 (en) * | 2010-01-07 | 2015-04-14 | Polytechnic Institute Of New York University | Method and apparatus for identifying members of a peer-to-peer botnet |
US8561184B1 (en) * | 2010-02-04 | 2013-10-15 | Adometry, Inc. | System, method and computer program product for comprehensive collusion detection and network traffic quality prediction |
KR101109669B1 (en) | 2010-04-28 | 2012-02-08 | 한국전자통신연구원 | Virtual server and method for identifying zombies and Sinkhole server and method for managing zombie information integrately based on the virtual server |
US8925101B2 (en) * | 2010-07-28 | 2014-12-30 | Mcafee, Inc. | System and method for local protection against malicious software |
US9516058B2 (en) * | 2010-08-10 | 2016-12-06 | Damballa, Inc. | Method and system for determining whether domain names are legitimate or malicious |
US8661544B2 (en) * | 2010-08-31 | 2014-02-25 | Cisco Technology, Inc. | Detecting botnets |
US8516585B2 (en) * | 2010-10-01 | 2013-08-20 | Alcatel Lucent | System and method for detection of domain-flux botnets and the like |
US9148376B2 (en) * | 2010-12-08 | 2015-09-29 | AT&T Intellectual Property I, L.L.P. | Method and system for dynamic traffic prioritization |
US9219744B2 (en) * | 2010-12-08 | 2015-12-22 | At&T Intellectual Property I, L.P. | Mobile botnet mitigation |
US8682812B1 (en) | 2010-12-23 | 2014-03-25 | Narus, Inc. | Machine learning based botnet detection using real-time extracted traffic features |
US9473530B2 (en) | 2010-12-30 | 2016-10-18 | Verisign, Inc. | Client-side active validation for mitigating DDOS attacks |
US8631489B2 (en) * | 2011-02-01 | 2014-01-14 | Damballa, Inc. | Method and system for detecting malicious domain names at an upper DNS hierarchy |
US8695095B2 (en) | 2011-03-11 | 2014-04-08 | At&T Intellectual Property I, L.P. | Mobile malicious software mitigation |
US8887279B2 (en) * | 2011-03-31 | 2014-11-11 | International Business Machines Corporation | Distributed real-time network protection for authentication systems |
US9118702B2 (en) | 2011-05-31 | 2015-08-25 | Bce Inc. | System and method for generating and refining cyber threat intelligence data |
KR101095447B1 (en) * | 2011-06-27 | 2011-12-16 | 주식회사 안철수연구소 | Apparatus and method for preventing distributed denial of service attack |
US8561188B1 (en) * | 2011-09-30 | 2013-10-15 | Trend Micro, Inc. | Command and control channel detection with query string signature |
US8677487B2 (en) * | 2011-10-18 | 2014-03-18 | Mcafee, Inc. | System and method for detecting a malicious command and control channel |
US20140075557A1 (en) * | 2012-09-11 | 2014-03-13 | Netflow Logic Corporation | Streaming Method and System for Processing Network Metadata |
US9392010B2 (en) | 2011-11-07 | 2016-07-12 | Netflow Logic Corporation | Streaming method and system for processing network metadata |
US8745737B2 (en) | 2011-12-29 | 2014-06-03 | Verisign, Inc | Systems and methods for detecting similarities in network traffic |
US10116696B2 (en) | 2012-05-22 | 2018-10-30 | Sri International | Network privilege manager for a dynamically programmable computer network |
WO2013188611A2 (en) * | 2012-06-14 | 2013-12-19 | Tt Government Solutions, Inc. | System and method for real-time reporting of anomalous internet protocol attacks |
US9088606B2 (en) * | 2012-07-05 | 2015-07-21 | Tenable Network Security, Inc. | System and method for strategic anti-malware monitoring |
US9166994B2 (en) * | 2012-08-31 | 2015-10-20 | Damballa, Inc. | Automation discovery to identify malicious activity |
US20150215334A1 (en) | 2012-09-28 | 2015-07-30 | Level 3 Communications, Llc | Systems and methods for generating network threat intelligence |
EP2901612A4 (en) | 2012-09-28 | 2016-06-15 | Level 3 Communications Llc | Apparatus, system and method for identifying and mitigating malicious network threats |
GB201218218D0 (en) * | 2012-10-10 | 2012-11-21 | Univ Lancaster | Computer networks |
EP2936863A1 (en) * | 2012-12-18 | 2015-10-28 | Koninklijke KPN N.V. | Controlling a mobile device in a telecommunications network |
US9749336B1 (en) | 2013-02-26 | 2017-08-29 | Palo Alto Networks, Inc. | Malware domain detection using passive DNS |
GB201306628D0 (en) * | 2013-04-11 | 2013-05-29 | F Secure Oyj | Detecting and marking client devices |
US9571511B2 (en) | 2013-06-14 | 2017-02-14 | Damballa, Inc. | Systems and methods for traffic classification |
US9547845B2 (en) | 2013-06-19 | 2017-01-17 | International Business Machines Corporation | Privacy risk metrics in location based services |
US9705895B1 (en) | 2013-07-05 | 2017-07-11 | Dcs7, Llc | System and methods for classifying internet devices as hostile or benign |
US9305163B2 (en) | 2013-08-15 | 2016-04-05 | Mocana Corporation | User, device, and app authentication implemented between a client device and VPN gateway |
US9208335B2 (en) | 2013-09-17 | 2015-12-08 | Auburn University | Space-time separated and jointly evolving relationship-based network access and data protection system |
US9350758B1 (en) | 2013-09-27 | 2016-05-24 | Emc Corporation | Distributed denial of service (DDoS) honeypots |
US9392018B2 (en) | 2013-09-30 | 2016-07-12 | Juniper Networks, Inc | Limiting the efficacy of a denial of service attack by increasing client resource demands |
US9628507B2 (en) * | 2013-09-30 | 2017-04-18 | Fireeye, Inc. | Advanced persistent threat (APT) detection center |
US9288220B2 (en) * | 2013-11-07 | 2016-03-15 | Cyberpoint International Llc | Methods and systems for malware detection |
FR3015839A1 (en) * | 2013-12-23 | 2015-06-26 | Orange | METHOD FOR SLOWING COMMUNICATION IN A NETWORK |
US20150188941A1 (en) * | 2013-12-26 | 2015-07-02 | Telefonica Digital Espana, S.L.U. | Method and system for predicting victim users and detecting fake user accounts in online social networks |
US9191403B2 (en) | 2014-01-07 | 2015-11-17 | Fair Isaac Corporation | Cyber security adaptive analytics threat monitoring system and method |
US10326778B2 (en) | 2014-02-24 | 2019-06-18 | Cyphort Inc. | System and method for detecting lateral movement and data exfiltration |
TW201533443A (en) | 2014-02-27 | 2015-09-01 | Nat Univ Tsing Hua | Biosensor, manufacturing method of biosensor and method of detecting object |
EP3117363B1 (en) | 2014-03-11 | 2020-07-29 | Vectra AI, Inc. | Method and system for detecting bot behavior |
WO2015153849A1 (en) | 2014-04-03 | 2015-10-08 | Automattic, Inc. | Systems and methods for protecting websites from botnet attacks |
US9635049B1 (en) | 2014-05-09 | 2017-04-25 | EMC IP Holding Company LLC | Detection of suspicious domains through graph inference algorithm processing of host-domain contacts |
US9686312B2 (en) * | 2014-07-23 | 2017-06-20 | Cisco Technology, Inc. | Verifying network attack detector effectiveness |
US9392019B2 (en) * | 2014-07-28 | 2016-07-12 | Lenovo Enterprise (Singapore) Pte. Ltd. | Managing cyber attacks through change of network address |
US9942250B2 (en) | 2014-08-06 | 2018-04-10 | Norse Networks, Inc. | Network appliance for dynamic protection from risky network activities |
EP3195066B1 (en) * | 2014-09-06 | 2019-08-07 | Mazebolt Technologies Ltd. | Non-disruptive ddos testing |
US9900344B2 (en) * | 2014-09-12 | 2018-02-20 | Level 3 Communications, Llc | Identifying a potential DDOS attack using statistical analysis |
US10560466B2 (en) | 2015-01-13 | 2020-02-11 | Level 3 Communications, Llc | Vertical threat analytics for DDoS attacks |
US9930065B2 (en) | 2015-03-25 | 2018-03-27 | University Of Georgia Research Foundation, Inc. | Measuring, categorizing, and/or mitigating malware distribution paths |
KR101666177B1 (en) * | 2015-03-30 | 2016-10-14 | 한국전자통신연구원 | Malicious domain cluster detection apparatus and method |
US9654485B1 (en) | 2015-04-13 | 2017-05-16 | Fireeye, Inc. | Analytics-based security monitoring system and method |
US10536357B2 (en) * | 2015-06-05 | 2020-01-14 | Cisco Technology, Inc. | Late data detection in data center |
US9912678B2 (en) * | 2015-06-24 | 2018-03-06 | Verisign, Inc. | Techniques for automatically mitigating denial of service attacks via attack pattern matching |
US9954804B2 (en) * | 2015-07-30 | 2018-04-24 | International Business Machines Corporation | Method and system for preemptive harvesting of spam messages |
US9930057B2 (en) * | 2015-10-05 | 2018-03-27 | Cisco Technology, Inc. | Dynamic deep packet inspection for anomaly detection |
US9838409B2 (en) | 2015-10-08 | 2017-12-05 | Cisco Technology, Inc. | Cold start mechanism to prevent compromise of automatic anomaly detection systems |
US10097511B2 (en) * | 2015-12-22 | 2018-10-09 | Cloudflare, Inc. | Methods and systems for identification of a domain of a command and control server of a botnet |
- 2017
  - 2017-02-24 US US15/442,582 patent/US10673719B2/en active Active
  - 2017-02-24 US US15/442,560 patent/US10911472B2/en active Active
  - 2017-02-24 US US15/442,571 patent/US20170251016A1/en not_active Abandoned
- 2020
  - 2020-12-08 US US17/114,522 patent/US20210092142A1/en active Pending
Patent Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080196099A1 (en) * | 2002-06-10 | 2008-08-14 | Akonix Systems, Inc. | Systems and methods for detecting and blocking malicious content in instant messages |
WO2007107766A1 (en) * | 2006-03-22 | 2007-09-27 | British Telecommunications Public Limited Company | Method and apparatus for automated testing software |
US20080080518A1 (en) * | 2006-09-29 | 2008-04-03 | Hoeflin David A | Method and apparatus for detecting compromised host computers |
US20080307526A1 (en) * | 2007-06-07 | 2008-12-11 | Mi5 Networks | Method to perform botnet detection |
US8195750B1 (en) * | 2008-10-22 | 2012-06-05 | Kaspersky Lab, Zao | Method and system for tracking botnets |
KR20100074504A (en) * | 2008-12-24 | 2010-07-02 | 한국인터넷진흥원 | Method for analyzing behavior of irc and http botnet based on network |
KR101025502B1 (en) * | 2008-12-24 | 2011-04-06 | 한국인터넷진흥원 | Network based detection and response system and method of irc and http botnet |
KR101045330B1 (en) * | 2008-12-24 | 2011-06-30 | 한국인터넷진흥원 | Method for detecting http botnet based on network |
KR20100074470A (en) * | 2008-12-24 | 2010-07-02 | 한국인터넷진흥원 | Method for detecting irc botnet based on network |
WO2011012056A1 (en) * | 2009-07-29 | 2011-02-03 | 成都市华为赛门铁克科技有限公司 | Method, system and equipment for detecting botnets |
KR20110070189A (en) * | 2009-12-18 | 2011-06-24 | 한국인터넷진흥원 | Malicious traffic isolation system using botnet infomation and malicious traffic isolation method using botnet infomation |
US9270690B2 (en) * | 2010-07-21 | 2016-02-23 | Seculert Ltd. | Network protection system and method |
US20130133072A1 (en) * | 2010-07-21 | 2013-05-23 | Ron Kraitsman | Network protection system and method |
KR20120072992A (en) * | 2010-12-24 | 2012-07-04 | 한국인터넷진흥원 | System and method for botnet detection using traffic analysis of non-ideal domain name system |
US8762298B1 (en) * | 2011-01-05 | 2014-06-24 | Narus, Inc. | Machine learning based botnet detection using real-time connectivity graph based traffic features |
WO2012130523A1 (en) * | 2011-03-29 | 2012-10-04 | Nec Europe Ltd. | A method for providing a firewall rule and a corresponding system |
US8555388B1 (en) * | 2011-05-24 | 2013-10-08 | Palo Alto Networks, Inc. | Heuristic botnet detection |
US20150358285A1 (en) * | 2012-03-21 | 2015-12-10 | Raytheon Bbn Technologies Corp. | Destination address control to limit unauthorized communications |
WO2013156220A1 (en) * | 2012-04-20 | 2013-10-24 | F-Secure Corporation | Discovery of suspect ip addresses |
CN202652243U (en) * | 2012-06-21 | 2013-01-02 | 南京信息工程大学 | Botnet detecting system based on node |
US20140047543A1 (en) * | 2012-08-07 | 2014-02-13 | Electronics And Telecommunications Research Institute | Apparatus and method for detecting http botnet based on densities of web transactions |
CN103152442A (en) * | 2013-01-31 | 2013-06-12 | 中国科学院计算机网络信息中心 | Detection and processing method and system for botnet domain names |
US20160006755A1 (en) * | 2013-02-22 | 2016-01-07 | Adaptive Mobile Security Limited | Dynamic Traffic Steering System and Method in a Network |
US9430646B1 (en) * | 2013-03-14 | 2016-08-30 | Fireeye, Inc. | Distributed systems and methods for automatically detecting unknown bots and botnets |
CN103916288A (en) * | 2013-12-27 | 2014-07-09 | 哈尔滨安天科技股份有限公司 | Botnet detection method and system on basis of gateway and local |
US10129288B1 (en) * | 2014-02-11 | 2018-11-13 | DataVisor Inc. | Using IP address data to detect malicious activities |
US10432649B1 (en) * | 2014-03-20 | 2019-10-01 | Fireeye, Inc. | System and method for classifying an object based on an aggregated behavior results |
US20150326587A1 (en) * | 2014-05-07 | 2015-11-12 | Attivo Networks Inc. | Distributed system for bot detection |
US20160080395A1 (en) * | 2014-09-17 | 2016-03-17 | Cisco Technology, Inc. | Provisional Bot Activity Recognition |
US20160182542A1 (en) * | 2014-12-18 | 2016-06-23 | Stuart Staniford | Denial of service and other resource exhaustion defense and mitigation using transition tracking |
US20170063921A1 (en) * | 2015-08-28 | 2017-03-02 | Verizon Patent And Licensing Inc. | Botnet beaconing detection and mitigation |
US20190215301A1 (en) * | 2015-10-30 | 2019-07-11 | Melih Abdulhayoglu | System and method of protecting a network |
US10673719B2 (en) * | 2016-02-25 | 2020-06-02 | Imperva, Inc. | Techniques for botnet detection and member identification |
US20170374084A1 (en) * | 2016-06-22 | 2017-12-28 | Ntt Innovation Institute, Inc. | Botnet detection system and method |
US10462171B2 (en) * | 2017-08-08 | 2019-10-29 | Sentinel Labs Israel Ltd. | Methods, systems, and devices for dynamically modeling and grouping endpoints for edge networking |
US20200120121A1 (en) * | 2017-08-18 | 2020-04-16 | Visa International Service Association | Remote configuration of security gateways |
US10867039B2 (en) * | 2017-10-19 | 2020-12-15 | AO Kaspersky Lab | System and method of detecting a malicious file |
Non-Patent Citations (7)
Title |
---|
A.M. Sukhov, E.S. Sagatov, A.V. Baskakov, Analysis of Internet service user audiences for network security problems, 2014 IEEE 2nd International Symposium on Telecommunication Technologies (ISTT), Langkawi, Malaysia (24-26 Nov 2014), 6 pages (Year: 2014) *
Ibrahim Ghafir and Vaclav Prenosil, Blacklist-based Malicious IP Traffic Detection, Proceedings of 2015 Global Conference on Communication Technologies (GCCT 2015), Date of Conference: 23-24 April 2015, 5 pages (Year: 2015) *
Kamaldeep Singh, Big Data Analytics framework for Peer-to-Peer Botnet detection using Random Forests, Information Sciences, Volume 278, 10 September 2014, Pages 488-497 (Year: 2014) *
Li Sheng, A Distributed Botnet Detecting Approach Based on Traffic Flow Analysis, 2012 Second International Conference on Instrumentation & Measurement, Computer, Communication and Control, 5 pages (Year: 2012) *
Zhen Ling, TorWard: Discovery, Blocking, and Traceback of Malicious Traffic Over Tor, IEEE Transactions on Information Forensics and Security, Vol. 10, No. 12, December 2015, 16 pages (Year: 2015) *
Zhenhua Chi and Zixiang Zhao, Detecting and Blocking Malicious Traffic Caused by IRC Protocol Based Botnets, Published in: 2007 IFIP International Conference on Network and Parallel Computing Workshops (NPC 2007), 5 pages (Year: 2007) *
Zhifei Tian, Method to detect botnet based on network coordination, 2012 IEEE International Conference on Computer Science and Automation Engineering, 4 pages (Year: 2012) *
Also Published As
Publication number | Publication date |
---|---|
US20170251015A1 (en) | 2017-08-31 |
US20170251005A1 (en) | 2017-08-31 |
US20170251016A1 (en) | 2017-08-31 |
US10673719B2 (en) | 2020-06-02 |
US10911472B2 (en) | 2021-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210092142A1 (en) | Techniques for targeted botnet protection | |
US11843605B2 (en) | Methods and systems for data traffic based adaptive security | |
US11323469B2 (en) | Entity group behavior profiling | |
US11924170B2 (en) | Methods and systems for API deception environment and API traffic control and security | |
Nawrocki et al. | A survey on honeypot software and data analysis | |
US11089049B2 (en) | System, device, and method of detecting cryptocurrency mining activity | |
US9667651B2 (en) | Compromised insider honey pots using reverse honey tokens | |
US10469514B2 (en) | Collaborative and adaptive threat intelligence for computer security | |
US9787642B2 (en) | Platforms for implementing an analytics framework for DNS security | |
US20200274783A1 (en) | Systems and methods for monitoring digital user experience | |
EP3699766A1 (en) | Systems and methods for monitoring, analyzing, and improving digital user experience | |
JP2020521383A (en) | Correlation-driven threat assessment and remediation | |
Kondracki et al. | Catching transparent phish: Analyzing and detecting mitm phishing toolkits | |
Haddadi et al. | DoS-DDoS: taxonomies of attacks, countermeasures, and well-known defense mechanisms in cloud environment | |
US20180191650A1 (en) | Publish-subscribe based exchange for network services | |
Yen | Detecting stealthy malware using behavioral features in network traffic | |
US10812352B2 (en) | System and method for associating network domain names with a content distribution network | |
van der Eijk et al. | Detecting cobalt strike beacons in netflow data | |
US9935990B1 (en) | Systems and methods for multicast streaming analysis | |
Le et al. | 5G-IoT-IDS: Intrusion Detection System for CIoT as Network Function in 5G Core Network | |
KR101045332B1 (en) | System for sharing information and method of irc and http botnet | |
US20230171280A1 (en) | Risk based session resumption | |
Nechaev et al. | Internet botnets: A survey of detection techniques | |
Akhlaq | Improved performance high speed network intrusion detection systems (NIDS). A high speed NIDS architectures to address limitations of Packet Loss and Low Detection Rate by adoption of Dynamic Cluster Architecture and Traffic Anomaly Filtration (IADF). |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: IMPERVA, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NIV, NITZAN;SHULMAN, AMICHAI;REEL/FRAME:054610/0871
Effective date: 20170305
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |