EP2272024A2 - Method and system for protection against information stealing software - Google Patents

Method and system for protection against information stealing software

Info

Publication number
EP2272024A2
Authority
EP
European Patent Office
Prior art keywords
bait
sensitive information
information
traffic analyzer
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP09721776A
Other languages
English (en)
French (fr)
Inventor
Lidror Troyansky
Sharon Bruckner
Daniel Lyle Hubbard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Websense LLC
Original Assignee
Websense LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/051,579 (published as US9015842B2)
Priority claimed from US12/051,616 (published as US9130986B2)
Priority claimed from US12/051,670 (published as US8407784B2)
Application filed by Websense LLC filed Critical Websense LLC
Publication of EP2272024A2
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55: Detecting local intrusion or implementing counter-measures
    • G06F21/554: Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57: Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/577: Assessing vulnerabilities and evaluating computer system security
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441: Countermeasures against malicious traffic
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441: Countermeasures against malicious traffic
    • H04L63/1483: Countermeasures against malicious traffic; service impersonation, e.g. phishing, pharming or web spoofing

Definitions

  • the present invention relates generally to the field of information leak prevention. More specifically, but not exclusively, the present invention deals with methods for the efficient identification of attempts to steal private and confidential information using information stealing software and phishing.
  • HIPAA: Health Insurance Portability and Accountability Act
  • GLBA: Gramm-Leach-Bliley Act
  • SOX: Sarbanes-Oxley Act
  • Information Stealing Software, such as Trojan horses and "spyware", may be installed on the computer by malicious users who gained access to the user's computer, or by "infection", e.g., from a web-site, an email or shared files in a file-sharing network.
  • the Information Stealing Software can then detect sensitive or confidential information - e.g., by employing a "keylogger” that logs keystrokes, or by searching for confidential information within the user's computer and sending it to a predefined destination.
  • phishing & pharming Another aspect of information stealing is known as "phishing & pharming".
  • users are solicited, usually by official-looking e-mails, to post their sensitive details to web-sites designed for stealing this information.
  • effective phishing attacks remain very common.
  • Pharming attacks aim to redirect a website's traffic to another, bogus website. Pharming can be conducted either by changing the hosts file on a victim's computer or by exploiting a vulnerability in DNS server software. Current attempts to mitigate the risks of pharming, such as DNS protection and web browser add-ins (e.g., toolbars), are of limited value.
  • a system and method for identifying infection of unwanted software on an electronic device is disclosed.
  • a software agent is configured to generate a bait and is installed on the electronic device.
  • the bait can simulate a situation in which the user performs a login session and submits personal information or it may just contain artificial sensitive information. Additionally, parameters may be inserted into the bait such as the identity of the electronic device that the bait is installed upon.
  • the electronic output of the electronic device is then monitored and analyzed for attempts of transmitting the bait.
  • the output is analyzed by correlating the output with the bait and can be done by comparing information about the bait with the traffic over a computer network in order to decide about the existence and the location of unwanted software.
  • a system for identifying unwanted software on at least one electronic device has a management unit in communication with the electronic device.
  • the management unit is configured to install a software agent on the electronic device that generates a bait to be transmitted by the electronic device over a computer network as an output.
  • the management unit can be configured to insert a parameter into the bait in order to identify the electronic device.
  • a traffic analyzer in communication with the computer network analyzes the output of the electronic device.
  • the traffic analyzer may be installed on a network gateway in communication with the computer network.
  • a decision system in communication with the traffic analyzer correlates the bait from the electronic device with the output of the electronic device in order to determine the existence of unwanted software.
  • a bait is installed on at least one of the electronic devices of the first group of electronic devices.
  • the output of the first and second groups of electronic devices is monitored and analyzed wherein the second group of electronic devices is used as a baseline for analyzing the output of the first group of electronic devices.
  • the output of the first group and second group of electronic devices can be correlated in order to determine the existence of unwanted software.
  • a method for controlling the dissemination of sensitive information over an electronic network includes analyzing the traffic of the network and detecting the sensitive information. Next, the sensitivity level and the risk level of the information leaving the electronic network is assessed. A required action is determined based upon the sensitivity level and the risk level.
  • the sensitivity level of the information is assessed by analyzing the content of the information.
  • the information may include a password, and the sensitivity level may be assessed by analyzing the strength of the password. For example, a strong password would indicate that the information is highly sensitive.
  • the risk level of the information leaving the network may be assessed using heuristics including at least one of geolocation, analysis of a recipient URL, previous knowledge about the destination and analysis of the content of the site.
  • FIG. 1 is a flowchart illustrating a method of efficient detection of information stealing software.
  • FIG. 2 is an illustration of a system for mitigation of information-stealing software hazards according to FIG 1.
  • FIG. 3 is a flowchart illustrating another method of efficient detection of information stealing software.
  • FIG. 4 is an illustration of a system for mitigation of information-stealing software hazards according to FIG. 3.
  • FIG. 5 is an illustration of a system that utilizes cooperation from target sites in order to detect information stealing software.
  • FIG. 6 is a flowchart illustrating another method of efficient detection of information stealing software.
  • FIG. 7 is an illustration of a system for mitigation of information stealing software hazards according to FIG. 6.
  • Behavioral detection of information stealing software in a potentially infected computerized device or software is achieved by simulating situations that will potentially trigger the information stealing software to attempt to disseminate "artificial sensitive information bait", and thereafter analyzing the traffic and other behavioral patterns of the potentially infected computerized device or software. As the situation is controlled and the information bait is known to the system, there are many cases of infection in which such an analysis will be able to detect the existence of the information stealing software.
  • malware types, such as certain keyloggers, attempt to locate sensitive or personal information (e.g., usernames, passwords, financial information, etc.). When such information is discovered, either locally on the host computer or as the user uses it to log into a website or application, the malware attempts to capture it and send it out, either in plaintext or encrypted. This behavior is exploited by generating bogus credentials and artificial sensitive information bait and storing and/or sending them periodically to websites. If such malware exists on the user's system, the malware captures the bogus information and attempts to send it out. Because the system provided this information in the first place, the system has a very good estimate of what the message sent by the malware will look like.
  • the system inspects all outgoing traffic from the user to spot these suspicious messages, and deduce the existence of malware on the machine.
  • the system can simulate a situation in which the user attempts to access the website of a financial institution and submits his username and password. If information stealing software is installed on the user's computer or along the connection, then by intercepting and analyzing the outgoing traffic the system can detect attempts to steal information.
  • FIG. 1 illustrates a method for detection of information stealing software.
  • a software agent is installed on computerized devices.
  • the software agent is preferably designed and implemented such that it can simulate various artificial inputs in a manner that would appear as regular user input from the information stealing software's perspective (e.g., emulating sequences of keystrokes, accessing e-banking sites, planting documents that would seem to be sensitive, etc.).
  • a set of parameters is preferably selected, such as scheduling bait tasks or providing keywords that produce an attractive bait in this context.
  • various baits in the various computerized devices are implemented in accordance with the inserted parameters. Specifically, the baits are created and sent to predefined targets.
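  • A minimal sketch of how stages A-C might look in practice is given below. It is an illustration only, not the patented implementation: the function and field names (make_bait, schedule_bait_tasks, device_id, etc.) are assumptions, and a real agent would plant the baits through keyboard or mouse emulation rather than return them as dictionaries. The key idea shown is that each bait encodes the identity of the device it is planted on, so a later sighting of it in traffic identifies the infected machine.

```python
import hashlib
import random
import string
import time

def make_bait(device_id: str, target: str) -> dict:
    """Create one artificial sensitive-information bait whose content
    encodes the identity of the device it is planted on (hypothetical)."""
    # Per-device tag: seeing this tag in outgoing traffic later points
    # back to the machine on which the bait was planted.
    tag = hashlib.sha256(f"{device_id}:{target}".encode()).hexdigest()[:12]
    password = "".join(random.choices(string.ascii_letters + string.digits, k=12))
    return {
        "device_id": device_id,
        "target": target,            # e.g. an emulated e-banking site
        "username": f"user_{tag}",   # bogus and unique per device/target
        "password": password,        # bogus, randomly generated
        "created": time.time(),
    }

def schedule_bait_tasks(device_ids, targets, per_day=3):
    """Stages B/C: schedule bait submissions at randomized times so that
    the activity resembles ordinary user behaviour."""
    tasks = []
    for device in device_ids:
        for _ in range(per_day):
            tasks.append({
                "bait": make_bait(device, random.choice(targets)),
                "when": random.uniform(0, 24 * 3600),  # seconds into the day
            })
    return sorted(tasks, key=lambda t: t["when"])
```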
  • the output and behavioral patterns of the computerized device are analyzed from the computer network, and at stage E, 150, the system estimates the probability that the device is infected by information stealing software, based on the output and behavioral patterns analyzed at stage D.
  • a remote installation & management unit 210 installs software agents 220 on various computerized devices 230 connected thereto by means ordinarily used in the art.
  • the installation can include optional parameters inserted by an operator 240.
  • the software agents produce artificial sensitive information baits, and the output and other behavioral parameters of the various computerized devices are analyzed by the software agents 220 and preferably by a traffic analyzer 250 on a network gateway 260.
  • the traffic analyzer 250 may be software installed on the gateway for monitoring the flow of electronic traffic between the computer devices 230 and a WAN as is commonly known in the art.
  • the results are sent for analysis to a decision system 270, which correlates the information in the traffic with the artificial sensitive information baits in order to decide about the existence and the location of potentially infected computerized devices or software.
  • the decision system 270 may be a software or a hardware module in electronic communication with the traffic analyzer 250.
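  • The correlation step could be sketched as follows, continuing the hypothetical bait dictionaries from the sketch above. This is an assumption-laden illustration of the traffic analyzer 250 / decision system 270 logic, not the actual product code: it simply flags outgoing payloads that contain a planted credential and were not addressed to the target recorded when the bait was created.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    device: str          # device on which the matched bait was planted
    destination: str     # where the traffic was headed
    matched_bait: str    # the bogus username that was recognized

def correlate_traffic(outgoing_messages, baits):
    """Flag outgoing messages that contain a planted bait credential.

    outgoing_messages: iterable of (source_device, destination, payload_bytes)
    baits: bait dictionaries as produced by the planting agent
    """
    sightings = []
    for source, destination, payload in outgoing_messages:
        text = payload.decode("utf-8", errors="ignore")
        for bait in baits:
            if bait["username"] in text or bait["password"] in text:
                # The destination recorded when the bait was planted is
                # known, so any other destination is suspicious by design.
                if destination != bait["target"]:
                    sightings.append(
                        Sighting(bait["device_id"], destination, bait["username"]))
    return sightings
```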
  • the artificial sensitive information bait typically comprises bogus personal data which is used to log in to e-banks, payment services, etc., and the system is operable to simulate a situation in which the user performs a login session to such a service and submits personal information.
  • the baits implemented on different devices or software components can have unique characteristics, which enable identification of the infected machine.
  • the software agent produces emulated keystrokes (e.g., utilizing the keyboard and/or the mouse drivers) that produce a sequence of characters at a variable rate that reflects natural typing.
  • the system can produce artificial sensitive documents that would seem realistic - for example financial reports to be publicly released, design documents, password files, network diagrams, etc.
  • the system can produce the baits in random fashion, such that each artificial sensitive information or document is different, in order to impede the information stealing software further.
  • the software agents implemented in the various devices are masqueraded in order to avoid detection by the information stealing software.
  • the software agents can also be hidden, e.g., in a manner commonly referred to as rootkits, by means ordinarily used in the art.
  • the target sites (e.g., sites of e-banking) can be emulated by the gateway 260. Accordingly, no information is actually sent to the target sites.
  • Sophisticated information stealing software may utilize special means to avoid detection, and may encrypt and/or hide the disseminated information.
  • the system looks for encrypted content and correlates, statistically, the amount of encrypted data in the outgoing transportation with the number and size of the artificial sensitive information baits. This correlation may be a comparison, or it may be some other type of correlation.
  • Detection of encrypted content can be based on the entropy of the content. In general, the sequence of bits that represent the encrypted content appears to be random (e.g., with maximal entropy).
  • the system preferably utilizes the entropy test for encryption after establishing that the content is not compressed by a standard compression means ordinarily used in the art.
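  • A rough sketch of such an entropy check follows. The threshold, the magic-number test for standard compression, and the function names are illustrative assumptions; the text above only states that the entropy test is applied after ruling out standard compression.

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (maximum 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def looks_compressed(payload: bytes) -> bool:
    """Crude magic-number check for standard compression/archive formats
    (gzip, zip, zlib); a real system would use proper content typing."""
    return payload[:2] in (b"\x1f\x8b", b"PK") or payload[:1] == b"\x78"

def possibly_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    """High-entropy content that is not a known compressed format is
    treated as possibly encrypted exfiltration; the volume of such
    traffic can then be correlated with the size of the planted baits."""
    return not looks_compressed(payload) and shannon_entropy(payload) >= threshold
```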
  • the software agents may be installed on some of the machines and the system performs statistical tests, as explained below, in order to decide about the probability of existence of infected computerized devices and software in the organization.
  • FIG. 3 illustrates a method for detection of information stealing software, substantially similar to the method of FIG. 1 but utilizing a two-set method: in stage A, 310, software agents are installed on some of the computerized devices, denoted as the set S. At stage B, 320, in order to fine-tune the operation of the software agents, a set of parameters is preferably selected, such as scheduling bait tasks and providing keywords that would produce an attractive bait in this context. At stage C, 330, various baits in the various computerized devices are implemented in accordance with the inserted parameters. At stage D, 340, the output and behavioral patterns of the computerized devices in S are analyzed and compared with those of the computerized devices outside S, and at stage E, 350, the system estimates the probability that a device is infected by information stealing software.
  • FIG. 4 illustrates a system for detection of information stealing software, substantially similar to the system of FIG. 2 but utilizing the two-set method to improve detection of information stealing software described in FIG. 3.
  • a remote installation & management unit 410 installs software agents 420 on various computerized devices in the set S 430, (according to parameters inserted optionally by an operator) but not on set 455.
  • the software agents then produce artificial sensitive information baits on the computerized devices of set S 430, and the output and other behavioral parameters of the various computerized devices in the set S and the complementary set S are analyzed by a traffic analyzer 450, on a gateway 460.
  • the results are sent for analysis to a decision system 470, which compares characteristics such of the output between sets S and $ in order to decide about the existence of potentially infected computerized devices or software.
  • characteristics may include, for example, the volume of the traffic, the number of TCP sessions, the geographical distribution of the recipients, the entropy of the traffic, the time of the sessions etc.
  • the results of the analysis of the set S̄ are thereafter used as a baseline in order to determine the statistical significance of the hypothesis that there are infected computerized devices or software in the set S that react to the existence of the artificial sensitive information baits.
  • the sets S and S̄ may be selected randomly and are changed dynamically in order to provide more information about the identity of the infected machines.
  • the computerized devices in both S and S̄ are equipped with software agents which analyze and store outgoing traffic, but only the agents of the set S produce artificial sensitive information baits.
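  • One simple way to use S̄ as a baseline, sketched below under the assumption that a per-device traffic statistic (bytes out, TCP sessions per hour, etc.) has been collected for both sets, is a z-score of the bait set's mean against the baseline distribution. The specific test and threshold are illustrative; the text above speaks only of statistical significance tests in general.

```python
import statistics

def z_score_against_baseline(bait_set_values, baseline_values):
    """Compare a per-device traffic statistic measured on the bait set S
    against the same statistic on the baseline set S-bar.

    Returns an approximate z-score of the S mean relative to the baseline
    distribution; a large positive value supports the hypothesis that
    devices in S reacted to the planted baits."""
    mu = statistics.mean(baseline_values)
    sigma = statistics.pstdev(baseline_values) or 1e-9  # avoid division by zero
    return (statistics.mean(bait_set_values) - mu) / sigma

# Usage sketch: outgoing TCP sessions per hour observed on each device.
baseline = [12, 9, 11, 10, 13, 12]   # devices in S-bar (no baits planted)
bait_set = [14, 25, 11, 22]          # devices in S (baits planted)
if z_score_against_baseline(bait_set, baseline) > 3.0:
    print("traffic from the bait set deviates significantly from the baseline")
```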
  • the output of the computerized devices may be compared with the output of computerized devices that, with high probability, were not infected, e.g., new machines (real or virtual). In order to further increase the probability of detection, the method may also include cooperation with the sites to which the bogus login details are to be submitted, in order to detect attempts to use bogus usernames, passwords and other elements of sensitive information.
  • FIG. 5 illustrates a system that utilizes such cooperation.
  • a remote installation & management unit 510 installs software agents 520 on various computerized devices according to optional parameters inserted by an operator 540.
  • the software agents 520 then produce artificial sensitive information baits, such that each computerized device receives different bogus details.
  • the bogus details are then sent via a gateway 560 to databases 582 at sites 580. If an attacker 590 tries to use a username and password in order to login to the site 580, the site will check the database 582 to determine whether these were bogus details created by the software agents 520, and will send the details of the event to a decision system 570.
  • the decision system 570 determines the infected machines based on the uniqueness of the bogus personal information.
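  • A sketch of the site-side hook is given below. The database layout, the function name check_login_attempt and the report callback are hypothetical; the point illustrated is that because each device received unique bogus details, a single lookup at the cooperating site 580 maps a stolen credential back to the infected machine.

```python
def check_login_attempt(username, password, bogus_db, report):
    """Site-side check run by a cooperating site for every login attempt.

    bogus_db maps (username, password) -> device_id of the machine on which
    those bogus details were planted; report is a callable that forwards
    the event to the decision system."""
    device = bogus_db.get((username, password))
    if device is not None:
        report({
            "event": "bogus_credentials_used",
            "planted_on_device": device,   # identifies the infected machine
            "username": username,
        })
        return False   # refuse the login; the credentials are bait
    return True        # otherwise continue normal authentication
```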
  • the system can detect patterns that correspond to the information planted by the system and that were possibly encoded in order to avoid detection: e.g., the system compares the monitored traffic with the planted content and attempts to decide whether there exists a transformation between the two contents. For example, the system can check for reversing the order of the characters, replacing characters (e.g., S -> $), encoding characters using numeric transformations, etc. The system can also decide that certain patterns are suspicious as attempts to avoid detection.
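  • The transformation check described above could be approximated as follows. The transformations listed (reversal, character substitution such as S -> $, numeric and hexadecimal encodings) mirror the examples in the text; any further variants and the function names are assumptions.

```python
def bait_variants(bait: str):
    """Yield the bait string together with simple transformations a
    stealer might apply to evade detection (illustrative list only)."""
    substitutions = str.maketrans({"S": "$", "s": "$", "a": "@", "o": "0", "e": "3"})
    yield bait
    yield bait[::-1]                            # reversed character order
    yield bait.translate(substitutions)         # character replacement, e.g. S -> $
    yield "".join(str(ord(c)) for c in bait)    # numeric character encoding
    yield bait.encode().hex()                   # hexadecimal encoding

def traffic_matches_bait(traffic_text: str, bait: str) -> bool:
    """True if the monitored traffic contains the planted bait or one of
    its simple transformations."""
    return any(variant in traffic_text for variant in bait_variants(bait))
```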
  • the system can look at behavioral patterns and correlate them with the planting events in order to achieve a better accuracy level.
  • the system identifies and blocks information stealing malicious code that is designed to compromise hosts, collect data, and upload it to a remote location, usually without the user's consent or knowledge. Such code is often installed as part of an attacker's toolkit, which is becoming increasingly popular, but it can also be part of a targeted attack scheme.
  • the system can also protect against attempts to steal personal information using methods commonly referred to as “phishing” and “pharming”.
  • the method is based on:
  • the system determines whether the destination site is suspicious, and differentiates accordingly between cases in which users send information to suspicious sites and cases in which the information is sent to benign sites.
  • the system can thereafter employ accordingly different strategies, such that for "suspicious" destinations dissemination of potentially sensitive information is blocked.
  • Suspicious sites can be determined using various heuristics, as described below.
  • the system may also identify cases in which the sensitive private information is posted in cleartext over a non-secure connection, a case that by itself constitutes a problematic situation and thus may justify blocking or quarantining.
  • the private sensitive information may include credit card numbers, social security numbers, ATM PIN, expiration dates of credit-card numbers etc.
  • the system may utilize the categorization and classification of websites and then assess the probability that the site is dangerous or malicious based on this categorization (e.g., using blacklists and whitelists), or employ real-time classification of the content of the destination site, in order to assess its integrity and the probability that the site is malicious.
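  • A toy illustration of such destination-risk heuristics is sketched below. The weights, thresholds and URL features are invented for the example; a production system would rely on the categorization, blacklists and whitelists, and real-time content classification described above.

```python
from urllib.parse import urlparse

def destination_risk(url: str, blacklist: set, whitelist: set) -> float:
    """Rough 0..1 risk score for a destination site, combining list lookups
    with simple URL heuristics; weights and thresholds are illustrative."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host in whitelist:
        return 0.0
    if host in blacklist:
        return 1.0
    score = 0.0
    if parsed.scheme != "https":
        score += 0.3   # sensitive details would travel in cleartext
    if host.replace(".", "").isdigit():
        score += 0.3   # raw IP address as destination
    if any(ch.isdigit() for ch in host.split(".")[0]):
        score += 0.2   # look-alike hosts such as "paypa1-secure"
    if host.count("-") >= 2 or len(host) > 40:
        score += 0.2   # long, hyphen-laden phishing-style domains
    return min(score, 1.0)
```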
  • the system can also assess the strength of the password in order to assess the sensitivity level: strong passwords "deserve" higher protection, while common passwords, which can be easily guessed using a basic "dictionary attack", can be considered less sensitive. Note that sites that require strong passwords are in general more sensitive (e.g., financial institutions), while in many cases users select common passwords for "entertainment sites".
  • the strength of the password is determined according to at least one of the following parameters:
  • the strength and the entropy of the password are evaluated using the methods described in Appendix A of the National Institute of Standards and Technology (NIST) Special Publication 800-63, Electronic Authentication Guideline, the contents of which is hereby incorporated herein by reference in its entirety.
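  • The password-strength assessment could be approximated as in the sketch below. It only follows the spirit of NIST SP 800-63 Appendix A (entropy estimated from character-class pool size and length, plus a common-password check); the exact NIST tables are not reproduced, and the threshold values and names used here are assumptions.

```python
import math
import string

COMMON_PASSWORDS = {"1234", "password", "qwerty", "letmein", "111111"}

def estimated_entropy_bits(password: str) -> float:
    """Rough entropy estimate: log2(pool size) per character, with the pool
    inferred from the character classes used in the password."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += 32
    return len(password) * math.log2(pool) if pool else 0.0

def password_sensitivity(password: str) -> str:
    """Map password strength to a sensitivity level for the leaving data."""
    bits = estimated_entropy_bits(password)
    if password.lower() in COMMON_PASSWORDS or bits < 28:
        return "low"      # dictionary-guessable, e.g. an "entertainment site" password
    if bits < 50:
        return "medium"
    return "high"         # strong password, treat the information as highly sensitive
```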
  • FIG. 6 illustrates a method for protection against phishing and pharming attempts. Specifically, the electronic traffic is monitored and analyzed at stage A, 610, possibly using a system that is also used for other applications, such as monitoring and prevention of unauthorized dissemination of information, as described, e.g., in U.S. Published Patent Application Nos. 2002/0129140, entitled "A System and a Method for Monitoring Unauthorized Transport of Digital Content", and 2005/0288939, entitled "A Method and System for Managing Confidential Information", the contents of which are hereby incorporated by reference herein in their entirety.
  • detectors of sensitive information detect sensitive information such as passwords, usernames, mother maiden names, etc.
  • the sensitivity level of the sensitive information is assessed, e.g., by analyzing password strength as explained above, by counting the number of personal details etc.
  • the level of risk is assessed using various heuristics, including geolocation, analysis of the URL, previous knowledge about the site, analysis of the content of the site etc.
  • the system decides about the required action (such as blocking, quarantine, alert etc.) based on both the sensitivity level and the risk, and at stage F, 660, the system enforces the required action accordingly.
  • A low-risk, low-sensitivity case: e.g., sending the password "1234" to a hobby-related site.
  • A high-risk, high-sensitivity case: e.g., sending many personal details and a strong password in cleartext to a doubtful site.
  • Cases in the "gray area": e.g., "medium sensitivity - low risk" or "medium risk - low sensitivity".
  • the operator of the system can set parameters that will reflect the organizational trade-off in the risk-sensitivity two-dimensional plane.
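  • A minimal sketch of stage E, mapping a point in the risk-sensitivity plane to an action, is given below. The multiplicative trade-off and the default thresholds are illustrative stand-ins for the operator-set policy parameters mentioned above.

```python
def required_action(sensitivity: float, risk: float, policy=None) -> str:
    """Map a point in the risk-sensitivity plane to an enforcement action.

    sensitivity and risk are normalized to [0, 1]; the thresholds are the
    operator-set policy parameters (the defaults here are illustrative)."""
    policy = policy or {"block": 0.7, "quarantine": 0.45, "alert": 0.25}
    severity = sensitivity * risk   # one simple trade-off; others are possible
    if severity >= policy["block"]:
        return "block"
    if severity >= policy["quarantine"]:
        return "quarantine"
    if severity >= policy["alert"]:
        return "alert"
    return "allow"

# e.g. the password "1234" sent to a hobby site (low, low) -> "allow", while
# many personal details plus a strong password sent in cleartext to a
# doubtful site (high, high) -> "block".
print(required_action(0.1, 0.1), required_action(0.9, 0.9))
```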
  • FIG. 7 is an illustration of a system for protection against phishing and pharming attempts, constructed in accordance with the method described in FIG. 6.
  • a management unit 710 is used for setting a policy for protecting computerized devices 720 within the organizational perimeter 730, optionally according to parameters inserted by an operator 740 (e.g., parameters that will reflect the organizational trade-off in the risk-sensitivity two-dimensional plane, as explained above).
  • a traffic analyzer 750 on a gateway 760 monitors incoming and outgoing traffic from at least one computerized device 720 to a site 780 and analyzes the sensitivity and the risk involved in the scenario. The results are sent for analysis to the decision system 770, which decides about the required action and sends instructions accordingly (such as "block", "quarantine” or "alert”) to the gateway 760.
  • the system of FIG. 7 can perform a weak validation to check whether the disseminated password is, with a high probability, the password used by a user to access his account (or other sensitive resources) inside the organization, without revealing significant information to an attacker who gains access to a weak validation file.
  • This is in contrast to files that allow "strong validation" of passwords using their hash values; such files are known to be highly vulnerable to attacks commonly known as "dictionary attacks".
  • the weak validation method may be based on a Bloom filter, as described in "Space/Time Trade-offs in Hash Coding with Allowable Errors", by Burton H. Bloom, Communications of the ACM, 13(7), 422-426, 1970, the contents of which are hereby incorporated herein by reference in their entirety.
  • the Bloom filter can assign a tunable probability to the existence of passwords from the organization's password file. When the system tests for the existence of a password in the file, it queries the Bloom filter. If the Bloom filter returns "no", then the password does not exist in the file. If the Bloom filter returns "yes", then it is probable that the password exists in the file (and therefore in the organization).
  • the Bloom filter therefore provides a probabilistic indication of the existence of a password in the organization, and this probabilistic indication p is tunable by the design of the filter. If p equals, e.g., 0.9, then there is a false-positive rate of 0.1. Since this validation appears in the context of password dissemination, which by itself conveys a potential risk, this level of false positives is acceptable while monitoring normal traffic.
  • SSNs are 9-digit numbers; even if they are represented by strong cryptographic hashes, one can easily conduct an effective dictionary attack over all the valid social security numbers. Utilizing the weak validation method described above, one can assess whether a disseminated 9-digit number is, with a high probability, an SSN from the database.
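  • A minimal, self-contained Bloom filter illustrating the weak validation idea is sketched below. The sizing formulas are the standard ones for a target false-positive rate; the class and method names are assumptions, and a deployed system would build the filter from the organization's password or SSN file rather than from the toy values shown.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter for weak validation: a "no" answer is definite,
    a "yes" answer is probabilistic with a tunable false-positive rate."""

    def __init__(self, expected_items: int, false_positive_rate: float = 0.1):
        # Standard sizing: m bits and k hash functions for the target rate.
        self.m = max(8, int(-expected_items * math.log(false_positive_rate)
                            / (math.log(2) ** 2)))
        self.k = max(1, int((self.m / expected_items) * math.log(2)))
        self.bits = bytearray(self.m // 8 + 1)

    def _positions(self, item: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Usage: build the filter from the organization's password (or SSN) file and
# test values seen leaving the network. False proves the value is not in the
# file; True means it probably is, with the false-positive rate chosen above.
org_values = BloomFilter(expected_items=10000, false_positive_rate=0.1)
org_values.add("correct horse battery staple")
print(org_values.probably_contains("hunter2"))   # almost certainly False
```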
  • DSP: digital signal processor
  • ASIC: application-specific integrated circuit
  • FPGA: field-programmable gate array
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
EP09721776A 2008-03-19 2009-03-17 Verfahren und system zum schutz vor informationen stehlender software Withdrawn EP2272024A2 (de)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US12/051,579 US9015842B2 (en) 2008-03-19 2008-03-19 Method and system for protection against information stealing software
US12/051,616 US9130986B2 (en) 2008-03-19 2008-03-19 Method and system for protection against information stealing software
US12/051,670 US8407784B2 (en) 2008-03-19 2008-03-19 Method and system for protection against information stealing software
PCT/US2009/037435 WO2009117445A2 (en) 2008-03-19 2009-03-17 Method and system for protection against information stealing software

Publications (1)

Publication Number Publication Date
EP2272024A2 true EP2272024A2 (de) 2011-01-12

Family

ID=40736626

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09721776A Withdrawn EP2272024A2 (de) 2008-03-19 2009-03-17 Verfahren und system zum schutz vor informationen stehlender software

Country Status (5)

Country Link
EP (1) EP2272024A2 (de)
CN (1) CN101978376A (de)
AU (1) AU2009225671A1 (de)
CA (1) CA2718594A1 (de)
WO (1) WO2009117445A2 (de)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012057737A1 (en) * 2010-10-26 2012-05-03 Hewlett-Packard Development Company, L. P. Methods and systems for detecting suspected data leakage using traffic samples
CN103607392A (zh) * 2010-12-14 2014-02-26 华为数字技术(成都)有限公司 一种防范钓鱼攻击的方法及装置
CN102098285B (zh) * 2010-12-14 2013-12-04 华为数字技术(成都)有限公司 一种防范钓鱼攻击的方法及装置
JP5624938B2 (ja) * 2011-05-13 2014-11-12 日立オムロンターミナルソリューションズ株式会社 自動取引装置および自動取引システム
CN102801688B (zh) * 2011-05-23 2015-11-25 联想(北京)有限公司 一种数据访问的方法、装置及支持数据访问的终端
CN103294950B (zh) * 2012-11-29 2016-07-06 北京安天电子设备有限公司 一种基于反向追踪的高威窃密恶意代码检测方法及系统
CN103177204B (zh) * 2013-03-29 2016-09-28 北京奇虎科技有限公司 密码信息提示方法及装置
MY184389A (en) * 2013-05-17 2021-04-01 Mimos Berhad Method and system for detecting keylogger
US9357397B2 (en) * 2014-07-23 2016-05-31 Qualcomm Incorporated Methods and systems for detecting malware and attacks that target behavioral security mechanisms of a mobile device
CN105512020B (zh) * 2014-09-24 2018-05-04 阿里巴巴集团控股有限公司 测试方法及装置
CN105447385B (zh) * 2014-12-08 2018-04-24 哈尔滨安天科技股份有限公司 一种多层次检测的应用型数据库蜜罐实现系统及方法
CN105141610A (zh) * 2015-08-28 2015-12-09 百度在线网络技术(北京)有限公司 钓鱼页面检测方法及系统
CN106549960A (zh) * 2016-10-27 2017-03-29 北京安天电子设备有限公司 一种基于网络监控追踪攻击者的方法及系统
CN108256323A (zh) * 2016-12-29 2018-07-06 武汉安天信息技术有限责任公司 一种针对钓鱼应用的检测方法及装置
CN108830089B (zh) * 2018-05-16 2022-04-08 哈尔滨工业大学 高频数据传输中电磁辐射信息泄漏的主动防护系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1147795C (zh) * 2001-04-29 2004-04-28 北京瑞星科技股份有限公司 检测和清除已知及未知计算机病毒的方法、系统
US7636943B2 (en) * 2005-06-13 2009-12-22 Aladdin Knowledge Systems Ltd. Method and system for detecting blocking and removing spyware
US7721333B2 (en) * 2006-01-18 2010-05-18 Webroot Software, Inc. Method and system for detecting a keylogger on a computer
WO2009032379A1 (en) * 2007-06-12 2009-03-12 The Trustees Of Columbia University In The City Of New York Methods and systems for providing trap-based defenses

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2009117445A2 *

Also Published As

Publication number Publication date
CA2718594A1 (en) 2009-09-24
AU2009225671A1 (en) 2009-09-24
WO2009117445A2 (en) 2009-09-24
CN101978376A (zh) 2011-02-16
WO2009117445A3 (en) 2009-11-12

Similar Documents

Publication Publication Date Title
US9455981B2 (en) Method and system for protection against information stealing software
US9495539B2 (en) Method and system for protection against information stealing software
US8959634B2 (en) Method and system for protection against information stealing software
WO2009117445A2 (en) Method and system for protection against information stealing software
US7890612B2 (en) Method and apparatus for regulating data flow between a communications device and a network
JP6104149B2 (ja) ログ分析装置及びログ分析方法及びログ分析プログラム
US7681234B2 (en) Preventing phishing attacks
US9106680B2 (en) System and method for protocol fingerprinting and reputation correlation
EP2147390B1 (de) Detektion von gegnern durch sammeln und korrelation von bewertungen
CN102246490A (zh) 对不需要的软件或恶意软件进行分类的系统和方法
Biju et al. Cyber attacks and its different types
Kalla et al. Phishing detection implementation using databricks and artificial Intelligence
Altwairqi et al. Four most famous cyber attacks for financial gains
Tanwar et al. Classification and impact of cyber threats in India: a review
CA2587867C (en) Network security device
Alnabulsi et al. Protecting code injection attacks in intelligent transportation system
Waziri Website forgery: Understanding phishing attacks and nontechnical Countermeasures
Ruhani et al. Keylogger: The Unsung Hacking Weapon
Abbas et al. A comprehensive approach to designing internet security taxonomy
Sarkunavathi et al. A Detailed Study on Advanced Persistent Threats: A Sophisticated Threat
Vakil et al. Cyber Attacks: Detection and Prevention
Khanday et al. Intrusion Detection Systems for Trending Cyberattacks
Berchi et al. Security Issues in Cloud-based IoT Systems
Sarowa et al. Analysis of Cyber Attacks and Cyber Incident Patterns over APCERT Member Countries
Muthengi Combating current and emerging cybercrimes in Kenya

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20101019

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA RS

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20141001