WO2004079514A2 - Feedback loop for spam prevention - Google Patents

Feedback loop for spam prevention

Info

Publication number
WO2004079514A2
WO2004079514A2 (PCT/US2004/005501)
Authority
WO
WIPO (PCT)
Prior art keywords
spam
messages
polling
message
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2004/005501
Other languages
English (en)
French (fr)
Other versions
WO2004079514A3 (en)
Inventor
Robert L. Rounthwaite
David E. Heckerman
John D. Mehr
Nathan D. Howell
Micah C. Rupersburg
Dean A. Slawson
Joshua T. Goodman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Corp
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to JP2006508818A priority Critical patent/JP4828411B2/ja
Priority to BR0407045-3A priority patent/BRPI0407045A/pt
Priority to AU2004216772A priority patent/AU2004216772B2/en
Priority to MXPA05008303A priority patent/MXPA05008303A/es
Priority to NZ541628A priority patent/NZ541628A/en
Priority to EP04714607A priority patent/EP1599781A4/en
Priority to CA2513967A priority patent/CA2513967C/en
Publication of WO2004079514A2 publication Critical patent/WO2004079514A2/en
Priority to NO20053733A priority patent/NO20053733L/no
Priority to IL170115A priority patent/IL170115A/en
Priority to EGNA2005000502 priority patent/EG23988A/xx
Anticipated expiration legal-status Critical
Publication of WO2004079514A3 publication Critical patent/WO2004079514A3/en
Priority to IL206121A priority patent/IL206121A/en
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21 Monitoring or handling of messages
    • H04L51/212 Monitoring or handling of messages using filtering or selective blocking
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/107 Computer-aided management of electronic mailing [e-mailing]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/08 Annexed information, e.g. attachments

Definitions

  • A key technique utilized to thwart junk e-mail is the employment of filtering systems/methodologies.
  • One proven filtering technique is based upon a machine learning approach - machine learning filters assign to an incoming message a probability that the message is junk.
  • features typically are extracted from two classes of example messages (e.g., junk and non-junk messages), and a learning filter is applied to discriminate probabilistically between the two classes. Since many message features are related to content (e.g., words and phrases in the subject and/or body of the message), such types of filters are commonly referred to as "content-based filters".
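  • As an illustrative sketch only (not the patent's implementation), such a content-based machine learning filter can be approximated in a few lines of Python; scikit-learn and the toy messages below are assumptions:

        # Toy content-based filter: extracts word features from two classes of
        # example messages and assigns incoming mail a probability of being junk.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        messages = ["cheap meds buy now", "project meeting at noon",
                    "win a free prize now", "lunch tomorrow?"]
        labels = [1, 0, 1, 0]  # 1 = junk, 0 = non-junk (invented examples)

        vectorizer = CountVectorizer()               # word/phrase features
        X = vectorizer.fit_transform(messages)
        clf = LogisticRegression().fit(X, labels)

        incoming = vectorizer.transform(["free meds prize"])
        print(clf.predict_proba(incoming)[0][1])     # estimated P(junk)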
  • Some junk/spam filters are adaptive, which is important in that multilingual users and users who speak rare languages need a filter that can adapt to their specific needs.
  • Another adaptive filter training approach is to employ implicit training cues. For example, if the user(s) replies to or forwards a message, the approach assumes the message to be non-junk. However, using only message cues of this sort introduces statistical biases into the training process, resulting in filters of lower respective accuracy.
  • Still another approach is to utilize all user(s) e-mail for training, where initial labels are assigned by an existing filter and the user(s) sometimes overrides those assignments with explicit cues (e.g., a "user-correction" method, such as selecting options like "delete as junk" and "not junk") and/or with implicit cues.
  • the subject invention provides for a feedback loop system and method that facilitates classifying items in connection with spam prevention.
  • the invention makes use of a machine-learning approach as applied to spam filters, and in particular, randomly samples incoming email messages so that examples of both legitimate and junk/spam mail are obtained to generate sets of training data.
  • Pre-selected individuals serve as spam fighters and participate in categorizing respective replications (which optionally can be slightly modified) of the samples.
  • messages selected for polling are modified in various aspects to appear as polling messages.
  • a unique aspect of the invention is that a copy of an incoming message selected for polling is made such that some users (e.g., spam fighters) will receive the same message (e.g., in terms of message content) twice: once in the form of a polling message and again, in its original form.
  • Another unique aspect of the subject invention is all messages are considered for polling - including those which have been labeled as spam by existing filters. Spam-labeled messages are considered for polling and if selected, are not treated as spam according to specifications of the existing filter (e.g., move to junk folder, delete).
  • more accurate spam filters can be created by training spam filters in accordance with the feedback technique of the subject invention so as to learn to distinguish between good mail and spam, thereby mitigating biased and inaccurate filtering.
  • the feedback is accomplished at least in part by polling any suitable number of users to obtain feedback on their incoming email. Users, identified as spam-fighters, are tasked with voting on whether a selection of incoming messages is either legitimate mail or junk mail. Both positive and negative classifications of incoming email are desired to mitigate improperly filtering out as spam mail that is good (e.g., not spam) intended for a user.
  • the respective classifications along with any other information associated with each mail transaction are moved to a database to facilitate training the spam filters.
  • the database and related components can compile and store properties for selected message(s) (or selected mail transaction), which includes user properties, user voting information and histories, message properties such as unique identification numbers assigned to each selected message, message classifications, and message content summaries, or statistical data related to any of the above, to generate sets of training data for machine learning systems.
  • Machine learning systems (e.g., neural networks, Support Vector Machines (SVMs), Bayesian belief networks) facilitate creating improved spam filters that are trained to recognize both legitimate mail and spam mail and, further, to distinguish between them.
  • a new spam filter Once a new spam filter has been trained in accordance with the invention, it can be distributed to mail servers and client email software programs. Furthermore, the new spam filter can be trained with respect to a specific user(s) to improve performance of a personalized filter(s). As new training data sets are built, the spam filter can undergo further training via machine learning to optimize its performance and accuracy.
  • User feedback by way of message classification can also be utilized to generate lists for spam filters and parental controls, to test spam filter performance, and/or to identify spam origination.
  • Cross-validation involves training a filter from which the polling results of some users are excluded; that is, the filter is trained using polling results from a subset of users. Even allowing for some mistakes, this subset is, on average, reliable enough to detect those users whose votes generally disagree with it.
  • the polling results from the excluded users are compared to those of the trained filter. This comparison essentially determines how the users from the training subset would have voted on the messages belonging to the excluded users. If the agreement between an excluded user's votes and the filter is low, then the polling results from that user can either be discarded or marked for manual inspection. This technique can be repeated as desired, excluding data from different users each time.
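  • A hedged sketch of this cross-validation check, in Python with scikit-learn (the vote format, threshold, and helper names are illustrative assumptions, not the patent's code):

        # For each user, train a filter on everyone else's polling results and
        # measure how often the held-out user's votes agree with that filter.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        def suspect_users(votes_by_user, threshold=0.7):
            """votes_by_user: {user: [(message_text, vote)]}, vote 1=spam, 0=good.
            The pooled training votes must contain both classes."""
            suspects = set()
            for user in votes_by_user:
                train = [(t, v) for u, pairs in votes_by_user.items()
                         if u != user for t, v in pairs]
                texts, labels = zip(*train)
                vec = CountVectorizer()
                clf = LogisticRegression().fit(vec.fit_transform(texts), labels)
                held = votes_by_user[user]
                preds = clf.predict(vec.transform([t for t, _ in held]))
                agree = sum(int(p == v) for p, (_, v) in zip(preds, held)) / len(held)
                if agree < threshold:
                    suspects.add(user)  # discard votes or mark for manual inspection
            return suspects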
  • Mistakes on individual messages can also be detected such as a message on which the filter and the user vote strongly disagree. These messages can be flagged for either automatic removal and/or manual inspection.
  • a filter can be trained on all or substantially all users. The user votes and/or messages that disagree with the filter can be discarded.
  • Another alternative to cross- validation involves known result test messages in which the user(s) is asked to vote on a message(s) where the result is known. Accurate classification (e.g., user vote matches filter action) of the message by the user verifies the user's trustworthiness and determines whether to remove the user's classifications from training, and whether to remove the user from future polling.
  • a known spam target is an email address where the set of legitimate mail can be determined and all other mail can be considered spam.
  • the email address can be disclosed on a website in a restrictive manner not likely to be found by people. Hence, any mail sent to this address can be considered spam.
  • the email address may have only been disclosed to a merchant from whom legitimate mail is expected to be received. Thus, mail received from the merchant is legitimate mail, but all other mail received can safely be considered spam.
  • Spam data derived from honeypots and/or other sources can be integrated into the feedback loop system, but because of the substantial increase in spam classification with honeypots, such data should be down weighted, as will be described infra in greater detail, to mitigate obtaining biased polling results.
  • Another aspect of the invention provides for quarantining messages which are deemed uncertain either by the feedback loop system or by the filter. Such messages are held for any suitable period of time instead of being discarded or classified. This time period can be set in advance, or the message can be held until receipt of a determined number of poll results similar to the message, e.g., from the same IP address or with similar content.
  • Fig. 1A is a block diagram of a feedback loop training system in accordance with an aspect of the present invention.
  • Fig. 1B is a flow diagram of an exemplary feedback loop training process in accordance with an aspect of the present invention.
  • Fig. 2 is a flow diagram of an exemplary method that facilitates mail classification by users to create spam filters in accordance with an aspect of the present invention.
  • Fig. 3 is a flow diagram of an exemplary method that facilitates cross-validation of users participating in the method of Fig. 2 in accordance with an aspect of the present invention.
  • Fig. 4 is a flow diagram of an exemplary method that facilitates determining whether users are untrustworthy in accordance with an aspect of the present invention.
  • Fig. 5 is a flow diagram of an exemplary method that facilitates catching spam and determining spam originators in accordance with an aspect of the present invention.
  • Fig. 6 is a block diagram of a client-based feedback loop architecture in accordance with an aspect of the present invention.
  • Fig. 7 is a block diagram of a server-based feedback loop system having one or more users that generate training data in accordance with an aspect of the present invention.
  • Fig. 8 is a block diagram of a cross-organizational server-based feedback loop system wherein the system includes an internal server with its own database to pull training data stored on external user databases in accordance with an aspect of the present invention.
  • Fig. 9 illustrates an exemplary environment for implementing various aspects of the invention.
  • Fig. 10 is a schematic block diagram of an exemplary communication environment in accordance with the present invention.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a server and the server can be a component.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • the subject invention can incorporate various inference schemes and/or techniques in connection with generating training data for machine learned spam filtering.
  • the term "inference” refers generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example.
  • the inference can be probabilistic - that is, the computation of a probability distribution over states of interest based on a consideration of data and events.
  • Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
  • message is employed extensively throughout the specification, such term is not limited to electronic mail per se, but can be suitably adapted to include electronic messaging of any form that can be distributed over any suitable communication architecture.
  • conferencing applications that facilitate a conference between two or more people (e.g., interactive chat programs, and instant messaging programs) can also utilize the filtering benefits disclosed herein, since unwanted text can be electronically interspersed into normal chat messages as users exchange messages and/or inserted as a lead-off message, a closing message, or all of the above.
  • a filter could be trained to automatically filter particular message content (text and images) in order to capture and tag as junk the undesirable content (e.g., commercials, promotions, or advertisements).
  • the term "recipient” refers to an addressee of an incoming message or item.
  • the term “user” refers to a recipient who has chosen, either passively or actively, to participate in the feedback loop systems and processes as described herein.
  • a message receipt component 12 receives and delivers incoming messages (denoted as IM) to intended recipients 14.
  • the message receipt component can include at least one filter 16 as is customary with many message receipt components (e.g., junk mail filter) to mitigate delivery of undesirable messages (e.g., spam).
  • the message receipt component 12 in connection with the filter 16 processes the messages (IM) and provides a filtered subset of the messages (IM') to the intended recipients 14.
  • a polling component 18 receives all of the incoming messages (IM) and identifies the respective intended recipients 14.
  • the polling component selects a subset of the intended recipients 14 (referred to as spam fighters 20) to classify a subset of the incoming messages (denoted as IM") as spam or not spam, for example.
  • the classification-related information (denoted as VOTING INFO) is submitted to a message store/vote store 22, where the voting information as well as copies of the respective IM" are stored for later use, such as by a feedback component 24.
  • the feedback component 24 employs machine learning techniques (e.g., neural networks, SVMs, Bayesian networks or any machine learning system suitable for employment with the subject invention) which make use of the voting information to train and/or improve the filter 16 (and/or build new filter(s)) with respect to identifying spam mail, for example.
  • the system 10 facilitates the identification of spam and the training of improved spam filters by utilizing feedback generated by spam fighters 20.
  • Such a feedback aspect of the subject invention provides a rich and highly dynamic scheme for refining a spam detection system. Various details regarding more granular aspects of the subject invention are discussed below.
  • Fig. 1B illustrates a feedback loop training flow diagram 100 in connection with spam fighting and spam prevention in accordance with the subject invention.
  • users are selected to be spam-fighters (e.g., from a master set comprising all email users) - the selection can be based on a random sampling, or level of trust, or any suitable selection scheme/criteria in accordance with the subject invention.
  • the selected subset of users can include all users, a randomly selected set of users, those who have opted in as spam fighters, or those who have not opted out, and/or any combination thereof, and/or based in part upon their demographic location and related information.
  • the master set of email users from which the selection is made can be limited to paying users, which can make it more expensive for spammers to subvert the subject invention.
  • a subset of users selected to participate in the spam fighting could comprise only paying users.
  • a list or customer table including the names and properties of the selected users (e.g., spam fighters) can then be created.
  • a recipient of each message is checked against a list of all spam fighters at 104. If the recipient is on the list, then the message is considered for polling. Next, a determination is made whether to select a message for polling. Unlike conventional spam filters, the invention does not delete any messages (e.g., spam) until at least after all incoming mail is considered for polling. That is, the mail is classified before it is subjected to any labeling (e.g., spam, non-spam) - this facilitates obtaining an unbiased sample of messages available for user polling.
  • a component for message selection (not shown) can be employed to select messages with some random probability to mitigate bias of data.
  • Another approach involves using demographic information as well as other user/recipient attributes and properties.
  • messages can be selected based at least in part upon the user/recipient.
  • Other alternative algorithms exist for selecting messages.
  • a spammer could create an account, send it millions of spam messages, and classify all such messages as good: this would allow the spammer to corrupt the training database with incorrectly labeled messages.
  • Some forms of spam filtering, notably black hole lists, may not be skippable. Black hole lists prevent a server from receiving any mail from a list of Internet Protocol (IP) addresses. Therefore, the selection of messages can be chosen from the set of mail which is not from a black hole list.
  • a unique aspect of the invention is that messages selected for polling, which are marked as spam by filters currently in place, are not deleted or moved to a junk mail folder. Instead, they are placed in a usual inbox or mailbox where all other messages are received for polling consideration. However, if there are two copies of the message, and the message is considered as spam by the filter, then one copy is delivered to the spam folder or otherwise treated according to set parameters (e.g., deleted, specially marked, or moved to junk folder). When a message is selected, it is forwarded to the user and marked in some special way to indicate that it is a polling message. In particular, the selected message can be modified by a message modification component 106.
  • Examples of message modification include, but are not limited to, locating the polling message in a separate folder, changing the 'from' address or the subject line, and/or using a special icon or special color that would identify the message as a polling message to the user.
  • the selected message can also be encapsulated within another message, which would provide instructions to the user on how to vote on and/or classify the encapsulated message. These instructions can include at least two buttons or links: one to vote the message as spam and one to vote the message as not spam, for example.
  • the voting buttons can be implemented by modifying the contents of the message before sending a copy of the polling message to the user.
  • the user interface can be modified to include the voting buttons.
  • the polling message can contain instructions and voting buttons as well as the selected message attached thereto.
  • the polling message can also comprise a summary of the selected message such as the subject line, from address, date sent and/or received, and the text or at least the first few lines of the text.
  • Another approach involves sending the message with the voting instructions and voting buttons prepended thereto.
  • buttons including, but not limited to, "spam” and “not spam” buttons can pop up on the user interface or can be incorporated into the polling message.
  • each polling message contains a set of instructions and suitable voting buttons. Other modifications may be necessary, including possibly removing HTML background instructions (which could obscure the text of instructions or buttons).
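  • A minimal sketch of such message modification using Python's standard email library; the sender address, vote endpoint, and identifier below are hypothetical placeholders:

        # Wrap a selected message as a polling message: voting instructions and
        # two vote links are prepended, and the original is attached intact.
        from email.message import EmailMessage

        def make_polling_message(original, msg_id):
            poll = EmailMessage()
            poll["Subject"] = "[Polling] Please classify the attached message"
            poll["From"] = "feedback-loop@example.com"   # placeholder address
            poll["To"] = original["To"]
            base = "https://example.com/vote"            # hypothetical endpoint
            poll.set_content(
                "Please vote on the attached message:\n"
                f"  Spam:     {base}?id={msg_id}&vote=spam\n"
                f"  Not spam: {base}?id={msg_id}&vote=good\n")
            poll.add_attachment(original)                # attached as message/rfc822
            return poll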
  • Another button such as a "solicited commercial email” button can also be provided, depending on the type of information that is desired.
  • the message can also include a button/link to opt-out of future polling.
  • the instructions are localized to the user's preferred language and can be embedded into the polling message.
  • messages selected for polling can be scanned for viruses by the message modification component 106 or by some other suitable virus scanning component (not shown). If a virus is found, the virus can either be stripped away or the message can be discarded. It should be appreciated that virus stripping can occur at any point of the system 100, including when the message is selected and right before the user downloads the message.
  • a message delivery component 108 delivers the polling message to the user for voting.
  • User feedback (e.g., the polling message, the user's vote, and any user properties associated therewith) is assigned a unique identifier (ID) 110 (e.g., metadata).
  • the ID 110 and/or the information corresponding thereto are submitted to a message store/vote store 112 (e.g., central database), where the user classifications/votes are compiled and stored.
  • selected messages available for polling can be kept for later polling or use.
  • the database can perform frequency analyses on a timed basis to make sure that a particular user is not being over sampled and that an amount of data is being collected from the user within limits as specified by the user.
  • the feedback system 100 monitors a percentage limit of a user's mail as well as the sampling period to mitigate bias of both sampling and data. This is especially important where users are selected from all available users, including both low usage and high usage users. For example, a low usage user typically receives and sends a significantly lower volume of mail compared to a high usage user. Thus, the system 100 monitors the message selection process to be certain that approximately one out of every T messages received by the user is selected, and no more than 1 message every Z hours. For example, the system can sample 1 out of every 10 incoming messages (e.g., consider them for polling), but no more than 1 every 2 hours. The frequency, or percentage, limit mitigates sampling a disproportionate amount of messages from a low usage user as compared to a high usage user, and also mitigates overly annoying a user.
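  • The T-and-Z limits above can be sketched as follows (illustrative Python; the in-memory table stands in for the per-user state the system would keep):

        import random
        import time

        T = 10             # consider roughly 1 out of every T messages
        Z_HOURS = 2        # but no more than 1 polling message every Z hours
        last_polled = {}   # user -> timestamp of the last polled message

        def select_for_polling(user, now=None):
            now = time.time() if now is None else now
            if now - last_polled.get(user, 0.0) < Z_HOURS * 3600:
                return False               # frequency limit not yet elapsed
            if random.random() >= 1.0 / T:
                return False               # random 1-in-T selection
            last_polled[user] = now
            return True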
  • the central database 112 scans for messages which have been sampled by the system 100 for polling but that have not been classified. The database pulls these messages and localizes them relative to respective user's demographic properties and creates polling messages to request the user(s) to vote and classify the message(s).
  • the spam filter may not be modified or trained immediately after receipt of every new incoming classification. Rather, offline training allows a trainer to continually look at the data received into the database 112 on a scheduled, ongoing, or daily basis. That is, the trainer starts from a prescribed starting point or at a set amount of time in the past and looks at all the data from that point forward to train the filter. For example, the prescribed time period can be from midnight to 6:00 AM.
  • the new spam filter can be trained on an ongoing basis by analyzing the message classifications maintained in the database 112 by way of machine-learning techniques 114 (e.g., neural networks, support vector machines (SVMs)).
  • machine learning techniques require both examples of good mail and spam to learn from so that they can learn to distinguish between them. Even techniques based on matching known examples of spam can benefit from having examples of good mail, so that they can make sure they do not accidentally catch good mail.
  • the new filter 116 can be distributed on an ongoing basis by a distribution component 118 across participating internet service providers (ISP), to the email or message servers, to individual email clients, to an update server, and/or to the central databases of individual companies.
  • the feedback system 100 functions on an ongoing basis such that samples of messages considered and utilized for polling can follow an actual distribution of email received by the system 100.
  • training data sets employed to train new spam filters are kept current with respect to adaptive spammers.
  • polling data can be discarded or down weighted (e.g., discounted) based on how long ago it was obtained.
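  • One plausible down-weighting scheme is exponential decay by age; the half-life value here is an assumption for illustration, not a figure from the patent:

        HALF_LIFE_DAYS = 30.0  # assumed decay rate

        def sample_weight(age_days):
            """Weight of a polling result that is age_days old."""
            return 0.5 ** (age_days / HALF_LIFE_DAYS)

        # e.g., a 60-day-old vote counts a quarter as much as a fresh one:
        assert abs(sample_weight(60) - 0.25) < 1e-9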
  • the system 100 can be implemented when mail is received at a server such as a gateway server, email server, and/or message server. For instance, when mail comes into an email server, the server looks up the properties of the intended recipients to determine whether the recipients have opted in to the system 100. If their properties indicate as such, the recipients' mail is potentially available for polling.
  • Client-only architectures also exist. For example, client email software can make the polling decisions for a single user and deliver the email either to a central database or use the polling information to improve the performance of a personalized filter.
  • other alternative architectures for this system 100 exist and such are contemplated to fall within the scope of the present invention.
  • Referring to Fig. 2, there is illustrated a flow diagram of a basic feedback loop process 200 in accordance with one aspect of the present invention. While, for purposes of simplicity of explanation, the methodology is shown and described as a series of acts, it is to be understood and appreciated that the present invention is not limited by the order of acts, as some acts may, in accordance with the present invention, occur in different orders and/or concurrently with other acts from those shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the present invention.
  • the process 200 begins with mail coming into and being received by a component such as a server at 202.
  • the server identifies properties of intended recipients to determine whether the intended recipients have previously opted in as spam fighters for polling (at 204).
  • the process 200 utilizes a user property field where it can be indicated whether the recipient has opted in to the feedback system, or consults a list of users who have opted in. If the user is determined to be a participant in the feedback system and has been selected for polling at 206, the feedback system takes action by determining which messages are selected for polling (at 208). Otherwise, the process 200 returns to 202 until at least one intended recipient of an incoming message is determined to be a user (e.g., spam fighter).
  • Each message or mail item received by the server has a set of properties corresponding to the mail transaction.
  • the server compiles these properties and sends them along with the polling message to a central database.
  • the properties include the recipient list (e.g., as listed in "To:", "cc:", and/or "bcc:" fields), the verdict of a currently employed filter (e.g., whether the filter identified the message as spam), the verdict of another optional spam filter (e.g., Brightmail filter), and user information (e.g., username, password, real name, frequency of messages polled, usage data).
  • the polling message and/or its contents, as well as the corresponding user/recipient are each assigned a unique identifier. The identifier can also be sent to the database and subsequently updated as needed.
  • the message(s) selected for polling (e.g., original message 1-M, where M is an integer greater than or equal to one) is modified to indicate to the user that the message 1-M is a polling message 1-PM and is then delivered to the user for polling.
  • the polling message can include the original message to be voted on as an attachment and a set of instructions on how to vote on the message.
  • the set of instructions includes at least two buttons such as a "good mail” button and a "spam” button, for example.
  • When the user clicks on one of the buttons (at 218) to classify the message as good mail or spam, the user is directed to a uniform resource locator (URL) that corresponds to a unique identifier for the classification that the user is submitting. This information is posted, and the associated record in the central database for that original message 1-M is updated.
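  • A toy sketch of recording such a vote from the classification URL; the dict stands in for the central database record update, and the URL format is the hypothetical one used in the earlier sketch:

        from urllib.parse import urlparse, parse_qs

        vote_store = {}  # unique message id -> "spam" or "good"

        def handle_vote(url):
            q = parse_qs(urlparse(url).query)
            vote_store[q["id"][0]] = q["vote"][0]  # update the message's record

        handle_vote("https://example.com/vote?id=abc123&vote=spam")
        assert vote_store["abc123"] == "spam"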
  • the original message can optionally be delivered to the user.
  • the user receives the message twice - once in its original form and again in its modified polling form.
  • a new spam filter is created and trained at 220 based at least in part upon user feedback.
  • the filter can be employed immediately on the email server and/or can be distributed to client servers, client email software, and the like (at 222). Training and distributing a new or updated spam filter is an ongoing activity.
  • the process 200 continues at 204 when a new stream of incoming messages is received.
  • As new filters are built, older data is discarded or down weighted based on how long ago it was obtained.
  • the feedback system 100 and process 200 rely on the feedback of its participating users. Unfortunately, some users cannot be trusted or are simply lazy and fail to provide consistent and accurate classifications.
  • the central database 112 (Fig. la) maintains histories of user classifications.
  • the feedback system 100 can track the number of contradictions, the number of times the user changed his/her mind, responses of the user to known good mail or known spam, as well as the number or frequency of user replies to polling messages. When any one of these numbers exceeds a prescribed threshold, or simply for every user of the system, the feedback system 100 can invoke one or several validation techniques to assess the trustworthiness of a particular user or users.
  • One approach is a cross-validation method 300 as illustrated in Fig. 3 in accordance with another aspect of the present invention.
  • the cross-validation technique begins at 302 with a central database receiving incoming data such as polling results and respective user information. Next, it must be determined whether cross-validation is desired to test a suitable number of users at 304. If it is desired, then, a new spam filter is trained using some portion of the incoming data at 306. That is, the data from the users which are being tested is excluded from the training. For example, the filter is trained with about 90% of the polled user data (denoted as the 90% filter), thereby excluding about 10% of the data (denoted as the 10% tested user) which corresponds to the data submitted by the tested user. At 308, the 90% filter is run against the remaining 10% tested user data to determine how the 90% users would have voted on the tested user's messages.
  • the cross-validation technique 300 can be utilized with any suitable set of test users, excluding different users as necessary to determine and maintain the trustworthiness of the voting/classification data.
  • a second approach to assess user fidelity and reliability includes training a filter on all data gathered in a given period, and then testing on the training data, using the filter.
  • This technique is known as test-on-training. If a message was included in the training, the filter should have learned its rating, e.g., the learned filter should classify the message the same way that the user did. However, the filter may continue to make a mistake on it by labeling it as spam when the user labeled it as not spam, or vice versa. In order for a filter to disagree with its training data, the message has to strongly disagree with other messages. Otherwise, the trained filter would almost certainly have found some way to classify it correctly. Thus, the message can be discarded as having an unreliable label. Either this technique or cross-validation may be used: cross-validation finds more classification mistakes, but less reliably; conversely, test-on-training finds fewer mistakes, but more reliably.
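  • A compact sketch of test-on-training (scikit-learn assumed; illustrative, not the patent's code):

        # Train on all labeled data, then re-score the training set itself;
        # messages the learned filter still gets wrong conflict strongly with
        # other messages, so their labels are suspect.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        def unreliable_label_indices(texts, labels):
            """labels must contain both classes (1 = spam, 0 = good)."""
            vec = CountVectorizer()
            X = vec.fit_transform(texts)
            clf = LogisticRegression().fit(X, labels)
            preds = clf.predict(X)  # test on the training data itself
            return [i for i, (p, y) in enumerate(zip(preds, labels)) if p != y]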
  • Both the test-on-training and the cross-validation technique 300 may be applied to individual messages wherein an individual user's classification or rating of a message is excluded by general agreement (e.g., following the majority rating). Alternatively, both techniques can be used to identify potentially unreliable users.
  • a flow diagram of a process 400 to validate the fidelity of user voting in accordance with one aspect of the invention.
  • The process 400 continues from 314 as shown in Fig. 3.
  • a known result test message(s) is sent to suspicious user(s) (or all users). For example, a test message may be injected into the incoming mail and then hand classified so that the database receives the "known" result. Otherwise, the process 400 can wait until a known result message is sent by a third party.
  • the users are allowed to vote on the same test messages.
  • the voting results are compared to the known results at 404.
  • a fourth approach (not shown) to assess user reliability is active learning.
  • In active learning techniques, messages are not picked at random. Instead, the feedback system can estimate how useful a message will be to the system. For instance, if the filter returns a probability of spam, one can preferentially select for polling the messages which are most uncertainly classified by the current filter, i.e., those whose probability of spam is closest to 50%. Another way to select messages is to determine how common the message is. The more common the message, the more useful it is to poll. Unique messages are less useful because they are less common.
  • Active learning can be employed by using the confidence levels of existing filters, by using how common the features of the message are, and by using an existing filter's confidence in its settings or content (e.g., meta-confidence). There are many other active learning techniques, such as query-by-committee, well known to those skilled in the art of machine learning, and any of these techniques can be used.
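  • As a sketch, uncertainty-based selection reduces to ranking candidate messages by how close the current filter's spam probability is to 50% (the scoring function is assumed to exist):

        def most_uncertain(p_spam, candidates, k=10):
            """p_spam: callable mapping message text -> estimated P(spam).
            Returns the k messages the current filter is least sure about."""
            ranked = sorted(candidates, key=lambda t: abs(p_spam(t) - 0.5))
            return ranked[:k]  # poll these first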
  • Honeypots are email addresses for which it is known who should be sending them email. For example, a newly created email address may be kept private and disclosed only to selected individuals (at 502). It may also be disclosed publicly but in restrictive ways not seen by people (e.g., putting it on a white background in white typeface as a mail link). Honeypots are particularly useful for catching dictionary attacks by spammers.
  • a dictionary attack is one in which a spammer tries emailing a very large number of addresses, perhaps all addresses in a dictionary or made from pairs of words in a dictionary or similar techniques in order to find valid addresses.
  • Any email sent to a honeypot (at 504) or any email not from the few selected individuals (at 506) is considered spam (at 508).
  • An email address can also be signed up with a suspect merchant.
  • any email received from the merchant is considered good mail (at 510) but all other mail is considered spam.
  • the spam filter can be trained accordingly (at 512).
  • If spam is received at that address, the suspect merchant is determined to sell or otherwise disclose the user's information (e.g., at least the email address) to third parties. This can be repeated with other suspect merchants, and a list can be generated to warn users that their information could be distributed to spammers.
  • honeypots are a good source of spam but a poor source of legitimate mail
  • the data from honeypots can be combined with data from the feedback loop system (Fig. 1) to train new spam filters.
  • Mail from different sources or different classifications can be weighted differently. For example, if there are 10 honeypots and 10 users who are polled on 10% of their mail, about 10 times as much spam is to be expected from the honeypots as from polling. Therefore, the legitimate mail from polling can be weighted at 10 or 11 times as much as the spam in order to make up for this difference.
  • Alternatively, honeypot data can be selectively down weighted. For example, suppose about 50% of a user's mail is good mail and about 50% of it is spam, and the same volume of spam is going to the honeypots. Because all honeypot mail is sampled, not just 10%, the honeypot appears to consist of 100% spam. Accordingly, the honeypot data is down weighted by 95% and the user spam is down weighted by 50%, resulting in a 1:1 overall ratio of good mail to spam.
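  • The arithmetic behind those weights can be checked directly (the user volume of 100 messages is an assumed illustration):

        user_volume = 100
        polled = user_volume * 0.10              # 10% of the user's mail is polled
        user_good, user_spam = polled / 2, polled / 2   # 5 good, 5 spam
        honeypot_spam = user_volume * 0.50       # same spam volume, all sampled: 50

        weighted_spam = honeypot_spam * 0.05 + user_spam * 0.5  # 2.5 + 2.5
        weighted_good = user_good                               # 5.0
        assert weighted_spam == weighted_good    # 1:1 good-to-spam ratio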
  • Other sources of spam reports include users who are not included as participants in the feedback loop system. For instance, there may be a "Report Spam” button available to all users for all mail, to report spam that has made it through the filter. This data can be combined with data from the feedback loop system. Again, this source of spam should be down weighted or weighted differently since it can be biased or untrustworthy in various aspects. Re-weighting should also be done to reflect the fact that only mail that was not filtered is subject to reporting by the "Report-as-spam" button.
  • a quarantine filter can be created and employed by the feedback loop system.
  • the quarantine filter makes use of both positive and negative mail features. For example, mail from a popular online merchant is almost always good. A spammer exploits the system by mimicking an aspect of the good merchant mail in his spam. Another example is that the spammer intentionally tricks the feedback system by sending small amounts of good mail via an IP address. The feedback loop learns to classify this mail as good mail, at which time the spammer starts sending spam from the same IP address. Thus, the quarantine filter notices when a particular positive feature is being received in much greater quantities than the system is used to on the basis of historical data.
  • the quarantine filter can also be employed when mail is received from a new IP address, for which it is not known or certain whether the mail is spam or not spam and such will not be known for a while. Quarantining can be performed in a number of ways, including provisionally marking the mail as spam and moving it to a spam folder or by not delivering it to the user or storing it somewhere where it will not be seen. Quarantining can be done for messages that are near the spam filter threshold: it can be assumed that additional information from polling will help make a correct decision.
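  • A sketch of the near-threshold rule just described (the threshold and margin values are illustrative assumptions):

        SPAM_THRESHOLD = 0.8
        MARGIN = 0.1

        def disposition(p_spam):
            if abs(p_spam - SPAM_THRESHOLD) < MARGIN:
                return "quarantine"  # hold; later polling results may decide it
            return "spam" if p_spam >= SPAM_THRESHOLD else "inbox"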
  • Quarantining can also be done when many similar messages are received: a few of the messages can be sent for polling with the feedback loop, and the retrained filter can be used to correctly classify the messages.
  • the feedback loop system as described herein can be utilized to evaluate spam filters as well. That is, parameters of the spam filters can be tuned as needed. For example, a filter is trained up through midnight of last night. After midnight, data that comes into the database can be used to determine error rates of the spam filter as compared to the users' classifications. Further, the feedback loop can be employed to determine false positive and catch rates of the spam filter. For example, the user votes can be taken and the mail can be run through a potential filter to determine the false positive and catch rates (see the sketch below). This information can then be used to tune and optimize the filter. Different parameter settings or different algorithms can be tried, manually or automatically, by building several filters, each one using a different setting or algorithm, and the results can be compared to select the filter parameters with the lowest false positive rate and the best catch rate.
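  • For example, the false positive and catch rates can be computed from user votes and a candidate filter's predictions as follows (a sketch with assumed 0/1 encodings):

        def fp_and_catch_rates(preds, votes):
            """preds, votes: parallel lists; 1 = spam, 0 = good mail."""
            good = [p for p, v in zip(preds, votes) if v == 0]
            spam = [p for p, v in zip(preds, votes) if v == 1]
            fp_rate = sum(good) / len(good) if good else 0.0     # good marked spam
            catch_rate = sum(spam) / len(spam) if spam else 0.0  # spam caught
            return fp_rate, catch_rate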
  • the feedback loop can be utilized for building and populating lists of IP addresses or domains or URLs that are always voted as spam or always voted as good, or voted at least 90% good, etc. These lists can be used for spam filtering in other ways. For instance, a list of IP addresses voted at least 90% spam could be used for building a black-hole list of addresses from which to accept no mail.
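  • A sketch of building such a list from vote aggregates; the minimum-vote cutoff is an assumption to avoid listing an IP on a handful of votes:

        from collections import defaultdict

        def spam_ip_list(votes, min_votes=10, cutoff=0.9):
            """votes: iterable of (ip, vote) pairs with vote 1=spam, 0=good.
            Returns IPs voted spam at least `cutoff` of the time."""
            tally = defaultdict(lambda: [0, 0])  # ip -> [spam votes, total votes]
            for ip, v in votes:
                tally[ip][0] += v
                tally[ip][1] += 1
            return [ip for ip, (s, n) in tally.items()
                    if n >= min_votes and s / n >= cutoff]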
  • the feedback loop can also be used to terminate the accounts of spammers. For example, if a particular user of an ISP appears to be sending spam, the ISP can be automatically notified. Similarly, if a particular domain appears responsible for a large amount of spam, the domain's email provider can be automatically notified.
  • There are a number of architectures that can be used to implement the feedback loop system.
  • One exemplary architecture is server based, as will be described in Fig. 7, with the selection process happening when the mail reaches the email server.
  • An alternate architecture is client based, as is described in Fig. 6.
  • polling information can be utilized to improve the performance of a personalized filter, or, in the exemplary implementation illustrated here, the information can be sent to a shared repository as training data for a shared filter (e.g., corporate-wide or global).
  • a network 600 is provided to facilitate communication of e-mail to and from one or more clients 602, 604, and 606 (also denoted as CLIENT 1, CLIENT 2, ..., CLIENT N, where N is an integer greater than or equal to one).
  • the network can be a global communication network (GCN) such as the internet, or a WAN (Wide Area Network), LAN (Local Area Network), or any other network configuration.
  • an SMTP gateway server 608 interfaces to the network 600 to provide SMTP services to a LAN 610.
  • An email server 612 operatively disposed on the LAN 610 interfaces to the gateway 608 to control and process incoming and outgoing email of the clients 602, 604, and 606.
  • Such clients 602, 604, and 606 are also disposed on the LAN 610 to access at least the mail services provided thereon.
  • the client 1 602 includes a central processing unit (CPU) 614 that controls client processes.
  • the CPU 614 can comprise multiple processors.
  • the CPU 614 executes instructions in connection with providing any of the one or more data gathering/feedback functions described hereinabove.
  • the instructions include, but are not limited to, the encoded instructions that execute at least the basic feedback loop methodology described above, and at least any or all of the approaches that can be used in combination therewith for addressing client and message selection, polling message modification, data retention, client reliability and classification validation, re-weighting of data from multiple sources including the feedback loop system, spam filter optimization and tuning, quarantine filters, creation of spam lists, and automatic notification of spammers to their respective ISPs and email providers.
  • a user interface 616 is provided to facilitate communication with the CPU 614 and client operating system such that the client 1 can interact to access the email and vote on polling messages.
  • a sampling of client messages retrieved from the server 612 can be selected for polling by a message selector 620. Messages are selected and modified for polling if the intended recipient (client) has previously agreed to participate.
  • the message modifier 622 modifies the message to become a polling message.
  • the message(s) can be modified to include voting instructions and voting buttons and/or links according to the message modification descriptions provided hereinabove. Voting buttons and/or links are implemented by modifying the user interface 616 of the client email software.
  • the message modifier 622 can remove any viruses in the messages (polling and non-polling messages) before they are opened or downloaded for viewing by the client 602.
  • the user of the spam fighting client 602 sees each message only once, with some messages specially marked as polling messages, and including voting buttons, etc.
  • the user of the spam fighting client 602 may see some messages twice, wherein one is the normal message and the other is the polling message.
  • the polling message can be returned to the server 612 and stored in a polled message store.
  • the client 602 can store an additional message in the E-
  • the client 602 can show the user each message twice, once as a normal message, and once in modified form.
  • Polling results 626 can be sent to the CPU 614 and then to a database 630 which can be configured to store data from one client or from more than one client, depending on the specific arrangement of the client feedback architecture.
  • The central database 630 stores polling messages and polling results as well as the respective client-user information.
  • Related components can be employed to analyze such information such as to determine polling frequency, client-user trustworthiness (e.g., user validation 632), and other client statistics.
  • Validation techniques can be employed particularly when the reliability of the client's voting is in question. Suspicion can arise from analyzing the number of contradictions, the number of changed minds, and the number of messages polled for a particular user or users; alternatively, validation techniques can be employed for every user. Any suitable amount of data stored in the central database can be employed in machine learning techniques 634 to facilitate the training of a new and/or improved spam filter.
  • Clients 604 and 606 include similar components as described hereinabove to obtain and train a filter which is personalized to the particular client(s).
  • a polled message scrubber 628 can interface between the CPU 614 and the central database 630 such that aspects of the polled message may be removed for a variety of reasons such as data aggregation, data compression, etc.
  • the polled message scrubber 628 can flush out extraneous portions of the polled message as well as any undesired user information associated therewith.
  • a network 702 is provided to facilitate communication of e-mail to and from one or more users 704 (also denoted as USER 1 704 1, USER 2 704 2, ..., and USER N 704 N, where N is an integer greater than or equal to one).
  • the network 702 can be a global communication network (GCN) such as the internet, or a WAN (Wide Area Network), LAN (Local Area Network), or other network configuration.
  • an SMTP gateway server 710 interfaces to the network 702 to provide SMTP services to a LAN 712.
  • An email server 714 operatively disposed on the LAN 712 interfaces to the gateway 710 to control and process incoming and outgoing email of the users 704.
  • the system 700 provides multiple login capability such that user and message selection 716, message modification 718, and message polling (720, 722, 724) take place for each different user that logs into the system 700.
  • a user interface 726 presents a login screen as part of the boot-up process of the computer operating system, or as required, to engage an associated user profile before the user 704 can access his or her incoming messages.
  • When a first user 704 1 (USER 1) chooses to access the messages, the first user 704 1 logs into the system via a login screen 728 by entering access information, typically in the form of a username and password.
  • a CPU 730 processes the access information to allow the user access, via a message communication application (e.g., a mail client) to only a first user inbox location 732.
  • When incoming mail is received on the message server 714, messages are randomly selected for polling, which means that at least one of the messages is tagged for polling.
  • the intended recipient(s) of the tagged messages are looked at to determine whether any one of the recipients is also a designated spam fighting user. Recipient properties indicating such information can be maintained on the message server 714 or on any other component of the system 700 as appropriate.
  • a copy of their respective mail as well as any other information regarding the mail transaction can be sent to a central database 734 for storage.
  • Messages tagged for polling are modified by the message modifier 718 in any number of ways described hereinabove. Messages selected for polling may also be specific to the user 704.
  • the user 704 can indicate that only certain types of messages are available for polling. Since this can result in a biased sampling of data, such data can be re-weighted with respect to other client data to mitigate building disproportionate training data sets.
  • Virus scanning of the polling messages can also be performed at this time or at any other time before the polling message is downloaded and/or opened by the user 704. Once the messages have been modified in the appropriate manner, they are delivered to the respective users' inboxes, which are denoted as INBOX 1 732, INBOX 2, and so on.
  • each polling message includes two or more voting buttons or links, which when selected by the user, generates information relating to the polling message and the polling result.
  • the text of each polling message can be modified to incorporate the voting buttons or links therein.
  • Message poll results (denoted as MESSAGE POLL 1 720, MESSAGE POLL 2 722, and MESSAGE POLL N 724), which include any information resulting from the classification (e.g., the polling message or ID associated therewith, user properties), are sent to the central database 734 via a network interface 740 on the LAN 712.
  • the central database 734 can store polling and user information (720, 722, 724) from the respective users to apply to machine learning techniques to build or optimize a new and/or improved spam filter 742.
  • confidential information can be removed or stripped out of the information before it is sent to the central database 734.
  • Information generated by the user(s) 704 via polling can also be aggregated into statistical data. Thus, less bandwidth is used to transmit the information.
  • the newly trained spam filter 742 can then be distributed to other servers (not shown) as well as client email software (not shown) interfacing with the LAN 712 on an ongoing basis, such as when a new filter is available, either by specific request or automatically.
  • the newest spam filter can be automatically pushed out to them and/or made available for downloading via a website.
  • Older data sets (e.g., information previously obtained and/or employed to train a filter) can be discarded or down weighted over time.
  • the filter provider is also a very large provider of email services (e.g., Hotmail).
  • the filter provider chooses to also use some data from some of the filter-using organizations, so as to better capture the range of good mail and spam.
  • the feedback loop system as described hereinabove can also be employed in such a cross-organizational scenario, either in a server or client-based architecture.
  • We call the filter provider, who aggregates data from its own users and from the different filter-using organizations, the "internal" organization, and call the components residing at one of the participating filter-using organizations "external."
  • the cross-organizational system includes a mail database server at the filter provider (internal), such as, but not limited to, Hotmail, and one or more message servers (external) such as those which may reside within one or more individual companies.
  • the internal mail database server also stores substantial email feedback from its own customers.
  • training data sets may be generated based on information stored on an internal database (e.g., free e- mail/messaging on a Hotmail or MSN server) as well as information stored on one or more external databases associated with the respective external servers.
  • Information maintained on the external databases can be communicated to the internal server via a network such as the Internet, for example, for employment in machine learning techniques.
  • data from the external databases can be utilized to train new spam filters and/or improve existing spam filters located externally (e.g., within the respective company) or associated with the internal mail server.
  • the data from one or more of the external databases should include at least one of polling messages, polling results (classifications), user information/properties, and voting statistical data per user, per group of users or on average for each company.
  • the voting statistical data facilitate determining reliability of the information generated by the respective companies as well as mitigating bias of external data.
  • the data from one or more external databases can be re-weighted or weighted differently from one or more of the other external databases.
  • the external entities can be tested for reliability and trustworthiness using validation techniques similar to those described hereinabove.
  • Referring now to FIG. 8, there is illustrated an exemplary cross-organizational feedback system 800 where an internal database server and an external mail server can communicate and exchange database information via a network to facilitate the generation of training data sets used in machine learning techniques to build improved spam filters.
  • the system 800 includes at least one external message server 802 (e.g., associated with at least one company) and an internal database server 804. Due to the nature of the cross-organization system, the external server 802 and the internal e-mail server 804 respectively maintain their own databases. That is, the e-mail server 804 is associated with an internal database 806 that can also be used to train a new spam filter 808.
  • the external server 802 is associated with an external database 810 which can be employed to train at least one new spam filter 812 as well as the spam filter 808 located internally with respect to the e-mail server 804.
  • the information stored on the external database 810 can be utilized to train the spam filter 808 located on the e-mail server.
  • one or more users can log into the system at the same time in order to make use of the available mail services.
  • when a first user 820 (USER1) chooses to access the messages, the first user 820 logs into the system via a login screen 818 by entering access information, typically in the form of a username and password.
  • a CPU 826 processes the access information to allow the user access to only a first user inbox location 828 via a message communication application (e.g., a mail client).
  • messages are randomly or specifically targeted for polling.
  • the intended recipients of such targeted messages are compared to a spam-fighter user list to determine whether any one of the recipients is also a designated spam fighting user. Recipient properties indicating such information can be maintained on the message server 802, database 810, or on any other component of the system 800 as appropriate.
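As a rough sketch of that recipient check (the set-based user list below is a hypothetical stand-in for whatever store the server actually uses):

```python
# Minimal sketch: keep only recipients who are designated spam fighters.
spam_fighters = {"alice@example.com", "carol@example.com"}

def eligible_recipients(recipients):
    """Return the recipients who appear on the spam-fighter user list."""
    return [r for r in recipients if r in spam_fighters]

print(eligible_recipients(["bob@example.com", "alice@example.com"]))
# ['alice@example.com']  -> the message can be selected for polling
```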
  • the message(s) are selected for polling, and a copy of the polling message(s), as well as any other information pertaining to the mail transaction, can be sent to the database 810.
  • Messages selected for polling are modified by a message modifier 830 in any number of ways described hereinabove.
  • a unique identification can be assigned to each polling message, to each spam fighter, and/or to each polling result and stored in the database 810.
  • messages selected for polling can be randomly chosen or may be specific to the respective user(s) (820, 822, and 824).
  • the USER1 820 can indicate that only certain types of messages are available for polling (e.g., messages sent from outside of the company). Data generated from such specific messages is re-weighted and/or discounted to mitigate obtaining a biased sampling of data.
  • Virus scanning of the polling messages can also be performed at this time or at any other time before the polling message is downloaded and/or opened by the user.
  • each polling message includes two or more voting buttons or links which, when selected by the user, generate information relating to the polling message and the polling result.
  • the text of each polling message can be modified to incorporate the voting buttons or links therein.
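A minimal sketch of such a modification appears below; the vote URL scheme and the use of a UUID as the unique identification are assumptions made for illustration.

```python
# Minimal sketch: append voting links to a polling message's text.
import uuid

def add_voting_links(body, vote_base_url="https://example.com/vote"):
    poll_id = uuid.uuid4().hex  # unique identification for this polling message
    footer = (
        "\n---\nPlease classify this message:\n"
        f"Legitimate: {vote_base_url}?id={poll_id}&vote=good\n"
        f"Spam:       {vote_base_url}?id={poll_id}&vote=spam\n"
    )
    return poll_id, body + footer

poll_id, modified_body = add_voting_links("Limited time offer inside!")
```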
  • Message poll results (denoted as MESSAGE POLL1 836, MESSAGE POLL2 838, and MESSAGE POLLN 840), which include any information resulting from the classification (e.g., polling message or ID associated therewith, user properties), are sent to the database 810 via a network interface 842 located on the LAN 815.
  • the database 810 stores polling and user information from the respective users for later use in machine learning techniques which are employed to build and/or optimize a new and/or improved spam filter(s) 812, 808.
  • each company may want to strip out key information before sending the polled message and/or user information to its own database 810 and/or to the e-mail database 806 over the GCN 814, for example.
  • One approach is to only provide feedback to the database (806 and/or 810) on spam messages, thereby excluding feedback on legitimate mail.
  • Another approach is to only provide a partial subset of information on the legitimate mail such as the sender and the sender's IP address.
  • Another approach is, for selected messages, such as those marked as good by the user that would be marked as bad by the filter, or vice versa, to explicitly ask for user permission before sending them to the filter. Any of these approaches or a combination thereof facilitates maintaining privacy of confidential information for the participating clients while continually providing data to train the spam filter(s) (808 and/or 812).
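The three approaches can be combined into a single selection rule, sketched below with hypothetical field names and a caller-supplied permission hook:

```python
# Minimal sketch: choose what feedback (if any) leaves the organization.
def feedback_payload(message, user_label, filter_label, ask_permission):
    """message: dict with at least 'sender' and 'sender_ip' (hypothetical keys)."""
    if user_label == "spam":
        return message                      # full feedback on spam only
    if user_label != filter_label:
        # user/filter disagreement: send only with explicit user permission
        return message if ask_permission(message) else None
    # legitimate mail: share only a partial subset (sender and sender's IP)
    return {"sender": message["sender"], "sender_ip": message["sender_ip"]}

msg = {"sender": "bob@corp.example", "sender_ip": "203.0.113.7", "body": "..."}
print(feedback_payload(msg, "good", "good", ask_permission=lambda m: False))
# {'sender': 'bob@corp.example', 'sender_ip': '203.0.113.7'}
```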
  • User validation schemes such as those described hereinabove can also be applied to each company as well as to each user within the company.
  • the users can individually be subjected to cross-validation techniques wherein the classifications of a suspect user(s) are excluded from filter training.
  • the filter is trained using the data from the remaining user(s).
  • the trained filter then runs through the messages from the excluded user(s) to determine how it would have classified them. If the number of disagreements exceeds a threshold level, the suspect user(s) is considered untrustworthy. Future message classifications from the untrustworthy user(s) can be manually inspected before they are accepted by the database and/or filter. Alternatively, the user(s) can be removed from future polling.
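The exclude-train-replay procedure just described can be sketched as follows; the vote layout, the disagreement threshold, and the choice of learner are all illustrative assumptions.

```python
# Minimal sketch: cross-validating a suspect user. Train on everyone
# else, replay the suspect's messages, and count disagreements.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def is_untrustworthy(votes, suspect, threshold=2):
    """votes: list of (user_id, message_text, voted_spam) triples."""
    train = [(t, y) for u, t, y in votes if u != suspect]
    held_out = [(t, y) for u, t, y in votes if u == suspect]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform([t for t, _ in train])
    clf = MultinomialNB().fit(X, [y for _, y in train])

    predicted = clf.predict(vectorizer.transform([t for t, _ in held_out]))
    disagreements = sum(p != y for p, (_, y) in zip(predicted, held_out))
    return disagreements > threshold
```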
  • Referring now to Fig. 9, an exemplary environment 910 for implementing various aspects of the invention includes a computer 912.
  • the computer 912 includes a processing unit 914, a system memory 916, and a system bus 918.
  • the system bus 918 couples system components including, but not limited to, the system memory 916 to the processing unit 914.
  • the processing unit 914 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 914.
  • the system bus 918 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 11-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
  • the system memory 916 includes volatile memory 920 and nonvolatile memory 922.
  • nonvolatile memory 922 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory.
  • Disk storage 924 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick.
  • disk storage 924 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
  • To facilitate connection of the disk storage 924 to the system bus 918, a removable or non-removable interface is typically used, such as interface 926.
  • Fig. 9 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 910.
  • Such software includes an operating system 928.
  • Operating system 928, which can be stored on disk storage 924, acts to control and allocate resources of the computer system 912.
  • System applications 930 take advantage of the management of resources by operating system 928 through program modules 932 and program data 934 stored either in system memory 916 or on disk storage 924. It is to be appreciated that the present invention can be implemented with various operating systems or combinations of operating systems.
  • a user enters commands or information into the computer 912 through input device(s) 936.
  • Output adapter 942 is provided to illustrate that there are some output devices 940, like monitors, speakers, and printers, that require special adapters.
  • the output adapters 942 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 940 and the system bus 918.
  • Communication connection(s) 950 refers to the hardware/software employed to connect the network interface 948 to the bus 918. While communication connection 950 is shown for illustrative clarity inside computer 912, it can also be external to computer 912.
  • the hardware/software necessary for connection to the network interface 948 includes, for exemplary purposes only, internal and external technologies such as modems (including regular telephone grade modems, cable modems, and DSL modems), ISDN adapters, and Ethernet cards.
  • Fig. 10 is a schematic block diagram of a sample computing environment 1000 with which the present invention can interact.
  • the system 1000 includes one or more client(s) 1010.
  • the client(s) 1010 can be hardware and/or software (e.g., threads, processes, computing devices).
  • the system 1000 also includes one or more server(s) 1030.
  • the server(s) 1030 can also be hardware and/or software (e.g., threads, processes, computing devices).
  • the servers 1030 can house threads to perform transformations by employing the present invention, for example.
  • One possible communication between a client 1010 and a server 1030 may be in the form of a data packet adapted to be transmitted between two or more computer processes.
  • the system 1000 includes a communication framework 1050 that can be employed to facilitate communications between the client(s) 1010 and the server(s) 1030.
  • the client(s) 1010 are operably connected to one or more client data store(s) 1060 that can be employed to store information local to the client(s) 1010.
  • the server(s) 1030 are operably connected to one or more server data store(s) 1040 that can be employed to store information local to the servers 1030.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Computer Hardware Design (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computer And Data Communications (AREA)
PCT/US2004/005501 2003-03-03 2004-02-25 Feedback loop for spam prevention Ceased WO2004079514A2 (en)

Priority Applications (11)

Application Number Priority Date Filing Date Title
JP2006508818A JP4828411B2 (ja) 2003-03-03 2004-02-25 スパム防止のためのフィードバックループ
BR0407045-3A BRPI0407045A (pt) 2003-03-03 2004-02-25 Laço de realimentação para prevenção de spam
AU2004216772A AU2004216772B2 (en) 2003-03-03 2004-02-25 Feedback loop for spam prevention
MXPA05008303A MXPA05008303A (es) 2003-03-03 2004-02-25 Circuito de retroalimentacion para prevencion de mensajes basura.
NZ541628A NZ541628A (en) 2003-03-03 2004-02-25 Feedback loop for spam prevention
EP04714607A EP1599781A4 (en) 2003-03-03 2004-02-25 FEEDBACK LOOP FOR PREVENTING SPAM
CA2513967A CA2513967C (en) 2003-03-03 2004-02-25 Feedback loop for spam prevention
NO20053733A NO20053733L (no) 2003-03-03 2005-08-03 Tilbakemeldingslokke for unngaelse av soppelpost.
IL170115A IL170115A (en) 2003-03-03 2005-08-04 Feedback loop for spam prevention
EGNA2005000502 EG23988A (en) 2003-03-03 2005-08-31 Feed back loop for spam prevention
IL206121A IL206121A (en) 2003-03-03 2010-06-01 Feedback loop for spam prevention

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/378,463 2003-03-03
US10/378,463 US7219148B2 (en) 2003-03-03 2003-03-03 Feedback loop for spam prevention

Publications (2)

Publication Number Publication Date
WO2004079514A2 true WO2004079514A2 (en) 2004-09-16
WO2004079514A3 WO2004079514A3 (en) 2006-03-30

Family

ID=32926496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/005501 Ceased WO2004079514A2 (en) 2003-03-03 2004-02-25 Feedback loop for spam prevention

Country Status (18)

Country Link
US (2) US7219148B2
EP (1) EP1599781A4
JP (1) JP4828411B2
KR (1) KR101021395B1
CN (1) CN100472484C
AU (1) AU2004216772B2
BR (1) BRPI0407045A
CA (2) CA2799691C
CO (1) CO6141494A2
EG (1) EG23988A
IL (2) IL170115A
MX (1) MXPA05008303A
NO (1) NO20053733L
NZ (1) NZ541628A
RU (1) RU2331913C2
TW (2) TW201036399A
WO (1) WO2004079514A2
ZA (1) ZA200506085B

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005043351A2 (en) 2003-11-03 2005-05-12 Cloudmark, Inc. Method and apparatus to block spam based on spam reports from a community of users
WO2007036152A1 (fr) * 2005-09-27 2007-04-05 Tencent Technology (Shenzhen) Company Limited Systeme et procede de filtrage des pourriels et terminal client pour courriel et serveur de courriel
JP2008547067A (ja) * 2005-05-05 2008-12-25 シスコ アイアンポート システムズ エルエルシー 参照リソースの確率的解析に基づく不要な電子メールメッセージの検出
US9276930B2 (en) 2011-10-19 2016-03-01 Artashes Valeryevich Ikonomov Device for controlling network user data
US10115084B2 (en) 2012-10-10 2018-10-30 Artashes Valeryevich Ikonomov Electronic payment system
US20200351696A1 (en) 2018-01-22 2020-11-05 Beijing Xiaomi Mobile Software Co., Ltd. Method, device and system for minimization of drive test
US11317302B2 (en) 2018-02-08 2022-04-26 Beijing Xiaomi Mobile Software Co., Ltd. Minimization of drive test configuration method and apparatus

Families Citing this family (300)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6252547B1 (en) 1998-06-05 2001-06-26 Decisionmark Corp. Method and apparatus for limiting access to signals delivered via the internet
US20030097654A1 (en) * 1998-06-05 2003-05-22 Franken Kenneth A. System and method of geographic authorization for television and radio programming distributed by multiple delivery mechanisms
US9928508B2 (en) 2000-08-04 2018-03-27 Intellectual Ventures I Llc Single sign-on for access to a central data repository
US7257581B1 (en) 2000-08-04 2007-08-14 Guardian Networks, Llc Storage, management and distribution of consumer information
US8566248B1 (en) 2000-08-04 2013-10-22 Grdn. Net Solutions, Llc Initiation of an information transaction over a network via a wireless device
US8010981B2 (en) 2001-02-08 2011-08-30 Decisionmark Corp. Method and system for creating television programming guide
US7849141B1 (en) * 2001-06-14 2010-12-07 Apple Inc. Training a computer storage system for automatic filing of data using graphical representations of storage locations
US7640305B1 (en) 2001-06-14 2009-12-29 Apple Inc. Filtering of data
US7913287B1 (en) 2001-06-15 2011-03-22 Decisionmark Corp. System and method for delivering data over an HDTV digital television spectrum
JP2003333096A (ja) * 2002-05-08 2003-11-21 Nec Corp メール着信拒否システム,メール着信拒否方法およびメール着信拒否プログラム
WO2003104947A2 (en) 2002-06-06 2003-12-18 Hardt Dick C Distributed hierarchical identity management
WO2004001558A2 (en) * 2002-06-25 2003-12-31 Abs Software Partners Llc System and method for online monitoring of and interaction with chat and instant messaging participants
US8046832B2 (en) * 2002-06-26 2011-10-25 Microsoft Corporation Spam detector with challenges
US7428580B2 (en) 2003-11-26 2008-09-23 Aol Llc Electronic message forwarding
US7590696B1 (en) 2002-11-18 2009-09-15 Aol Llc Enhanced buddy list using mobile device identifiers
WO2004077710A2 (en) * 2003-02-27 2004-09-10 Businger, Peter, A. Minimizing unsolicited e-mail based on prior communications
US7219148B2 (en) * 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention
US7543053B2 (en) 2003-03-03 2009-06-02 Microsoft Corporation Intelligent quarantining for spam prevention
US20050091320A1 (en) * 2003-10-09 2005-04-28 Kirsch Steven T. Method and system for categorizing and processing e-mails
US20060168006A1 (en) * 2003-03-24 2006-07-27 Mr. Marvin Shannon System and method for the classification of electronic communication
US7680886B1 (en) * 2003-04-09 2010-03-16 Symantec Corporation Suppressing spam using a machine learning based spam filter
US7546348B2 (en) 2003-05-05 2009-06-09 Sonicwall, Inc. Message handling with selective user participation
US20050108340A1 (en) * 2003-05-15 2005-05-19 Matt Gleeson Method and apparatus for filtering email spam based on similarity measures
US7484096B1 (en) * 2003-05-28 2009-01-27 Microsoft Corporation Data validation using signatures and sampling
US7457791B1 (en) * 2003-05-30 2008-11-25 Microsoft Corporation Using invariants to validate applications states
US7272853B2 (en) 2003-06-04 2007-09-18 Microsoft Corporation Origination/destination features and lists for spam prevention
US20040254988A1 (en) * 2003-06-12 2004-12-16 Rodriguez Rafael A. Method of and universal apparatus and module for automatically managing electronic communications, such as e-mail and the like, to enable integrity assurance thereof and real-time compliance with pre-established regulatory requirements as promulgated in government and other compliance database files and information websites, and the like
US7376652B2 (en) * 2003-06-17 2008-05-20 The Hayes-Roth Family Trust Personal portal and secure information exchange
US7711779B2 (en) 2003-06-20 2010-05-04 Microsoft Corporation Prevention of outgoing spam
US7519668B2 (en) * 2003-06-20 2009-04-14 Microsoft Corporation Obfuscation of spam filter
US7882179B2 (en) * 2003-06-20 2011-02-01 Compuware Corporation Computer system tools and method for development and testing
US8533270B2 (en) * 2003-06-23 2013-09-10 Microsoft Corporation Advanced spam detection techniques
US7051077B2 (en) * 2003-06-30 2006-05-23 Mx Logic, Inc. Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
US20050015626A1 (en) * 2003-07-15 2005-01-20 Chasin C. Scott System and method for identifying and filtering junk e-mail messages or spam based on URL content
US20050015455A1 (en) * 2003-07-18 2005-01-20 Liu Gary G. SPAM processing system and methods including shared information among plural SPAM filters
US8214437B1 (en) 2003-07-21 2012-07-03 Aol Inc. Online adaptive filtering of messages
US7653693B2 (en) * 2003-09-05 2010-01-26 Aol Llc Method and system for capturing instant messages
US7814545B2 (en) * 2003-07-22 2010-10-12 Sonicwall, Inc. Message classification using classifiers
GB2405229B (en) * 2003-08-19 2006-01-11 Sophos Plc Method and apparatus for filtering electronic mail
US20050065906A1 (en) * 2003-08-19 2005-03-24 Wizaz K.K. Method and apparatus for providing feedback for email filtering
US8200761B1 (en) 2003-09-18 2012-06-12 Apple Inc. Method and apparatus for improving security in a data processing system
US9338026B2 (en) * 2003-09-22 2016-05-10 Axway Inc. Delay technique in e-mail filtering system
US7840646B2 (en) * 2003-10-08 2010-11-23 Yahoo! Inc. Learned upload time estimate module
US7181498B2 (en) * 2003-10-31 2007-02-20 Yahoo! Inc. Community-based green list for antispam
US7181764B2 (en) * 2003-11-04 2007-02-20 Yahoo! Inc. System and method for a subscription model trusted email database for use in antispam
US7797529B2 (en) * 2003-11-10 2010-09-14 Yahoo! Inc. Upload security scheme
US20050102638A1 (en) * 2003-11-10 2005-05-12 Jiang Zhaowei C. Navigate, click and drag images in mobile applications
US7783741B2 (en) * 2003-11-17 2010-08-24 Hardt Dick C Pseudonymous email address manager
US20050120019A1 (en) * 2003-11-29 2005-06-02 International Business Machines Corporation Method and apparatus for the automatic identification of unsolicited e-mail messages (SPAM)
US20050120118A1 (en) * 2003-12-01 2005-06-02 Thibadeau Robert H. Novel network server for electronic mail filter benchmarking
US20050160144A1 (en) * 2003-12-24 2005-07-21 Rishi Bhatia System and method for filtering network messages
JP4386261B2 (ja) * 2004-01-15 2009-12-16 株式会社エヌ・ティ・ティ・ドコモ 移動通信端末及び課金制御装置
US7590694B2 (en) 2004-01-16 2009-09-15 Gozoom.Com, Inc. System for determining degrees of similarity in email message information
US7693943B2 (en) * 2004-01-23 2010-04-06 International Business Machines Corporation Classification of electronic mail into multiple directories based upon their spam-like properties
CA2554915C (en) * 2004-02-17 2013-05-28 Ironport Systems, Inc. Collecting, aggregating, and managing information relating to electronic messages
US10257164B2 (en) * 2004-02-27 2019-04-09 International Business Machines Corporation Classifying e-mail connections for policy enforcement
US8214438B2 (en) * 2004-03-01 2012-07-03 Microsoft Corporation (More) advanced spam detection features
US20050198508A1 (en) * 2004-03-04 2005-09-08 Beck Stephen H. Method and system for transmission and processing of authenticated electronic mail
US7644127B2 (en) * 2004-03-09 2010-01-05 Gozoom.Com, Inc. Email analysis using fuzzy matching of text
US8918466B2 (en) * 2004-03-09 2014-12-23 Tonny Yu System for email processing and analysis
US7631044B2 (en) 2004-03-09 2009-12-08 Gozoom.Com, Inc. Suppression of undesirable network messages
US20050223074A1 (en) * 2004-03-31 2005-10-06 Morris Robert P System and method for providing user selectable electronic message action choices and processing
US8769671B2 (en) 2004-05-02 2014-07-01 Markmonitor Inc. Online fraud solution
US8041769B2 (en) * 2004-05-02 2011-10-18 Markmonitor Inc. Generating phish messages
US9203648B2 (en) 2004-05-02 2015-12-01 Thomson Reuters Global Resources Online fraud solution
US7870608B2 (en) 2004-05-02 2011-01-11 Markmonitor, Inc. Early detection and monitoring of online fraud
US7992204B2 (en) * 2004-05-02 2011-08-02 Markmonitor, Inc. Enhanced responses to online fraud
US7457823B2 (en) 2004-05-02 2008-11-25 Markmonitor Inc. Methods and systems for analyzing data related to possible online fraud
US7913302B2 (en) 2004-05-02 2011-03-22 Markmonitor, Inc. Advanced responses to online fraud
US7912905B2 (en) * 2004-05-18 2011-03-22 Computer Associates Think, Inc. System and method for filtering network messages
WO2005116851A2 (en) * 2004-05-25 2005-12-08 Postini, Inc. Electronic message source information reputation system
US7461063B1 (en) * 2004-05-26 2008-12-02 Proofpoint, Inc. Updating logistic regression models using coherent gradient
US7552365B1 (en) * 2004-05-26 2009-06-23 Amazon Technologies, Inc. Web site system with automated processes for detecting failure events and for selecting failure events for which to request user feedback
US7756930B2 (en) * 2004-05-28 2010-07-13 Ironport Systems, Inc. Techniques for determining the reputation of a message sender
US7917588B2 (en) 2004-05-29 2011-03-29 Ironport Systems, Inc. Managing delivery of electronic messages using bounce profiles
US9245266B2 (en) 2004-06-16 2016-01-26 Callahan Cellular L.L.C. Auditable privacy policies in a distributed hierarchical identity management system
US8504704B2 (en) 2004-06-16 2013-08-06 Dormarke Assets Limited Liability Company Distributed contact information management
US7748038B2 (en) * 2004-06-16 2010-06-29 Ironport Systems, Inc. Method and apparatus for managing computer virus outbreaks
US8527752B2 (en) 2004-06-16 2013-09-03 Dormarke Assets Limited Liability Graduated authentication in an identity management system
US7565445B2 (en) 2004-06-18 2009-07-21 Fortinet, Inc. Systems and methods for categorizing network traffic content
US8353028B2 (en) * 2004-06-21 2013-01-08 Ebay Inc. Render engine, and method of using the same, to verify data for access and/or publication via a computer system
US8484295B2 (en) 2004-12-21 2013-07-09 Mcafee, Inc. Subscriber reputation filtering method for analyzing subscriber activity and detecting account misuse
US7953814B1 (en) * 2005-02-28 2011-05-31 Mcafee, Inc. Stopping and remediating outbound messaging abuse
US7680890B1 (en) 2004-06-22 2010-03-16 Wei Lin Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
US7552186B2 (en) * 2004-06-28 2009-06-23 International Business Machines Corporation Method and system for filtering spam using an adjustable reliability value
US7664819B2 (en) * 2004-06-29 2010-02-16 Microsoft Corporation Incremental anti-spam lookup and update service
US8819142B1 (en) * 2004-06-30 2014-08-26 Google Inc. Method for reclassifying a spam-filtered email message
US7904517B2 (en) 2004-08-09 2011-03-08 Microsoft Corporation Challenge response systems
US7660865B2 (en) 2004-08-12 2010-02-09 Microsoft Corporation Spam filtering with probabilistic secure hashes
FI20041159A0 (fi) * 2004-09-07 2004-09-07 Nokia Corp Menetelmä viestien suodattamiseksi tietoverkossa
US7555524B1 (en) * 2004-09-16 2009-06-30 Symantec Corporation Bulk electronic message detection by header similarity analysis
US8180834B2 (en) * 2004-10-07 2012-05-15 Computer Associates Think, Inc. System, method, and computer program product for filtering messages and training a classification module
US7849506B1 (en) * 2004-10-12 2010-12-07 Avaya Inc. Switching device, method, and computer program for efficient intrusion detection
US8433768B1 (en) * 2004-10-14 2013-04-30 Lockheed Martin Corporation Embedded model interaction within attack projection framework of information system
US7711781B2 (en) * 2004-11-09 2010-05-04 International Business Machines Corporation Technique for detecting and blocking unwanted instant messages
US20060112430A1 (en) * 2004-11-19 2006-05-25 Deisenroth Jerrold M Method and apparatus for immunizing data in computer systems from corruption
WO2006060581A2 (en) * 2004-11-30 2006-06-08 Sensory Networks Inc. Apparatus and method for acceleration of security applications through pre-filtering
US20060123478A1 (en) * 2004-12-02 2006-06-08 Microsoft Corporation Phishing detection, prevention, and notification
US8291065B2 (en) * 2004-12-02 2012-10-16 Microsoft Corporation Phishing detection, prevention, and notification
US7634810B2 (en) * 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US7577984B2 (en) * 2004-12-09 2009-08-18 Microsoft Corporation Method and system for a sending domain to establish a trust that its senders communications are not unwanted
US7653812B2 (en) * 2004-12-09 2010-01-26 Microsoft Corporation Method and system for evaluating confidence in a sending domain to accurately assign a trust that a communication is not unwanted
EP1672936B1 (en) * 2004-12-16 2018-12-05 Sony Mobile Communications Inc. Prevention of unsolicited messages
US8396927B2 (en) * 2004-12-21 2013-03-12 Alcatel Lucent Detection of unwanted messages (spam)
US9015472B1 (en) 2005-03-10 2015-04-21 Mcafee, Inc. Marking electronic messages to indicate human origination
US9160755B2 (en) * 2004-12-21 2015-10-13 Mcafee, Inc. Trusted communication network
US8738708B2 (en) * 2004-12-21 2014-05-27 Mcafee, Inc. Bounce management in a trusted communication network
US20060168030A1 (en) * 2004-12-21 2006-07-27 Lucent Technologies, Inc. Anti-spam service
US7716743B2 (en) * 2005-01-14 2010-05-11 Microsoft Corporation Privacy friendly malware quarantines
US8087068B1 (en) 2005-03-08 2011-12-27 Google Inc. Verifying access to a network account over multiple user communication portals based on security criteria
US8103868B2 (en) * 2005-04-20 2012-01-24 M-Qube, Inc. Sender identification system and method
JP4559295B2 (ja) * 2005-05-17 2010-10-06 株式会社エヌ・ティ・ティ・ドコモ データ通信システム及びデータ通信方法
US7600126B2 (en) * 2005-05-27 2009-10-06 Microsoft Corporation Efficient processing of time-bounded messages
US20060277259A1 (en) * 2005-06-07 2006-12-07 Microsoft Corporation Distributed sender reputations
US7552230B2 (en) * 2005-06-15 2009-06-23 International Business Machines Corporation Method and apparatus for reducing spam on peer-to-peer networks
GB0513375D0 (en) 2005-06-30 2005-08-03 Retento Ltd Computer security
US7600258B2 (en) * 2005-07-01 2009-10-06 Symantec Corporation Methods and systems for detecting and preventing the spread of malware on instant messaging (IM) networks by using fictitious buddies
US7577993B2 (en) * 2005-07-01 2009-08-18 Symantec Corporation Methods and systems for detecting and preventing the spread of malware on instant messaging (IM) networks by using Bayesian filtering
US7822818B2 (en) * 2005-07-01 2010-10-26 Symantec Corporation Methods and systems for detecting and preventing the spread of malware on instant messaging (IM) networks by using automated IM users
US7823200B2 (en) * 2005-07-01 2010-10-26 Symantec Corporation Methods and systems for detecting and preventing the spread of malware on instant messaging (IM) networks by analyzing message traffic patterns
US7610345B2 (en) * 2005-07-28 2009-10-27 Vaporstream Incorporated Reduced traceability electronic message system and method
US9282081B2 (en) 2005-07-28 2016-03-08 Vaporstream Incorporated Reduced traceability electronic message system and method
US7930353B2 (en) * 2005-07-29 2011-04-19 Microsoft Corporation Trees of classifiers for detecting email spam
US20070124582A1 (en) * 2005-08-07 2007-05-31 Marvin Shannon System and Method for an NSP or ISP to Detect Malware in its Network Traffic
US7577994B1 (en) * 2005-08-25 2009-08-18 Symantec Corporation Detecting local graphic password deciphering attacks
US20070061402A1 (en) * 2005-09-15 2007-03-15 Microsoft Corporation Multipurpose internet mail extension (MIME) analysis
WO2007045150A1 (en) * 2005-10-15 2007-04-26 Huawei Technologies Co., Ltd. A system for controlling the security of network and a method thereof
US8065370B2 (en) 2005-11-03 2011-11-22 Microsoft Corporation Proofs to filter spam
US20070106734A1 (en) * 2005-11-10 2007-05-10 Motorola, Inc. Incentive driven subscriber assisted spam reduction
US8713122B2 (en) * 2005-11-10 2014-04-29 International Business Machines Corporation Message value indicator
US20070136428A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Methods, systems, and computer program products for implementing community messaging services
US7565366B2 (en) * 2005-12-14 2009-07-21 Microsoft Corporation Variable rate sampling for sequence analysis
US20070180031A1 (en) * 2006-01-30 2007-08-02 Microsoft Corporation Email Opt-out Enforcement
US20070192490A1 (en) * 2006-02-13 2007-08-16 Minhas Sandip S Content-based filtering of electronic messages
US8291066B2 (en) * 2006-02-21 2012-10-16 Trading Systems Associates (Ts-A) (Israel) Limited Method and system for transaction monitoring in a communication network
DE602006014246D1 (de) * 2006-03-06 2010-06-24 Alcatel Lucent Bedingungssteuerung zum Übertragen von Nachrichten
US7685271B1 (en) * 2006-03-30 2010-03-23 Symantec Corporation Distributed platform for testing filtering rules
US20070256133A1 (en) * 2006-04-27 2007-11-01 Garbow Zachary A Blocking processes from executing based on votes
US7680891B1 (en) 2006-06-19 2010-03-16 Google Inc. CAPTCHA-based spam control for content creation systems
US8023927B1 (en) 2006-06-29 2011-09-20 Google Inc. Abuse-resistant method of registering user accounts with an online service
US20080077704A1 (en) * 2006-09-24 2008-03-27 Void Communications, Inc. Variable Electronic Communication Ping Time System and Method
US7945627B1 (en) 2006-09-28 2011-05-17 Bitdefender IPR Management Ltd. Layout-based electronic communication filtering systems and methods
US8224905B2 (en) 2006-12-06 2012-07-17 Microsoft Corporation Spam filtration utilizing sender activity data
US8290203B1 (en) * 2007-01-11 2012-10-16 Proofpoint, Inc. Apparatus and method for detecting images within spam
US8510467B2 (en) * 2007-01-11 2013-08-13 Ept Innovation Monitoring a message associated with an action
US7873583B2 (en) * 2007-01-19 2011-01-18 Microsoft Corporation Combining resilient classifiers
US8364617B2 (en) * 2007-01-19 2013-01-29 Microsoft Corporation Resilient classification of data
US8209381B2 (en) * 2007-01-19 2012-06-26 Yahoo! Inc. Dynamic combatting of SPAM and phishing attacks
US20080177843A1 (en) * 2007-01-22 2008-07-24 Microsoft Corporation Inferring email action based on user input
WO2008101165A2 (en) * 2007-02-15 2008-08-21 Void Communications, Inc. Electronic messaging recordlessness warning and routing system and method
US8015246B1 (en) 2007-03-21 2011-09-06 Google Inc. Graphical user interface for chat room with thin walls
US8006191B1 (en) 2007-03-21 2011-08-23 Google Inc. Chat room with thin walls
US7899869B1 (en) 2007-03-22 2011-03-01 Google Inc. Broadcasting in chat system without topic-specific rooms
US7904500B1 (en) 2007-03-22 2011-03-08 Google Inc. Advertising in chat system without topic-specific rooms
US7865553B1 (en) * 2007-03-22 2011-01-04 Google Inc. Chat system without topic-specific rooms
US7860928B1 (en) * 2007-03-22 2010-12-28 Google Inc. Voting in chat system without topic-specific rooms
US7853589B2 (en) * 2007-04-30 2010-12-14 Microsoft Corporation Web spam page classification using query-dependent data
US20080313285A1 (en) * 2007-06-14 2008-12-18 Microsoft Corporation Post transit spam filtering
US20090006532A1 (en) * 2007-06-28 2009-01-01 Yahoo! Inc. Dynamic phishing protection in instant messaging
US8239460B2 (en) * 2007-06-29 2012-08-07 Microsoft Corporation Content-based tagging of RSS feeds and E-mail
US20090012965A1 (en) * 2007-07-01 2009-01-08 Decisionmark Corp. Network Content Objection Handling System and Method
US20090006211A1 (en) * 2007-07-01 2009-01-01 Decisionmark Corp. Network Content And Advertisement Distribution System and Method
US8849909B2 (en) * 2007-07-06 2014-09-30 Yahoo! Inc. Real-time asynchronous event aggregation systems
US7937468B2 (en) * 2007-07-06 2011-05-03 Yahoo! Inc. Detecting spam messages using rapid sender reputation feedback analysis
US8689330B2 (en) * 2007-09-05 2014-04-01 Yahoo! Inc. Instant messaging malware protection
US9363231B2 (en) * 2007-09-13 2016-06-07 Caterpillar Inc. System and method for monitoring network communications originating in monitored jurisdictions
US8230025B2 (en) * 2007-09-20 2012-07-24 Research In Motion Limited System and method for delivering variable size messages based on spam probability
US8572184B1 (en) 2007-10-04 2013-10-29 Bitdefender IPR Management Ltd. Systems and methods for dynamically integrating heterogeneous anti-spam filters
US8428367B2 (en) * 2007-10-26 2013-04-23 International Business Machines Corporation System and method for electronic document classification
US8010614B1 (en) 2007-11-01 2011-08-30 Bitdefender IPR Management Ltd. Systems and methods for generating signatures for electronic communication classification
US8171388B2 (en) 2007-11-15 2012-05-01 Yahoo! Inc. Trust based moderation
US8239537B2 (en) * 2008-01-02 2012-08-07 At&T Intellectual Property I, L.P. Method of throttling unwanted network traffic on a server
WO2009102117A2 (en) * 2008-02-14 2009-08-20 Lg Electronics Inc. Terminal, server and method for determining and processing contents as spams
US7849146B2 (en) * 2008-02-21 2010-12-07 Yahoo! Inc. Identifying IP addresses for spammers
US8401968B1 (en) * 2008-03-27 2013-03-19 Amazon Technologies, Inc. Mobile group payments
US20090282112A1 (en) * 2008-05-12 2009-11-12 Cloudmark, Inc. Spam identification system
US8108323B2 (en) * 2008-05-19 2012-01-31 Yahoo! Inc. Distributed spam filtering utilizing a plurality of global classifiers and a local classifier
US8131655B1 (en) 2008-05-30 2012-03-06 Bitdefender IPR Management Ltd. Spam filtering using feature relevance assignment in neural networks
EP2318944A4 (en) * 2008-06-23 2013-12-11 Cloudmark Inc SYSTEMS AND METHOD FOR RESTORING DATA
CN101616101B (zh) * 2008-06-26 2012-01-18 阿里巴巴集团控股有限公司 一种用户信息过滤方法及装置
US8490185B2 (en) * 2008-06-27 2013-07-16 Microsoft Corporation Dynamic spam view settings
WO2010002892A1 (en) * 2008-06-30 2010-01-07 Aol Llc Systems and methods for reporter-based filtering of electronic communications and messages
US8181250B2 (en) * 2008-06-30 2012-05-15 Microsoft Corporation Personalized honeypot for detecting information leaks and security breaches
CN101330476B (zh) * 2008-07-02 2011-04-13 北京大学 一种垃圾邮件动态检测方法
CN101321365B (zh) * 2008-07-17 2011-12-28 浙江大学 一种利用短信回复频率的垃圾短信发送用户识别方法
US8291024B1 (en) * 2008-07-31 2012-10-16 Trend Micro Incorporated Statistical spamming behavior analysis on mail clusters
US10354229B2 (en) * 2008-08-04 2019-07-16 Mcafee, Llc Method and system for centralized contact management
US8069128B2 (en) * 2008-08-08 2011-11-29 Yahoo! Inc. Real-time ad-hoc spam filtering of email
WO2010033784A2 (en) * 2008-09-19 2010-03-25 Mailrank, Inc. Ranking messages in an electronic messaging environment
US8868663B2 (en) * 2008-09-19 2014-10-21 Yahoo! Inc. Detection of outbound sending of spam
US8069210B2 (en) * 2008-10-10 2011-11-29 Microsoft Corporation Graph based bot-user detection
US8365267B2 (en) * 2008-11-13 2013-01-29 Yahoo! Inc. Single use web based passwords for network login
RU2399091C2 (ru) * 2008-11-27 2010-09-10 ООО "НеоБИТ" Способ адаптивного параметрического управления безопасностью информационных систем и система для его осуществления
CN101415159B (zh) * 2008-12-02 2010-06-02 腾讯科技(深圳)有限公司 对垃圾邮件进行拦截的方法和装置
US8364766B2 (en) * 2008-12-04 2013-01-29 Yahoo! Inc. Spam filtering based on statistics and token frequency modeling
US8886728B2 (en) 2008-12-12 2014-11-11 At&T Intellectual Property I, L.P. Method and apparatus for reclassifying e-mail or modifying a spam filter based on users' input
US20100161537A1 (en) * 2008-12-23 2010-06-24 At&T Intellectual Property I, L.P. System and Method for Detecting Email Spammers
US8195753B2 (en) * 2009-01-07 2012-06-05 Microsoft Corporation Honoring user preferences in email systems
US8255468B2 (en) * 2009-02-11 2012-08-28 Microsoft Corporation Email management based on user behavior
US20100211641A1 (en) * 2009-02-16 2010-08-19 Microsoft Corporation Personalized email filtering
US20100211645A1 (en) * 2009-02-18 2010-08-19 Yahoo! Inc. Identification of a trusted message sender with traceable receipts
US8443447B1 (en) * 2009-08-06 2013-05-14 Trend Micro Incorporated Apparatus and method for detecting malware-infected electronic mail
US8874663B2 (en) * 2009-08-28 2014-10-28 Facebook, Inc. Comparing similarity between documents for filtering unwanted documents
CN101656923B (zh) * 2009-09-15 2012-09-05 中兴通讯股份有限公司 判断垃圾消息的方法和系统
EP2348424A1 (en) 2009-12-21 2011-07-27 Thomson Licensing Method for recommending content items to users
US8370902B2 (en) * 2010-01-29 2013-02-05 Microsoft Corporation Rescuing trusted nodes from filtering of untrusted network entities
US9098459B2 (en) * 2010-01-29 2015-08-04 Microsoft Technology Licensing, Llc Activity filtering based on trust ratings of network
US8959159B2 (en) 2010-04-01 2015-02-17 Microsoft Corporation Personalized email interactions applied to global filtering
SG177015A1 (en) * 2010-06-07 2012-01-30 Boxsentry Pte Ltd In situ correction of false-positive errors in messaging security systems (lagotto)
US8639773B2 (en) 2010-06-17 2014-01-28 Microsoft Corporation Discrepancy detection for web crawling
US8635289B2 (en) 2010-08-31 2014-01-21 Microsoft Corporation Adaptive electronic message scanning
US8464342B2 (en) * 2010-08-31 2013-06-11 Microsoft Corporation Adaptively selecting electronic message scanning rules
US10574630B2 (en) * 2011-02-15 2020-02-25 Webroot Inc. Methods and apparatus for malware threat research
CN102760130B (zh) * 2011-04-27 2016-11-16 腾讯科技(深圳)有限公司 处理信息的方法和装置
WO2013050837A1 (en) * 2011-05-06 2013-04-11 Quojax Corp. System and method for giving users control of information flow
RU2472308C1 (ru) * 2011-05-19 2013-01-10 Владимир Алексеевич Небольсин Предотвращение несанкционированной массовой рассылки электронной почты
US9519682B1 (en) 2011-05-26 2016-12-13 Yahoo! Inc. User trustworthiness
US9519883B2 (en) 2011-06-28 2016-12-13 Microsoft Technology Licensing, Llc Automatic project content suggestion
IL214360A (en) * 2011-07-31 2016-05-31 Verint Systems Ltd System and method for identifying main pages in decoding network traffic
US9442881B1 (en) 2011-08-31 2016-09-13 Yahoo! Inc. Anti-spam transient entity classification
US8682990B2 (en) 2011-10-03 2014-03-25 Microsoft Corporation Identifying first contact unsolicited communications
CN103166830B (zh) * 2011-12-14 2016-02-10 中国电信股份有限公司 一种智能选择训练样本的垃圾邮件过滤系统和方法
CN103220262A (zh) * 2012-01-19 2013-07-24 北京千橡网景科技发展有限公司 用于在网站中检测垃圾消息发送方的方法和设备
US9130778B2 (en) * 2012-01-25 2015-09-08 Bitdefender IPR Management Ltd. Systems and methods for spam detection using frequency spectra of character strings
RU2510982C2 (ru) 2012-04-06 2014-04-10 Закрытое акционерное общество "Лаборатория Касперского" Система и способ оценки пользователей для фильтрации сообщений
WO2013172742A1 (ru) * 2012-05-18 2013-11-21 Ikonomov Artashes Valeryevich Система коммуникационного взаимодействия
US9660947B1 (en) * 2012-07-27 2017-05-23 Intuit Inc. Method and apparatus for filtering undesirable content based on anti-tags
CN103595614A (zh) * 2012-08-16 2014-02-19 无锡华御信息技术有限公司 一种基于用户反馈的垃圾邮件检测方法
WO2014046974A2 (en) 2012-09-20 2014-03-27 Case Paul Sr Case secure computer architecture
CN102946383B (zh) * 2012-10-24 2015-11-18 珠海市君天电子科技有限公司 一种基于第三方公用接口的远程查询、修改病毒特征的方法和系统
CN103078753B (zh) * 2012-12-27 2016-07-13 华为技术有限公司 一种邮件的处理方法、装置和系统
US10346411B1 (en) * 2013-03-14 2019-07-09 Google Llc Automatic target audience suggestions when sharing in a social network
US20140279734A1 (en) * 2013-03-15 2014-09-18 Hewlett-Packard Development Company, L.P. Performing Cross-Validation Using Non-Randomly Selected Cases
US9027137B2 (en) 2013-04-22 2015-05-05 Imperva, Inc. Automatic generation of different attribute values for detecting a same type of web application layer attack
US10810654B1 (en) 2013-05-06 2020-10-20 Overstock.Com, Inc. System and method of mapping product attributes between different schemas
RU2541123C1 (ru) * 2013-06-06 2015-02-10 Закрытое акционерное общество "Лаборатория Касперского" Система и способ определения рейтинга электронных сообщений для борьбы со спамом
JP5572252B1 (ja) * 2013-09-11 2014-08-13 株式会社Ubic デジタル情報分析システム、デジタル情報分析方法およびデジタル情報分析プログラム
CN103607339B (zh) * 2013-09-11 2016-08-17 北京京东尚科信息技术有限公司 基于内容自动调节邮件发送策略的方法和系统
US20160248707A1 (en) * 2013-10-24 2016-08-25 Hewlett Packard Enterprise Development Lp Real-time inter-personal communication
CN103634201B (zh) * 2013-11-12 2017-09-12 新浪网技术(中国)有限公司 电子邮件系统及其隔离邮件处理方法
US11568280B1 (en) * 2019-01-23 2023-01-31 Amdocs Development Limited System, method, and computer program for parental controls and recommendations based on artificial intelligence
US10778618B2 (en) * 2014-01-09 2020-09-15 Oath Inc. Method and system for classifying man vs. machine generated e-mail
US9942182B2 (en) * 2014-11-17 2018-04-10 At&T Intellectual Property I, L.P. System and method for cloud based IP mobile messaging spam detection and defense
US9160680B1 (en) 2014-11-18 2015-10-13 Kaspersky Lab Zao System and method for dynamic network resource categorization re-assignment
JP2016191973A (ja) 2015-03-30 2016-11-10 日本電気株式会社 情報転送装置、学習システム、情報転送方法及びプログラム
US20170222960A1 (en) * 2016-02-01 2017-08-03 Linkedin Corporation Spam processing with continuous model training
GB201603118D0 (en) * 2016-02-23 2016-04-06 Eitc Holdings Ltd Reactive and pre-emptive security system based on choice theory
US10397256B2 (en) * 2016-06-13 2019-08-27 Microsoft Technology Licensing, Llc Spam classification system based on network flow data
US9749360B1 (en) * 2017-01-05 2017-08-29 KnowBe4, Inc. Systems and methods for performing simulated phishing attacks using social engineering indicators
WO2018128403A1 (en) * 2017-01-06 2018-07-12 Samsung Electronics Co., Ltd. Apparatus and method for processing content
KR20180081444A (ko) * 2017-01-06 2018-07-16 삼성전자주식회사 콘텐츠를 처리하는 장치 및 방법
EP3367261A1 (de) 2017-02-28 2018-08-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren zum klassifizieren von information und klassifizierungsprozessor
CN108694202A (zh) * 2017-04-10 2018-10-23 上海交通大学 基于分类算法的可配置垃圾邮件过滤系统及过滤方法
US20180337840A1 (en) * 2017-05-18 2018-11-22 Satori Worldwide, Llc System and method for testing filters for data streams in publisher-subscriber networks
US10839291B2 (en) * 2017-07-01 2020-11-17 Intel Corporation Hardened deep neural networks through training from adversarial misclassified data
US11232369B1 (en) * 2017-09-08 2022-01-25 Facebook, Inc. Training data quality for spam classification
US10635813B2 (en) 2017-10-06 2020-04-28 Sophos Limited Methods and apparatus for using machine learning on multiple file fragments to identify malware
WO2019089795A1 (en) * 2017-10-31 2019-05-09 Edgewave, Inc. Analysis and reporting of suspicious email
CN110089076B (zh) * 2017-11-22 2021-04-09 腾讯科技(深圳)有限公司 实现信息互动的方法和装置
US11003858B2 (en) * 2017-12-22 2021-05-11 Microsoft Technology Licensing, Llc AI system to determine actionable intent
CN108073718A (zh) * 2017-12-29 2018-05-25 长春理工大学 一种基于主动学习和否定选择的邮件二类分类算法
US11003774B2 (en) 2018-01-26 2021-05-11 Sophos Limited Methods and apparatus for detection of malicious documents using machine learning
US11941491B2 (en) 2018-01-31 2024-03-26 Sophos Limited Methods and apparatus for identifying an impact of a portion of a file on machine learning classification of malicious content
US12386796B2 (en) 2018-02-21 2025-08-12 Accenture Global Solutions Limited End-to-end identification of erroneous data using machine learning and similarity analysis
US11270205B2 (en) 2018-02-28 2022-03-08 Sophos Limited Methods and apparatus for identifying the shared importance of multiple nodes within a machine learning model for multiple tasks
US20190327127A1 (en) * 2018-04-23 2019-10-24 Entit Software Llc Information technology event management
KR102117543B1 (ko) * 2018-04-26 2020-06-01 주식회사 슈퍼브에이아이 컴퓨팅 장치 및 이를 이용한 인공 지능 기반 영상 처리 서비스 시스템
CN110213152B (zh) * 2018-05-02 2021-09-14 腾讯科技(深圳)有限公司 识别垃圾邮件的方法、装置、服务器及存储介质
US12159474B2 (en) * 2018-05-17 2024-12-03 Hasan Mirjan Methods and systems of handwriting recognition in virtualized-mail services
US20200371988A1 (en) * 2018-05-31 2020-11-26 Microsoft Technology Licensing, Llc Distributed Computing System with a Synthetic Data as a Service Frameset Package Generator
US11281996B2 (en) 2018-05-31 2022-03-22 Microsoft Technology Licensing, Llc Distributed computing system with a synthetic data as a service feedback loop engine
US11012500B2 (en) * 2018-07-27 2021-05-18 Vmware, Inc. Secure multi-directional data pipeline for data distribution systems
US11521108B2 (en) * 2018-07-30 2022-12-06 Microsoft Technology Licensing, Llc Privacy-preserving labeling and classification of email
WO2020030913A1 (en) 2018-08-07 2020-02-13 Sophos Limited Methods and apparatus for management of a machine-learning model to adapt to changes in landscape of potentially malicious artifacts
US10601868B2 (en) 2018-08-09 2020-03-24 Microsoft Technology Licensing, Llc Enhanced techniques for generating and deploying dynamic false user accounts
US11212312B2 (en) 2018-08-09 2021-12-28 Microsoft Technology Licensing, Llc Systems and methods for polluting phishing campaign responses
US10922097B2 (en) * 2018-09-18 2021-02-16 International Business Machines Corporation Collaborative model execution
US11947668B2 (en) 2018-10-12 2024-04-02 Sophos Limited Methods and apparatus for preserving information between layers within a neural network
CN109471920A (zh) * 2018-11-19 2019-03-15 北京锐安科技有限公司 一种文本标识的方法、装置、电子设备及存储介质
US11574052B2 (en) 2019-01-31 2023-02-07 Sophos Limited Methods and apparatus for using machine learning to detect potentially malicious obfuscated scripts
JP6992774B2 (ja) * 2019-02-13 2022-01-13 セイコーエプソン株式会社 情報処理装置、学習装置及び学習済モデル
CN111815306B (zh) * 2019-04-11 2024-03-26 深圳市家家分类科技有限公司 上门服务下单方法及相关设备
US20210027104A1 (en) * 2019-07-25 2021-01-28 Microsoft Technology Licensing, Llc Eyes-off annotated data collection framework for electronic messaging platforms
CN110598157B (zh) * 2019-09-20 2023-01-03 北京字节跳动网络技术有限公司 目标信息识别方法、装置、设备及存储介质
RU2717721C1 (ru) * 2019-09-20 2020-03-25 Антон Борисович Ёркин Способ создания автоматизированных систем управления информационной безопасностью и система для его осуществления
US11347572B2 (en) 2019-09-26 2022-05-31 Vmware, Inc. Methods and apparatus for data pipelines between cloud computing platforms
US11757816B1 (en) * 2019-11-11 2023-09-12 Trend Micro Incorporated Systems and methods for detecting scam emails
US11050881B1 (en) * 2020-04-20 2021-06-29 Avaya Management L.P. Message routing in a contact center
US11722503B2 (en) * 2020-05-05 2023-08-08 Accenture Global Solutions Limited Responsive privacy-preserving system for detecting email threats
US11438370B2 (en) * 2020-07-16 2022-09-06 Capital One Services, Llc Email security platform
US11966469B2 (en) 2020-10-29 2024-04-23 Proofpoint, Inc. Detecting and protecting against cybersecurity attacks using unprintable tracking characters
CN114827073A (zh) * 2021-01-29 2022-07-29 Zoom视频通讯公司 语音邮件垃圾信息检测
CN112883316B (zh) * 2021-03-02 2025-06-13 广州市百果园信息技术有限公司 数据处理方法、装置、电子设备及存储介质
US12010129B2 (en) 2021-04-23 2024-06-11 Sophos Limited Methods and apparatus for using machine learning to classify malicious infrastructure
CN114040409B (zh) * 2021-11-11 2023-06-06 中国联合网络通信集团有限公司 短信识别方法、装置、设备及存储介质
CN115952207B (zh) * 2022-12-21 2024-02-20 北京中睿天下信息技术有限公司 一种基于StarRocks数据库的威胁邮件存储方法和系统
CN117474510B (zh) * 2023-12-25 2024-11-26 彩讯科技股份有限公司 一种基于特征选择的垃圾邮件过滤方法
TWI883754B (zh) * 2023-12-29 2025-05-11 中華電信股份有限公司 以機器學習分析與封鎖異常來源網路位址之系統、方法及電腦可讀媒介
CN119172346A (zh) * 2024-09-24 2024-12-20 中国建设银行股份有限公司 电子邮件防护方法及装置
CN120151116B (zh) * 2025-05-15 2025-08-08 天翼安全科技有限公司 一种网络攻击的防御方法、装置、设备及存储介质

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6421709B1 (en) 1997-12-22 2002-07-16 Accepted Marketing, Inc. E-mail filter and method thereof

Family Cites Families (149)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8918553D0 (en) 1989-08-15 1989-09-27 Digital Equipment Int Message control system
US5758257A (en) 1994-11-29 1998-05-26 Herz; Frederick System and method for scheduling broadcast of and access to video programs and other data using customer profiles
US5619648A (en) 1994-11-30 1997-04-08 Lucent Technologies Inc. Message filtering techniques
US5638487A (en) 1994-12-30 1997-06-10 Purespeech, Inc. Automatic speech recognition
AU706649B2 (en) 1995-05-08 1999-06-17 Cranberry Properties, Llc Rules based electronic message management system
US5845077A (en) 1995-11-27 1998-12-01 Microsoft Corporation Method and system for identifying and obtaining computer software from a remote computer
US6101531A (en) 1995-12-19 2000-08-08 Motorola, Inc. System for communicating user-selected criteria filter prepared at wireless client to communication server for filtering data transferred from host to said wireless client
EP0870386B1 (de) * 1995-12-29 2000-04-12 Tixi.Com GmbH Telecommunication Verfahren und mikrocomputersystem zur automatischen, sicheren und direkten datenübertragung
US5704017A (en) 1996-02-16 1997-12-30 Microsoft Corporation Collaborative filtering utilizing a belief network
US5884033A (en) 1996-05-15 1999-03-16 Spyglass, Inc. Internet filtering system for filtering data transferred over the internet utilizing immediate and deferred filtering actions
US6453327B1 (en) * 1996-06-10 2002-09-17 Sun Microsystems, Inc. Method and apparatus for identifying and discarding junk electronic mail
US6072942A (en) 1996-09-18 2000-06-06 Secure Computing Corporation System and method of electronic mail filtering using interconnected nodes
DE69607166T2 (de) * 1996-10-15 2000-12-14 Stmicroelectronics S.R.L., Agrate Brianza Elektronische Anordnung zur Durchführung von Konvolutionsoperationen
US5905859A (en) 1997-01-09 1999-05-18 International Business Machines Corporation Managed network device security method and apparatus
US5805801A (en) 1997-01-09 1998-09-08 International Business Machines Corporation System and method for detecting and preventing security
US6122657A (en) 1997-02-04 2000-09-19 Networks Associates, Inc. Internet computer system with methods for dynamic filtering of hypertext tags and content
US6742047B1 (en) * 1997-03-27 2004-05-25 Intel Corporation Method and apparatus for dynamically filtering network content
DE69724235T2 (de) 1997-05-28 2004-02-26 Siemens Ag Computersystem und Verfahren zum Schutz von Software
US20050081059A1 (en) 1997-07-24 2005-04-14 Bandini Jean-Christophe Denis Method and system for e-mail filtering
US7117358B2 (en) 1997-07-24 2006-10-03 Tumbleweed Communications Corp. Method and system for filtering communication
US6199102B1 (en) 1997-08-26 2001-03-06 Christopher Alan Cobb Method and system for filtering electronic messages
US6041324A (en) 1997-11-17 2000-03-21 International Business Machines Corporation System and method for identifying valid portion of computer resource identifier
US6003027A (en) 1997-11-21 1999-12-14 International Business Machines Corporation System and method for determining confidence levels for the results of a categorization system
US6393465B2 (en) 1997-11-25 2002-05-21 Nixmail Corporation Junk electronic mail detector and eliminator
US6351740B1 (en) 1997-12-01 2002-02-26 The Board Of Trustees Of The Leland Stanford Junior University Method and system for training dynamic nonlinear adaptive filters which have embedded memory
US6023723A (en) 1997-12-22 2000-02-08 Accepted Marketing, Inc. Method and system for filtering unwanted junk e-mail utilizing a plurality of filtering mechanisms
US6052709A (en) 1997-12-23 2000-04-18 Bright Light Technologies, Inc. Apparatus and method for controlling delivery of unsolicited electronic mail
GB2334116A (en) 1998-02-04 1999-08-11 Ibm Scheduling and dispatching queued client requests within a server computer
US6484261B1 (en) 1998-02-17 2002-11-19 Cisco Technology, Inc. Graphical network security policy management
US6504941B2 (en) 1998-04-30 2003-01-07 Hewlett-Packard Company Method and apparatus for digital watermarking of images
US6314421B1 (en) * 1998-05-12 2001-11-06 David M. Sharnoff Method and apparatus for indexing documents for message filtering
US6074942A (en) * 1998-06-03 2000-06-13 Worldwide Semiconductor Manufacturing Corporation Method for forming a dual damascene contact and interconnect
US6308273B1 (en) 1998-06-12 2001-10-23 Microsoft Corporation Method and system of security location discrimination
US6161130A (en) 1998-06-23 2000-12-12 Microsoft Corporation Technique which utilizes a probabilistic classifier to detect "junk" e-mail by automatically updating a training and re-training the classifier based on the updated training set
US6192360B1 (en) 1998-06-23 2001-02-20 Microsoft Corporation Methods and apparatus for classifying text and for building a text classifier
US7275082B2 (en) 1998-07-15 2007-09-25 Pang Stephen Y F System for policing junk e-mail messages
US6167434A (en) * 1998-07-15 2000-12-26 Pang; Stephen Y. Computer code for removing junk e-mail messages
US6112227A (en) 1998-08-06 2000-08-29 Heiner; Jeffrey Nelson Filter-in method for reducing junk e-mail
US6434600B2 (en) 1998-09-15 2002-08-13 Microsoft Corporation Methods and systems for securely delivering electronic mail to hosts having dynamic IP addresses
US6732273B1 (en) 1998-10-21 2004-05-04 Lucent Technologies Inc. Priority and security coding system for electronic mail messages
GB2343529B (en) 1998-11-07 2003-06-11 Ibm Filtering incoming e-mail
US6546416B1 (en) 1998-12-09 2003-04-08 Infoseek Corporation Method and system for selectively blocking delivery of bulk electronic mail
US6477531B1 (en) * 1998-12-18 2002-11-05 Motive Communications, Inc. Technical support chain automation with guided self-help capability using active content
US6643686B1 (en) * 1998-12-18 2003-11-04 At&T Corp. System and method for counteracting message filtering
US6857051B2 (en) 1998-12-23 2005-02-15 Intel Corporation Method and apparatus for maintaining cache coherence in a computer system
US6615242B1 (en) 1998-12-28 2003-09-02 At&T Corp. Automatic uniform resource locator-based message filter
US6266692B1 (en) 1999-01-04 2001-07-24 International Business Machines Corporation Method for blocking all unwanted e-mail (SPAM) using a header-based password
US6330590B1 (en) 1999-01-05 2001-12-11 William D. Cotten Preventing delivery of unwanted bulk e-mail
US6424997B1 (en) 1999-01-27 2002-07-23 International Business Machines Corporation Machine learning based electronic messaging system
US6449634B1 (en) 1999-01-29 2002-09-10 Digital Impact, Inc. Method and system for remotely sensing the file formats processed by an E-mail client
US6477551B1 (en) 1999-02-16 2002-11-05 International Business Machines Corporation Interactive electronic messaging system
US7032030B1 (en) 1999-03-11 2006-04-18 John David Codignotto Message publishing system and method
US6732149B1 (en) 1999-04-09 2004-05-04 International Business Machines Corporation System and method for hindering undesired transmission or receipt of electronic messages
US6370526B1 (en) 1999-05-18 2002-04-09 International Business Machines Corporation Self-adaptive method and system for providing a user-preferred ranking order of object sets
DE19923093A1 (de) * 1999-05-20 2000-11-23 Mann & Hummel Filter Liquid separator, in particular for cleaning crankcase gases, with a separator cartridge
US6592627B1 (en) 1999-06-10 2003-07-15 International Business Machines Corporation System and method for organizing repositories of semi-structured documents such as email
US6449636B1 (en) 1999-09-08 2002-09-10 Nortel Networks Limited System and method for creating a dynamic data file from collected and filtered web pages
US6321267B1 (en) * 1999-11-23 2001-11-20 Escom Corporation Method and apparatus for filtering junk email
US6728690B1 (en) 1999-11-23 2004-04-27 Microsoft Corporation Classification system trainer employing maximum margin back-propagation with probabilistic outputs
US6633855B1 (en) 2000-01-06 2003-10-14 International Business Machines Corporation Method, system, and program for filtering content using neural networks
US6701440B1 (en) * 2000-01-06 2004-03-02 Networks Associates Technology, Inc. Method and system for protecting a computer using a remote e-mail scanning device
US7822977B2 (en) 2000-02-08 2010-10-26 Katsikas Peter L System for eliminating unauthorized electronic mail
US6691156B1 (en) 2000-03-10 2004-02-10 International Business Machines Corporation Method for restricting delivery of unsolicited E-mail
US6684201B1 (en) 2000-03-31 2004-01-27 Microsoft Corporation Linguistic disambiguation system and method using string-based pattern training to learn to resolve ambiguity sites
RU2179738C2 (ru) * 2000-04-24 2002-02-20 Nikita Olegovich Vilchevsky Method for detecting remote attacks in a computer network
US7210099B2 (en) 2000-06-12 2007-04-24 Softview Llc Resolution independent vector display of internet content
US20040073617A1 (en) * 2000-06-19 2004-04-15 Milliken Walter Clark Hash-based systems and methods for detecting and preventing transmission of unwanted e-mail
JP2004531780A (ja) 2000-06-22 2004-10-14 Microsoft Corporation Distributed computing services platform
US7003555B1 (en) 2000-06-23 2006-02-21 Cloudshield Technologies, Inc. Apparatus and method for domain name resolution
US6779021B1 (en) 2000-07-28 2004-08-17 International Business Machines Corporation Method and system for predicting and managing undesirable electronic mail
US6842773B1 (en) * 2000-08-24 2005-01-11 Yahoo! Inc. Processing of textual electronic communication distributed in bulk
US6971023B1 (en) 2000-10-03 2005-11-29 Mcafee, Inc. Authorizing an additional computer program module for use with a core computer program
US6757830B1 (en) 2000-10-03 2004-06-29 Networks Associates Technology, Inc. Detecting unwanted properties in received email messages
US6748422B2 (en) 2000-10-19 2004-06-08 Ebay Inc. System and method to control sending of unsolicited communications relating to a plurality of listings in a network-based commerce facility
US7243125B2 (en) 2000-12-08 2007-07-10 Xerox Corporation Method and apparatus for presenting e-mail threads as semi-connected text by removing redundant material
JP3554271B2 (ja) 2000-12-13 2004-08-18 Panasonic Communications Co., Ltd. Information communication device
US6775704B1 (en) 2000-12-28 2004-08-10 Networks Associates Technology, Inc. System and method for preventing a spoofed remote procedure call denial of service attack in a networked computing environment
US20050159136A1 (en) 2000-12-29 2005-07-21 Andrew Rouse System and method for providing wireless device access
US20020129111A1 (en) * 2001-01-15 2002-09-12 Cooper Gerald M. Filtering unsolicited email
US8219620B2 (en) * 2001-02-20 2012-07-10 Mcafee, Inc. Unwanted e-mail filtering system including voting feedback
US20020124025A1 (en) 2001-03-01 2002-09-05 International Business Machines Corporation Scanning and outputting textual information in web page images
GB2373130B (en) 2001-03-05 2004-09-22 Messagelabs Ltd Method of, and system for, processing email in particular to detect unsolicited bulk email
US6928465B2 (en) 2001-03-16 2005-08-09 Wells Fargo Bank, N.A. Redundant email address detection and capture system
US6751348B2 (en) * 2001-03-29 2004-06-15 Fotonation Holdings, Llc Automated detection of pornographic images
US8949878B2 (en) 2001-03-30 2015-02-03 Funai Electric Co., Ltd. System for parental control in video programs based on multimedia content information
US6920477B2 (en) 2001-04-06 2005-07-19 President And Fellows Of Harvard College Distributed, compressed Bloom filter Web cache server
US8095597B2 (en) 2001-05-01 2012-01-10 Aol Inc. Method and system of automating data capture from electronic correspondence
US7188106B2 (en) 2001-05-01 2007-03-06 International Business Machines Corporation System and method for aggregating ranking results from various sources to improve the results of web searching
US7103599B2 (en) 2001-05-15 2006-09-05 Verizon Laboratories Inc. Parsing of nested internet electronic mail documents
US6768991B2 (en) * 2001-05-15 2004-07-27 Networks Associates Technology, Inc. Searching for sequences of character data
US20030009698A1 (en) 2001-05-30 2003-01-09 Cascadezone, Inc. Spam avenger
US7502829B2 (en) 2001-06-21 2009-03-10 Cybersoft, Inc. Apparatus, methods and articles of manufacture for intercepting, examining and controlling code, data and files and their transfer
US20030009495A1 (en) 2001-06-29 2003-01-09 Akli Adjaoute Systems and methods for filtering electronic content
US7328250B2 (en) * 2001-06-29 2008-02-05 Nokia, Inc. Apparatus and method for handling electronic mail
TW533380B (en) * 2001-07-23 2003-05-21 Ulead Systems Inc Group image detecting method
US6769016B2 (en) 2001-07-26 2004-07-27 Networks Associates Technology, Inc. Intelligent SPAM detection system using an updateable neural analysis engine
US7146402B2 (en) 2001-08-31 2006-12-05 Sendmail, Inc. E-mail system providing filtering methodology on a per-domain basis
US20060036701A1 (en) 2001-11-20 2006-02-16 Bulfer Andrew F Messaging system having message filtering and access control
CN1350247A (zh) * 2001-12-03 2002-05-22 Shanghai Jiao Tong University Supervision system for email content
WO2003054764A1 (en) * 2001-12-13 2003-07-03 Youn-Sook Lee System and method for preventing spam mail
US8561167B2 (en) 2002-03-08 2013-10-15 Mcafee, Inc. Web reputation scoring
US6785820B1 (en) 2002-04-02 2004-08-31 Networks Associates Technology, Inc. System, method and computer program product for conditionally updating a security program
US20030204569A1 (en) 2002-04-29 2003-10-30 Michael R. Andrews Method and apparatus for filtering e-mail infected with a previously unidentified computer virus
US20030229672A1 (en) * 2002-06-05 2003-12-11 Kohn Daniel Mark Enforceable spam identification and reduction system, and method thereof
US8046832B2 (en) 2002-06-26 2011-10-25 Microsoft Corporation Spam detector with challenges
US8924484B2 (en) 2002-07-16 2014-12-30 Sonicwall, Inc. Active e-mail filter with challenge-response
US7363490B2 (en) 2002-09-12 2008-04-22 International Business Machines Corporation Method and system for selective email acceptance via encoded email identifiers
US7188369B2 (en) 2002-10-03 2007-03-06 Trend Micro, Inc. System and method having an antivirus virtual scanning processor with plug-in functionalities
US20040083270A1 (en) 2002-10-23 2004-04-29 David Heckerman Method and system for identifying junk e-mail
US7149801B2 (en) 2002-11-08 2006-12-12 Microsoft Corporation Memory bound functions for spam deterrence and the like
US6732157B1 (en) * 2002-12-13 2004-05-04 Networks Associates Technology, Inc. Comprehensive anti-spam system, method, and computer program product for filtering unwanted e-mail messages
WO2004059506A1 (en) 2002-12-26 2004-07-15 Commtouch Software Ltd. Detection and prevention of spam
US7533148B2 (en) 2003-01-09 2009-05-12 Microsoft Corporation Framework to enable integration of anti-spam technologies
US7171450B2 (en) 2003-01-09 2007-01-30 Microsoft Corporation Framework to enable integration of anti-spam technologies
US7725544B2 (en) 2003-01-24 2010-05-25 Aol Inc. Group based spam classification
US7249162B2 (en) 2003-02-25 2007-07-24 Microsoft Corporation Adaptive junk message filtering system
US7543053B2 (en) 2003-03-03 2009-06-02 Microsoft Corporation Intelligent quarantining for spam prevention
US7219148B2 (en) * 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention
US7366761B2 (en) 2003-10-09 2008-04-29 Abaca Technology Corporation Method for creating a whitelist for processing e-mails
US20040177120A1 (en) 2003-03-07 2004-09-09 Kirsch Steven T. Method for filtering e-mail messages
US7320020B2 (en) 2003-04-17 2008-01-15 The Go Daddy Group, Inc. Mail server probability spam filter
US7653698B2 (en) 2003-05-29 2010-01-26 Sonicwall, Inc. Identifying e-mail messages from allowed senders
US7293063B1 (en) 2003-06-04 2007-11-06 Symantec Corporation System utilizing updated spam signatures for performing secondary signature-based analysis of a held e-mail to improve spam email detection
US7263607B2 (en) 2003-06-12 2007-08-28 Microsoft Corporation Categorizing electronic messages based on trust between electronic messaging entities
US8533270B2 (en) 2003-06-23 2013-09-10 Microsoft Corporation Advanced spam detection techniques
US7155484B2 (en) 2003-06-30 2006-12-26 Bellsouth Intellectual Property Corporation Filtering email messages corresponding to undesirable geographical regions
US7051077B2 (en) 2003-06-30 2006-05-23 Mx Logic, Inc. Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
US20050015455A1 (en) * 2003-07-18 2005-01-20 Liu Gary G. SPAM processing system and methods including shared information among plural SPAM filters
US20050060643A1 (en) 2003-08-25 2005-03-17 Miavia, Inc. Document similarity detection and classification system
US20050050150A1 (en) 2003-08-29 2005-03-03 Sam Dinkin Filter, system and method for filtering an electronic mail message
US7451487B2 (en) 2003-09-08 2008-11-11 Sonicwall, Inc. Fraudulent message detection
US7257564B2 (en) 2003-10-03 2007-08-14 Tumbleweed Communications Corp. Dynamic message filtering
US7451184B2 (en) 2003-10-14 2008-11-11 At&T Intellectual Property I, L.P. Child protection from harmful email
US7930351B2 (en) 2003-10-14 2011-04-19 At&T Intellectual Property I, L.P. Identifying undesired email messages having attachments
US7610341B2 (en) 2003-10-14 2009-10-27 At&T Intellectual Property I, L.P. Filtered email differentiation
US7373385B2 (en) 2003-11-03 2008-05-13 Cloudmark, Inc. Method and apparatus to block spam based on spam reports from a community of users
US20050102366A1 (en) 2003-11-07 2005-05-12 Kirsch Steven T. E-mail filter employing adaptive ruleset
US20050120019A1 (en) 2003-11-29 2005-06-02 International Business Machines Corporation Method and apparatus for the automatic identification of unsolicited e-mail messages (SPAM)
US7359941B2 (en) 2004-01-08 2008-04-15 International Business Machines Corporation Method and apparatus for filtering spam email
US7590694B2 (en) 2004-01-16 2009-09-15 Gozoom.Com, Inc. System for determining degrees of similarity in email message information
US7693943B2 (en) 2004-01-23 2010-04-06 International Business Machines Corporation Classification of electronic mail into multiple directories based upon their spam-like properties
US20050182735A1 (en) 2004-02-12 2005-08-18 Zager Robert P. Method and apparatus for implementing a micropayment system to control e-mail spam
WO2005082101A2 (en) 2004-02-26 2005-09-09 Truefire, Inc. Systems and methods for producing, managing, delivering, retrieving, and/or tracking permission based communications
US20050204159A1 (en) 2004-03-09 2005-09-15 International Business Machines Corporation System, method and computer program to block spam
US7627670B2 (en) 2004-04-29 2009-12-01 International Business Machines Corporation Method and apparatus for scoring unsolicited e-mail
US7155243B2 (en) 2004-06-15 2006-12-26 Tekelec Methods, systems, and computer program products for content-based screening of messaging service messages
US20060123083A1 (en) 2004-12-03 2006-06-08 Xerox Corporation Adaptive spam message detector
US7937480B2 (en) 2005-06-02 2011-05-03 Mcafee, Inc. Aggregation of reputation data
US7971137B2 (en) 2005-12-14 2011-06-28 Google Inc. Detecting and rejecting annoying documents

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6421709B1 (en) 1997-12-22 2002-07-16 Accepted Marketing, Inc. E-mail filter and method thereof

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005043351A2 (en) 2003-11-03 2005-05-12 Cloudmark, Inc. Method and apparatus to block spam based on spam reports from a community of users
EP1680728A4 (en) * 2003-11-03 2011-06-15 Cloudmark Inc Method and device for blocking spam based on spam reports from a community of users
JP2008547067A (ja) * 2005-05-05 2008-12-25 Cisco IronPort Systems LLC Detection of unwanted electronic mail messages based on probabilistic analysis of referenced resources
JP4880675B2 (ja) * 2005-05-05 2012-02-22 Cisco IronPort Systems LLC Detection of unwanted electronic mail messages based on probabilistic analysis of referenced resources
WO2007036152A1 (fr) * 2005-09-27 2007-04-05 Tencent Technology (Shenzhen) Company Limited System and method for filtering spam, email client terminal, and email server
US9276930B2 (en) 2011-10-19 2016-03-01 Artashes Valeryevich Ikonomov Device for controlling network user data
US10115084B2 (en) 2012-10-10 2018-10-30 Artashes Valeryevich Ikonomov Electronic payment system
US20200351696A1 (en) 2018-01-22 2020-11-05 Beijing Xiaomi Mobile Software Co., Ltd. Method, device and system for minimization of drive test
RU2740072C1 (ru) * 2018-01-22 2021-01-11 Beijing Xiaomi Mobile Software Co., Ltd. Method, device and system for minimization of drive test measurement
US11765611B2 (en) 2018-01-22 2023-09-19 Beijing Xiaomi Mobile Software Co., Ltd. Method, device and system for minimization of drive test
US11317302B2 (en) 2018-02-08 2022-04-26 Beijing Xiaomi Mobile Software Co., Ltd. Minimization of drive test configuration method and apparatus

Also Published As

Publication number Publication date
ZA200506085B (en) 2006-11-29
TW200507576A (en) 2005-02-16
MXPA05008303A (es) 2006-03-21
CO6141494A2 (es) 2010-03-19
TW201036399A (en) 2010-10-01
CA2799691A1 (en) 2004-09-16
KR101021395B1 (ko) 2011-03-14
JP2006521635A (ja) 2006-09-21
CN1809821A (zh) 2006-07-26
RU2005124681A (ru) 2006-01-20
IL170115A (en) 2010-12-30
WO2004079514A3 (en) 2006-03-30
JP4828411B2 (ja) 2011-11-30
AU2004216772A2 (en) 2004-09-16
NO20053733L (no) 2005-08-24
CN100472484C (zh) 2009-03-25
US7558832B2 (en) 2009-07-07
NZ541628A (en) 2007-12-21
CA2513967C (en) 2014-04-15
RU2331913C2 (ru) 2008-08-20
TWI331869B (en) 2010-10-11
EP1599781A4 (en) 2011-12-07
US7219148B2 (en) 2007-05-15
AU2004216772B2 (en) 2009-12-17
CA2799691C (en) 2014-09-16
EP1599781A2 (en) 2005-11-30
IL206121A (en) 2012-03-29
US20070208856A1 (en) 2007-09-06
US20040177110A1 (en) 2004-09-09
EG23988A (en) 2008-03-05
CA2513967A1 (en) 2004-09-16
AU2004216772A1 (en) 2004-09-16
KR20060006769A (ko) 2006-01-19
BRPI0407045A (pt) 2006-01-17

Similar Documents

Publication Publication Date Title
CA2513967C (en) Feedback loop for spam prevention
US7543053B2 (en) Intelligent quarantining for spam prevention
US8635690B2 (en) Reputation based message processing
US7660865B2 (en) Spam filtering with probabilistic secure hashes
US7610344B2 (en) Sender reputations for spam prevention
US20140082726A1 (en) Real-time classification of email message traffic
US8135780B2 (en) Email safety determination
US8291024B1 (en) Statistical spamming behavior analysis on mail clusters
MXPA04005335A (es) Origination/destination features and lists for prevention of unsolicited electronic mail (spam)
US8135778B1 (en) Method and apparatus for certifying mass emailings
Isacenkova et al. Measurement and evaluation of a real world deployment of a challenge-response spam filter
JP4839318B2 (ja) Message profiling system and method
Johansen Email Communities of Interest and Their Application
Islam Designing Spam Mail Filtering Using Data Mining by Analyzing User and Email Behavior

Legal Events

Date Code Title Description
AK    Designated states. Kind code of ref document: A2. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW
AL    Designated countries for regional patents. Kind code of ref document: A2. Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
121   Ep: the EPO has been informed by WIPO that EP was designated in this application
WWE   Wipo information: entry into national phase. Ref document number: 1-2005-501431. Country of ref document: PH
WWE   Wipo information: entry into national phase. Ref document number: 1020057013142. Country of ref document: KR
WWE   Wipo information: entry into national phase. Ref document number: 2513967. Country of ref document: CA
WWE   Wipo information: entry into national phase. Ref document number: 3244/DELNP/2005. Country of ref document: IN
WWE   Wipo information: entry into national phase. Ref document number: 2004216772. Country of ref document: AU
ENP   Entry into the national phase. Ref document number: 2005124681. Country of ref document: RU. Kind code of ref document: A
WWE   Wipo information: entry into national phase. Ref document number: 200506085. Country of ref document: ZA
WWE   Wipo information: entry into national phase. Ref document numbers: 541628 (NZ); 2006508818 (JP); 05076623 (CO)
WWE   Wipo information: entry into national phase. Ref document numbers: 170115 (IL); PA/a/2005/008303 (MX)
WWE   Wipo information: entry into national phase. Ref document number: 20048037693. Country of ref document: CN
ENP   Entry into the national phase. Ref document number: 2004216772. Country of ref document: AU. Date of ref document: 20040225. Kind code of ref document: A
WWP   Wipo information: published in national office. Ref document number: 2004216772. Country of ref document: AU
WWE   Wipo information: entry into national phase. Ref document number: 2004714607. Country of ref document: EP
DPEN  Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101)
WWP   Wipo information: published in national office. Ref document number: 2004714607. Country of ref document: EP
ENP   Entry into the national phase. Ref document number: PI0407045. Country of ref document: BR
WWP   Wipo information: published in national office. Ref document number: 1020057013142. Country of ref document: KR