WO2005086437A1 - A method and system for blocking unwanted unsolicited information - Google Patents

A method and system for blocking unwanted unsolicited information

Info

Publication number
WO2005086437A1
Authority
WO
Grant status
Application
Prior art keywords
abuse
information
device
sending
profile
Application number
PCT/EP2005/002164
Other languages
French (fr)
Inventor
Franklin Selgert
Original Assignee
Koninklijke Kpn N.V.

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - Arrangements for user-to-user messaging in packet-switching networks, e.g. e-mail or instant messages
    • H04L51/12 - Arrangements for user-to-user messaging in packet-switching networks with filtering and selective blocking capabilities

Abstract

A method and a system for blocking unwanted unsolicited information, also known as spam, which is sent via a data network to a device. The information comprises a mark, which comprises an identification of a sending party and a content class identification. The invention makes it possible to check the mark using a profile of a user of the device, to create an abuse-report and to send the abuse-report.

Description

Title

A method and system for blocking unwanted unsolicited information

Field of the invention

The invention relates to a method and a system for blocking unwanted unsolicited information. More specifically, the invention relates to blocking and reporting spam sent via a data network to a device.

Background of the invention

There is an increasing amount of digital information being sent unsolicited to computers connected to the Internet or to mobile handsets connected to mobile data networks. In the future unsolicited information can be sent to even more types of devices connected to any data network. The digital information often consists of advertisements, but is also misused for spreading viruses or other malicious content. Unwanted unsolicited information is often referred to as "spam".

Blocking of spam is becoming a high priority. Currently most blocking is achieved by screening the information for specific words, blacklisting (disallowing) of sending parties, whitelisting (allowing) of sending parties and/or virus filtering. In practice, however, a lot of unwanted unsolicited information still gets delivered.

A method and system for filtering spam is described in US2003/0009698 "spam avenger". This method and system is applicable to email and is based on whitelisting/blacklisting of sending parties. Whenever a message is first received from an unapproved sender, a confirmation request email is sent to the sender's email address requesting the sender to confirm its existence and identity. Spammers typically don't receive, and can't handle, reply emails. Therefore, until the unapproved sender replies to the confirmation request email, electronic messages received from the unapproved sender are treated as spam. A system and method facilitating detection of unsolicited e-mail messages with challenges is described in EP1376427 "spam detector with challenges". This system and method includes an e-mail component and a challenge component. The system can receive e-mail messages and associated probabilities that the e-mail messages are spam. Based, at least in part, upon the associated probability, the system can send a challenge to a sender of an e-mail message. The challenge can be embedded code, a computational challenge, a human challenge and/or a micro payment request. Based, at least in part, upon a response to the challenge (or lack of response), the challenge component can modify the associated probability and/or delete the e-mail message.

A method for enforceably reducing spam is described in WO03/105008 "enforceable spam identification and reduction system, and method thereof". This method checks an email message for a specific mark and if the specific mark is present, tags this email message as non-spam. The tagged email is displayed to a user of a local computer. The specific mark may be displayed with the tagged email. The mark is part of an enforceable anti-spam email header field comprising a field name and a field body. The field body comprises the mark, which is used for identification or indication of ownership. The mark is legally reserved for the exclusive use of the owner of the mark. If the user identifies the tagged email as spam, the tagged email is sent to a remote enforcement computer.

A method and system for selecting and removing spam from a stream of mail received by a mail service or mail client is described in WO01/53965 "e-mail spam filter". Here spam is attracted by creating a series of e-mail addresses held by a spam attractor site connected to the Internet SMTP mail network, which are used to engage in high risk activities known to attract spam. All spam received is funneled and subjected to the same processing, regardless of which of the e-mail addresses it was actually sent to. All mail received at these e-mail addresses must be spam, as the addresses are not provided to any other mail source. The mail is received, and a fingerprint is calculated for each message. The central fingerprint database contains a relational table and one row for each unique item of spam detected. A spam filter is associated with each mail gateway and when an e-mail is received, the mail gateway calculates a mail signature using the fingerprint algorithm, the signature is looked up in the local database, and if found, the mail is discarded.

An Internet member's license system is known from an article of Nova Spivack, which can be found on the website "http://novaspivack.typepad.com" on page "/nova_spivacks_weblog/2004/01/a_new_solution_.html". The IML system is described as follows. A central IML registry that runs as a non-profit would issue IML certificates. Getting an IML is similar to getting a driver's license from the Registry of Motor Vehicles: email service providers may apply for these certificates and once they get them, they may then issue sub-certificates off of their identities to their members. Email service providers (such as ISPs, enterprises, etc.) automatically append IMLs onto the end of every outgoing message they route (as ASCII text or a MIME attachment) that authenticate the identity of the service provider as well as the sender of the message. Alternatively, individuals can apply for IMLs directly from the IML Registry and/or they can just add their IMLs into their sig files if their email providers are not IML-compliant yet. IMLs can also be put into the metadata for content that individuals and services post onto the Net in order to authenticate that content as "not spam". Every IML starts with a certain number of "points" on it, just like a driver's license. Individual email client applications (whether Webmail or desktop mail clients) can then simply screen each incoming message for a valid IML certificate. Messages with valid IMLs are accepted and the senders may also be automatically whitelisted; incoming messages without valid IMLs can be blocked or go into a "suspect e-mail folder" and can trigger a bounce message informing the sender that their IML was missing or invalid. If you get a message from someone that you feel is "spam" you can simply mail it to the IML registry to report abuse. If a certain number of "spam" citations are filed against any IML holder per unit of time, the holder gets a "traffic violation citation", in other words a ticket.
The "cost" of the ticket depends on how far over the "spam limit" they are. The ticket deducts points from their IML certificate, based on the cost, as well as from the IML certificate for their ISP. Lost points can be regained with good behavior (every IML earns back 1 point per month), or from community service (to the IML Registry perhaps), or by paying a fine to the IML Registry. The IML Registry can dynamically re-issue new IMLs to ISPs and their users based on their current status. So for example, if a user gets a ticket they get a new IML that replaces their previous one and which encodes the new number of points remaining on their license. Because each IML holder's current license points can be encoded in their IML (cryptographically), spam filters and recipient apps can not only check for valid IMLs on incoming mail, but if they wish they can even prioritize or screen messages by the number of points on the IMLs of the senders and their ISPs. Those who have very few points may be considered to be "on probation" or "likely to be spam". If any party loses all their points they may still send email, but their IML certificate will reflect that they have no points on their license. Thus IML-compliant spam filters can simply screen them out or treat them as suspect senders. The reputation of email providers is linked to the reputations of their members, and vice-versa. This helps to reinforce good behavior at both levels of the community (ISPs and their users, mutually; a nice cybernetic feedback loop). Thus if an email provider allows misuse they could lose points on their provider-IML, which in turn is then inherited down to the member-IMLs of all their members (because their members' IMLs are sub-certificates of their certificate). So as an email sender, I will want to use an ISP that has a sterling IML, because I don't want my own reputation tarnished.
Similarly, as an ISP I will want to be careful about not routing spam: if I route spam for my members it harms my reputation, which means messages from my service may not be accepted by others, and that may cause my members to go elsewhere. Therefore, as an ISP, my policy may be that I only allow members with IMLs that have a certain number of points: if one of my members' IML goes below a certain number of points I may kick them out of my service. The central IML Registry can charge a modest fee to applicants to get an IML and renew it every year, and this can support the cost of running the Registry as a non-profit. Furthermore, the IML Registry can open up an API that lets other applications query it by inputting an IML certificate to it in order to get the current status of that license (e.g. whether or not it is valid and the number of points remaining on it). Ultimately this entire infrastructure could be decentralized such that every ISP could run their own sub-Registry. Thus the central Registry would issue and maintain IMLs for ISPs, and then ISPs would issue and maintain the IMLs for their members. This is similar to the global DNS infrastructure.

A method for identifying the original sender of an email message is known from "Caller ID for E-Mail" by Microsoft. The Caller ID for E-Mail method aims to eliminate domain spoofing and increase the effectiveness of spam filters by verifying what domain a message came from, much like how caller ID for telephones shows the phone number of the person calling. The method involves three steps to authenticate a sender: (1) e-mail senders, large or small, publish the Internet protocol (IP) addresses of their outbound e-mail servers in the Domain Name System (DNS); (2) recipient e-mail systems examine each message to determine the purported responsible domain (i.e., the Internet domain that purports to have sent the message); (3) recipient e-mail systems query the DNS for the list of outbound e-mail server IP addresses of the purported responsible domain. They then check whether the IP address from which the message was received is on that list. If no match is found, the message has most likely been spoofed and will be rejected.
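The three steps above can be sketched as a small check. This is a minimal illustration of step (3) only: the DNS lookup of steps (1) and (2) is stubbed out with a dictionary, and all domain names and addresses are illustrative assumptions, not values from the patent or from Microsoft's specification.

```python
# Sketch of the Caller ID for E-Mail verification step: compare the IP
# address a message arrived from against the outbound server addresses
# the purported responsible domain has published.

# Stub for the DNS-published outbound server lists (illustrative data).
PUBLISHED_OUTBOUND_IPS = {
    "example.com": {"192.0.2.10", "192.0.2.11"},
}

def is_spoofed(purported_domain: str, connecting_ip: str) -> bool:
    """Return True when the message most likely did not come from the
    domain it claims to come from (no match found in the published list)."""
    allowed = PUBLISHED_OUTBOUND_IPS.get(purported_domain, set())
    return connecting_ip not in allowed

print(is_spoofed("example.com", "192.0.2.10"))   # False: IP is on the list
print(is_spoofed("example.com", "203.0.113.5"))  # True: likely spoofed
```

A domain that has published no list at all fails the check for every connecting address, which matches the "reject when no match is found" behavior described above.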

All solutions mentioned above have in common that they are used for blocking unwanted unsolicited email messages only.

Furthermore all solutions use the criteria that an email message is either spam or not spam, a mere binary rating.

A multiple rating method is known from "Kijkwijzer" (http://www.kijkwijzer.nl), which is used in The Netherlands for rating television programs, cinema films and video. With Kijkwijzer, parents and educators can see at a single glance whether a television program, cinema film or video may be harmful to children. Kijkwijzer gives an age recommendation - All Ages, Age 6, Age 12 and Age 16 - and pictograms indicating which aspects of the content have led to the production being not recommended for particular age groups. The aspects distinguished are violence, fear, sex, discrimination, drug and/or alcohol abuse, and coarse language. The Kijkwijzer pictograms appear in advertisements, newspaper program listings, listings magazines, cinema listings and the packaging of video cassettes and DVDs. They also appear on screen at the beginning of a TV program, video or movie. Productions are rated by having a questionnaire filled in, after which coders (people) determine, based on the completed questionnaire, which classification a production should be given.

The usage of a multiple rating method as described above is limited to information that can be analyzed by people prior to being shown.

Problem definition

The prior art fails to provide a solution for blocking and reporting unwanted unsolicited information of any kind using multiple rating.

Aim of the invention

It is the aim of the invention to provide a method and a system for blocking unwanted unsolicited information of any kind using multiple rating. Furthermore it is an aim of the invention to be able to report that a party sends unwanted unsolicited information.

Summary of the invention

The present invention provides a solution for blocking and reporting unwanted unsolicited information of any kind using multiple rating.

According to an aspect of the invention a method for blocking information sent via a data network to a device is provided. The information comprises a mark. The mark can comprise an identification of a sending party and a content class identification. The method can comprise the steps of checking the mark using a profile of a user of the device, creating an abuse-report and sending the abuse-report. The method can further comprise the step of verifying the identification of the sending party through a server of a certified licensee organization. This advantageously makes it possible to only allow licensed parties to send information. The profile can comprise characteristics of the user of the device. This advantageously makes it possible to block the information if from the characteristics of the user it is derived that the mark is not allowed. The profile can comprise information about an applicable law. This advantageously makes it possible to block the information if from the applicable law it is derived that the mark is not allowed. The profile can comprise personal settings. This advantageously makes it possible to block the information if from the personal settings, which are set by the user of the device, it is derived that the mark is not allowed. The profile can be stored within the device or within the data network. The steps of checking, creating and sending can be performed within the device. This makes it possible for the device to be in control. The steps of checking, creating and sending can be performed within the data network. This makes it possible for the data network to be in control. The step of checking can be performed within the data network and the steps of creating and sending can be performed within the device. This makes it possible for the data network to be in control of the checking and the device to be in control of the abuse-report handling.
The method can comprise the step of displaying an anti-abuse-button on a display of the device or enabling an anti-abuse menu item in software running in the device. This advantageously makes it possible for the user of the device to quickly have an abuse-report created. The abuse-report can comprise an abuse-ranking and the method can comprise the step of calculating the abuse-ranking using the content class identification and the profile of the user. This advantageously makes it possible to rank the level of abuse.

According to another aspect of the invention a system for blocking information sent via a data network to a device is provided. The information comprises a mark. The mark can comprise an identification of a sending party and a content class identification. The system can comprise means for checking the mark using a profile of a user of the device, means for creating an abuse-report and means for sending the abuse-report. The system can further comprise means for verifying the identification of the sending party through a server of a certified licensee organization. This advantageously makes it possible to only allow licensed parties to send information. The profile can comprise characteristics of the user of the device. This advantageously makes it possible to block the information if from the characteristics of the user it is derived that the mark is not allowed. The profile can comprise information about an applicable law. This advantageously makes it possible to block the information if from the applicable law it is derived that the mark is not allowed. The profile can comprise personal settings. This advantageously makes it possible to block the information if from the personal settings, which are set by the user of the device, it is derived that the mark is not allowed. The profile can be stored within the device or within the data network. The system can comprise means for displaying an anti-abuse-button on a display of the device or means for enabling an anti-abuse menu item in software running in the device. This advantageously makes it possible for the user of the device to quickly have an abuse-report created. The abuse-report can comprise an abuse-ranking and the system can further comprise means for calculating the abuse-ranking using the content class identification and the profile of the user. This advantageously makes it possible to rank the level of abuse.

Brief description of the drawings

The invention will be explained in greater detail by reference to exemplary embodiments shown in the drawings, in which:
Fig. 1 shows a schematic view of the method of an exemplary embodiment;
Fig. 2 shows a time sequence diagram of an exemplary embodiment;
Fig. 3 shows a time sequence diagram of an exemplary embodiment;
Fig. 4 shows a time sequence diagram of an exemplary embodiment;
Fig. 5 shows a system of an exemplary embodiment;
Fig. 6 shows a system of an exemplary embodiment.

Detailed description

For the purpose of teaching of the invention, preferred embodiments of the method and system of the invention are described in the sequel. It will be apparent to the person skilled in the art that other alternative and equivalent embodiments of the invention can be conceived and reduced to practice without departing from the true spirit of the invention, the scope of the invention being only limited by the claims as finally granted.

In Fig. 1 a schematic view of a method according to an exemplary embodiment of the invention is shown. The method is used for blocking information sent via a data network to a device. The information comprises a mark, the mark comprising an identification of a sending party and a content class identification. The mark is checked (1), the identification of the sending party is verified (11) and an abuse-ranking is calculated (12). Optionally an anti-abuse-button is displayed on a display of the device (21) or an anti-abuse menu item in software running in the device is enabled (22). An abuse-report is created (2) and sent (3) to the proper authority.

In Fig. 2 a time sequence diagram of an exemplary embodiment is shown. The device (100) performs the step of checking (1) the mark using the profile of the user of the device (100). Next the identification of the sending party is verified (11) through a server (300) of a certified licensee organization. The device (100) creates (2) the abuse-report and sends (3) the abuse-report to the proper authority (900).

In Fig. 3 a time sequence diagram of another exemplary embodiment is shown. The data network (200) performs the step of checking (1) the mark using the profile of the user of the device (100). Next the identification of the sending party is verified (11) through the server (300) of a certified licensee organization. The data network (200) creates (2) the abuse-report and sends (3) the abuse-report to the proper authority (900).

In Fig. 4 a time sequence diagram of another exemplary embodiment is shown. The data network (200) performs the step of checking (1) the mark using the profile of the user of the device (100). Next the identification of the sending party is verified (11) through the server (300) of a certified licensee organization. The device (100) creates (2) the abuse-report and sends (3) the abuse-report to the proper authority (900).

In Fig. 5 a system of an exemplary embodiment is shown. The device (100) is linked to the data network (200). The server (300) of the certified licensee organization is also linked to the data network (200), as is the server (900) of the proper authority that receives the abuse-report. It is possible that the server (300) of the certified licensee organization is the same as the server (900) of the proper authority that receives the abuse-report. The profile (101) can be stored within the device (100). Alternatively the profile (201) can be stored within the data network (200).

In Fig. 6 a system of another exemplary embodiment is shown. A means (1000) for checking the mark using the profile of the user of the device is linked to a means (1001) for verifying the identity of the sending party through the server (300) of the certified licensee organization. The server (300) of the certified licensee organization is linked to the means (1001) for verifying. The means (1000) for checking is linked to the profile (101) that is stored within the device or to the profile (201) that is stored within the data network. The means (1000) for checking is linked to a means (2000) for creating the abuse-report. The means (2000) for creating the abuse-report is linked to a means (1002) for calculating the abuse-ranking. The means (1002) for calculating the abuse-ranking is linked to the profile (101,201). The means (2000) for creating the abuse-report is optionally linked to a means (2001) for displaying the anti-abuse button on the display of the device, or to a means (2002) for enabling the anti-abuse menu item in the software running in the device. The means (2000) for creating the abuse-report is linked to a means (3000) for sending the abuse-report to the proper authority (900).

The next example explains the invention for unwanted unsolicited information sent to a mobile device connected to a mobile data network. Such a mobile device can e.g. be a mobile phone comprising a WAP or HTML-flavored browser, which is connected to a GPRS or UMTS mobile network. The invention is not limited to such devices. It is also possible to use the invention when unwanted unsolicited information is sent to a computer (pc, notebook, tablet pc, etcetera) connected to the Internet via a fixed line (pstn, isdn, adsl, leased line, lan, etcetera) or via a wireless connection (gsm, gprs, umts, wlan, etcetera). Furthermore the invention is applicable for blocking unwanted unsolicited information sent to any other device capable of sending and receiving information via a data network.

Unwanted unsolicited information exists in a variety of forms. Examples are SMS messages, MMS messages, push messages and email messages. In fact any unsolicited information that can be sent to a device connected to a data network is potentially unwanted and can be blocked by the invention. The content of the information can be advertisements, games, e-mail, SMS, MMS, films, pictures, text, programs, etcetera.

The invention can be explained as follows. Information is only allowed to be displayed on a mobile phone when marked. The mark identifies the identity of the sending party and the type of information. This mark is checked against the profile of the user. This profile can comprise characteristics of the user (e.g. adult or minor), applicable law (e.g. law of the United States of America disallowing spam, or Dutch law with other specific rules), and/or personal settings like the current mood of the user (e.g. tired so not wanting to receive advertisements, hungry so particularly interested in information about food, in love so interested in romantic information, etcetera). The profile can be stored within the mobile phone or within the network (e.g. in a server of the Internet Service Provider, a portal, an access gateway, an email server, etcetera). If the profile is stored within the mobile phone, then the information can be checked within the phone prior to displaying the information. If the profile is stored within the network, the information can be checked before sending the information to the mobile phone. When information is wrongly marked (possibly abusively) or illegally sent, it can be made easy for the user to report this to the proper authority, which can take legal action against the sending party. E.g. the receiving device can have an anti-abuse button, which will, after pressing the button, trigger the sending of an abuse-report to the proper authority based on the received information. It is also possible that the user selects a send-abuse-report option in a menu residing in the software of the device, thereby triggering the sending of an abuse-report. Yet another possibility is that the receiving device automatically sends an abuse-report. In case of checking within the network, an abuse-report can be generated automatically there.
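The check-and-report flow described above can be sketched in code. This is an illustrative sketch only: the class names, field names and the concrete blocking rules are assumptions chosen to mirror the examples in the text (minor vs. adult, applicable law, personal settings), not structures defined by the patent.

```python
# Sketch: check a marked message against the user's profile before display,
# and produce an abuse-report when the mark is not allowed.

from dataclasses import dataclass

@dataclass
class Mark:
    sender_id: str       # identification of the sending party
    content_class: str   # content class identification, e.g. "advertisement"

@dataclass
class Profile:
    minor: bool            # characteristic of the user
    law_blocked: set       # classes blocked by applicable law
    personal_blocked: set  # classes blocked by personal settings

def check_mark(mark: Mark, profile: Profile) -> bool:
    """Return True when the information may be displayed."""
    if mark.content_class in profile.law_blocked:
        return False
    if mark.content_class in profile.personal_blocked:
        return False
    if profile.minor and mark.content_class == "sex":
        return False
    return True

def create_abuse_report(mark: Mark) -> dict:
    # The report identifies the sending party so the proper authority can
    # take action; actually sending it is outside this sketch.
    return {"sender": mark.sender_id, "content_class": mark.content_class}

profile = Profile(minor=True, law_blocked=set(), personal_blocked={"advertisement"})
mark = Mark(sender_id="content-provider-42", content_class="sex")
if not check_mark(mark, profile):
    report = create_abuse_report(mark)  # would be sent to the proper authority
```

The same `check_mark` logic can run within the phone (profile stored on the device) or within the network (profile stored at the provider), matching the two placements described above.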

The identity of the sending party can be obtained from the received information or through a certified licensee organization. In SMS messages and MMS messages e.g. the received information includes the MSISDN or IMSI identity of the sending party. Information received via the HTTP protocol includes in the HTTP header the IP address of the sending party. The MAC address indicating the network interface card can be used in the HTTP header as well. E-mail messages often include the email-address of the sending party. If the email-address of the sending party is spoofed, then the IP address of the sending party can be included in the information by using e.g. the described "Caller ID for E-Mail" solution. A certified licensee organization can be used to provide licenses to content providers. Especially in mobile environments where content providers offer content through portals, content providers can be allowed to send content only when a license is acquired from the licensee organization. It is also possible, though, that content providers on the Internet are required to acquire a license. Information received from a licensed content provider should include a license identifier from which the identity of the sending party can be obtained at the licensee organization. It is possible that the identity of the sending party is encrypted. Where the mark is checked it should be possible to decrypt the identity.

A universal set of content classes is defined. These content classes contain an indication of the nature of the information, comparable with methodologies used on television where symbols can indicate the nature of the television program (e.g. violence, fear, sex, discrimination, drug and/or alcohol abuse or coarse language). The content classes are used to indicate the type of information. The content classes can e.g. be binary coded using 3-bit encoding. This gives the possibility to have eight content classes defined. If more content classes are required then n-bit encoding can be used, where n is any integer value fulfilling the requirements. It is also possible to indicate the classes by using natural wording. E.g. to indicate that information contains violent language the literal indication "violence" can be used in the mark. Any other representation of content classes can be used, as long as the content class can be derived from the mark.

When the mark is checked against the profile of the user, different rules can apply. If e.g. the profile indicates that the user is a minor and the mark identifies the type of information as being of nature "sex" then the information can be blocked. If the law of a country forbids sending information of a specific nature, e.g. "sex", then based on the profile indicating the applicable law the information can be blocked for all users.

Preferably the abuse-report comprises an abuse-ranking. This abuse-ranking indicates the significance of the abuse and is calculated from the type of content and the profile of the user. The ranking can e.g. be on a scale of 5, where "1" denotes a low significance, i.e. not so abusive, and "5" denotes a high significance, i.e. very abusive. Other scales or ranking indications are also possible, as long as the significance of the abuse can be derived. Taking the example from above, the significance of the abuse will be high when the user is a minor and the mark identifies the type of content as being of nature "sex". On the other hand, if the profile of the user allows receiving content of nature "sex" but the user still uses the abuse-button, then the significance of the abuse will be lower, because the sending party was allowed to send the information. Using abuse-ranking enables the proper authority to take measures more precisely. Especially in case the sending party has a license obtained from a certified licensee organization, the abuse-report containing an abuse-ranking can be compared with the license to verify whether or not there really is a case of abuse.
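The ranking calculation above can be sketched on the 1-to-5 scale. The concrete scoring rules are assumptions that reproduce the two examples in the text: highest significance when a minor receives "sex" content, lowest when the profile actually allowed the class.

```python
# Sketch: derive an abuse-ranking (1..5) from the content class
# identification and the user's profile.

def abuse_ranking(content_class: str, minor: bool, allowed_classes: set) -> int:
    if content_class in allowed_classes:
        return 1          # sender was allowed to send: low significance
    ranking = 3           # mark not allowed by the profile: mid significance
    if minor and content_class == "sex":
        ranking = 5       # most significant abuse in the text's example
    return ranking

assert abuse_ranking("sex", minor=True, allowed_classes=set()) == 5
assert abuse_ranking("sex", minor=False, allowed_classes={"sex"}) == 1
```

The resulting value would be placed in the abuse-report, where the proper authority can weigh it against any license the sending party holds.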

The invention is also applicable to personally generated information, e.g. messages sent from a computer or mobile phone to another computer or mobile phone. Each mobile device can generate a mark comprising the identity of the sending party and an indication of the type of information. Prior to sending the information the user can be asked to choose, from a predefined list of content classes, the applicable content class for the information. It is possible that the user only gets a limited choice, e.g. "offensive" or "non-offensive". The mark will be sent along with the information.

The invention also works when Digital Rights Management (DRM) is used. Typically with DRM the content is encoded and hard to screen with conventional methods, but the invention can be applied without loss of effectiveness.

Claims

1. Method for blocking information sent via a data network (200) to a device (100), the information comprising a mark, the mark comprising an identification of a sending party and a content class identification, the method comprising the steps of checking (1) the mark using a profile (101,201) of a user of the device (100); creating (2) an abuse-report; sending (3) the abuse-report.
2. Method according to claim 1, the method further comprising the step of verifying (11) the identification of the sending party through a server (300) of a certified licensee organization.
3. Method according to claims 1-2 in which the profile (101,201) comprises characteristics of the user of the device (100).
4. Method according to claims 1-3 in which the profile (101,201) comprises information about an applicable law.
5. Method according to claims 1-4 in which the profile (101,201) comprises personal settings.
6. Method according to claims 1-5 in which the profile (101) is stored within the device (100).
7. Method according to claims 1-6 in which the profile (201) is stored within the data network (200).
8. Method according to claims 6-7 in which the steps of checking (1), creating (2) and sending (3) are performed within the device (100).
9. Method according to claim 7 in which the steps of checking (1), creating (2) and sending (3) are performed within the data network (200).
10. Method according to claim 7 in which the step of checking (1) is performed within the data network (200) and the steps of creating (2) and sending (3) are performed within the device (100).
11. Method according to claim 8, the method further comprising the step of displaying (21) an anti-abuse-button on a display of the device (100).
12. Method according to claim 8, the method further comprising the step of enabling (22) an anti-abuse menu item in software running in the device (100).
13. Method according to claims 1-12 in which the abuse-report comprises an abuse-ranking, the method further comprising the step of calculating (12) the abuse-ranking using the content class identification and the profile (101,201) of the user.
14. System for blocking information sent via a data network (200) to a device (100), the information comprising a mark, the mark comprising an identification of a sending party and a content class identification, the system comprising means (1000) for checking the mark using a profile (101,201) of a user of the device (100); means (2000) for creating an abuse-report; means (3000) for sending the abuse-report.
15. System according to claim 14, the system further comprising means (1001) for verifying the identification of the sending party through a server (300) of a certified licensee organization.
16. System according to claims 14-15 in which the profile (101,201) comprises characteristics of the user of the device (100).
17. System according to claims 14-16 in which the profile (101,201) comprises information about an applicable law.
18. System according to claims 14-17 in which the profile (101,201) comprises personal settings.
19. System according to claims 14-18 in which the profile (101) is stored within the device (100).
20. System according to claims 14-19 in which the profile (201) is stored within the data network (200).
21. System according to claims 19-20, the system further comprising means (2001) for displaying an anti-abuse-button on a display of the device (100).
22. System according to claims 19-20, the system further comprising means (2002) for enabling an anti-abuse menu item in software running in the device (100).
23. System according to claims 14-22 in which the abuse-report comprises an abuse-ranking, the system further comprising means (1002) for calculating the abuse-ranking using the content class identification and the profile (101,201) of the user.
PCT/EP2005/002164 2004-02-27 2005-02-28 A method and system for blocking unwanted unsolicited information WO2005086437A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US54883904 2004-02-27 2004-02-27
US60/548,839 2004-02-27

Publications (1)

Publication Number Publication Date
WO2005086437A1 (en) 2005-09-15

Family

ID=34919407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2005/002164 WO2005086437A1 (en) 2004-02-27 2005-02-28 A method and system for blocking unwanted unsolicited information

Country Status (1)

Country Link
WO (1) WO2005086437A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002045392A2 (en) * 2000-11-22 2002-06-06 Tekelec Methods and systems for automatically registering complaints against calling parties
US20020120600A1 (en) * 2001-02-26 2002-08-29 Schiavone Vincent J. System and method for rule-based processing of electronic mail messages
US20030140014A1 (en) * 2001-10-16 2003-07-24 Fitzsimmons Todd E. System and method for mail verification
US20030229672A1 (en) * 2002-06-05 2003-12-11 Kohn Daniel Mark Enforceable spam identification and reduction system, and method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "A New Solution to Spam: "The Internet Member's License"", INTERNET CITATION, 26 January 2004 (2004-01-26), XP002335509, Retrieved from the Internet <URL:http://novaspivack.typepad.com/nova_spivacks_weblog/2004/01/a_new_solution_.html> [retrieved on 20050711] *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8549611B2 (en) 2002-03-08 2013-10-01 Mcafee, Inc. Systems and methods for classification of messaging entities
US8561167B2 (en) 2002-03-08 2013-10-15 Mcafee, Inc. Web reputation scoring
EP1982540A2 (en) * 2005-11-10 2008-10-22 Secure Computing Corporation Content-based policy compliance systems and methods
EP1982540A4 (en) * 2005-11-10 2011-01-05 Mcafee Inc Content-based policy compliance systems and methods
US8763114B2 (en) 2007-01-24 2014-06-24 Mcafee, Inc. Detecting image spam
US9009321B2 (en) 2007-01-24 2015-04-14 Mcafee, Inc. Multi-dimensional reputation scoring
US8214497B2 (en) 2007-01-24 2012-07-03 Mcafee, Inc. Multi-dimensional reputation scoring
US9544272B2 (en) 2007-01-24 2017-01-10 Intel Corporation Detecting image spam
US8141133B2 (en) 2007-04-11 2012-03-20 International Business Machines Corporation Filtering communications between users of a shared network
CN101741818B (en) 2008-11-05 2013-01-02 南京理工大学 Independent network safety encryption isolator arranged on network cable and isolation method thereof
EP2391090A1 (en) * 2010-05-28 2011-11-30 Prim'Vision System and method for increasing relevancy of messages delivered to a device over a network
WO2011147938A1 (en) * 2010-05-28 2011-12-01 Prim' Vision System and method for increasing relevancy of messages delivered to a device over a network

Similar Documents

Publication Publication Date Title
US7380126B2 (en) Methods and apparatus for controlling the transmission and receipt of email messages
US7580982B2 (en) Email filtering system and method
US6266692B1 (en) Method for blocking all unwanted e-mail (SPAM) using a header-based password
US20060095459A1 (en) Publishing domain name related reputation in whois records
US20040243678A1 (en) Systems and methods for automatically updating electronic mail access lists
US20030009698A1 (en) Spam avenger
US20040073621A1 (en) Communication management using a token action log
US20040203589A1 (en) Method and system for controlling messages in a communication network
US20030229672A1 (en) Enforceable spam identification and reduction system, and method thereof
US20050044156A1 (en) Verified registry
US20070078936A1 (en) Detecting unwanted electronic mail messages based on probabilistic analysis of referenced resources
US20030200267A1 (en) Email management system
US20060095586A1 (en) Tracking domain name related reputation
US20060095404A1 (en) Presenting search engine results based on domain name related reputation
US7072943B2 (en) System and method for granting deposit-contingent E-mailing rights
US20050262209A1 (en) System for email processing and analysis
US20030187942A1 (en) System for selective delivery of electronic communications
US20050015626A1 (en) System and method for identifying and filtering junk e-mail messages or spam based on URL content
US6654779B1 (en) System and method for electronic mail (e-mail) address management
US20030236847A1 (en) Technology enhanced communication authorization system
US20050044154A1 (en) System and method of filtering unwanted electronic mail messages
US20060085504A1 (en) A global electronic mail classification system
US20050254514A1 (en) Access control of resources using tokens
US20030023692A1 (en) Electronic message delivery system, electronic message delivery managment server, and recording medium in which electronic message delivery management program is recorded
US20080134282A1 (en) System and method for filtering offensive information content in communication systems

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DPEN Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase