WO2009041982A1 - Dialogue analyzer configured to identify predatory behavior - Google Patents
- Publication number
- WO2009041982A1 (PCT/US2007/080008)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- monitored
- alert
- inappropriate
- information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/107—Computer-aided management of electronic mailing [e-mailing]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/06—Message adaptation to terminal or network requirements
- H04L51/063—Content adaptation, e.g. replacement of unsuitable content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/212—Monitoring or handling of messages using filtering or selective blocking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/10—Network architectures or network communication protocols for network security for controlling access to devices or network resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2149—Restricted operating environment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1895—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for short real-time information, e.g. alarms, notifications, alerts, updates
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
Definitions
- The present disclosure relates to a system and method for monitoring and analyzing communications.
- One aspect of the present disclosure is to provide a dialogue analyzer that can be configured to straightforwardly identify predatory and/or inappropriate behavior without substantial invasion of the privacy of those monitored.
- The dialogue analyzer presents straightforward reports of the predatory and/or inappropriate behavior to a parent or guardian of the child. These reports are substantially limited to the inappropriate dialogue and contextual information.
- The contextual information may include portions of the conversation that occurred before (or after) the inappropriate dialogue, summaries of the dialogue, pictures, multimedia, links to the inappropriate dialogue, and the like.
- This system preserves the monitored user's privacy by limiting the amount of the conversation that the parent or guardian is able to read, while also presenting the parent or guardian with contextual text surrounding the inappropriate content in order to make the content easier to understand.
- The reports also contain an explanation of why the communication was improper.
- Another aspect of the present disclosure includes a method for monitoring electronic communications by using lexical rules based on word concepts in order to more accurately detect behavior that is considered predatory or otherwise inappropriate.
- Word concepts include expressions that contain not only a given word, but also other words and alphanumeric combinations that are associated with the given word because of like sound, meaning, usage, etc.
- A monitored user's communications are copied and transmitted to a threat analysis server, which scans the communications to determine whether any portion of the communication matches a lexical rule. When a match is found, an alert containing the rule-matching conversation is forwarded to an electronic address associated with a parent or guardian of the user.
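The word-concept idea above — one lexical rule matching a word plus its sound-alike, leet-speak, and shorthand variants — can be sketched as follows. The concept names and variant patterns here are illustrative assumptions; the patent does not publish its actual rule set.

```python
import re

# Hypothetical word-concept table: each concept maps to spelling variants,
# leet-speak forms, and shorthand that the same lexical rule should match.
WORD_CONCEPTS = {
    "phone_number": [r"phone\s*#", r"phone\s+number", r"ph0ne"],
    "age_probe": [r"how\s+old\s+(are\s+)?(you|u)\b", r"\basl\b"],
}

def scan_message(text):
    """Return the names of any word concepts matched in a message."""
    hits = []
    lowered = text.lower()
    for concept, patterns in WORD_CONCEPTS.items():
        for pattern in patterns:
            if re.search(pattern, lowered):
                hits.append(concept)
                break  # one variant match is enough for this concept
    return hits
```

In a full system each matched concept would feed a rule in the Rules Engine, which decides whether the match warrants an alert.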
- Yet another aspect of the present disclosure is to provide a system for monitoring a child's electronic communications without unnecessary administration by a parent or guardian.
- The system employs a central service that administers the detection of inappropriate and/or predatory online behavior.
- The parent needs only to install or download a client onto any computer device he or she wishes to be monitored.
- The central service then identifies any communications made between the monitored child and remote users, scans the communications for inappropriate content, and provides notice of the inappropriate content to all system users who are monitoring the child.
- The central service is also regularly updated by a central administrator in order to improve its detection and notification features.
- FIG. 1 is a diagram of the dialogue analyzer according to an embodiment of the present disclosure.
- FIG. 2 is a screen shot of the user interface of the dialogue analyzer system according to an embodiment of the present disclosure.
- FIG. 2B is a screen shot of a screen name report and rating survey according to an embodiment of the present disclosure.
- FIG. 3 is a screen shot of an instant message alert notification according to an embodiment of the present disclosure.
- FIG. 4 is a screen shot of a social network alert notification according to an embodiment of the present disclosure.
- FIG. 5 is a depiction of the client architecture according to an embodiment of the present disclosure.
- FIG. 6 is a component diagram of a threat analysis server according to an embodiment of the present disclosure.
- FIG. 7 is a process flow diagram for an instant message collector according to an embodiment of the present disclosure.
- FIG. 8 is a process flow diagram for a note collector according to an embodiment of the present disclosure.
- FIG. 9 is a diagram of the instant message scanning process according to an embodiment of the present disclosure.
- FIGs. 9A, 9B, and 9C are process flow diagrams for an instant message scanner according to a preferred embodiment of the present disclosure.
- FIG. 10 is a diagram of the note scanning process according to an embodiment of the present disclosure.
- FIGs. 10A, 10B, and 10C are process flow diagrams for a note scanner according to a preferred embodiment of the present disclosure.
- FIG. 11 is a diagram depicting the basic definition of a primitive according to an embodiment of the present disclosure.
- FIG. 12 is a diagram depicting the basic definition of a rule according to an embodiment of the present disclosure.
- FIG. 13 is a diagram depicting the basic definition of an alert according to an embodiment of the present disclosure.
- The following disclosure describes a tool for parents and guardians to monitor the online behavior of their children without substantially invading their privacy.
- The tool includes a client that is installed on a computer for the purpose of copying certain online communications of a monitored child user. These communications are forwarded to a threat analysis server administered by a central web service, which substantially eliminates the administrative effort required by parents.
- The threat analysis server scans the communications to determine whether any portion of the communication matches a lexical rule associated with improper content. When a match is found, an alert containing the rule-matching conversation is sent to an electronic address associated with a parent or guardian of the user.
- The alert is substantially restricted to the inappropriate dialogue along with a limited amount of contextual dialogue, thus preserving the privacy of the child user while also making the content easy to understand.
- The alert notification can also contain an explanation of why the communication is considered improper.
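The privacy-preserving restriction described above — the alert contains only the flagged line plus a limited amount of surrounding context — can be sketched as a simple excerpting step. The function name and the default context window are assumptions for illustration:

```python
def build_alert_excerpt(conversation, flagged_index, context=2):
    """Return only the rule-matching line plus a few surrounding lines,
    mirroring the privacy-limited alert described above.

    `conversation` is a list of chat lines; `flagged_index` points at
    the line that matched a lexical rule.
    """
    start = max(0, flagged_index - context)
    end = min(len(conversation), flagged_index + context + 1)
    return conversation[start:end]
```

The rest of the conversation never reaches the monitoring parent, which is the privacy property the disclosure emphasizes.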
- FIG. 1 is a diagram of the dialogue analyzer system according to an embodiment of the present disclosure.
- The dialogue analyzer system includes a monitored-user computer, threat analysis servers, and a monitoring-user computer.
- The monitored-user computer 100 includes a monitored browser 110, client service 120, and chat-based application 121.
- Client service 120 is configured as a Windows service, but in other embodiments can also run on OS X, Linux, or even a router.
- The client service 120 comprises software that is downloaded or installed on a monitored-user computer 100.
- Chat-based application 121 includes a previously installed instant messaging client, social network (browser), or any other application that includes a chat-based component.
- Threat analysis servers 145 include a collector server 150, raw messages database 155, all-messages cache database 156, a scanner 160, a Rules Engine 163, a mailer 165, an alerts database 170, and a user interface server 175.
- Monitoring-user computer 180 also includes monitoring browser 190. Although monitoring-user computer 180 is described in FIG. 1 as distinct from monitored-user computer 100, these computers may actually be one and the same.
- A local (i.e., monitored) user on monitored-user computer 100 communicates via a network connection, such as internet 125, with one or more remote (i.e., non-monitored) users via an Instant Messaging Service 140, Social Network 135, Virtual Chat Room 130, or like services, such as a video game service that facilitates text-based chat.
- The previously installed client service 120 receives the communication via the TCP/IP suite, the set of communications protocols that implement the protocol stack on which the internet and most commercial networks run.
- The client service 120 filters the communications it receives and retains data relating to communications between a monitored local user and a remote user of communications services, such as chat rooms, social networks, instant message services, and the like.
- This data is formatted and delivered to an XML/RPC application programming interface.
- The XML/RPC API puts the formatted communication into an HTTP-POST request, the body of which is in XML format.
- The request is first encrypted and then sent, via internet connection 125, to the threat analysis servers 145 for collection and scanning.
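The formatting step above — wrapping the captured communication in an XML body for the HTTP-POST request — can be sketched with Python's standard `xmlrpc` library. The method name and parameter names are assumptions, since the patent does not publish the API schema; encryption and transport (e.g., TLS) would wrap this body before it is sent to the collector server:

```python
import xmlrpc.client

def build_im_request(client_id, local_sn, remote_sn, text):
    """Build the XML body of an instant-message submission request.

    Returns the XML-RPC methodCall document that would become the body
    of the HTTP-POST request described above.
    """
    params = ({
        "client_id": client_id,
        "local_screen_name": local_sn,
        "remote_screen_name": remote_sn,
        "message_text": text,
    },)
    return xmlrpc.client.dumps(params, methodname="collector.submitMessage")
```

The resulting string is a complete `<methodCall>` document ready to be encrypted and posted to the threat analysis servers.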
- Once the request is received by collector server 150, it is verified through a process explained in greater detail in connection with FIGs. 7 and 8.
- The processed data is then sent to message process database 155, where it is stored and forwarded to scanner 160.
- The scanner analyzes the content in order to determine whether any content matches a previously stored rule in Rules Engine 163. These rules, as well as the scanning process, are explained in greater detail with respect to FIGs. 9-12. If a match exists, the content is forwarded to alerts database 170, which is in communication with user interface server 175.
- The content is forwarded and displayed in the form of an alert notification on monitoring browser 190, which is usually associated with a parent or guardian account in order to notify a parent user that someone has had improper conversations with his or her child.
- This alert notification includes various details relating to the potentially dangerous communication, as will be explained in greater detail in FIGs. 3 and 4.
- Collector 150 and scanner 160 may be implemented within client service 120.
- Data can be transmitted to the threat analysis servers not only via the XML/RPC API, but also via SOAP (Simple Object Access Protocol) or CORBA (Common Object Request Broker Architecture), by posting key-value pairs, by transmitting binary files, and even through a Telnet (i.e., non-HTTP) connection.
- Communications can also be monitored via a serial override PIP (or any other Private Internet Protocol), the UDP (User Datagram Protocol) stack, a human-input device (such as the keyboard), log files, or even local memory if the communications are first stored and retrieved on local memory. Communications can also be obtained by performing a "screen-scrape" in Windows.
- FIG. 2 is a screen shot of the monitoring-user interface 200 according to a preferred embodiment of the present disclosure. It includes community statistics reporting statement 210 that tells the user how many messages the dialogue analyzer service has monitored along with an indication of the number of user screen names (for I. M. or social networking) that have been monitored.
- Calendar 220 is also displayed on monitoring-user interface 200. Calendar 220 provides an indication of the number of alerts previously generated by day-of-month, based on a log of all alerts that is stored in the alert message database. The user can click on a specific day on the calendar to see the conversations that took place on that day only.
- Section 230 provides further information identifying the time and local screen name of the last message monitored by the dialogue analyzer service. This information can be gleaned from the data stored in the message process database, as explained earlier with respect to FIGs. 7-10.
- User interface 200 further includes alert selection box 240.
- Alert selection box 240 includes various columns of information corresponding to each alert that has been generated.
- the alerts identified in the alert selection box 240 can be sorted by any of the header column titles. These column titles include identifications of (a) the child screen name that was monitored (column 241); (b) the remote screen name participant (column 242); (c) the subject matter of the inappropriate content (column 243); (d) the date the communication took place (column 244); and (e) the time the communication took place (column 245).
- When a user selects line 247, the line is highlighted and information relating to that particular communication is displayed in sections 250, 260 and 270 (explained shortly).
- Line 247 indicates that the screen name of the child monitored is "harvey," and the screen name of the remote participant is "Tommy123.”
- The subject matter of the communication is "what's your phone #," which is a commonly used way of requesting a person's phone number or indicating an intent to call the person's home.
- Line 247 also includes a date corresponding to the date of the communication that led to the generated alert, and a time corresponding to the logout time of the local child screen name.
- A parent user can select any of the listed alerts for more detailed information regarding the alert, as shown in block 250.
- Block 250 is a notification of the alert selected from alert selection box 240 (discussed in greater detail in connection with FIG. 4).
- Different colors are used to indicate which alerts have been read (e.g., blue) and which have not (e.g., yellow).
- The user interface notification function can be carried out exclusively via text messaging, email messaging, or automated phone calls to access data.
- The information displayed in section 260 relates to the number of potentially dangerous conversations that the remote screen name has engaged in. This information is generated based on a vote that each parent can participate in when he or she receives an alert that identifies a remote (non-monitored) screen name participant as the author of a potentially dangerous communication.
- Paragraph 270 displays different sets of information depending on whether the user's child was responsible for the selected communication or conversation. If a local user's child generated the content responsible for making the selected communication dangerous, then a message will be displayed communicating to the user that he or she cannot vote to establish a reputation for his or her own child.
- Section 271 gives the user the option of deleting the conversation.
- FIG. 2B is a screen shot of a screen name report and rating survey according to an embodiment of the present disclosure.
- Question 262 asks each user whether, based on the message identified in the alert, other parents should be concerned if their child is having a conversation with the particular remote screen name identified. The user is given two answer options.
- Option 263 corresponds to the answer "this user could be dangerous" (or a similar option) and option 264 corresponds to the answer "this user seems safe" (or a similar option).
- An email notification identifying the potentially dangerous remote screen name is sent to a parent or guardian of the monitored screen name when the number of user-votes corresponding to answer option 263 surpasses a predetermined threshold (e.g., 6).
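The vote-threshold mechanism above can be sketched as a running tally per remote screen name. The function name, vote labels, and data structure are illustrative assumptions; the patent states only the threshold idea (e.g., 6):

```python
from collections import Counter

def update_votes(tally, screen_name, vote, threshold=6):
    """Record a parent's survey vote for a remote screen name.

    Returns True when the "dangerous" vote count surpasses the
    threshold, i.e., when an email notification should be triggered.
    """
    if vote == "dangerous":
        tally[screen_name] += 1
    return tally[screen_name] > threshold
```

A mailer component (such as mailer 165 in FIG. 1) would then send the notification whenever this returns True.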
- FIG. 3 is an enlarged view of alert notification 250 for alerts relating to instant messages.
- The alert notification includes date and time identification 310, which lists the date and time the communication began.
- Provider identification 320 identifies the provider of the instant messaging service that was used in the communication (i.e., Yahoo!®, MSN, AOL, etc.).
- Block 330 includes an excerpt of the communication itself. This excerpt includes the specific content deemed inappropriate (line 331 in FIG. 3) according to the rules stored in the Rules Engine in the threat analysis servers. In the example shown, the phrase that has been identified as inappropriate is "you woudl get a lot of porn luvers" (preferably highlighted for easy reading). The excerpt also includes multiple conversation lines that precede (or follow) the inappropriate content in order to give the reader some context to the inappropriate content. Further, in section 340, the alert notification includes a human-readable explanation of why the threat analysis server deemed that the content was inappropriate based on the current rule set.
- The explanation relates to the use of the phrase "you woudl get a lot of porn luvers," telling the reader that the phrase is a reference to pornography and that it may be harmful.
- This explanation (correlated to the use of "porn luvers") is stored in the alert and message database in the threat analysis servers along with other explanations of slang, shorthand, IM language, and leet speak terminology. These explanations are important because slang, IM short-hand, and leet-language terms are oftentimes difficult to understand, yet frequently used in the communication of inappropriate content online.
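The explanation lookup described above — a stored mapping from slang, shorthand, and leet-speak phrases to human-readable explanations — can be sketched as a simple table. The table entries and fallback text here are illustrative; the real database in the threat analysis servers is not published:

```python
# Hypothetical slang/leet-speak explanation table, standing in for the
# alert and message database described in the text.
EXPLANATIONS = {
    "porn luvers": "a reference to pornography",
    "asl": "shorthand for 'age/sex/location', often used to probe a child",
}

def explain(flagged_phrase):
    """Return the human-readable explanation shown in section 340 of
    the alert notification for a flagged phrase."""
    return EXPLANATIONS.get(flagged_phrase.lower(),
                            "matched a rule in the current rule set")
```

Keeping the explanations in a centrally updated table lets the administrator add new slang terms without changing the client or scanner code.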
- The notification may include less information in order to further protect the privacy of the child that is being monitored.
- The notification may include only a) the lines of text flagged as inappropriate (with no context); b) an explanation of what type of inappropriate communications took place; c) a summary of the conversation or communication; or d) the names of the parties involved in the communication.
- Conversely, the notification may provide the text of the entire communication that included inappropriate content.
- FIG. 4 is a sample of an alert notification relating to the posting of a comment, note, or any other text-based communication on a social network, such as MySpace®, Bebo, or Facebook.
- The note/comment alert notification, like the IM alert notification displayed in FIG. 3, includes an identification of (a) the date and time the posting of the note or comment took place (410); (b) the social network in which the posting took place (420); (c) the display name of the remote user that posted the message (440); (d) the comment flagged by the threat analysis rules engine as inappropriate (450); and (e) a human-readable explanation of why the threat analysis servers deemed that the content was inappropriate based on the current rule set (470).
- The note/comment alert notification further displays the profile picture 460 of the monitored local user or the (non-monitored) remote user in the social network.
- This picture may give a parent further information regarding the remote user, including his sex, age, and overall appearance. A parent or guardian can use this information to determine whether it is desirable for the child to discontinue their communication with a remote user in the social network.
- Explanatory message 430 is also displayed in the note/comment alert notification. This message explains to the parent user that a comment was authored by their child (or left for their child) on a specific social network and that it was used to communicate the inappropriate content. It also explains how a user's social network profile page can be accessed.
- The user name 440 or picture 460 may include a hyperlink to that remote user's profile page.
- The note/comment alert notification may include less information in order to further protect the privacy of the child that is being monitored.
- The notification may include only a) the lines of text flagged as inappropriate (with no context); b) an explanation of what type of inappropriate communications took place; c) a summary of the conversation or communication; or d) the names of the parties involved in the communication. Conversely, if privacy is of little or no concern, the notification may provide the text of the entire communication that included inappropriate content.
- FIG. 5 is a depiction of the Client Service Architecture.
- The client core includes a service network packet filter and reassembly module 515, service content filter 530, service content parser 532, dialogue analyzer service description template 560, Data Cache database 570, and dialogue analyzer web service API 575.
- The service network packet filter and reassembly module further includes a service network packet filter 516, TCP stream reassembly 520, and HTTP stream reassembly 525.
- The dialogue analyzer service description template 560 further includes service network filter descriptions database 517, service content filter descriptions database 531, and service content parser descriptions database 555.
- This layer is responsible for moving data packets from the network traffic 501 to the OS Network Stack 507 across a shared channel. Data packets are copied by the client as they pass through the MAC layer 504. These data packets include substantially all communications between a monitored user and a remote screen name. These packet copies 511 are sent to the Service network packet filter and reassembly module 515 for first-level filtering and reassembly.
- Service network packet filter 516 performs a first-level filtering of the data in packet copy 511.
- This data would include various forms of data on any one of a number of service networks, such as instant messages on Yahoo.com, or notes and/or comments transmitted via a social network like MySpace.com.
- The incoming data is converted to a format that includes a computer-readable IP address.
- Filter 516 filters the content by creating filter strings that are defined by service network filter descriptions database 517.
- This database is stored and periodically updated with information relating to the protocol format of various service networks.
- This protocol format includes a variety of data identifiers, such as TCP service port numbers and/or domain name identifiers. For example, the TCP port used by Yahoo.com in its instant messenger is port 5050.
- Service network filter descriptions 517 supply the packet filter 516 with this and other information, which the filter uses to identify data that is transmitted via the Yahoo!® instant messaging tool.
- Domain names may also be used to identify desired data.
- The MySpace® network consists of multiple domain names. However, there are two domain names that typically include comments or notes between users (and therefore may include inappropriate content): profile.myspace.com and comments.myspace.com.
- The service network filter descriptions database contains this and other domain name information, which is then used by the service network packet filter 516 to identify messages on either domain name.
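The first-level filtering described above — matching packets against TCP service ports and domain names supplied by the filter descriptions database — can be sketched as follows. The packet representation and field names are assumptions for this sketch; only port 5050 and the two MySpace domain names come from the text:

```python
# Illustrative filter descriptions, standing in for the service network
# filter descriptions database 517.
FILTER_DESCRIPTIONS = {
    "ports": {5050},  # Yahoo! instant messenger port, per the text
    "domains": {"profile.myspace.com", "comments.myspace.com"},
}

def first_level_filter(packet):
    """Keep a packet copy if it matches a known service port or domain.

    `packet` is a dict with optional "tcp_port" and "host" fields.
    """
    if packet.get("tcp_port") in FILTER_DESCRIPTIONS["ports"]:
        return True
    return packet.get("host") in FILTER_DESCRIPTIONS["domains"]
```

Because the descriptions are data rather than code, the central administrator can add new services by updating the database without shipping a new client.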
- After the data packets have been first-level filtered, they are sent to TCP stream reassembly unit 520 (and then to HTTP stream reassembly unit 525, if necessary) in order to reassemble any out-of-sequence or lost packets delivered by the underlying network.
- This task can be performed by various methods that are known in the art.
- Service network filtered and reassembled data is then sent to the service content filter 530, which filters content by data type.
- There are multiple data types, including Chat Data 545 and HTTP data 550.
- Data from social networks comes in the form of HTTP data.
- Service content filter descriptions 531 are used as parameters that define which content to allow (and which to filter out) by content data type. For example, it is known that online communication data (in the form of notes, comments, and the like) may be exchanged between a local and remote user by posting such data on "profile" pages of social networks. This data, however, is found on a limited number of subpaths in each service.
- The data posted on profiles on the Facebook social network can be found at www.facebook.com/profile.php.
- A parameter identifying "/profile.php" as a subpath containing data that should be allowed (i.e., not filtered) is thus supplied by service content filter descriptions 531 to service content filter 530.
- These descriptions are periodically updated by the central administrator of the presently disclosed threat analyzer service.
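The subpath-based second-level filter described above can be sketched with a per-service allowlist. The data structure and function name are assumptions; the `/profile.php` subpath on facebook.com is the example from the text:

```python
from urllib.parse import urlparse

# Illustrative content filter descriptions: subpaths whose content is
# allowed through (not filtered out), keyed by service host.
ALLOWED_SUBPATHS = {
    "www.facebook.com": {"/profile.php"},
}

def allow_content(url):
    """Second-level filter: pass only HTTP data on known profile subpaths."""
    parsed = urlparse(url)
    return parsed.path in ALLOWED_SUBPATHS.get(parsed.netloc, set())
```

Data on any other subpath of the same service is discarded before parsing, which keeps the volume of data sent to the threat analysis servers small.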
- The various data streams are then individually parsed, or extracted, by data type, using parameters provided by the service content parser descriptions 555. These parameters include a template of regular expressions that define which content is extracted from the incoming data. In an alternative embodiment, however, any one of other well-known methods can be used to parse the data, including pattern matching, URL matching, and extracting data from known offsets.
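A template of regular expressions, as described above, can be sketched as a mapping from field names to compiled patterns. The field names and the HTML markup they match are invented for illustration; a real parser description would be tailored to each service's page format:

```python
import re

# Hypothetical parser description: a template of regular expressions that
# defines which fields to extract from reassembled HTTP data.
NOTE_TEMPLATE = {
    "author": re.compile(r'class="author">([^<]+)<'),
    "comment": re.compile(r'class="comment">([^<]+)<'),
}

def parse_note(html):
    """Extract the fields named by the template; missing fields map to None."""
    return {name: (m.group(1) if (m := rx.search(html)) else None)
            for name, rx in NOTE_TEMPLATE.items()}
```

Because the template is data supplied by the service content parser descriptions, it can be updated centrally when a social network changes its page layout.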
- The resulting information is converted into XML/RPC format and then sent to the Data Cache database 570.
- Data cache database 570 stores data that is received from parser 532 before it is forwarded to the dialogue analyzer web Service API and eventually ends up in the threat analysis servers.
- Data Cache database 570 includes separate caches for notes (transmitted via social-networks) and instant messages (from IM service provider sites).
- The Data Cache database 570 provides a method for storing data when the threat analysis servers are down or otherwise inoperable. Under this scenario, data is sent to Data Cache database 570, where it is stored until the servers are operating once again, at which point the data is spooled out into the Web Service API 575.
- Thus, data packets that are supposed to be sent to the servers through the web service API 575 are not lost.
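The store-and-spool behavior described above can be sketched as a pair of queues, one for notes and one for instant messages, drained only when a send succeeds. The class and method names are assumptions for this sketch:

```python
import collections

class DataCache:
    """Spool captured data while the threat analysis servers are down,
    then flush it to the web service API once they are reachable again."""

    def __init__(self):
        # Separate caches for notes and instant messages, as described above.
        self.queues = {"note": collections.deque(),
                       "im": collections.deque()}

    def store(self, kind, item):
        self.queues[kind].append(item)

    def spool_out(self, kind, send):
        """Drain one queue through `send`; items stay cached if it raises."""
        queue = self.queues[kind]
        while queue:
            send(queue[0])
            queue.popleft()  # remove only after a successful send
```

Removing an item only after `send` returns is what guarantees the "no data packets are lost" property when the servers are intermittently unreachable.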
- XML/RPC application programming interface 571 sends the data to the Dialogue analyzer web service API 575.
- the filtered and parsed data is formatted into an XML/RPC request.
- This request is formatted differently depending on whether it comprises "note" or "comment" data (from social networks) or instant messaging data. This is because alerts relating to instant messages contain less information than alerts relating to a note placed on the profile of a member of a social network.
- the following table lists the names and types of parameters that are identified and included in a request relating to instant messaging data, along with details regarding the respective significance of each parameter:
- XML/RPC message request includes information pertaining to the client id, the machine (or computer id), the MAC address of the interface that captured the instant message, the operating system user name, the screen names of the local and remote IM users, the author of the instant message, the protocol on which the instant message was captured, a time stamp, and the contents of the instant message itself.
- the information in the parameters is useful for accurate scanning and notification of inappropriate content, as explained later in connection with FIGs. 9-13.
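As an illustration only, a send_message request carrying the parameters described above might be assembled as follows. The exact parameter keys and values are assumptions; Python's standard `xmlrpc.client` module stands in for whatever XML-RPC library the client actually uses:

```python
import xmlrpc.client

# Parameter keys are hypothetical, modeled on the fields listed above.
params = ({
    "client_id": "0f8fad5b-d9cb-469f-a165-70867728950e",
    "machine_id": "LAPTOP-01",
    "mac_address": "00:1A:2B:3C:4D:5E",
    "os_user_name": "familypc-kid",
    "local_screen_name": "kid_aim",
    "remote_screen_name": "stranger123",
    "author": "stranger123",
    "protocol": "AIM",
    "timestamp": "2007-09-28T14:03:22Z",
    "message": "what school do you go to?",
},)

# Serialize the request; the client would POST this body to the
# dialogue analyzer Web Service API 575.
request_body = xmlrpc.client.dumps(params, methodname="send_message")

# The server side can recover the parameters and the method name:
recovered_params, method = xmlrpc.client.loads(request_body)
print(method)
```

Because XML-RPC bodies are XML, the UTF-8 encoding noted below falls out naturally from this representation.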
- the XML/RPC note request contains all the same information as the message request, but also contains information relating to the URL of the image of the remote screen name and any details associated with the note (such as the location on the web page from which the note was collected).
- Requests are sent to the dialogue analyzer Web Service API 575, from which they are then sent to the threat analysis servers for data analysis. It is important to note that this data is UTF-8 encoded and thus can support the implementation of languages other than English. Thus, in alternative embodiments, electronic communications in languages other than English can also be analyzed by using lexical rules that are written in that particular language.
- the Web Service API 575 also allows for the client to be periodically updated with new service descriptions, updates to its configuration database 590, as well as live updates 580 (which are updates to the core client code). Each of these updates is initiated by the threat analysis servers according to any parameters set by a central administrator of the dialogue/threat analyzer service. Thus, the user of the client does not have to install updates manually, making the use and maintenance of the tool as simple and effortless as possible.
- the client can also obtain communication data via "screen scraping," monitoring log files, local disk or memory, or via keyboard logging (or logging any other human input device).
- An API may also be used whereby third party clients can inject data into the system at the threat analysis server.
- FIG. 6 is a diagram of the components included in the threat analysis server according to a preferred embodiment of the present disclosure.
- the threat analysis server includes incoming load balancer 605, collector 610, raw messages database 620, scanner 630, rules engine 640, all-messages data cache database 645, alerts database 650, user interfaces 660, and user-interface load balancer 670.
- an XML/RPC request is transmitted via a network connection, such as the internet, and received by the server at incoming load balancer 605, which handles the traffic relating to all incoming requests and increases the scalability of the application.
- the data is then sent to collector 610.
- the collector creates parent or guardian screen names for the account associated with the request, converts all HTML entities to ASCII format, and adds the messages to the raw messages database 620 (a process that is discussed in greater detail in connection with FIGs. 7 and 8).
- the raw messages database stores the message data for access by scanner 630, which scans the messages for inappropriate content and generates alerts that are ultimately sent to the user.
- Alerts are generated when the scanner matches the text in a given message string with pre-stored lexical rules supplied by rules engine 640 (the scanning and rule-matching process is discussed in greater detail in FIGs. 9-12).
- the messages are sent to the all-messages data cache database 645 and alerts are sent to the alert database 650, which is then accessed by web user interfaces 660 in order to forward the alerts (in notification form) to users of the present dialogue analyzer tool.
- an HTTP load balancer (block 670) is implemented in order to increase the scalability of the application. A number of well known methods can accomplish this goal, including the use of a round robin system or hardware load balancers.
- the alert notification is then sent to an electronic account associated with a monitoring user (parent or guardian).
- FIG. 7 is a process flow diagram describing the process by which the threat analysis instant messaging collector gathers data.
- the collector receives an incoming XML/RPC send_message request, according to the format specified in table 1.1.
- the collector then moves on to step 710: determining whether the message is being sent from a valid user.
- the client_id is a 37-character globally unique identifier that associates the client with a threat analyzer service account.
- the client corresponding to each threat analyzer service has the ability to monitor any number of screen names that are logged onto a local computer with the client installed.
- In step 710, the client_id of the collected message is cross-compared to a list of all known client_ids (which is stored at the threat analysis server).
- The client_ids on this list accrue each time a new threat analyzer service user, who wishes to monitor the online activity of anyone using its local computer(s) for electronic communications with remote computer users, signs up for an electronic account corresponding to the present threat analyzer service. If a match exists between the client_id associated with the collected message and the list of known client_ids, the process moves forward to step 730. If no match is found, the message is dropped in step 715.
- In step 730, the collector determines whether the screen name associated with the account number is being monitored by the user sending the request. To execute this step, the collector checks a previously-generated table that lists all known screen names being monitored by the user account associated with the specific client. If the screen name is being monitored by that user, the process jumps forward to step 770. If the screen name is not monitored by the user sending the message, then step 740 is performed. Step 740 determines whether the screen name is being monitored by any known user account by referencing the list of all known monitored screen names. If the screen name is not being monitored by any known user, then a monitored screen name is created for the user account as parent in step 750.
- If the screen name is already being monitored by another known user account, then in step 760 a monitored screen name is created for the user account as guardian.
- This process (i.e., steps 730-760) ensures that any potential alert notification that is generated based on the contents of the message is sent not only to a user currently monitoring the message, but also to any known parent account associated with the screen name being monitored. This procedure is advantageous because each user that is concerned with the local (i.e., monitored) child's safety is notified when alerts are generated based on communications involving that child.
- an alert notification will be sent to the child's parent as well as the administrator of the school computer (who may be charged with the safety of that child).
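The branching in steps 730-760 can be sketched as follows, under assumed data structures: a mapping from each user account to the screen names it monitors, with each name carrying a "parent" or "guardian" role:

```python
def ensure_monitored(monitors, account, screen_name):
    """Sketch of steps 730-760. `monitors` maps a user account to a dict of
    {screen_name: role}. Returns every account that should receive alerts
    for this screen name."""
    if screen_name not in monitors.get(account, {}):
        # Step 740: is the name already monitored by another known account?
        monitored_elsewhere = any(
            screen_name in names
            for acct, names in monitors.items() if acct != account
        )
        # Step 750 creates a parent entry; step 760 a guardian entry.
        role = "guardian" if monitored_elsewhere else "parent"
        monitors.setdefault(account, {})[screen_name] = role
    # Any alert is later sent to every account monitoring the name.
    return sorted(a for a, names in monitors.items() if screen_name in names)
```

So in the school example above, both the parent account and the school administrator's guardian account end up in the returned alert list.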
- In step 770, the HTML tags in the message are filtered out, then all HTML entities are converted to ASCII (American Standard Code for Information Interchange) code.
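A rough sketch of this filtering step, using Python's standard `html` module (note that `html.unescape` produces Unicode; a full implementation would further map the result down to ASCII as described):

```python
import html
import re

def clean_message(raw):
    """Step 770 sketch: strip HTML tags, then decode HTML entities."""
    without_tags = re.sub(r"<[^>]+>", "", raw)   # drop anything tag-shaped
    return html.unescape(without_tags)           # &amp; -> &, &lt; -> <, etc.

print(clean_message("<b>A &amp; B</b>"))
```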
- This collected message is then tracked in step 780. This "tracking process" involves keeping a statistical record of the screen name being monitored. These statistics are accumulated and displayed to users of the threat analyzer service in a community statistics reporting statement (shown in FIG. 2). Finally, the collected message is added to a database of raw messages (i.e., those that have not yet been processed by the collector and/or scanner) in step 790.
- an email may be sent to both client user accounts, requesting the user identify themselves and their relationship to the particular screen name.
- a frequency monitor may be used in order to determine the frequency at which the screen name is using one account as compared to the other. In this situation, if it is determined that a guardian account is being used more frequently than one identified as a parent account, the designations of the accounts may be switched, with the guardian account being designated as parent and the parent being designated as guardian.
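The frequency-based swap might be sketched as follows; the account-to-role and account-to-usage-count mappings are assumed structures, not part of the disclosure:

```python
def maybe_swap_roles(usage_counts, roles):
    """Swap parent/guardian designations when the guardian account is used
    more frequently than the parent account. `roles` maps an account name to
    "parent" or "guardian"; `usage_counts` maps an account name to how often
    the monitored screen name has been observed using it."""
    parent = next(a for a, r in roles.items() if r == "parent")
    guardian = next(a for a, r in roles.items() if r == "guardian")
    if usage_counts.get(guardian, 0) > usage_counts.get(parent, 0):
        roles[parent], roles[guardian] = "guardian", "parent"
    return roles
```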
- FIG. 8 displays this process.
- In step 805, a send_note request is received (see table 2.2).
- In step 810, a determination is made as to whether a checksum associated with the communication (based on the information in the local_screen_name, remote_screen_name, and message fields) already exists. This step is performed because communications that occur over social networks are sometimes monitored by the client service more than once. These duplicates exist because the client copies substantially all of the data relating to note postings and other communications on social networks, which usually includes previously communicated (and thus previously collected) data.
- Step 810 is executed by comparing the checksum to a list of all known checksums previously calculated for given screen names.
- If the checksum does not exist, the process proceeds to step 825. If, however, the checksum already exists, the note, comment, or like communication is a duplicate. Duplicates are tracked (i.e., relevant statistics recorded) in step 815 and dropped in step 820. In alternative embodiments, however, this duplicate-tracking can occur within the client.
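A sketch of the duplicate check in steps 810-820, using an MD5 digest over the three fields (the disclosure elsewhere identifies MD-5 as the checksum used; the field separator and the in-memory set standing in for the server-side checksum list are assumptions):

```python
import hashlib

known_checksums = set()  # the real service keeps this list server-side

def is_duplicate(local_screen_name, remote_screen_name, message):
    """Step 810 sketch: checksum the three fields and test against all
    previously recorded checksums."""
    digest = hashlib.md5(
        "|".join((local_screen_name, remote_screen_name, message)).encode("utf-8")
    ).hexdigest()
    if digest in known_checksums:
        return True    # duplicate: track (step 815) and drop (step 820)
    known_checksums.add(digest)
    return False       # new communication: proceed to step 825
```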
- In step 825, the collector determines whether the communication is associated with a valid user.
- the client_id of the collected message is cross-compared to a list of all known client_ids. If a match exists between the client_id associated with the collected communication and the list of known client_ids, the process moves forward to step 835. If no match is found, the message is dropped in step 830.
- In step 835, the collector determines whether the screen name associated with the account number is being monitored by the user sending the request. To execute this step, the collector checks a previously-generated table that lists all known screen names being monitored by the user account associated with the specific client. If the screen name is being monitored by that user, the process jumps forward to step 855. If the screen name is not monitored by the user sending the message, then step 840 is performed. Step 840 determines whether the screen name is being monitored by any known user account by referencing the list of all known monitored screen names. If the screen name is not being monitored by any known user, then a monitored screen name is created for the user account as parent in step 845.
- If the screen name is already being monitored by another known user account, then in step 850 a monitored screen name is created for the user account as guardian. Similar to the process relating to instant messages that occurs in FIG. 7, this process (i.e., steps 835-850) ensures that any potential alert notification that is generated based on the contents of the message is sent to each user that is concerned with the local (i.e., monitored) child's safety.
- In step 855, the HTML tags in the message are filtered out, then all HTML entities are converted to ASCII (American Standard Code for Information Interchange) code.
- Next, step 860 is performed, whereby the time stamp from the social network is converted to the ISO 8601 standard, the international standard for date and time representations.
- the signature feature of the ISO 8601 format for date and time is that the information is ordered from the most to the least significant or, in plain terms, from the largest (the year) to the smallest (the second). From here, a checksum is created from the information in the local_screen_name, remote_screen_name, and message fields stored in the send_note request.
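For example, a timestamp scraped from a social network page might be normalized to ISO 8601 as follows. The source format string here is hypothetical; each network would need its own:

```python
from datetime import datetime, timezone

# Hypothetical raw timestamp, as it might appear on a profile page.
raw = "Sep 28, 2007 2:03 PM"

# Step 860 sketch: parse the site-specific format, then emit ISO 8601,
# ordered from the largest unit (year) down to the smallest (second).
stamp = datetime.strptime(raw, "%b %d, %Y %I:%M %p").replace(tzinfo=timezone.utc)
iso = stamp.isoformat()
print(iso)
```

Because ISO 8601 strings sort lexicographically in chronological order, this normalization also simplifies ordering messages within a conversation.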
- the checksum that is utilized is an MD5 checksum, well known by those having skill in the art.
- the note is tracked in step 870 (statistics are recorded in order to update the community statistics report). Finally, the note is added to a database of raw messages for scanning in step 875.
- FIG. 9 is a flow chart depicting the process by which the dialogue analyzer scans collected instant messages for inappropriate content. As shown in the diagram, the scanning process accomplishes three major tasks: 1) finding and preparing messages for scanning (depicted in greater detail in FIG. 9A); 2) scanning messages and creating alerts (depicted in FIG. 9B); and 3) writing stats, alerts, and messages (depicted in FIG. 9C).
- FIG. 9A is a process flow diagram that illustrates the procedure by which messages are found and prepared for scanning. As previously discussed with respect to FIG. 7, these messages have been collected from conversations involving valid users of the threat analysis service.
- conversations are found in the message processing database. These conversations include instant messages between a local monitored user and a remote participant.
- the instant messages that are transmitted in a conversation between a local (dialogue analyzer monitored user) screen name and a remote screen name are gathered. This gathering process occurs until there is a break in the communication between the two parties. This break may be defined as a cessation of communication for a predetermined length of time (e.g., 2 hours).
- the position of the last message from the last conversation scanned is found in step 903.
- In step 904, all of the messages in the conversation are positioned in the order of their occurrence.
- These steps (901-904) may be performed by the client before transmitting the data to the threat analysis servers.
- In step 905, the messages corresponding to the local screen name are separated from those that relate to the remote screen name.
- This step involves separating all of the messages sent from the local screen name to the remote screen name from the messages sent from the remote screen name to the local screen name. This is done in order to determine which screen name is responsible for the transmission of inappropriate content so that the dialogue analyzer tool can include that screen name identification in an email notification of the flagged content to the parent or guardian account.
- In step 906, after the messages have been separated by screen name, the scanner selects a number of messages in order to populate a window of messages.
- the size of the window is based on the messages transmitted in a predetermined period of time. In a preferred embodiment, the size of the window is approximately 120 seconds. This translates into a carrying capacity of roughly 10 messages and 128 characters per window.
- each individual window is analyzed for inappropriate content based on the rules stored in the threat analysis rules engine. Because multiple messages may be stored in a single window, these messages are concatenated in step 907 in order to produce windows including messages in single text-string format. At this point (step 908), the message windows have been prepared and are ready for processing.
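The windowing in steps 906-907 can be sketched as follows, assuming each message carries a timestamp in seconds and that the messages are already ordered and separated by screen name (steps 904-905):

```python
WINDOW_SECONDS = 120  # preferred embodiment: an approximately 2-minute window

def build_windows(messages):
    """messages: list of (timestamp_seconds, text) pairs, already ordered.
    Groups them into WINDOW_SECONDS-wide windows and concatenates each
    window into a single text string (step 907)."""
    windows, current, window_start = [], [], None
    for ts, text in messages:
        if window_start is None or ts - window_start >= WINDOW_SECONDS:
            if current:
                windows.append(" ".join(current))
            current, window_start = [], ts
        current.append(text)
    if current:
        windows.append(" ".join(current))
    return windows
```

A real implementation would also enforce the rough capacity limits described above (about 10 messages and 128 characters per window); this sketch keys only on time.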
- In step 930, each window is processed with each rule from the threat analysis rules engine. These rules are discussed in greater detail in the discussion of FIG. 12.
- Step 931 determines whether the text in the particular window matches any of the rules in the threat analysis rules engine. If not, the process proceeds to step 938. If, however, the text in the window matches a rule, then a loop is performed whereby alerts are created before proceeding to step 938. This loop begins at step 932, where an alert and a copy of the matched rules are created for each user that is monitoring the local screen name. These alerts are also described in greater detail in connection with FIG. 12.
- Step 933 determines whether the next window of messages should be added to the current window with the alert(s). This situation is referred to herein as a "hang over" and occurs when the first message in a given window contains an alert. If there is a hang over, then a mini-loop is performed to step 937, where additional messages from the all-messages cache database are flagged to be added to the beginning of the message containing the alert inside the current window.
- In step 934, a determination is made as to whether the last message in the window contains the alert. This situation is referred to herein as a "hang under." If a hang under exists, then a mini-loop to step 936 is performed, whereby a record is created with the message positions needed from the next scan. After this process is performed, or if no hang unders existed, the process moves forward to step 935, where the messages from the window containing the alert are flagged to be written to the alerts database. After this loop is performed, the process moves forward to step 938. In this step, the scanner determines whether there is a previous hang under by analyzing the record created from a previous scan in step 936. If the record indicates that there was a previous hang under, then additional messages from the window are flagged to be written to the alerts database.
- The process then proceeds to steps 960-963, in which alerts and messages are written for the user interface to the alerts database.
- This process is illustrated in FIG. 9C.
- In step 960, messages that have been flagged are removed from the raw messages database; they are then written to the all-messages cache database in step 961.
- step 962 involves writing the alert(s), rules and messages for the user interface to the alerts database. Samples of email notifications that include these alerts, rules and messages are illustrated in FIGs. 3 and 4.
- the conversation positions and stats for each screen name in the analyzed conversation are updated in step 963.
- FIG. 10 is a flow chart of the scanning process for scanning notes or comments on social networks (like Facebook, Myspace®, etc.). This process is similar to that described in FIG. 9 with respect to scanned instant messages, but includes a few minor modifications based upon the fact that an instant message is a two-way conversation between remote and local screen names, while a note placed on a social networking website is more akin to one side of a conversation.
- FIG. 10A illustrates steps 1001-1008, in which messages are found and prepared for processing. As previously discussed in connection with FIG. 8, these messages have been collected from conversations involving users with valid accounts.
- conversations are found. Conversations include the transmission of instant messages to and from a local user and a remote screen name.
- In step 1002, all of the instant messages that are transmitted in a conversation between a local and remote screen name are gathered. This gathering process occurs until there is a break in the communication between the two parties based upon a predetermined length of time (e.g., 2 hours).
- the position of the last message from the last conversation scanned is found in step 1003.
- the next step (1004) is to position all the messages in the conversation in the order of their occurrence.
- These steps (i.e., 1001-1004) also may be performed by the client prior to transmission of the data to the threat analysis servers.
- In step 1006, after the messages have been separated by screen name, each message is placed in its own window of data. These messages are concatenated in step 1007 in order to produce windows including messages in single text-string format. At this point, the message windows have been prepared and are ready for processing (step 1008).
- each text window is scanned and alerts are created.
- Step 1031 determines whether the text in the particular window matches any of the rules in the threat analysis server rules engine. If not, the process proceeds to step 1060. If, however, the text in the window matches a rule, then a loop is performed whereby alerts are created. This loop begins at step 1032, where an alert and copy of rules is created for each user that is monitoring a local screen name. Further detail regarding rules and alerts is given in FIGs. 10-13 and the discussion thereof.
- In step 1033, the messages from the text window are flagged to be written to the alerts database.
- In step 1060, messages that have been processed and scanned are removed from the raw messages database; they are written to the all-messages cache database in step 1061.
- step 1062 involves writing the alert(s), rules and flagged messages for the user interface to the alerts database. Samples of email notifications that include these alerts, rules and messages are illustrated in FIGs. 3 and 4. Finally, the conversation positions and stats for each screen name in the analyzed conversation are updated in step 1063.
- FIG. 11 is an overview of the threat analysis rules engine according to one embodiment of the present disclosure. This rules engine provides the basis for determining whether a particular message contains inappropriate content. It also provides the protocol by which to alert a parent account of the inappropriate activity. The rules in the engine are based on language concepts that are referred to herein as "primitives."
- FIG. 11 provides the basic definition of a primitive 1100.
- a primitive is essentially a word concept that comprises many words that are associated by having a similar sound, meaning, use, spelling, appearance, or probability of appearance in a text string, etc.
- Primitives can include people, places, pronouns, verbs, adverbs, adjectives, activities, or any other lexical unit.
- a primitive has a root in a specific word 1110, like "parent.”
- Primitive expression 1120 includes any number of words that can be used in everyday parlance as a substitute for the primitive or have a similar meaning as that word.
- the expression of the word "parent" includes a number of associated words (i.e., other words having a similar sound, meaning, usage, spelling, appearance, etc.), such as mother, momma, dad, father, stepmom, stepdad, pairent, par3nt, etc.
- the threat analysis rules engine understands not only proper English, but also common misspellings, slang, and even leet speak (where numerals and symbols are substituted for letters).
- primitives are used as a way in which to normalize text data collected during online communication monitoring.
- Another example of a primitive is the word "home,” which can have several words associated with it, such as hOme, crib, pad, horn, place, etc.
- Yet another example of a primitive is the word "sex," which could also have several associated words, like coitus, lovemaking, intimacy, s3x, seeks, etc.
- fuzzy matching is implemented by regular expressions (i.e., strings used to describe or match a set of strings according to certain syntax rules).
- the words can be matched by a direct comparison to libraries of words or word concepts corresponding to a particular primitive.
- the expressions of primitives are machine-formatted patterns that represent words that administrators of the dialogue analyzer service wish to flag when used during electronic communications. These patterns are implemented as regular expressions, but any technology that allows for the matching and representation of patterns may be implemented.
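A primitive's expression might be compiled into a single regular expression as follows; the word list for the "parent" primitive is illustrative only, since the real expression libraries are supplied by the rules engine administrators:

```python
import re

# Illustrative word list: root word plus associated spellings, slang,
# and leet-speak variants (see the "parent" example above).
PARENT_PRIMITIVE = ["parent", "mother", "momma", "dad", "father",
                    "stepmom", "stepdad", "pairent", "par3nt"]

def compile_primitive(words):
    """Compile one primitive's expression into a single regular expression
    matching any of its associated words as a whole word."""
    alternation = "|".join(re.escape(w) for w in words)
    return re.compile(r"\b(?:%s)\b" % alternation, re.IGNORECASE)

parent_re = compile_primitive(PARENT_PRIMITIVE)
print(bool(parent_re.search("is ur par3nt home")))
```

The word-boundary anchors keep the primitive from firing inside unrelated words (e.g., "parent" inside "apparently").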
- the root word can be processed by an algorithm that generates like words through the use of a thesaurus, dictionary, or a catalogue of misspellings and axioms of leet speak or common instant messaging language.
- the threat analysis rules are defined by situations where multiple primitives are found together in a text string.
- FIG. 12 provides the basic definition of a rule in the rules engine.
- a rule is defined by a text string including one or more primitives with a certain number of non-primitive words in between the primitives (if there are multiple primitives).
- Each rule has a name (1210), description (1220) and category (1230) classification.
- the name 1210 classification is a substantially unique identifier of a rule. It can include a word, a number, a combination of both, or a like identifier.
- Description 1220 is a brief summarization of the intended subject matter associated with the rule, such as “asking for phone number” or “sexually explicit communication.”
- Category 1230 is a broad classification of a group of which the particular rule is logically a part. Category classifications can include "lewd," "offensive," "threatening," "direct contact," "indirect contact," "sexual act," etc.
- any number of primitives can exist in a set of primitives 1240 (i.e., from 1 to N, N being defined as any number).
- a rule is matched when these primitives are detected with a finite number (e.g., 6 or less) of non-primitive words 1245 in between them in any given text window. Matching is executed by implementation of regular expressions to identify any text that closely corresponds to the definition of a rule. In a preferred embodiment, the number of words spaced in between each primitive is 6 or less in order to decrease the probability of a detection of a rule match when the phrase is not reasonably inappropriate.
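A rule joining two primitives with at most six intervening words might be realized as a regular expression like the following. The word lists are illustrative, and this sketch does not enforce that the intervening words are themselves non-primitive:

```python
import re

# Illustrative primitive expressions (see the FIG. 11 discussion).
PARENT = r"(?:parent|mother|momma|dad|father|stepmom|stepdad|par3nt)"
HOME = r"(?:home|h0me|crib|pad|place)"
MAX_GAP = 6  # preferred embodiment: 6 or fewer words between primitives

# Match PARENT, then at most MAX_GAP whole words, then HOME.
rule = re.compile(
    r"\b%s\b(?:\W+\w+){0,%d}?\W+%s\b" % (PARENT, MAX_GAP, HOME),
    re.IGNORECASE,
)

print(bool(rule.search("is your mother going to be home")))
```

Capping the gap at six words is what keeps the probability low that two far-apart primitives trigger a match on text that is not reasonably inappropriate.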
- FIG. 13 provides the basic definition for alerts generated by the dialogue analyzer tool. As previously mentioned, these alerts are generated whenever text inside a window is found to have matched a rule in the threat analysis rules engine. Alerts are composed of various fields of information that have been collected by the client and processed and stored by the methods described in FIGs. 7-10. In the preferred embodiment, these fields include Date Created 1305, Date Sent 1310, Longest Matched Text 1315, Monitoring User 1320, Local Screen Name (i.e., monitored) 1335, Remote Screen Name 1330, Author (of message) Screen Name 1325, Message Window 1350, and Rules Set 1375. Date Created 1305 corresponds to the date the message was created, while Date Sent 1310 corresponds to the date the alert was sent to the user. Longest Matched Text 1315 includes a copy of the longest string of text that matched one of the rules in the rules engine. Monitoring User 1320 is an identification of the user name of the logged-in operating system user on the computer with the dialogue analyzer client software.
- Message window 1350 contains the messages from the text window that included a rule-matching message. As described earlier, the window is designed to capture approximately 2 minutes of text in a conversation. Thus, the window can contain any number of messages (from 1 to n) based on length of the individual messages.
- Rules Set 1375 is a collection of copies of the rules that were matched by any set of the message data in message window 1350. In a preferred embodiment, rules are updated and revised frequently, thus it is desirable to create and store copies of rules in Rules Set 1375 in order to have the ability to reference them in the future.
- alerts can be generated based on a traditional Bayesian analysis of the probability that a text string will include certain predetermined words or subject matter. This alternative can be effectively implemented once a sufficient corpus of alerts has been created.
- Other alternatives for identifying a specific subject matter (e.g., predatory behavior) in text-based communications include strict keyword matching, phonetic matching, grammar checks, and the like.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1006973A GB2466606A (en) | 2007-09-28 | 2007-09-28 | Dialogue analyzer configured to identify predatory behavior |
PCT/US2007/080008 WO2009041982A1 (en) | 2007-09-28 | 2007-09-28 | Dialogue analyzer configured to identify predatory behavior |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2007/080008 WO2009041982A1 (en) | 2007-09-28 | 2007-09-28 | Dialogue analyzer configured to identify predatory behavior |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2009041982A1 true WO2009041982A1 (en) | 2009-04-02 |
WO2009041982A8 WO2009041982A8 (en) | 2009-11-26 |
Family
ID=39567837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/080008 WO2009041982A1 (en) | 2007-09-28 | 2007-09-28 | Dialogue analyzer configured to identify predatory behavior |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2466606A (en) |
WO (1) | WO2009041982A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2252021A1 (en) * | 2009-05-12 | 2010-11-17 | Avaya Inc. | Treatment of web feeds as work assignment in a contact center |
US7912828B2 (en) | 2007-02-23 | 2011-03-22 | Apple Inc. | Pattern searching methods and apparatuses |
WO2012004283A1 (en) * | 2010-07-06 | 2012-01-12 | Telefonica, S.A. | System for monitoring online interaction |
US8311806B2 (en) | 2008-06-06 | 2012-11-13 | Apple Inc. | Data detection in a sequence of tokens using decision tree reductions |
US8489388B2 (en) | 2008-11-10 | 2013-07-16 | Apple Inc. | Data detection |
US8738360B2 (en) | 2008-06-06 | 2014-05-27 | Apple Inc. | Data detection of a character sequence having multiple possible data types |
CN104995870A (en) * | 2012-11-21 | 2015-10-21 | 瑞典爱立信有限公司 | Multi-objective server placement determination |
US10592612B2 (en) | 2017-04-07 | 2020-03-17 | International Business Machines Corporation | Selective topics guidance in in-person conversations |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030028622A1 (en) * | 2001-08-06 | 2003-02-06 | Mitsuhiro Inoue | License management server, terminal device, license management system and usage restriction control method |
WO2005002180A2 (en) * | 2003-06-26 | 2005-01-06 | Thomson Licensing S.A. | Parental monitoring of digital content |
US20050102407A1 (en) * | 2003-11-12 | 2005-05-12 | Clapper Edward O. | System and method for adult approval URL pre-screening |
US20050240959A1 (en) * | 2004-04-26 | 2005-10-27 | Roland Kuhn | Method for parental control and monitoring of usage of devices connected to home network |
-
2007
- 2007-09-28 WO PCT/US2007/080008 patent/WO2009041982A1/en active Application Filing
- 2007-09-28 GB GB1006973A patent/GB2466606A/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030028622A1 (en) * | 2001-08-06 | 2003-02-06 | Mitsuhiro Inoue | License management server, terminal device, license management system and usage restriction control method |
WO2005002180A2 (en) * | 2003-06-26 | 2005-01-06 | Thomson Licensing S.A. | Parental monitoring of digital content |
US20050102407A1 (en) * | 2003-11-12 | 2005-05-12 | Clapper Edward O. | System and method for adult approval URL pre-screening |
US20050240959A1 (en) * | 2004-04-26 | 2005-10-27 | Roland Kuhn | Method for parental control and monitoring of usage of devices connected to home network |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7912828B2 (en) | 2007-02-23 | 2011-03-22 | Apple Inc. | Pattern searching methods and apparatuses |
US8311806B2 (en) | 2008-06-06 | 2012-11-13 | Apple Inc. | Data detection in a sequence of tokens using decision tree reductions |
US8738360B2 (en) | 2008-06-06 | 2014-05-27 | Apple Inc. | Data detection of a character sequence having multiple possible data types |
US9275169B2 (en) | 2008-06-06 | 2016-03-01 | Apple Inc. | Data detection |
US9454522B2 (en) | 2008-06-06 | 2016-09-27 | Apple Inc. | Detection of data in a sequence of characters |
US8489388B2 (en) | 2008-11-10 | 2013-07-16 | Apple Inc. | Data detection |
US9489371B2 (en) | 2008-11-10 | 2016-11-08 | Apple Inc. | Detection of data in a sequence of characters |
EP2252021A1 (en) * | 2009-05-12 | 2010-11-17 | Avaya Inc. | Treatment of web feeds as work assignment in a contact center |
WO2012004283A1 (en) * | 2010-07-06 | 2012-01-12 | Telefonica, S.A. | System for monitoring online interaction |
CN104995870A (en) * | 2012-11-21 | 2015-10-21 | 瑞典爱立信有限公司 | Multi-objective server placement determination |
CN104995870B (en) * | 2012-11-21 | 2018-05-29 | 瑞典爱立信有限公司 | Multiple target server arrangement determines method and apparatus |
US10592612B2 (en) | 2017-04-07 | 2020-03-17 | International Business Machines Corporation | Selective topics guidance in in-person conversations |
Also Published As
Publication number | Publication date |
---|---|
GB2466606A (en) | 2010-06-30 |
WO2009041982A8 (en) | 2009-11-26 |
GB201006973D0 (en) | 2010-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20110178793A1 (en) | Dialogue analyzer configured to identify predatory behavior | |
WO2009041982A1 (en) | Dialogue analyzer configured to identify predatory behavior | |
US7194536B2 (en) | Apparatus and method for monitoring and analyzing instant messaging account transcripts | |
US8527596B2 (en) | System and method for monitoring activity of a specified user on internet-based social networks | |
US8788657B2 (en) | Communication monitoring system and method enabling designating a peer | |
US20080134282A1 (en) | System and method for filtering offensive information content in communication systems | |
KR101251862B1 (en) | Presenting and manipulating electronic mail conversations | |
US6212548B1 (en) | System and method for multiple asynchronous text chat conversations | |
US20030105822A1 (en) | Apparatus and method for monitoring instant messaging accounts | |
US8301701B2 (en) | Creating dynamic interactive alert messages based on extensible document definitions | |
US8615515B2 (en) | System and method for social inference based on distributed social sensor system | |
US7631046B2 (en) | Method and apparatus for lawful interception of web based messaging communication | |
US20050086255A1 (en) | Supervising monitoring and controlling activities performed on a client device | |
US20040260801A1 (en) | Apparatus and methods for monitoring and controlling network activity using mobile communications devices | |
Ghasem et al. | Machine learning solutions for controlling cyberbullying and cyberstalking | |
US20100174813A1 (en) | Method and apparatus for the monitoring of relationships between two parties | |
US20120151046A1 (en) | System and method for monitoring and reporting peer communications | |
US20080133745A1 (en) | Employee internet management device | |
EP1023663A1 (en) | System for immediate popup messaging across the internet | |
WO2007021719A2 (en) | Virtual robot communication format customized by endpoint | |
WO2006094335A1 (en) | Method and apparatus for analysing and monitoring an electronic communication | |
US20080133676A1 (en) | Method and system for providing email | |
US20130091274A1 (en) | Process for Monitoring, Analyzing, and Alerting an Adult of a Ward's Activity on a Personal Electronic Device (PED) | |
WO2020102349A1 (en) | Methods, systems, and apparatus for email to persistent messaging and/or text to persistent messaging | |
JP4445243B2 (en) | Spam blocking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 07843569
Country of ref document: EP
Kind code of ref document: A1
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 1006973
Country of ref document: GB
Kind code of ref document: A
Free format text: PCT FILING DATE = 20070928
|
WWE | WIPO information: entry into national phase |
Ref document number: 1006973.0
Country of ref document: GB
|
122 | EP: PCT application non-entry in European phase |
Ref document number: 07843569
Country of ref document: EP
Kind code of ref document: A1