WO2023278885A1 - Moderation of a user's content for a social messaging platform - Google Patents

Moderation of a user's content for a social messaging platform

Info

Publication number
WO2023278885A1
Authority
WO
WIPO (PCT)
Prior art keywords
offensive
expression
reply message
expressions
list
Application number
PCT/US2022/036070
Other languages
English (en)
Inventor
Andrew COURTER
Christine SU
Original Assignee
Twitter, Inc.
Application filed by Twitter, Inc. filed Critical Twitter, Inc.
Publication of WO2023278885A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/237 Lexical tools

Definitions

  • Popular social messaging platforms generally provide functionality for users to draft and post/send messages, including video and/or audio content, both synchronously and asynchronously to other users. Other common features include the ability to post messages that are visible to one or more identified other users of the platform, to other users by virtue of a connection to the authoring user on the platform, or even publicly to any user of the platform without specific designation by the authoring user. Examples of popular social messaging platforms include Facebook®, Instagram®, Pinterest®, and Twitter®. (“Facebook” and “Instagram” are trademarks of Facebook, Inc. “Pinterest” is a trademark of Pinterest, Inc. “Twitter” is a trademark of Twitter, Inc.).
  • the users of the social messaging platform are typically permitted to, and capable of, both authoring messages for others and receiving messages from others. Some users, however, are more adept at generating content/authoring messages and/or are famous, such that there is widespread interest in their messages. These users are sometimes referred to as “authoring users,” “content creators,” or “creators.” For example, content creators are often celebrities who are users of the social messaging platform. In turn, some users of the social messaging platform who are connected to the content creators predominantly consume the content generated by the content creators (e.g., by reading, rating, and/or sharing the messages authored by the content creators).
  • Social messaging platforms also typically permit users to post one or more messages in response to the content creator’s message, thus providing a way for users to directly engage with a content creator. In some instances, this allows users to participate in a conversation with the content creator and other users where the content creators and the users post one or more messages in response to one another.
  • These users are sometimes referred to as “content consumers,” “subscribers,” or “followers.” It should be appreciated that content creators can be followers of other users and each follower can themselves be a content creator.
  • Social messaging platforms also typically provide a user interface to display multiple messages for users to view and consume.
  • the user interface can display messages in a stream (also referred to herein as a “timeline” or a “feed”) where the messages are arranged, for example, in chronological order or reverse chronological order based on the respective dates the messages were posted or according to a computationally predicted relevance.
  • the stream can further facilitate display of messages in a conversation. For example, the stream can display a first message by one user and a second message posted by another user in response to the first message.
  • the first message is sometimes referred to or characterized as a base message and the second message can be referred to or characterized as a response message or a reply message, which is a message posted in direct response to the base message and sent to at least the authoring user of the base message.
  • One base message and one or more reply messages posted in response to the base message constitute a message branch.
  • a base message can be characterized as a root message (i.e., a message that is not posted in response to another message) or a reply message (e.g., a message posted in response to the root message or another reply message).
  • a message thread is initiated by a single root message and can further include one or more reply messages posted in response to the root message or another reply message.
  • a message thread can include multiple message branches.
  • social messaging platforms are generally well-suited for users and, in particular, content creators to post messages and/or share other social media content with a large audience (e.g., the content creator’s followers).
  • the Inventors have also recognized that providing users access to a large audience can also give rise to unwanted attention or, worse, harassment by other users of the social messaging platform.
  • social messaging platforms generally include a set of rules and, in some instances, content filters to deter users from including toxic and/or abusive language in their messages (e.g., profanity, slurs)
  • users can readily circumvent these rules and/or content filters using creative language in their content.
  • users can include language that is offensive in some situations based on the context of the conversation and/or the users involved, but benign in other situations, which is challenging to detect and moderate using a broad set of rules and/or content filters.
  • Context-specific language can also vary based on several factors including, but not limited to, the geographic location of the users, the nationality of the users, and the native language of the users.
  • the present disclosure is directed to various inventive implementations of a social messaging platform that provides users a way to create user-defined content filters to moderate the language in reply messages posted in response to a user’s base message.
  • the present disclosure is also directed to providing users a way to label messages and/or provide guidelines to users posting reply messages of the user’s preferred tone and/or injunctive social norms.
  • the social messaging platforms disclosed herein provide users greater control over the reply messages posted in response to their content while simultaneously improving the ease of managing reply messages.
  • the present disclosure is directed to various methods of implementing the above features using one or more platform servers and/or user devices of the social messaging platform.
  • the user-defined content filter disclosed herein (also sometimes referred to herein as a “smellcheck” or a “nudge”) allows users to selectively choose certain expressions (e.g., words or phrases) to moderate content posted by other users. This can be accomplished by each user (e.g., a content creator) defining one or more lists of offensive expressions for their user account that they want to discourage and/or do not want to see in reply messages posted by other users (e.g., content consumers) in response to that user’s base messages.
  • the platform can detect the presence of one or more offensive expressions as users draft a reply message in response to a base message by comparing the draft of the reply message against the list(s) of offensive expressions defined by the user who authored the base message. If an offensive expression is detected, the platform can highlight the offensive expression in the draft of the reply message to notify the user authoring the reply message that their message includes the offensive expression and the actions (e.g., a warning, a penalty) taken by the platform for posting the reply message without removing the offensive expression. The platform can further execute the actions against the users who post reply messages with offensive expression(s).
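  • The following is a minimal sketch, in Python and using hypothetical names, of how a client might scan a draft reply against the base-message author's list of offensive expressions and report the character spans to highlight; the disclosure does not prescribe an implementation at this level of detail.

```python
# Minimal sketch (hypothetical names): scan a draft reply against the
# base-message author's list of offensive expressions and report the
# character spans a client could highlight while the user types.
import re
from typing import List, Tuple

def find_offensive_spans(draft: str, offensive_expressions: List[str]) -> List[Tuple[int, int, str]]:
    """Return (start, end, expression) for each offensive expression found."""
    spans = []
    for expression in offensive_expressions:
        # Case-insensitive substring search; real matching could be more elaborate
        # (word boundaries, wildcards, proximity matching, emoji, etc.).
        for match in re.finditer(re.escape(expression), draft, flags=re.IGNORECASE):
            spans.append((match.start(), match.end(), expression))
    return sorted(spans)

creator_list = ["loser", "ratio"]          # hypothetical creator-defined list
draft_reply = "What a loser take, enjoy the ratio"
print(find_offensive_spans(draft_reply, creator_list))
# [(7, 12, 'loser'), (29, 34, 'ratio')]
```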
  • the user-defined content filter provides a way to detect and moderate context-specific language that is offensive to certain users, but not other users especially if the offensive expression has different meanings in different situations.
  • the offensive expressions can generally include words or phrases in textual form, one or more icons (e.g., emojis, emoticons), or any combinations of the foregoing.
  • the platform can provide users one or more lists of predetermined offensive expressions that are commonly considered to be toxic and/or abusive.
  • the list(s) of predetermined offensive expressions can be tailored to include offensive expressions that are more likely to be relevant and/or understood by a user based on one or more attributes of that user’s user account, such as a location of the user’s user device (e.g., the user device includes a position tracking sensor to monitor the location), a region/location associated with the user account, a nationality associated with the user account, and/or a default language associated with the user account.
  • the social messaging platforms disclosed herein further provide users a way to assign labels and/or guidelines (also sometimes referred to collectively as “house rules” or “author norms”) to their base message.
  • the labels and/or guidelines can define one or more injunctive social norms users should follow when posting a reply message.
  • users are less likely to use social cues within the message thread itself (e.g., reply messages posted by other users) as a way to determine what language and/or content is acceptable. This, in turn, increases the likelihood of users posting reply messages that conform with the preferences of the user who posted the base message.
  • users can assign one or more labels to their base message so that other users who view and post a reply message can see the desired tone of the user who posted the base message.
  • a content creator can post a base message with labels of “positive,” “curious,” and “thoughtful” as a way to encourage users who post reply messages to include content with these tones, tenor, mood, attitude, and/or intent.
  • the label(s) can further be displayed between the base message and the reply messages such that users see the content creator’s expected tone before viewing any reply message.
  • users can post a base message with guidelines containing details on the content creator’s expectations, values, and/or preferences for the content of the reply messages.
  • the guidelines can be generated, in part, based on user input.
  • the platform can also provide standard guidelines that can be modified by the user.
  • the guidelines can also be generated automatically, for example, based on the label(s) selected by the user.
  • the labels associated with a base message can be interactive such that users who select the labels are directed to the content creator’s guidelines.
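  • A minimal sketch, in Python and using hypothetical names, of associating author-selected labels with a base message and deriving default guideline text from those labels, which the author could then edit; the label names and default texts are illustrative assumptions.

```python
# Minimal sketch (hypothetical names): associate author-selected labels with a
# base message and derive default guideline text from those labels, which the
# author could then edit before posting.
from dataclasses import dataclass, field
from typing import List

DEFAULT_GUIDELINE_TEXT = {
    "positive":   "Keep replies constructive and encouraging.",
    "curious":    "Ask questions rather than making accusations.",
    "thoughtful": "Take time to consider other viewpoints before replying.",
}

@dataclass
class BaseMessagePreferences:
    labels: List[str] = field(default_factory=list)
    guidelines: str = ""

def preferences_from_labels(labels: List[str]) -> BaseMessagePreferences:
    generated = " ".join(DEFAULT_GUIDELINE_TEXT.get(label, "") for label in labels).strip()
    return BaseMessagePreferences(labels=labels, guidelines=generated)

prefs = preferences_from_labels(["positive", "curious"])
print(prefs.labels)      # ['positive', 'curious']
print(prefs.guidelines)  # auto-generated text the author can modify
```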
  • FIG. 1A shows an example social messaging platform that supports moderation of reply messages.
  • FIG. 1B shows example components of a user device in the platform of FIG. 1A.
  • FIG. 1C shows example components of a platform server in the platform of FIG. 1A.
  • FIG. 2A shows an example user interface of account settings associated with a user account of a user with an option to activate or deactivate moderation for reply messages posted in response to a message authored by the user associated with the user account.
  • FIG. 2B shows an example user interface of the moderation settings associated with the user account of FIG. 2A.
  • FIG. 2C shows an example user interface to manage a list of offensive expressions associated with the user account of FIG. 2A.
  • FIG. 2D shows the user interface of FIG. 2C when an offensive expression is queried and is determined to not be in the list of offensive expressions.
  • FIG. 2E shows the user interface of FIG. 2C when an offensive expression is queried and is determined to be in the list of offensive expressions.
  • FIG. 2F shows an example user interface to display the list of offensive expressions.
  • FIG. 3 shows another example user interface of moderation settings associated with a user account of a user.
  • FIG. 4A shows an example user interface displayed on a first user device associated with a first user account of a first user to compose a base message.
  • the user interface includes an interactive label to direct the first user to the moderation settings of FIG. 2B.
  • FIG. 4B shows an example user interface to manage muted words for a user account of a user.
  • FIG. 4C shows an example user interface with a message thread displayed on a user device associated with the user account of FIG. 4B.
  • the user interface further includes a notification of a muted word selected by the user and a button to direct the user to the moderation settings of FIG. 2B.
  • FIG. 5A shows an example user interface displayed on a first user device associated with a first user account of a first user to compose a reply message in response to a base message posted by a second user account of a second user where a draft of the reply message includes an offensive expression.
  • a prompt is further displayed to warn the first user the draft of the reply message includes an offensive expression.
  • FIG. 5B shows the user interface of FIG. 5A with the prompt expanded to summarize the actions taken by the platform when the reply message is posted without removing the offensive expression.
  • the prompt further includes a user interface element to direct the first user to additional information on the second user’s moderation settings on reply messages.
  • FIG. 5C shows the user interface of FIG. 5B after selecting the user interface element.
  • FIG. 6A shows a first portion of a flow chart of an example method for moderating a reply message posted by a second user in response to a base message posted by a first user.
  • FIG. 6B shows a second portion of the flow chart of FIG. 6A.
  • FIG. 7A shows an example user interface displayed on a first user device associated with a first user account of a first user to compose a base message.
  • the user interface further includes a prompt to manage reply message settings with a user interface element to direct the first user to labels and guidelines to associate with the base message.
  • FIG. 7B shows an example user interface of various settings associated with the labels and guidelines of FIG. 7A.
  • FIG. 8A shows an example user interface on a second user device associated with a second user account of a second user that includes a message thread with the base message authored by the first user account of FIG. 7A.
  • FIG. 8B shows the user interface of FIG. 8A after selecting the label(s) associated with the base message.
  • FIG. 9 shows a flow chart of an example method for assigning and displaying one or more labels with a message.
  • FIG. 10 shows a flow chart of an example method for assigning and displaying guidelines with a message.
  • a social messaging platform that provides user-controlled moderation features including a user-defined filter to moderate reply messages and features to label a base message and/or provide guidelines of the desired tone of the conversation.
  • Various aspects of creating/editing a list of offensive expressions, creating/editing one or more label(s) and/or guidelines to associate with a message, defining penalties for users who post reply messages with one or more offensive expression(s), and notifying users that a reply message includes one or more offensive expression(s) are also disclosed herein. It should be appreciated that various concepts introduced above and discussed in greater detail below may be implemented in multiple ways. Examples of specific implementations and applications are provided primarily for illustrative purposes so as to enable those skilled in the art to practice the implementations and alternatives apparent to those skilled in the art.
  • inventive social messaging platforms are provided, wherein a given example or set of examples showcases one or more particular features or aspects related to the generation, management, and enforcement of a list of offensive expressions for moderation and the generation, management, and application of one or more label(s) and/or guidelines to a base message. It should be appreciated that one or more features discussed in connection with a given example of a social messaging platform may be employed in other examples of social messaging platforms according to the present disclosure, such that the various features disclosed herein may be readily combined in a given social messaging platform according to the present disclosure (provided that respective features are not mutually inconsistent).
  • FIG. 1A illustrates an example online social messaging platform 100 and example user devices 104a-104n configured to interact with the platform over one or more wired or wireless data communication networks 120.
  • Users 102a-102n of the platform use their user devices 104a-104n, on which client software 106a-106n is installed, to use the platform.
  • a user device can be any Internet-connected computing device, e.g., a laptop or desktop computer, a smartphone, or an electronic tablet.
  • the user device can be connected to the Internet through a mobile network, through an Internet service provider (ISP), or otherwise.
  • Each user device is configured with software, which will be referred to as a client or as client software 106a-106n, that in operation can access the platform 100 so that a user can post and receive messages, view, interact with, and create streams of messages and other content items, and otherwise use the service provided by the platform.
  • the client software 106a-106n can be adapted for operation on different user devices and/or different operating systems.
  • the client software 106a-106n can run on various operating systems including, but not limited to, Google Android™, Apple iOS®, Google Chrome OS™, Apple MacOS®, Microsoft Windows®, and Linux®.
  • the client software 106a-106n can further include web applications and cloud-based smartphone applications (e.g., the client software 106a isn’t installed directly onto the user’s device, but is rather accessible through a web browser on the user’s device).
  • a message posted on the platform 100 contains data representing content provided or selected by the author of the message.
  • the message may be an instance of a container data type (also sometimes referred to as a ‘message object’) storing the content data.
  • the types of data that may be stored in a message include text, graphics, images, video, audio content, and computer code, e.g., uniform resource locators (URLs).
  • Messages can also include key phrases or tags (e.g., a hashtag represented by “#”), that can aid in categorizing messages or in linking messages to topics.
  • Messages can further include one or more fields for metadata that may or may not be editable by the message author or account holder, depending on what the platform permits.
  • Examples of fields for message metadata can include, but are not limited to, a time and date of authorship, the user account of the authoring user, a geographical location of the user device when the client posted the message, an indication the message contains one or more offensive expression(s) and/or a list of the offensive expression(s) in the message (see Section 2), and one or more labels and/or guidelines associated with the message (see Section 3).
  • what metadata is provided to the platform by a client is determined by privacy settings controlled by the user or the account holder.
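  • A minimal sketch, in Python with hypothetical field names, of a message container object carrying the kinds of content and metadata fields described above; the field names are illustrative only and not taken from the disclosure.

```python
# Minimal sketch (hypothetical field names) of a message "container" object
# with the kinds of metadata fields described above.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Message:
    author_account_id: str
    text: str
    created_at: datetime
    geo_location: Optional[str] = None            # subject to privacy settings
    contains_offensive_expression: bool = False   # see Section 2
    offensive_expressions_found: List[str] = field(default_factory=list)
    labels: List[str] = field(default_factory=list)   # see Section 3
    guidelines: Optional[str] = None
    in_reply_to_message_id: Optional[str] = None  # set for reply messages
```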
  • Messages composed by one account holder may include references to other accounts, other messages, or both.
  • a message may be composed in reply to another message posted by another account or by the user.
  • Messages may also be re-publications of messages received from another account.
  • an account referenced in a message may appear as visible content in the message, e.g., as the name of the account, and may also appear as metadata in the message.
  • the referenced accounts can be interactive in the platform. For example, users may interact with account names that appear in their message stream to navigate to message streams of those accounts.
  • the platform also allows users to designate particular messages as private; a private message will only appear in the message streams of the composing and recipient accounts.
  • messages on the platform are microblog posts, which differ from email messages in a number of ways, for example, in that an author of the post does not necessarily need to specify, or even know, which accounts the platform will select to provide the message to.
  • the platform 100 is implemented on one or more servers 110a-110m in one or more locations (also referred to more generally as a “platform server 110”). Each server is implemented on one or more computers, e.g., on a cluster of computers.
  • the platform 100 further includes a database 117 to store, for example, various data on each user account, such as one or more lists of offensive expressions, moderation settings, and settings for applying labels and/or guidelines to a message.
  • the platform, the user devices, or both are configured, as will be described, to implement or perform one or more of the innovative technologies described in this specification. Further information about user devices, clients, servers, and the platform is provided later in this specification (see Section 4).
  • aspects disclosed herein are generally directed to the moderation in reply messages using, in part, user-defined content filter(s) and display of injunctive social norms to users by applying one or more labels and/or guidelines to a base message.
  • Such aspects may be executable by any suitable components of the platform 100 such as, for example, by one or more of the platform servers 110a-110m, and/or by any suitable components of the user devices 104a-104n.
  • FIG. IB shows an expanded view of the user device 104a.
  • the user device 104a includes one or more processors 101 and a memory 105. Unless indicated otherwise, all components of the user device 104a herein can be in communication with each other.
  • FIG. 1C shows an expanded view of the platform server 110a.
  • the platform server 110a includes one or more processors 111 and a memory 115.
  • the platform server 110a can further be communicatively coupled to the database 117. Unless indicated otherwise, all components of the platform server 110a herein can be in communication with each other.
  • One or more of the servers 110 implement a moderation module 112, directed to the detection of offensive expressions in reply messages posted in response to a base message as well as the notification and enforcement of moderation settings of the user who authored the base message (see, for example, the moderation module 112 in the memory 115 of FIG. 1C).
  • the moderation module 112 is also directed to managing the application of label(s) and/or guidelines to a base message.
  • the client software 106a-106n also includes a moderation module 108 to facilitate user interaction and communication with the moderation module 112 on the server(s) 110 (see, for example, the moderation module 108 in the memory 105 of FIG. IB).
  • the functions provided by the moderation module 108 can include, but are not limited to, providing a user interface for users to manage moderation settings, create and/or edit a list of offensive expressions, and assign one or more label(s) and/or guidelines to a base message.
  • the one or more processors 101 and 111 can each (independently) be any suitable processing device configured to run and/or execute a set of instructions or code associated with its corresponding user device 104, platform server 110, and/or the platform 100.
  • Each processor can be, for example, a general-purpose processor, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), and/or the like.
  • the one or more processors 101 and 111 can execute the moderation modules 108 and 112, respectively, as described in further detail below.
  • the memory 105, the memory 115, and the database 117 can encompass, for example, a random-access memory (RAM), a memory buffer, a hard drive, a database, an erasable programmable read-only memory (EPROM), an electrically erasable read-only memory (EEPROM), a read-only memory (ROM), Flash memory, and/or so forth.
  • the memory 105, the memory 115, and the database 117 can store instructions to cause the one or more processors 101 and 111, respectively, to execute processes and/or functions associated with the moderation modules 108 and 112, the user device 104, the platform server 110, and/or the platform 100.
  • the memory 105, the memory 115, and the database 117 can store any suitable content for use with, or generated by, the platform including, but not limited to, one or more connection graphs, a rule repository, and/or the like.
  • users are not limited to posting only reply messages in response to a base message.
  • Users can also post a share message, which is a message intended for other users that may or may not include the authoring user of the base message and includes the content of the base message without any additional content from the responding user.
  • Users can also post a quote message, which is a message intended for other users that may or may not include the authoring user of the base message and includes the content of the base message along with additional content from the responding user.
  • the moderation modules 108 and 112 can also be executed on a share message and/or a quote message to the extent applicable.
  • the platform 100 can encompass a large number of users (e.g., thousands, hundreds of thousands, millions, hundreds of millions) each of whom can post a base message and/or a reply message, define one or more lists of offensive expressions, have moderation settings to control moderation of reply messages, and assign label(s) and/or guidelines to any base message.
  • each user can be both a content creator and a content consumer.
  • the moderation modules 108 and 112 disclosed herein are configured to moderate reply messages posted by content consumers in response to a base message posted by a content creator using list(s) of offensive expressions to identify language the content creator wants to discourage and/or does not want to see in the reply messages.
  • the list(s) of offensive expressions can be stored in memory (e.g., the memory 105, the memory 115, the database 117) and associated with the content creator’s user account.
  • the moderation modules 108 and 112 can further be executed when content consumers are drafting a reply message to analyze the reply message as it is being drafted and to notify the content consumer when the content of the reply message includes an offensive expression from the content creator’s list(s) of offensive expressions.
  • the notification can include, for example, visual indicators (e.g., highlights) displayed on a user interface of the content consumer’s user device to identify the offensive expression in the draft of the reply message and/or a prompt to warn the content consumer of actions that will be taken by the platform 100 for posting a reply message with the offensive expression(s).
  • the moderation modules 108 and 112 can further apply one or more penalties to users who proceed to post reply messages with offensive expression(s).
  • the moderation modules 108 and 112 provide users of the social messaging platform 100 greater control over the moderation of reply messages.
  • by providing a way for content creators to define a custom list of offensive expressions, context-specific language that is offensive to the content creator can be detected and/or removed from view of the content creator automatically by the platform 100.
  • This can include offensive expressions that normally do not violate the rules of the platform 100 and/or any existing content filters.
  • the list(s) of offensive expressions associated with each user account of the users of the social messaging platform 100 can be different from other user accounts.
  • the moderation modules 108 and 112 also reduce the burden on each user to personally manage their reply messages since the platform 100 can automatically moderate the reply messages displayed on the content creator’s user device.
  • an offensive expression is made up of one or more characters.
  • the characters can be arranged to form one or more words in textual form.
  • the characters can also include icons, such as emojis (e.g., a pictogram) or emoticons (e.g., a combination of punctuation marks, letters, numbers, and/or the like arranged to resemble a face or an object).
  • an offensive expression can be formed using a standardized set of characters, such as the Unicode Standard.
  • an offensive expression can include a wildcard (e.g., an asterisk ‘*’, a question mark ‘?’), which can be used to represent expressions spelled in different ways or represent multiple expressions.
  • offensive expressions formed of multiple words and/or icons can also be divided into individual words and/or icons to facilitate proximity matching as described further below.
  • the characters can be a char data type and the offensive expression can be a string data type.
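  • A minimal sketch, in Python with hypothetical helpers, of expanding a creator-defined expression that may contain wildcards into a regular expression and splitting multi-word expressions into tokens so that proximity matching can be attempted; the wildcard semantics shown are an assumption for illustration.

```python
# Minimal sketch (hypothetical helpers): expand a creator-defined expression
# that may contain wildcards ('*' for any run of word characters, '?' for one)
# into a regular expression, and split multi-word expressions into tokens for
# proximity matching.
import re
from typing import List

def wildcard_to_regex(expression: str) -> re.Pattern:
    parts = []
    for ch in expression:
        if ch == "*":
            parts.append(r"\w*")
        elif ch == "?":
            parts.append(r"\w")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts), flags=re.IGNORECASE)

def tokenize(expression: str) -> List[str]:
    return expression.split()

pattern = wildcard_to_regex("lo*er")
print(bool(pattern.search("what a loser")))   # True
print(tokenize("total loser"))                # ['total', 'loser'] for proximity matching
```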
  • the platform 100 can also permit a content consumer to post a reply message with an offensive expression. This can be accompanied by a warning in the form of a label, a prompt, or a notification when the content consumer drafts the reply message.
  • the warning can notify the content consumer of actions that can be taken by the platform 100 when posting a reply message with an offensive expression, such as one or more penalties being applied to the content consumer’s user account and/or content.
  • This approach can be used to encourage users to proactively change their behavior by giving the content consumer opportunities to amend their reply message to remove the offensive expression(s).
  • the platform 100 can provide users the option to outright prohibit posting of reply messages with offensive expression(s).
  • moderation modules 108 and 112 are described. Specifically, the creation and management of a user-defined content filter and various moderation settings are described in Section 2.1. The enforcement of the user-defined content filter is described in Section 2.2. An example method for moderating a reply message is described in Section 2.3.
  • the moderation modules 108 and 112 can be configured to generally provide each user account of the platform 100 with several moderation settings.
  • the moderation settings can control various aspects of moderating reply messages including, but not limited to, the creation and management of one or more lists of offensive expressions associated with a content creator’s user account, the display of notifications on a content consumer’s user device when drafting a reply message with at least one offensive expression, and the management of one or more penalties applied to content consumers who post reply messages with at least one offensive expression.
  • the moderation settings associated with a content creator’s user account are generally applied to any reply message posted in direct response to a base message posted by the content creator. Said in another way, if a content creator posts a base message (e.g., a root message, a reply message), the content creator’s moderation settings dictate the moderation of any reply messages within the branch formed by that base message.
  • a message thread with multiple branches can thus have different moderation settings at each branch according to the moderation settings of the user who posted the base message of that branch.
  • the moderation settings of a content creator can also be applied to reply messages that do not directly respond to the content creator’s base message.
  • For example, if a second reply message is posted in response to a first reply message that itself responds to the content creator’s base message, the content creator’s moderation settings can apply to both the first and second reply messages.
  • the moderation settings of the user account that posted the first reply message can be superseded by the content creator’s moderation settings.
  • the moderation settings associated with each user account can be stored in memory (e.g., the memory 105, the memory 115, the database 117).
  • a record of the content creator’s user account can include the moderation settings along with other data associated with the user account including, but not limited to, a username, a user account name, authentication information (e.g., an email, a password), and a profile picture.
  • the moderation settings can thereafter be retrieved from memory, for example, when the processor 111 of the platform 100 executes the moderation modules 108 and 112 to moderate a reply message drafted by a content consumer.
  • the moderation settings and the list(s) of offensive expressions can be stored together (e.g., locally in the memory of a user device) or separately (e.g., the moderation settings are stored in a database, the list(s) of offensive expressions are stored locally in the memory of a user device).
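  • A minimal sketch, in Python with hypothetical names, of a user-account record carrying the moderation settings described above; the storage location (local device, server, database) can vary as described, and the penalty keys shown are illustrative assumptions.

```python
# Minimal sketch (hypothetical names) of a user-account record carrying
# moderation settings, one or more lists of offensive expressions, and
# per-penalty activation flags.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ExpressionList:
    name: str
    expressions: List[str] = field(default_factory=list)
    active: bool = True

@dataclass
class ModerationSettings:
    enabled: bool = False
    expression_lists: List[ExpressionList] = field(default_factory=list)
    penalties: Dict[str, bool] = field(default_factory=lambda: {
        "downrank_reply": True,   # cf. penalty 266a
        "block_account": False,   # cf. penalty 266b
        "mute_account": False,    # cf. penalty 266c
    })

@dataclass
class UserAccount:
    username: str
    account_name: str
    email: str
    profile_picture_url: str
    moderation: ModerationSettings = field(default_factory=ModerationSettings)
```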
  • the moderation settings associated with a content creator’s user account can generally be applied to each message posted by the content creator.
  • the moderation settings can also be modified on a per message basis.
  • the content creator can apply different moderation settings to different base messages. This can be achieved, in part, by applying the moderation settings from the content creator’s user account when the content creator activates the moderation settings while drafting the base message. The content creator can then modify the moderation settings for that base message as desired.
  • the moderation settings can be modified by the content creator via one or more user interfaces displayed on their user device and managed by the moderation modules 108 and/or 112.
  • FIG. 2A shows an example user interface of privacy and safety related settings associated with the user account of a content creator.
  • the user interface is displayed on a user device of the content creator and is accessible, for example, via an account settings page or profile page associated with the content creator’s user account.
  • the user interface of FIG. 2A is not limited only to content creators of the platform 100, but the same user interface can be displayed for any user of the platform 100.
  • the user interface can include a section 250a for the moderation settings with a status indicator element 252 to indicate whether moderation settings are activated or deactivated.
  • the section 250a further includes a user interface element 254, which when selected, displays another user interface where various moderation settings can be modified as shown in FIG. 2B.
  • FIG. 4A shows a user interface to compose a base message displayed on the user device of the content creator.
  • the user interface can include a space 210 to contain the content of the base message.
  • the user interface can further include a submit button 212 to post the base message to the platform 100 and a cancel button 214 to cancel the base message (e.g., delete the draft of the base message).
  • FIG. 4A shows the user interface can also include an interactive label 250b provided by the moderation modules 108 and/or 112, which when selected by the content creator, displays the user interface of FIG. 2B.
  • a notification or a message can be displayed in a stream on the content creator’s user device with a user interface element that displays the user interface of FIG. 2B.
  • the user interface of FIG. 2B includes a section 260a with a description of the moderation feature (sometimes referred to as “smellcheck”) and a toggle switch 262 to activate or deactivate moderation of reply messages.
  • the user interface further includes a section 260b, which provides a summary of the offensive expressions selected by the content creator and a user interface element 264, which when selected, displays another user interface to facilitate the creation and/or management of one or more lists of offensive expressions (see, for example, FIG. 2C).
  • the summary of offensive expressions in the section 260b can include the number of offensive expressions in each of the content creator’s lists of offensive expressions.
  • the user interface further includes a section 260c with various settings on penalties that can be applied to content consumers who post reply messages with an offensive expression in response to a content creator’s base message.
  • FIG. 2C shows an example user interface to create a list of offensive expressions and/or manage one or more list(s) of offensive expressions, such as by adding or removing an offensive expression.
  • the user interface includes a search box 270 that can be used to find an offensive expression in a list of offensive expressions and/or to add an offensive expression to a list of offensive expressions.
  • the user interface of FIG. 2C can display one or more lists of offensive expressions associated with the content creator’s user account.
  • the content creator’s user account includes a list 272a containing offensive expressions with hateful, hurtful, or violent language, a list 272b containing offensive expressions with profanity, and a list 272c containing offensive expressions added manually by the content creator.
  • the lists 272a-272c are non-limiting examples, and each user account can generally include any number of lists according to the user’s preferences.
  • Each of the lists 272a, 272b, and 272c includes a toggle switch 274 to activate or deactivate the list.
  • the platform 100 only checks a reply message for offensive expressions in a list when that list is activated.
  • the user interface of FIG. 2C is designed to avoid displaying any offensive expressions in the lists of offensive expressions to reduce the likelihood of traumatizing or re-traumatizing the content creator.
  • the user interface of FIG. 2C includes a user interface element 276, which when selected, displays another user interface with the offensive expressions in each of the lists of offensive expressions to provide the content creator the option to activate or deactivate individual offensive expressions in each list (see, for example, FIG. 2F).
  • the lists of offensive expressions generally include (A) user-defined lists of offensive expressions (e.g., the list 272c) where the content creator manually creates the list and adds offensive expressions to the list thereafter and (B) lists of predetermined offensive expressions (e.g., the list 272a) that are generated by the platform 100.
  • Each list further includes a list name chosen by the content creator or generated automatically by the platform 100, such as “hateful, hurtful, or violent language,” “profanity,” or “your custom list.”
  • the content creator can manually add an offensive expression using the search box 270.
  • FIG. 2D shows that when an offensive expression (e.g., an emoji 281a) is entered in the search box 270, a search result 280a is displayed indicating the offensive expression is not in any list (e.g., the lists 272a-272c).
  • the search result 280a further includes an add button 282 to add the offensive expression to a list (e.g., the list 272c).
  • the user interface of FIG. 2D further shows a user interface element 284, which when selected, allows the content creator to search and display additional offensive expressions to add to a list.
  • the user interface can display the lists of offensive expressions associated with the content creator’s user account for selection. Upon selecting one or more of the lists, the offensive expression is added to those lists. Alternatively, the user interface can display an option to create a new list of offensive expressions along with one or more prompts for the content creator to input, for example, a list name for the list. Generally, an offensive expression can be manually added to a user-defined list of offensive expressions or a list of predetermined offensive expressions.
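  • A minimal sketch, in Python with hypothetical names, of the search-box flow described above: report which of the creator's lists already contain a queried expression and add it to a chosen list if absent; the matching rule used is an illustrative assumption.

```python
# Minimal sketch (hypothetical names): report which of the creator's lists
# already contain a queried expression, and add it to a chosen list if absent.
from typing import Dict, List

def lists_containing(expression: str, lists: Dict[str, List[str]]) -> List[str]:
    return [name for name, items in lists.items()
            if any(expression.lower() in item.lower() or item.lower() in expression.lower()
                   for item in items)]

def add_expression(expression: str, list_name: str, lists: Dict[str, List[str]]) -> None:
    lists.setdefault(list_name, [])
    if expression not in lists[list_name]:
        lists[list_name].append(expression)

creator_lists = {"profanity": ["darn"], "your custom list": []}
print(lists_containing("💩", creator_lists))          # [] -> show an "Add" button
add_expression("💩", "your custom list", creator_lists)
print(creator_lists["your custom list"])              # ['💩']
```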
  • FIG. 2D is a non-limiting example and that the platform 100 can support other ways of adding offensive expressions to a content creator’s list of offensive expressions for moderation.
  • users can add an offensive expression for moderation when muting that offensive expression.
  • FIG. 4B shows an example user interface for users to add muted expressions.
  • When a user mutes an expression, that expression is removed from any content displayed on the user interface of the content creator’s user device. This includes any message displayed in a stream, any notifications, and/or any direct messages. This can be accomplished, for example, by replacing the expression with several asterisks or disabling display of any message or notification containing that expression.
  • FIG. 4B includes a space 256 to enter the expression to be muted.
  • FIG. 4C shows that when an expression is added, a notification 251 can be displayed, for example, over a message thread on the content creator’s user device.
  • the notification 251 notifies the content creator that the expression is muted and can further include a button 258 provided by the moderation modules 108 and/or 112 to add the expression to a list of offensive expressions for moderation.
  • the list(s) of predetermined offensive expressions generally includes offensive expressions that are commonly used by users on the social messaging platform 100, and especially users that engage with the content creator.
  • These offensive expressions can be identified, for example, by agents of the social messaging platform 100, e.g., a human operator that reviews messages for toxic and/or abusive content.
  • agents can identify expressions commonly used in messages reported for toxic and/or abusive content that are likely to be offensive to most users.
  • the offensive expressions can thereafter be collectively compiled into a list of predetermined offensive expressions and shared with one or more user accounts of the platform 100.
  • the list(s) of predetermined offensive expressions can be tailored, for example, to include offensive expressions that are more likely to be relevant and/or understood by the content creator via the moderation modules 108 and/or 112. This can be achieved, in part, by the moderation modules 108 and/or 112 generating list(s) of predetermined offensive expressions based on one or more attributes of the content creator’s user account.
  • the attributes associated with a user account can include, but are not limited to, a location of the user device associated with the user account (e.g., determined by a position tracking sensor in the user device), a region associated with the user account (e.g., the Southern region of the United States, the West Coast region of the United States), a nationality associated with the user account, and a default language associated with the user account (e.g., the native language of the content creator).
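  • A minimal sketch, in Python with hypothetical names, of selecting which predetermined lists to offer a user based on account attributes such as default language and region; the attribute keys and placeholder expressions are illustrative assumptions, not lists taken from the disclosure.

```python
# Minimal sketch (hypothetical names): select which predetermined lists of
# offensive expressions to offer a user based on user-account attributes.
from typing import Dict, List

PREDETERMINED_LISTS: Dict[str, List[str]] = {
    "en-US": ["example_expression_1", "example_expression_2"],
    "en-GB": ["example_expression_3"],
    "es":    ["example_expression_4"],
}

def tailored_lists(default_language: str, region: str) -> List[str]:
    keys = [f"{default_language}-{region}", default_language]
    offered: List[str] = []
    for key in keys:
        offered.extend(PREDETERMINED_LISTS.get(key, []))
    return offered

print(tailored_lists("en", "US"))   # expressions relevant to an en-US account
```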
  • each offensive expression in a list of offensive expressions can be individually activated or deactivated in addition to activating or deactivating each list of offensive expression(s). Similar to a list of offensive expressions, the platform 100 only checks a reply message for an offensive expression when that offensive expression is activated.
  • FIG. 2E shows the user interface when an offensive expression 281b that is present in at least one list of offensive expressions is entered in the search box 270. As shown, multiple search results 280b, 280c, and 280d are displayed corresponding to offensive expressions that either match the offensive expression 281b or include the offensive expression 281b.
  • Each of the search results 280b, 280c, and 280d indicate the corresponding offensive expression is present in at least one list of offensive expressions.
  • the name of the list containing the offensive expression can be displayed.
  • the user interface can also display search results with offensive expressions that are not present in any list of offensive expressions (e.g., the search result 280a) together with the search results 280b, 280c, and 280d.
  • a toggle switch 274 is also provided for the content creator to activate or deactivate that offensive expression for moderation.
  • FIG. 2F shows another example user interface that lists the offensive expressions in the lists of offensive expressions associated with the content creator’s user account.
  • the user interface of FIG. 2F can be displayed, for example, by selecting the user interface element 276 on the user interface of FIG. 2C.
  • the user interface can include offensive expressions 273a, 273b, 273c, 273d, 273e, and 273f with each offensive expression having a toggle switch 274 to activate or deactivate that offensive expression.
  • the user interface can further include a display filter button 286, which when selected, provides several options to filter the offensive expressions displayed on the user interface according to different criteria.
  • the criteria can include, but are not limited to, membership in a list of offensive expressions (e.g., the lists 272a-272c), the category of the offensive expression (e.g., profanity, hateful, violent), the date the offensive expression was added, and the most frequently detected offensive expressions in the reply messages posted in response to the content creator’s base message.
  • the user interface further includes a hide button 288 to close the list of offensive expressions and return to the user interface of FIG. 2C. Any changes made by the content creator are thereafter stored in memory (e.g., the memory 105, the memory 115, the database 117).
  • As shown in FIG. 2F, each offensive expression can be partially hidden from view by replacing a portion of the offensive expression with asterisks to reduce the likelihood of traumatizing or re-traumatizing the content creator.
  • each offensive expression can be interactive to provide users the option to display the original offensive expression.
  • each user account of the platform 100 can potentially have a unique combination of lists.
  • the list(s) of offensive expressions for each user account can be stored locally on one or more user devices associated with that user account (e.g., a smartphone, a tablet).
  • Each list can further include one or more fields for each offensive expression to indicate whether the offensive expression should be searched or not (e.g., whether the toggle switch 274 is activated or deactivated).
  • a copy of the content creator’s list(s) can be transmitted from the content creator’s user device to the platform 100 (e.g., a platform server 110) for temporary storage.
  • When the platform 100 detects a content consumer is drafting a reply message in response to the content creator’s base message, the platform 100 can transmit the content creator’s list(s) to the content consumer’s user device and the processor of the user device can thereafter evaluate the draft of the reply message based on the content creator’s list(s).
  • the content creator’s list(s) can be removed from the content consumer’s device and/or the platform 100.
  • the content creator’s list(s) can remain in memory on the platform 100 for a limited period of time so that the content creator’s user device does not have to repeatedly transmit a copy of the content creator’s list(s) when multiple users draft a reply message.
  • the period of time can be chosen according to the time when reply messages are most likely to be posted responding to a base message (e.g., 1-3 days after the content creator’s base message is posted).
  • the moderation module 112 can further be executed to monitor the duration that the content creator’s list(s) are stored in memory on the platform 100.
  • copies of the content creator’s list(s) can be stored indefinitely on the platform 100 (e.g., the memory 115, the database 117) and periodically updated or replaced whenever the content creator’s list(s) are changed on the content creator’s user device via the moderation modules 108 and/or 112.
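  • A minimal sketch, in Python with hypothetical names, of a server-side cache that keeps a creator's transmitted list(s) for a limited period so they need not be re-sent for every reply drafted shortly after the base message is posted; the roughly 3-day TTL echoes the 1-3 day example above but is otherwise an assumption.

```python
# Minimal sketch (hypothetical names): server-side cache of a creator's
# list(s) of offensive expressions with a limited time-to-live.
import time
from typing import Dict, List, Optional, Tuple

class ExpressionListCache:
    def __init__(self, ttl_seconds: float = 3 * 24 * 3600):  # e.g. ~3 days
        self.ttl = ttl_seconds
        self._store: Dict[str, Tuple[float, List[str]]] = {}

    def put(self, account_id: str, expressions: List[str]) -> None:
        self._store[account_id] = (time.time(), list(expressions))

    def get(self, account_id: str) -> Optional[List[str]]:
        entry = self._store.get(account_id)
        if entry is None:
            return None
        stored_at, expressions = entry
        if time.time() - stored_at > self.ttl:
            del self._store[account_id]   # expired; creator's device must re-send
            return None
        return expressions

cache = ExpressionListCache()
cache.put("creator-123", ["loser", "ratio"])
print(cache.get("creator-123"))   # ['loser', 'ratio'] while within the TTL
```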
  • the section 260c in the user interface of FIG. 2B shows several example penalties that can be applied by execution of the moderation modules 108 and/or 112 when a content consumer posts a reply message with one or more offensive expressions.
  • the section 260c includes a penalty 266a, which when activated, causes a reply message with an offensive expression to be downranked in the branch of the message thread such that the reply message has a lower priority for display compared to other reply messages within that branch.
  • the user interface further includes a toggle switch 268 to activate or deactivate the penalty 266a.
  • the downranking process can be facilitated, in part, by each reply message in a branch of the message thread having a rank parameter to determine the order of the reply messages displayed on the user device of a user of the platform 100.
  • the rank parameter represents the position or index to display a reply message.
  • a rank parameter with a higher value can correspond to a higher position for display.
  • the reply messages can be arranged such that the reply message having the highest rank parameter value is displayed first, the reply message with the next highest rank parameter value is displayed second, and so on.
  • a lower rank parameter value can correspond to a higher position for display where the reply message with the lowest rank parameter value is displayed first, the reply message with the second-lowest rank parameter value is displayed second, and so on.
  • a message thread can include a single branch with a first reply message, a second reply message, and a third reply message posted in response to a base message (e.g., a root message) of a content creator.
  • the first reply message includes a first rank parameter
  • the second reply message includes a second rank parameter with a value lower than the first rank parameter
  • the third reply message includes a third rank parameter with a value lower than the second rank parameter.
  • the first reply message can be displayed first in a message thread followed by the second reply message and lastly the third reply message.
  • When the platform 100, in executing the moderation modules 108 and 112, determines the first reply message includes an offensive expression from the list of offensive expressions in the moderation settings of the content creator, the first reply message is downranked such that the first rank parameter has a value lower than the second and third rank parameters. This, in turn, causes the second reply message to be displayed first in the message thread followed by the third reply message and lastly the first reply message.
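  • A minimal sketch, in Python with hypothetical names, of the downranking penalty: each reply carries a rank parameter, higher values are displayed first, and a reply found to contain an offensive expression is pushed below the other replies in its branch.

```python
# Minimal sketch (hypothetical names) of the downranking penalty (cf. 266a).
from dataclasses import dataclass
from typing import List

@dataclass
class Reply:
    text: str
    rank: float
    offensive: bool = False

def downrank_offensive(replies: List[Reply]) -> List[Reply]:
    min_rank = min(r.rank for r in replies)
    for r in replies:
        if r.offensive:
            r.rank = min_rank - 1.0   # place below every non-offensive reply
    # Higher rank value -> higher display position.
    return sorted(replies, key=lambda r: r.rank, reverse=True)

branch = [Reply("first", 3.0, offensive=True), Reply("second", 2.0), Reply("third", 1.0)]
print([r.text for r in downrank_offensive(branch)])   # ['second', 'third', 'first']
```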
  • FIG. 2B shows a penalty 266b, which when activated, causes the user account associated with a reply message having an offensive expression to be blocked from the content creator’s user account.
  • the penalty 266b also includes a toggle switch 268 to activate or deactivate the penalty 266b.
  • When a first user (e.g., the content creator) blocks a second user (e.g., the content consumer), the second user is no longer allowed to view the first user’s account, view any content posted by the first user, follow the first user, or send any messages or content to the first user.
  • any content posted by the second user is no longer visible to the first user.
  • any reply messages posted by the content consumer in the message thread can be made not visible to the user device of the content creator.
  • FIG. 3 shows a user interface similar to that of FIG. 2B, but with a penalty 266c, which when activated, causes the user account associated with a reply message with an offensive expression to be muted from the content creator’s user account.
  • When a first user (e.g., the content creator) mutes a second user (e.g., the content consumer), any content posted by the second user is not visible to the first user.
  • the second user can still view the first user’s account, view any content posted by the first user, and/or follow the first user.
  • each penalty can also include additional conditions before the penalty is applied to the offending content consumer’s reply message and/or user account.
  • a penalty can be applied only when the number of reply messages with an offensive expression posted by a content consumer exceeds a predetermined threshold.
  • the predetermined threshold can be chosen by the content creator and can generally range between one reply message to ten reply messages, or more, including all values and sub-ranges in between.
  • FIG. 2B shows the penalty 266b is only applied when the content consumer posts two reply messages with an offensive expression.
  • content consumers can post one reply message with an offensive expression without being blocked by the content creator, per the penalty 266b.
  • the penalty 266a does not include any threshold condition and can thus be applied.
  • a penalty can be applied only when the rate at which reply messages with an offensive expression are posted by a content consumer exceeds a predetermined threshold.
  • the predetermined threshold can range between one reply message having an offensive expression per hour and five reply messages having an offensive expression per hour, or more, including all values and sub-ranges in between.
  • the penalty can be applied for a limited period of time.
  • the penalties 266b and 266c to block and mute, respectively, a content consumer’s user account can be applied for a limited period of time, such as 1 hour, 1 day, 1 week, 1 month, or 1 year.
  • the moderation modules 108 and/or 112 can further monitor the elapsed time for each penalty to determine whether the penalty should remain or be removed from a content consumer’s user account. In this manner, the content creator can provide an opportunity for the content consumer to post reply messages after the period of time elapses.
  • the content consumer’s account can be blocked and/or muted for a longer period of time (e.g., twice the period of time selected by the content creator) or indefinitely. It should be appreciated that for some penalties, such as the penalty 266a, the penalties can be applied indefinitely.
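  • A minimal sketch, in Python with hypothetical names, of applying the block penalty only after a content consumer exceeds a threshold number of offending replies and only for a limited period; the two-reply threshold and one-week duration echo the examples above but are otherwise assumptions.

```python
# Minimal sketch (hypothetical names): count offending replies per consumer and
# block (cf. penalty 266b) only past a threshold, for a limited duration.
import time
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PenaltyTracker:
    threshold: int = 2                        # e.g. block after two offending replies
    duration_seconds: float = 7 * 24 * 3600   # e.g. one week
    counts: Dict[str, int] = field(default_factory=dict)
    blocked_until: Dict[str, float] = field(default_factory=dict)

    def record_offending_reply(self, consumer_id: str) -> None:
        self.counts[consumer_id] = self.counts.get(consumer_id, 0) + 1
        if self.counts[consumer_id] >= self.threshold:
            self.blocked_until[consumer_id] = time.time() + self.duration_seconds

    def is_blocked(self, consumer_id: str) -> bool:
        return time.time() < self.blocked_until.get(consumer_id, 0.0)

tracker = PenaltyTracker()
tracker.record_offending_reply("consumer-42")
print(tracker.is_blocked("consumer-42"))   # False: first offending reply is tolerated
tracker.record_offending_reply("consumer-42")
print(tracker.is_blocked("consumer-42"))   # True: threshold reached, blocked for a week
```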
  • a penalty can also vary depending on the moderation settings of the content creator and/or the structure of the message thread. In one example, a penalty is only applied within the branch that the reply message was posted (e.g., the content consumer can still post reply messages in other branches without restriction). In another example, a penalty can be applied to the entire message thread (e.g., the content consumer is blocked from posting a reply message to the message thread). In yet another example, a penalty can be applied to other message threads that include a base message posted by the content creator.
  • the penalty 266a to downrank a reply message can be applied within the branch in which the reply message was posted (e.g., the reply message is downranked only within that branch) or to the entire message thread (e.g., the reply message is downranked relative to all reply messages within the message thread).
  • the penalties 266b and 266c can extend to any message thread in which the content creator posts a base message.
  • each reply message can further include one or more fields for metadata, as described above, to facilitate moderation.
  • the metadata fields can include a field to identify the user account of the base message that the reply message is directly responding to. The user account can be used to retrieve, for example, the moderation settings associated with that user account to evaluate the application of any penalties to the reply message.
  • the metadata fields can include a first field to indicate whether the reply message includes an offensive expression.
  • the metadata fields can further include a second field with an indexed number representing the number of reply messages with offensive expressions posted by that user account for that branch or message thread.
  • the metadata fields can include a field to indicate the type of penalties that should be applied to that reply message and/or the user account of the content consumer who posted that reply message. In this manner, reply messages that include offensive expressions can be tracked and penalized by updating the messages stored in memory as new reply messages are posted to the branch and/or message thread.
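As an illustration of the kind of metadata fields described above, the following sketch uses a hypothetical Python dataclass; every field name here is an assumption for illustration, not the platform's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplyModerationMetadata:
    """Illustrative metadata fields for a reply message; names are assumptions."""
    base_author_account_id: str              # account whose moderation settings apply
    has_offensive_expression: bool = False   # first field described above
    offensive_reply_count: int = 0           # running count for this branch/thread
    penalties_to_apply: List[str] = field(default_factory=list)  # e.g., ["downrank"]

# Updating the stored metadata as a new offending reply is posted to the branch.
meta = ReplyModerationMetadata(base_author_account_id="creator_123")
meta.has_offensive_expression = True
meta.offensive_reply_count += 1
meta.penalties_to_apply.append("downrank")
print(meta)
```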
  • the moderation modules 108 and 112 disclosed herein can evaluate a reply message as it is drafted by a content consumer, i.e., substantially in real-time, to identify whether the draft of the reply message includes an offensive expression. If the reply message is determined to include an offensive expression, a warning can be displayed to notify the content consumer that the reply message includes the offensive expression and of the actions that may be taken by the platform 100 (e.g., penalties) against the content consumer's user account and/or content if they do not remove the offensive expression. Additionally, the platform 100 can permit content consumers to post a reply message with an offensive expression.
  • the moderation features disclosed herein can be configured to encourage the content consumer to proactively change their behavior, in part, by giving the content consumer a choice to either remove an offensive expression from a reply message or post the reply message with the offensive expression at the expense of being penalized.
  • FIGS. 5A-5C show several user interfaces displayed on a user device associated with a content consumer managed by the language modules 108 and/or 112.
  • FIG. 5A shows a user interface to compose a reply message 201.
  • the user interface includes the space 210 for the message 201, the submit button 212, and the cancel button 214.
  • the message 201 includes one offensive expression 203, which can be detected, for example, by the processor of the content consumer’s user device (e.g., the processor(s) 101) as described in further detail below.
  • a visual indicator 205 can be displayed to visually distinguish the offensive expression from other portions of the message 201.
  • the visual indicator 205 can be represented in various forms including, but not limited to, a highlight of the offensive expression as shown in FIG. 5A, an underline of the offensive expression, the offensive expression being in bolded font, the offensive expression being shown in an italicized font, and any combinations of the foregoing.
  • the user interface can further display a prompt 220 with a message to notify the content consumer that the draft of the reply message includes the offensive expression 203.
  • a notification 222 can also be displayed to notify the content consumer that the content creator activated the moderation features for this particular branch of the message thread.
  • the prompt 220 can be interactive such that, when selected in FIG. 5A, the prompt 220 expands to display a message 224 explaining why the offensive expression is prohibited as shown in FIG. 5B.
  • the message 224 can further include the penalties that the content consumer’s user account and/or reply message will receive if the offensive expression is not removed.
  • the prompt 220 can further include a confirmation button 226, which when selected, closes the prompt 220.
  • the prompt 220 can also include an ignore button 228, which when selected, adds the offensive expression 203 to a list of ignored offensive expressions associated with the content consumer’s user account.
  • the content consumer’s user device will not display a prompt 220 thereafter when that offensive expression is included in a draft of a reply message.
  • the visual indicator 205 can still be displayed whenever the offensive expression is detected in the reply message.
  • the prompt 220 in FIG. 5B includes an information button 230, which when selected, displays another prompt with information on the content creator’s moderation settings as shown in FIG. 5C.
  • the prompt of FIG. 5C includes a section 232 with a message summarizing the purpose of the moderation features.
  • the prompt also includes a section 234 summarizing some of the types of offensive expressions the content creator included in their list(s).
  • the section 234 can include, for example, the list names of each list associated with the content creator’s user account.
  • the prompt further includes a section 236 summarizing the penalties incurred if the content consumer posts the message 201 without removing the offensive expression 203.
  • the prompt further includes a section 238 covering the list of ignored offensive expressions associated with the content consumer’s user account.
  • the section 238 includes a user interface element 242, which when selected, displays the list of ignored offensive expressions.
  • the content consumer can further add or remove offensive expressions from the list of ignored offensive expressions.
  • the prompt of FIG. 5C includes a confirmation button 240, which when selected, closes the prompt to return to the user interfaces of FIGS. 5A or 5B. Any changes made by the content consumer, for example, to the list of ignored offensive expressions are thereafter stored in memory (e.g., the memory 105, the memory 115, the database 117).
  • the detection of an offensive expression in a draft of the reply message can be accomplished, in part, by first transferring a copy of the list(s) of offensive expressions associated with the content creator's user account to the content consumer's user device (e.g., via the platform server 110). Thereafter, the processor(s) of the content consumer's user device can evaluate the offensive expressions in the list(s) against the draft of the reply message to determine whether the reply message includes an offensive expression. Alternatively, a copy of the draft of the reply message can be transmitted to a platform server and the processor(s) of the platform server can evaluate the draft of the reply message for any offensive expressions. This evaluation process can be accomplished in several ways in real-time or substantially in real-time as the content consumer is drafting the reply message.
  • the processor(s) of the user device or the platform server can determine whether the draft of the reply message includes an expression that exactly matches an offensive expression in the content creator’s list(s) of offensive expressions. In other words, the processor(s) evaluate whether the sequence of characters forming the offensive expression appear identically in the draft of the reply message.
  • each offensive expression in the list(s) of offensive expressions can be represented as a string data type.
  • the draft of the reply message can also be represented as a string data type.
  • the processor(s) of the content consumer’s user device can execute a loop over the list(s) of offensive expressions such that each offensive expression is compared against the draft of the reply message using, for example, a string compare function. If the string compare function returns a ‘True’ value, the offensive expression is contained within the draft of the reply message as a substring. Otherwise, if a ‘False’ value is returned, the draft of the reply message does not include the offensive expression.
  • the draft of the reply message can be represented as a list of substrings where each substring corresponds to a single word (e.g., a series of characters that begins and ends with a space) or an icon in the reply message.
  • the list of substrings representing the reply message can then be compared directly against the list(s) of offensive expressions to determine whether the respective lists include any matches. If the offensive expressions include phrases formed from two or more words and/or icons, the draft of the reply message can be divided into individual words and/or icons as well as combinations of consecutive words and/or icons.
  • the reply message “This is an offensive expression” can be represented in a list as [“This”; “is”; “an”; “offensive”; “expression”; “This is”; “is an”; “an offensive”; “offensive expression”] to cover offensive expressions formed of one word or two words.
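A minimal Python sketch of this exact-matching approach is shown below, assuming the draft is split on whitespace and that offensive expressions contain at most two words; the function names are illustrative only, not part of the platform.

```python
from typing import List

def candidate_substrings(message: str, max_words: int = 2) -> List[str]:
    """Split a draft reply into individual words and consecutive word combinations."""
    words = message.split()
    combos = []
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            combos.append(" ".join(words[i:i + n]))
    return combos

def find_exact_matches(message: str, offensive_expressions: List[str]) -> List[str]:
    """Return the offensive expressions that appear verbatim in the draft."""
    candidates = set(candidate_substrings(message))
    return [expr for expr in offensive_expressions if expr in candidates]

print(candidate_substrings("This is an offensive expression"))
print(find_exact_matches("This is an offensive expression", ["offensive expression"]))
```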
  • the processor(s) of the user device or the platform server can determine whether the draft of the reply message includes an expression that approximately matches an offensive expression in the content creator’s list(s) of offensive expressions.
  • This approach can account for reply messages with offensive expressions that are misspelled and/or altered to avoid detection.
  • an approximate string-matching algorithm, also referred to in the art as a "fuzzy string searching algorithm," can be used to evaluate the closeness between an offensive expression in the list(s) of offensive expressions and the draft of the reply message.
  • the Levenshtein algorithm can be implemented in the moderation modules 108 and 112 to evaluate the closeness between two strings using an edit distance (also referred to in the art as a “Levenshtein distance”).
  • the edit distance represents the number of primitive operations applied to one string (e.g., a portion of the reply message) for that string to exactly match another string (e.g., an offensive expression).
  • the primitive operations include modifications to individual characters of the string, such as insertion where a single character is added to the string, deletion where a single character is removed from the string, substitution where one character in the string is replaced by another character, and/or transposition where two characters are swapped in the string.
  • the edit distance can be computed by comparing each offensive expression in the list(s) of offensive expressions against different portions of the draft of the reply message using the Levenshtein algorithm.
  • the draft of the reply message can be divided into a list of substrings as described above.
  • the list of substrings can include individual words, individual icons, and/or combinations of words and/or icons.
  • Each substring in the draft of the reply message can then be compared against each offensive expression to determine an edit distance.
  • the edit distance can be compared against a predetermined threshold.
  • a predetermined threshold can generally range from 1 to 3; if the edit distance is less than or equal to the threshold, the corresponding portion of the reply message is treated as a match for the offensive expression.
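The edit-distance comparison described above might be sketched as follows. This is a plain Levenshtein implementation covering insertion, deletion, and substitution (the transposition operation mentioned above would require the Damerau-Levenshtein variant), and the threshold of 2 is only an example value.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def approx_match(substring: str, offensive_expression: str, threshold: int = 2) -> bool:
    """Treat a substring as a match if it is within `threshold` edits of the expression."""
    return edit_distance(substring.lower(), offensive_expression.lower()) <= threshold

print(edit_distance("stoopid", "stupid"))  # 2 (one substitution plus one deletion)
print(approx_match("stoopid", "stupid"))   # True with the example threshold of 2
```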
  • the processor(s) of the user device or the platform server can also use a proximity string matching approach where the draft of the reply message is determined to include an offensive expression with two or more words and/or icons if the words and/or icons are in sufficient proximity to one another in the draft of the reply message.
  • an offensive expression with multiple words can be divided into a list of substrings where each substring represents one of the words.
  • Each substring can then be compared against the draft of the reply message to determine whether the substrings are present using, for example, the exact string-matching approach or approximate string-matching approach described above. If the substrings are present in the draft of the reply message, but not positioned next to one another, the number of words and/or icons separating them can then be computed for each pair of substrings.
  • the number of words and/or icons separating each pair of substrings and/or all respective pairs of substrings can then be compared against a predetermined threshold to determine whether the substrings are sufficiently close to one another to match the offensive expression.
  • the offensive expression “huge jerk” can be divided into the substrings “huge” and “jerk.” If the draft of the reply message includes the expression “huge insufferable jerk,” the substrings “huge” and “jerk” are separated by one word.
  • if the predetermined threshold is two words, then the expression "huge insufferable jerk" is determined to be a match with the offensive expression "huge jerk." More generally, the predetermined threshold can range from one word/icon to five words/icons, or more, including all values and sub-ranges in between.
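A simplified Python sketch of the proximity-matching idea follows, assuming exact word matches, that each word of the expression appears at most once in the draft, and that the words appear in order; the function and parameter names are assumptions made for this illustration.

```python
from typing import List

def proximity_match(message: str, offensive_expression: str, max_gap: int = 2) -> bool:
    """Match a multi-word expression whose words appear close together, in order."""
    msg_words: List[str] = [w.lower() for w in message.split()]
    expr_words: List[str] = [w.lower() for w in offensive_expression.split()]
    positions = []
    for word in expr_words:
        if word not in msg_words:
            return False                      # a word of the expression is missing
        positions.append(msg_words.index(word))
    # Words separating each consecutive pair, compared against the threshold.
    gaps = [positions[i + 1] - positions[i] - 1 for i in range(len(positions) - 1)]
    return all(0 <= gap <= max_gap for gap in gaps)

print(proximity_match("he is a huge insufferable jerk", "huge jerk"))           # True: one word apart
print(proximity_match("a huge fan, not a jerk at all", "huge jerk", max_gap=2))  # False: too far apart
```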
  • a wildcard is generally a character used to represent zero or more characters. Common wildcards include, but are not limited to, an asterisk '*' and a question mark '?'. In some implementations, wildcards can be used to cover offensive expressions with different spelling or multiple offensive expressions.
  • the offensive expression “stupid*” can cover different forms of the same expression in the reply message, such as “stupidity,” “stupidly,” and “stupidest.”
  • the expression “*stupid*” can cover different expressions that include the word “stupid” in the reply message, such as “stupidhead,” “stupid-head”, and “super stupidhead.”
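One possible way to evaluate such wildcard patterns is with Python's standard fnmatch module, as sketched below; the choice of fnmatch (rather than a hand-rolled pattern translator) and the helper name are assumptions of this sketch.

```python
import fnmatch
import re

def wildcard_match(word: str, pattern: str) -> bool:
    """Match one word against a wildcard pattern ('*' = any run of characters, '?' = one)."""
    return fnmatch.fnmatch(word.lower(), pattern.lower())

print(wildcard_match("stupidity", "stupid*"))     # True
print(wildcard_match("stupid-head", "*stupid*"))  # True
print(wildcard_match("super", "stupid*"))         # False

# The same pattern can be translated into a regular expression for whole-message scans.
regex = re.compile(fnmatch.translate("*stupid*"), re.IGNORECASE)
print(bool(regex.match("super stupidhead")))      # True
```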
  • the above approaches to evaluate the draft of the reply message for any offensive expressions can also be applied to the evaluation of a posted reply message for any offensive expressions.
  • the processor(s) of the platform server e.g., the processor(s) 111
  • the processor(s) of the platform server can evaluate the reply message in the same manner as described above to determine whether the reply message includes an offensive expression. If it is determined the reply message includes an offensive expression, the processor(s) of the platform server can determine whether a penalty should be applied to the reply message and/or the content consumer’s user account.
  • the processor(s) of the platform server can apply the penalty, e.g., by blocking or muting the content consumer’s user account from the content creator’ s user account, downranking the reply message within the branch or message thread, or disabling display of the reply message on other user devices including the content creator’s user device.
  • FIGS. 6A and 6B show an example method 300 for moderating a reply message posted by a second user account (e.g., a content consumer) in response to a base message posted by a first user account (e.g., a content creator).
  • the method 300 can generally be executed via a combination of processor(s) of the platform server, the processor(s) of the content consumer’s user device, and/or the processor(s) of the content creator’s user device.
  • the platform server receives a base message, moderation settings, and a list of offensive expressions via transmission from a first user device associated with a first user account.
  • the base message, moderation settings, and a list of offensive expressions can be generated at the content creator's user device.
  • the base message, the moderation settings, and the list of offensive expressions are then stored in memory on the platform (e.g., the memory 115, the database 117).
  • the moderation settings and the list of offensive expressions can be stored in memory for a limited period of time to facilitate transmission to one or more user devices with user accounts drafting a reply message in response to the first user account’s base message.
  • the base message is transmitted from the platform server to a plurality of user devices associated with a plurality of user accounts for display, e.g., in a stream on the user device.
  • an indication is received from a second user device associated with the second user account that a reply message is being drafted in step 308.
  • This indication can be detected by the second user device and transmitted to the platform server.
  • the indication can correspond to the second user selecting a user interface element (e.g., a reply button) to post a reply message.
  • the indication can represent the user interface to compose a reply message being opened on the second user device (e.g., the user interface of FIG. 5A).
  • the moderation settings and the list of offensive expressions are transmitted from the platform server to the second user device in step 310.
  • it is then determined whether the second user is still drafting the reply message in step 312, for example, at the second user device. This can be accomplished, for example, by evaluating whether the second user has selected the submit button 212 to post the reply message or the cancel button 214 to cancel the reply message. If it is determined that the reply message is being drafted, the draft of the reply message can then be evaluated to determine whether the draft includes an offensive expression in the list of offensive expressions in step 314. If it is determined the draft does not include an offensive expression, the method 300 returns to step 312. Otherwise, if it is determined the draft does include an offensive expression, the offensive expression is highlighted in the user interface on the second user device in step 316. A warning is also displayed showing the penalties that will be applied if the second user account posts the reply message without removing the offensive expression in step 318. The method then returns to step 312.
  • the steps 312-318 can then be repeated by the second user device until the reply message is no longer being drafted (i.e., it is either posted or cancelled).
  • the steps 312-318 can repeat periodically over a predetermined time interval (e.g., 1 second, 5 seconds).
  • the steps 312-318 can repeat once a change is made to the draft of the reply message. This can be accomplished, for example, by detecting user input from the second user via the second user device (e.g., when the second user touches a touchscreen on a smartphone). That way, if the second user does not provide any input for an extended period of time (e.g., due to a sudden interruption while drafting the reply message), the steps 312-318 are not being executed continuously, thus reducing the computational load when executing the method 300.
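One way the change-triggered re-evaluation described above might look on the client is sketched below; the DraftModerator class, its method names, and the callback it invokes are hypothetical and shown only to illustrate skipping the check when the draft has not changed.

```python
class DraftModerator:
    """Hypothetical client-side helper that re-checks the draft only when it changes."""
    def __init__(self, check_for_offensive):
        self._check = check_for_offensive   # callback running steps 314-318
        self._last_draft = None

    def on_user_input(self, draft_text: str) -> None:
        """Call from the compose box's change handler (e.g., on each keystroke)."""
        if draft_text != self._last_draft:  # skip the check if nothing changed
            self._last_draft = draft_text
            self._check(draft_text)

moderator = DraftModerator(lambda text: print("checking:", text))
moderator.on_user_input("This is an offe")
moderator.on_user_input("This is an offe")                   # unchanged: not re-checked
moderator.on_user_input("This is an offensive expression")   # changed: checked again
```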
  • once it is determined in step 320 that the reply message is no longer being drafted, whether the reply message was posted to the social messaging platform is determined in step 322. If it is determined the reply message was not posted (i.e., it was cancelled), the method is terminated in step 324. Otherwise, if the reply message is posted (e.g., transmitted to the platform server), the reply message is stored in memory (e.g., the memory 115, the database 117) in step 326.
  • the reply message is evaluated by the platform server to determine whether the reply message includes an offensive expression from the list of offensive expressions in step 328. If it is determined the reply message does not include an offensive expression, the reply message is then transmitted to the plurality of user devices including the first user device for display thereon (e.g., in a stream of the user device) in step 330.
  • the moderation settings associated with the first user account are evaluated to determine whether they include a relationship penalty, such as the penalties 266b and 266c. If the moderation settings do not include a relationship penalty, the method proceeds to step 336. If the moderation settings do include a relationship penalty, the penalty can thereafter be applied to the second user account in step 334. Additional conditions can also be evaluated beforehand to determine whether a penalty should be applied (e.g., a threshold number of reply messages with an offensive expression posted by the second user account). Thereafter, the method proceeds to step 336.
  • the moderation settings associated with the first user account are evaluated by the platform server to determine whether they include a display penalty, such as the penalty 266a or a penalty to disable display of the reply message. If the moderation settings do not include a display penalty, the method proceeds to step 340. If the moderation settings do include a display penalty, the penalty can thereafter be applied to the second user account in step 338. Again, additional conditions can be evaluated beforehand to determine whether a penalty should be applied.
  • the reply message can be transmitted to the plurality of user devices including the first user device for display thereon (e.g., in a stream of the user device) barring any penalties that prohibit the display of the reply message.
  • the penalties can include, for example, the second user account being blocked or muted from the first user account, thus disabling display of the reply message on the first user device.
  • the penalty can disable display of the reply message, in which case the reply message may not be transmitted to the plurality of user devices.
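Putting the server-side checks of steps 328-340 together, a highly simplified Python sketch might look like the following; the settings dictionary, its keys, the threshold default, and the use of exact substring matching are assumptions made only for this illustration.

```python
from typing import Dict, List, Tuple

def moderate_posted_reply(reply_text: str,
                          settings: Dict[str, object],
                          offensive_expressions: List[str],
                          offense_count: int) -> Tuple[List[str], bool]:
    """Return (penalties to apply, whether the reply may still be displayed)."""
    # Step 328: check for an offensive expression (exact substring matching for brevity).
    if not any(expr in reply_text for expr in offensive_expressions):
        return [], True                                   # step 330: deliver normally

    applied: List[str] = []
    # Relationship penalty (e.g., block or mute), applied in step 334, optionally
    # gated by a threshold number of offending replies chosen by the content creator.
    if settings.get("relationship_penalty") and offense_count >= int(settings.get("threshold", 1)):
        applied.append(str(settings["relationship_penalty"]))

    # Display penalty (e.g., downrank or hide), applied in step 338.
    if settings.get("display_penalty"):
        applied.append(str(settings["display_penalty"]))

    return applied, "hide" not in applied                 # step 340: deliver unless hidden

penalties, displayable = moderate_posted_reply(
    "this is an offensive expression",
    {"relationship_penalty": "block", "display_penalty": "downrank", "threshold": 2},
    ["offensive expression"],
    offense_count=2,
)
print(penalties, displayable)  # ['block', 'downrank'] True
```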
  • the moderation modules 108 and 112 disclosed herein provide features for content creators to assign one or more labels and/or guidelines to their base message.
  • the label(s) and/or guidelines are thereafter displayed together with the base message on each content consumer’s user device.
  • the content creator can explicitly communicate one or more social injunctive norms to content consumers so that the content consumers are less likely to rely upon other social cues, such as the reply messages posted by other users, to determine what language and/or content is acceptable in the branch of the message thread.
  • FIG. 7A shows an example user interface to compose a base message on a content creator’s user device.
  • the user interface includes the space 210 to contain the base message, the submit button 212, and the cancel button 214 described above.
  • the user interface includes a reply message settings element 410, which when selected, displays a prompt 411 with various reply message settings.
  • the platform 100 can provide content creators different reply message settings to control which users can post a reply message to the content creator’s base messages and/or the order of the reply messages displayed in the branch and/or message thread.
  • the reply message settings can generally be accessed and/or modified via the prompt 411.
  • the prompt 411 can further include a confirmation button 416 to close the prompt 411. Any changes made by the content creator, for example, to the reply message settings and the assignment of labels and/or guidelines are thereafter stored in memory (e.g., the memory 105, the memory 115, the database 117).
  • FIG. 7A shows a reply message setting 412a to control which group of users can reply to the content creator’s content.
  • the group of users can include, but is not limited to, all the users of the platform 100, users that are being followed by the content creator, and users that are referenced, for example, in the content creator's base message (e.g., by typing "@" followed by the username associated with the user account being mentioned).
  • FIG. 7A further shows a reply messaging setting 412b to control the order of the reply messages displayed when users view the branch and/or the message thread.
  • the reply messages can be displayed according to a “regular” or default arrangement. This can include displaying the reply messages in a chronological or reverse chronological order.
  • the reply messages with a Graphics Interchange Format (GIF) image can be displayed first, e.g., before reply messages with only textual content. If there are multiple reply messages with GIFs, these reply messages can be displayed in chronological or reverse chronological order.
  • the reply messages posted by the content creator’s friends can be displayed first.
  • a friend is a user that has an established relationship with the content creator as indicated by a connection graph of the user and/or the content creator.
  • the connection graph includes an edge connecting a node representing the user’s user account and another node representing the content creator’s user account.
  • the user interface of FIG. 7A further includes a user interface element 414, which when selected, displays a user interface to manage label(s) and/or guidelines associated with the base message as shown in FIG. 7B.
  • the user interface of FIG. 7B can include a toggle switch 420 to activate or deactivate assignment of label(s) and/or guidelines with the base message. The label(s) and/or guidelines are only displayed with the base message when the switch 420 is activated.
  • the user interface further includes a section 421 with one or more labels 422 available for selection to associate with the base message.
  • the labels 422 are intended to convey the content creator’s desired tone(s) for the branch and/or the message thread.
  • the labels 422 can be standardized such that all the users of the platform 100 select from the same set of labels. Standardizing the labels available for selection can help the users of the platform 100 better understand the desired tone(s) for the branch and/or the message thread as well as expectations on type of content to include in a reply message.
  • users can also generate labels for their base message with a user-defined tone.
  • multiple labels 422 can be selected.
  • the moderation modules 108 and 112 can limit the number of labels the content creator can select.
  • the user interface of FIG. 7B shows that up to three labels can be selected.
  • the content creator may only be allowed to select one label for their base message. More generally, the number of labels a user can select can range from one label to five labels.
  • the labels 422 shown in the user interface of FIG. 7B are not necessarily all the labels available for selection. Rather, in some implementations, the section 421 can display labels that have recently been used by the content creator in other base messages. As shown in FIG. 7B, the user interface can further include a user interface element 423 to display additional labels for selection.
  • the user interface of FIG. 7B further includes a toggle switch 424 to activate or deactivate the display of guidelines with the base message.
  • the guidelines are only displayed and/or accessible when the toggle switch 424 is activated.
  • the toggle switches 420 and 424 can be independently activated or deactivated.
  • the label(s) and the guidelines can be displayed independent of one another.
  • the user interface of FIG. 7B further includes a user interface element 426, which when selected, displays a user interface (not shown) to draft and/or edit a set of guidelines.
  • the guidelines generally include more details on the content creator’s preferences for the content of the reply messages posted in the branch and/or message thread. For example, the guidelines can elaborate on the type of advice to provide in a reply message, rules when content consumers disagree with the content of the base message and/or the reply messages, and/or other expectations from the content creator.
  • the content creator can draft the guidelines themselves.
  • the platform 100 can provide standard guidelines to each user account, which can thereafter be edited by the content creator as desired.
  • the guidelines can be automatically generated by the processor(s) on the content creator’s user device based on the labels selected. For example, each label can be associated with a statement that describes the purpose of the label in more detail. When the label is selected, the statement can be automatically added to the guidelines.
  • the label(s) and/or guidelines can be stored as metadata of the base message.
  • the base message in turn, can be stored in memory (e.g., the memory 105, the memory 115, the database 117).
  • the base message can include a first field to indicate whether labels should be displayed and a second field containing the label(s) to display with the base message.
  • the base message can include a third field to indicate whether guidelines should be displayed and a fourth field containing the guidelines to display with the base message.
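Purely as an illustration of the four metadata fields just described, a base message might carry something like the following structure; the field names, label text, and guideline text are assumptions for this sketch only.

```python
# Hypothetical shape for the label/guideline metadata fields; names are assumptions.
base_message = {
    "text": "Training for my first marathon. Advice welcome!",
    "metadata": {
        "show_labels": True,                                      # first field
        "labels": ["Supportive", "Constructive"],                 # second field
        "show_guidelines": True,                                  # third field
        "guidelines": "Please keep replies kind and on topic.",   # fourth field
    },
}

# A client would consult the two indicator fields before rendering anything:
if base_message["metadata"]["show_labels"]:
    print(", ".join(base_message["metadata"]["labels"]))
if base_message["metadata"]["show_guidelines"]:
    print(base_message["metadata"]["guidelines"])
```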
  • FIG. 8A shows an example user interface with a message thread displayed on a user device associated with a user different from the content creator.
  • the message thread includes a base message 430 posted by the content creator of FIGS. 7A and 7B.
  • the base message 430 is also a root message of the thread.
  • the user interface further includes a user interface element 432 that displays the labels 422 selected in FIG. 7B.
  • the user interface element 432 can be positioned directly between the base message 430 and the reply messages so that the labels 422 are seen by users first before viewing any reply messages. In this manner, users are more likely to be exposed to the content creator’s expected tones for the conversation, which function as injunctive social norms, before being influenced by social cues from the reply messages.
  • the user interface element 432 can further include a confirmation button (not shown) that content consumers must select before being able to view the reply messages and/or post their own reply messages.
  • the confirmation button can further reinforce a content consumer’s commitment to post reply messages with content that conforms to the expected tones and/or guidelines of the message thread.
  • the user interface element 432 can further be interactive, such that when selected, a prompt 434 is displayed as shown in FIG. 8B.
  • the prompt 434 can include the content creator’s guidelines.
  • the prompt 434 can include a confirmation button 436, which when selected, closes the prompt 434.
  • FIG. 9 shows an example method 500 for assigning and displaying one or more label(s) with a base message.
  • the method 500 can generally be executed via a combination of processor(s) of the platform server, the processor(s) of the content consumer’s user device, and/or the processor(s) of the content creator’s user device.
  • the message, an indication to display label(s) with the message, and the label(s) selected are received via transmission from a first user device associated with a first user account.
  • the message, the indication, and/or the label(s) can be generated at the content creator’s user device.
  • the indication can represent the toggle switch 420 being activated.
  • the indication and/or the label(s) can be stored in metadata fields of the message.
  • the message, the indication, and the label(s) are stored in memory on the platform (e.g., the memory 115, the database 117) in step 504.
  • the message, the indication, and the label(s) can then be transmitted by a platform server to a plurality of user devices associated with a plurality of user accounts in step 506.
  • the message can then be displayed together with the label(s) by each user device of the plurality of user devices in a stream on that user device in step 508.
  • the label(s) can be displayed in a user interface element, such as the user interface element 432.
  • FIG. 10 shows an example method 600 for assigning and displaying guidelines with a base message.
  • the steps of method 600 can be similar to the method 500.
  • the method 600 can generally be executed via a combination of processor(s) of the platform server, the processor(s) of the content consumer's user device, and/or the processor(s) of the content creator's user device.
  • the message, an indication to display guidelines with the message, and the guidelines are received via transmission from a first user device associated with a first user account.
  • the indication can represent the toggle switch 424 being activated.
  • the message, the indication, and/or the guidelines can be generated at the content creator’s user device. Additionally, the indication and/or the guidelines can be stored in metadata fields of the message.
  • the message, the indication, and the guidelines are stored in memory on the platform (e.g., the memory 115, the database 117) in step 604.
  • the message, the indication, and the guidelines can then be transmitted to a plurality of user devices associated with a plurality of user accounts in step 606.
  • the message can then be displayed with the guidelines in a stream on each user device of the plurality of user devices in step 608.
  • the stream can include a user interface element (e.g., the user interface element 432), which when selected, displays a prompt with the guidelines (e.g., the prompt 434).
  • the client may be a web browser and an HTML (hypertext markup language) document rendered by the web browser.
  • the client may be or include JavaScript code or Java code.
  • the client may be dedicated software, e.g., an installed app or installed application, that is designed to work specifically with the platform.
  • the client may include, for example, a Short Messaging Service (SMS) interface, an instant messaging interface, an email-based interface, an HTML-based interface, or an API function-based interface for interacting with the platform.
  • a user device can include a camera, microphone, or both, and the client can include or be coupled to software to record pictures, audio, and video.
  • the user device can include both a front-facing, i.e., a user-facing, camera, and a rear-facing camera.
  • the platform may have many millions of accounts, and anywhere from hundreds of thousands to millions of connections may be established or in use between clients and the platform at any given moment.
  • the accounts may be accounts of individuals, businesses, or other entities, including, e.g., pseudonym accounts, novelty accounts, and so on.
  • the platform and client software are configured to enable users to draft messages and to use the platform, over data communication networks, to post messages to the platform and to receive messages posted by other users.
  • the platform and client software are configured to enable users to post other kinds of content, e.g., image, video, or audio content, or a combination of kinds of content, either separately or combined with text messages.
  • the platform is configured to enable users to define immediate or scheduled sessions with individual or groups of users for audio or audio and video interactions.
  • the platform enables users to specify participation in such sessions using the relationships defined, i.a., in the connection graphs maintained by the platform.
  • the platform is configured to deliver content, generally messages, to users in their home feed stream.
  • the messages will generally include messages from accounts the user is following, meaning that the recipient account has registered to receive messages posted by the followed account.
  • the platform generally also includes in the stream messages that the platform determines are likely to be of interest to the recipient, e.g., messages on topics of particular current interest, as represented by the number of messages on the topics posted by platform users, or messages posted on topics of apparent interest to the recipient, as represented by messages the recipient has posted or engaged with, or messages on topics the recipient has expressly identified to the platform as being of interest to the recipient, as well as selected advertisements, public service announcements, promoted content, or the like.
  • the platform enables users to send messages directly to one or more other users of the platform, allowing the sender and recipients to have a private exchange of messages.
  • the platform is configured with interfaces through which a client can post messages directed to other users, both synchronously and asynchronously.
  • users are able to exchange messages in real-time, i.e., with a minimal delay, creating what are essentially live conversations, or to respond to messages posted earlier, on the order of hours or days or even longer.
  • the platform also indexes content items and access data that characterizes users’ access to content.
  • the platform provides interfaces that enable users to use their clients to search for users, content items, and other entities on the platform.
  • Accounts will generally have relationships with other accounts on the platform. Relationships between accounts of the platform are represented by connection data maintained by the platform, e.g., in the form of data representing one or more connection graphs.
  • the connection data can be maintained in a connection repository. Data repositories of the platform are generally stored in distributed replicas for high throughput and reliability.
  • a connection graph includes nodes representing accounts of the platform and edges connecting the nodes according to the respective relationships between the entities represented by the nodes.
  • a relationship may be any kind of association between accounts, e.g., a following, friending, subscribing, tracking, liking, tagging, or other relationship.
  • the edges of the connection graph may be directed or undirected based on the type of relationship.
  • the platform can also represent relationships between accounts and entities other than accounts. For example, when an account belongs to a company, a team, a government, or other organized group, a relationship with that account can also be, for example, a relationship of being a member of the group, having a particular role in the group, or being an expert about the group.
  • the platform can also represent abstract entities, e.g., topics, activities, or philosophies, as entities that can have relationships with accounts and, in some implementations, other entities. Such relationships can also be represented in a common connection graph or in one or more separate connection graphs, as described above.
  • the platform records user engagements with messages and maintains, in a message repository, data that describes and represents at least a collection of recent messages as well as the engagements with the messages.
  • Engagement data relative to messages includes data representing user activity with respect to messages. Examples of engagement by a user with a message include reposting the message, marking the message to indicate it is a favorite of, liked by, or endorsed by the user, responding to the message, responding to a message with a response having a sentiment determined by the platform to be positive or negative, quoting the message with further comments, and mentioning or referencing the message.
  • Engagement data relative to accounts includes data representing connections between accounts. Examples of engagements by a user with respect to an account include aggregated measures of engagement with messages authored by the account. Other examples include how many followers and followees the account has, i.e., how many other accounts are following the account and how many other accounts the account is following. Other examples include measures of similarity between the groups of followers, the groups of followees, or both, of two accounts, including non-account followees.
  • Data about engagements can be represented on the platform as graphs with connections between accounts and messages, and stored in a graph repository.
  • the servers of the platform perform a number of different services that are implemented by software installed and running on the servers.
  • the services will be described as being performed by software modules.
  • particular servers may be dedicated to performing one or a few particular services and only have installed those components of the software modules needed for the particular services.
  • Some modules will generally be installed on most or all of the non-special-purpose servers of the platform.
  • multiple instances of a module may operate in parallel so as to complete a request for service within a short period of time, so that the platform can respond to users with low latency.
  • the software of each module may be implemented in any convenient form, and parts of a module may be distributed across multiple computers in one or more locations so that the operations of the module are performed by multiple computers running software performing the operations in cooperation with each other. In some implementations, some of the operations of a module are performed by special-purpose hardware.
  • the platform includes numerous different but functionally equivalent front end servers, which are dedicated to managing network connections with remote clients.
  • the front end servers provide a variety of interfaces for interacting with different types of clients. For example, when a web browser accesses the platform, a web interface module in the front end module provides the client access. Similarly, when a client calls an API made available by the platform for such a purpose, an API interface provides the client access.
  • the front end servers are configured to communicate with other servers of the platform, which carry out the bulk of the computational processing performed by the platform as a whole.
  • a routing module stores newly composed messages in a message repository.
  • the routing module also stores an identifier for each message.
  • the identifier is used to identify a message that is to be included in a stream. This allows the message to be stored only once and accessed for a variety of different streams without needing to store more than one copy of the message.
  • a graph module manages connections between accounts, between accounts and entities, and between entities. Connections determine which streams include messages from which accounts.
  • the platform uses unidirectional connections between accounts and streams to allow account holders to subscribe to the message streams of other accounts.
  • a unidirectional connection does not imply any sort of reciprocal relationship.
  • An account holder who establishes a unidirectional connection to receive another account’s message stream may be referred to as a “follower,” and the act of creating the unidirectional connection is referred to as “following” another account.
  • the graph module receives client requests to create and delete unidirectional connections between accounts and updates the connection graph or graphs accordingly. Similarly, for entities that are represented by the platform as entities with which accounts can have relationships, the graph module can also receive client requests to create and delete connections representing account-to-entity relationships.
  • a recommendation module of the platform can recommend content, accounts, topics, or entities to a user.
  • the recommendation module specifically tailors recommendations to the user.
  • a user or a client can generate a request for a recommendation, or another module of the platform can generate a request for a recommendation on its own, e.g., in order to include a recommendation in a stream being generated for a user.
  • a recommendation can be a call to action, i.e., a suggestion that the user take a particular action, or the recommendation can be the recommended content itself, e.g., a message to include in the stream.
  • the recommendation module can also provide a recommendation in response to a user action that does not explicitly request a recommendation, e.g., interacting with content on the platform in a way that indicates interest.
  • the recommendation module makes recommendations using, for example, information users provided about themselves and other data found in the users’ profiles, and data about the users’ engagements and relationships stored in graph data and otherwise in the platform’s repositories.
  • That user’s behavior and other users’ behaviors are taken into account.
  • the relationships and interactions between (i) a user, on the one hand, and (ii) content or users or other entities, on the other hand are used to make personalized content recommendations for the user.
  • recommendations can be provided without going through the client, e.g., in an email, text message, or push notification.
  • Recommendations can also identify, in a personalized way, content popular near a certain geographic location, or real-time trending topics.
  • the platform maintains data, especially about live events, with a high degree of currency and for quick access, so that the platform can provide recommendations of current interest, especially during live events.
  • the platform presents with a recommendation user-related reasons for the recommendation, e.g., because a message relates to a topic followed by the user or to a trending topic or to a topic trending in the user’s location, or because a message had strong engagement among the user’s followees, or because the message was endorsed by other users sharing common interests or sharing common followed topics with the user.
  • the platform ranks recommendations according to the reasons for the recommendations, giving preference to recommendations based on endorsements from followees, experts, or celebrities.
  • a delivery module constructs message streams and provides them to requesting clients, for example, through a front end server. Responding to a request for a stream, the delivery module either generates the stream in real time, or accesses from a stream repository some or all of a stream that has already been generated. The delivery module stores generated streams in the stream repository. An account holder may request any of their own streams, or the streams of any other account that they are permitted to access based on privacy and security settings. If a stream includes a large number of messages, the delivery module generally identifies a subset of the messages to send to a requesting client, in which case the remaining messages are maintained in a stream repository from which more messages are sent upon client request.
4.5.6 Health and Safety
  • the platform includes modules that enable users to filter the content they receive from the platform. For example, users may select settings that cause the platform to filter out sensitive content.
  • the platform also enables a user to control how the user is visible on the platform. For example, the platform enables a user to prevent particular users from following the user, from viewing the user’s messages on the platform, from sending messages directed to the user, or from tagging the user in a photo.
  • the platform also enables a user to mute particular users to prevent messages from particular users from being included in any incoming streams, or to block incoming push or SMS notifications from particular users.
  • the platform enables a user to mute another user while continuing to be a follower of the other user.
  • the platform itself can filter out content that is identified by the platform as toxic or abusive, or that originates from accounts identified by the platform as toxic or abusive, with or without a user request to do so.
  • An account module enables account holders to manage their platform accounts.
  • the account module allows an account holder to manage privacy and security settings, and their connections to other account holders. In particular, a user can choose to be anonymous on the platform. Data about each account is stored in an account repository.
  • Client software allows account holders receiving a stream to engage, e.g., interact with, comment on, or repost, the messages in the stream.
  • An engagement module receives these engagements and stores them in an engagement repository.
  • Types of engagement include selecting a message for more information regarding the message, selecting a URI (uniform resource identifier) or hashtag in a message, reposting the message, or making a message a favorite.
  • Other example engagement types include opening a card included in a message, which presents additional content, e.g., an image, that represents a target of a link in the message, or that links to an application installed on the user device.
  • Account holders may engage further with the additional content, e.g., by playing a video or audio file or by voting in a poll.
  • the engagement module may also record passive interactions with messages.
  • An impression occurs when a client presents the content of a message on a user device. Impression engagements include the mere fact that an impression occurred, as well as other information, e.g., whether a message in a stream appeared on a display of the user device, and how long the message appeared on the display.
  • Any engagement stored in the engagement repository may reference the messages, accounts, or streams involved in the engagement.
  • Engagements may also be categorized beyond their type.
  • Example categories include engagements expressing a positive sentiment about a message (“positive engagements”), engagements expressing a negative sentiment about a message (“negative engagements”), engagements that allow an account to receive monetary compensation (“monetizable engagements”), engagements that are expected to result in additional future engagements (“performance engagements”), or engagements that are likely to result in one account holder following another account (“connection engagements”).
  • the negative engagements category includes, for example, engagements dismissing a message or reporting a message as offensive, while the positive engagements category typically includes engagements not in the negative engagements category.
  • Example performance engagements include selecting a URL in a message or expanding a card.
  • Example monetizable engagements include, for example, engagements that result in an eventual purchase or a software application installation on a user device.
  • categories and types are not coextensive, and a given type of engagement may fall into more than one category and vice versa.
  • inventive embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed.
  • inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
  • inventive concepts may be embodied as one or more methods, of which an example has been provided.
  • the acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
  • “at least one of A and B” can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
  • This specification uses the term “configured to” in connection with systems, apparatus, and computer program components. That a system of one or more computers is configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform those operations or actions. That one or more computer programs is configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform those operations or actions. That special- purpose logic circuitry is configured to perform particular operations or actions means that the circuitry has electronic logic that performs those operations or actions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to a social messaging platform that provides user moderation features including a user-defined content filter and at least one label and/or guidelines to be displayed with a base message posted by a content creator. The user-defined content filter includes a list of offensive expressions selected, in part, by the content creator to moderate reply messages posted by content consumers in response to a base message of the content creator. The user-defined content filter thus gives users of the platform the opportunity to selectively choose offensive expressions, in particular expressions that may have different meanings in different contexts. The labels and/or guidelines further provide users with a way to explicitly display injunctive social norms to other users and thereby establish rules defining which content is socially acceptable in a reply message.
PCT/US2022/036070 2021-07-02 2022-07-05 Modération du contenu d'un utilisateur pour une plateforme de messagerie sociale WO2023278885A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163218155P 2021-07-02 2021-07-02
US63/218,155 2021-07-02

Publications (1)

Publication Number Publication Date
WO2023278885A1 true WO2023278885A1 (fr) 2023-01-05

Family

ID=84690668

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/036070 WO2023278885A1 (fr) 2021-07-02 2022-07-05 Modération du contenu d'un utilisateur pour une plateforme de messagerie sociale

Country Status (1)

Country Link
WO (1) WO2023278885A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130138735A1 (en) * 2011-11-30 2013-05-30 Jeffrey Andrew Kanter Moderating Content in an Online Forum
US20130282841A1 (en) * 2012-04-18 2013-10-24 International Business Machines Corporation Filtering message posts in a social network
US20160147731A1 (en) * 2013-12-16 2016-05-26 Whistler Technologies Inc Message sentiment analyzer and feedback
US20190026601A1 (en) * 2016-03-22 2019-01-24 Utopia Analytics Oy Method, system and tool for content moderation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Moderate Your Facebook Page", META FOR MEDIA, 7 December 2015 (2015-12-07), XP093021651, Retrieved from the Internet <URL:https://www.facebook.com/formedia/blog/moderating-your-facebook-page> [retrieved on 20230207] *
GERRARD YSABEL: "Beyond the hashtag: Circumventing content moderation on social media", NEW MEDIA & SOCIETY, vol. 20, no. 12, 1 December 2018 (2018-12-01), pages 4492 - 4511, XP093021655, ISSN: 1461-4448, DOI: 10.1177/1461444818776611 *

Similar Documents

Publication Publication Date Title
US10805386B2 (en) Reducing transmissions by suggesting digital content for display in a group-based communication interface
US11606323B2 (en) Prioritizing messages within a message network
US11871305B2 (en) System, apparatus, and computer program product for generating a group-based communication interface having improved panes positioned in a defined display window
JP6553225B2 (ja) Method for aggregating interactions related to content items
US11741115B2 (en) Dynamic presentation of searchable contextual actions and data
US9929994B2 (en) Organizing messages into conversation threads
US9521100B2 (en) Aggregate electronic mail message handling
US9299060B2 (en) Automatically suggesting groups based on past user interaction
JP6329082B2 (ja) User status display method, display terminal, and server
US9449050B1 (en) Identifying relevant messages in a conversation graph
US20180253659A1 (en) Data Processing System with Machine Learning Engine to Provide Automated Message Management Functions
US8707184B2 (en) Content sharing interface for sharing content in social networks
CN114650263A (zh) Proactive provision of new content to group chat participants
US11695721B2 (en) Method, apparatus, and computer program product for categorizing multiple group-based communication messages
US9418117B1 (en) Displaying relevant messages of a conversation graph
US11722856B2 (en) Identifying decisions and rendering decision records in a group-based communication interface
US20150120835A1 (en) Offline prompts of online social network mentions
US11488113B1 (en) Rendering related content prior to an event in a group-based communication interface
US8180752B2 (en) Apparatus and methods for managing a social media universe
US20220393999A1 (en) Messaging system with capability to edit sent messages
US11093870B2 (en) Suggesting people qualified to provide assistance with regard to an issue identified in a file
WO2023278885A1 (fr) User content moderation for a social messaging platform
WO2023278887A2 (fr) Selective interaction with users and user content for a social messaging platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22834345

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22834345

Country of ref document: EP

Kind code of ref document: A1