US20230162057A1 - Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message - Google Patents

Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message

Info

Publication number
US20230162057A1
US20230162057A1
Authority
US
United States
Prior art keywords
message
recipient
communication
machine learning
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/530,959
Inventor
Tanvi Sharma
Pragati Dhumal
Navanath Navaskar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avaya Management LP
Original Assignee
Avaya Management LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avaya Management LP filed Critical Avaya Management LP
Priority to US17/530,959
Assigned to AVAYA MANAGEMENT L.P. reassignment AVAYA MANAGEMENT L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DHUMAL, PRAGATI, NAVASKAR, NAVANATH, SHARMA, TANVI
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: AVAYA CABINET SOLUTIONS LLC, AVAYA INC., AVAYA MANAGEMENT L.P., INTELLISIST, INC.
Assigned to WILMINGTON SAVINGS FUND SOCIETY, FSB [COLLATERAL AGENT] reassignment WILMINGTON SAVINGS FUND SOCIETY, FSB [COLLATERAL AGENT] INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: AVAYA INC., AVAYA MANAGEMENT L.P., INTELLISIST, INC., KNOAHSOFT INC.
Assigned to CITIBANK, N.A., AS COLLATERAL AGENT reassignment CITIBANK, N.A., AS COLLATERAL AGENT INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: AVAYA INC., AVAYA MANAGEMENT L.P., INTELLISIST, INC.
Assigned to INTELLISIST, INC., AVAYA INTEGRATED CABINET SOLUTIONS LLC, AVAYA INC., AVAYA MANAGEMENT L.P. reassignment INTELLISIST, INC. RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 61087/0386) Assignors: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT
Publication of US20230162057A1

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00: Computing arrangements based on biological models
            • G06N 3/02: Neural networks
              • G06N 3/04: Architecture, e.g. interconnection topology
                • G06N 3/0464: Convolutional networks [CNN, ConvNet]
              • G06N 3/08: Learning methods
          • G06N 5/00: Computing arrangements using knowledge-based models
            • G06N 5/01: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
            • G06N 5/04: Inference or reasoning models
          • G06N 7/00: Computing arrangements based on specific mathematical models
            • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks
          • G06N 20/00: Machine learning
            • G06N 20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
        • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 10/00: Administration; Management
            • G06Q 10/10: Office automation; Time management
              • G06Q 10/107: Computer-aided management of electronic mailing [e-mailing]

Definitions

  • the present disclosure relates to communication methods and specifically to identifying an additional suggested recipient(s) based on content of a message and prompting a user regarding the identified additional suggested recipient(s).
  • Some devices may support various communication modalities, for example, using email applications.
  • email applications may support any combination of text or multimedia communications between a user and one or more recipients.
  • a machine learning network may analyze the content and generate an output (e.g., a probability score and a confidence score) indicative of an association between the communication and the suggested recipient(s).
  • the device may prompt a user/sender regarding the identified recipient(s). The probability/confidence score may be determined based on a comparison of the content of the communication to profile information associated with the recipient(s) and/or sender.
  • the device/system may determine that a specific user is mentioned in the message but not included as a recipient (e.g., the email greeting is “Dear John and Jane,” but Jane is not included as a recipient).
  • the system/device may output a notification to alert a sender of an identified/suggested recipient.
  • a user profile associated with the sender and/or other recipients may be analyzed to identify additional recipients (e.g., if the email is sent to multiple recipients on a team, other members of the team may be identified as suggested recipients).
  • information from other applications may be used to identify the additional recipients (e.g., if the communication is related to a meeting, information from a calendar application may be used to determine other meeting attendees as additional recipients to the communication).
  • an intended recipient may have a rule set (e.g., the intended recipient is out of office and indicates another person to receive/respond to communications while they are out). The device may display or otherwise notify the sender of the additional recipients prior to transmitting the communication. If the user/sender selects to add a suggested recipient, the selected recipient is added to the communication and the communication is transmitted.
  • the device may determine additional users to tag in a social media posting (e.g., analyze an image and identify untagged users). In some examples, the device may output a notification suggesting to un-tag the recipient and/or tag a different recipient before completing the social media posting. In some other aspects, the device may output a notification suggesting to modify the content (e.g., text, videos, images) before completing the social media posting.
  • the device may alert the sender that a communication modality (e.g., a messaging window, a messaging application, a social media application) associated with a recipient may be preferred, and the device may output a notification suggesting or indicating that the communication be sent via the preferred communication modality for the recipient.
  • in one aspect, a method includes: identifying a message that is input via a user interface of a device; providing at least a portion of the message to a machine learning network; receiving from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and outputting a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
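For illustration, the claimed flow can be pictured as a short sketch. The `Message` and `MachineLearningNetwork` classes below, and every other name in the snippet, are hypothetical stand-ins rather than structures defined by the disclosure; the sketch only shows the ordering of the steps (identify the message, provide a portion to the network, receive a suggestion, notify before transmitting).

```python
# Hypothetical sketch of the claimed flow; names are illustrative, not from the patent.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Message:
    sender: str
    recipients: List[str]
    subject: str
    body: str
    attachments: List[str] = field(default_factory=list)


class MachineLearningNetwork:
    """Placeholder for the machine learning network described in the disclosure."""

    def suggest_recipient(self, message_portion: str) -> Optional[str]:
        # A real implementation would run a trained model; this stub suggests nothing.
        return None


def prepare_to_send(message: Message, network: MachineLearningNetwork) -> None:
    # Provide at least a portion of the message (here, the body) to the network.
    suggestion = network.suggest_recipient(message.body)
    if suggestion and suggestion not in message.recipients:
        # Output a notification that allows the suggested recipient to be added
        # before the message is transmitted.
        print(f"Suggested additional recipient: {suggestion}. Add before sending?")
    else:
        print("No additional recipients suggested; message may be transmitted.")


prepare_to_send(
    Message(sender="sender@example.com", recipients=["matt@example.com"],
            subject="Status", body="Hi Matt, looping in Jane on this."),
    MachineLearningNetwork(),
)
```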
  • Examples may include one of the following features, or any combination thereof.
  • the portion of the message comprises a body of the message, processing at least the portion of the message comprises extracting contextual information associated with content included in the body of the message, and the additional suggested recipient is selected based at least in part on the contextual information.
  • the contextual information comprises a mention of a name of the additional suggested recipient.
  • the portion of the message comprises an attachment to the message, processing at least the portion of the message comprises analyzing the attachment and determining other users associated with the attachment, and the additional suggested recipient is selected based at least in part on the other users associated with the attachment.
  • the portion of the message comprises one or more recipients of the message, and processing at least the portion of the message comprises determining the additional suggested recipient based on the one or more recipients of the message.
  • a recipient sets a rule for messages to be sent to the additional suggested recipient.
  • the portion of the message comprises a user profile associated with the device, and processing at least the portion of the message comprises determining the additional suggested recipient based on the user profile associated with the device.
  • the method may include receiving via the user interface of the device, a selection of the additional suggested recipient; adding the additional suggested recipient to the message; and transmitting the message.
  • the method may include receiving from the machine learning network confidence and/or probability information corresponding to the additional suggested recipient.
  • the probability information, the confidence information, or both is determined based at least in part on a comparison of at least the portion of the message to profile information of one or more contacts associated with a user profile associated with the device.
  • the message includes text, multimedia data, or both.
  • in another aspect, a device includes: a processor; memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: identify a message that is input via a user interface of a device; provide at least a portion of the message to a machine learning network; receive from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and output a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
  • in another aspect, a non-transitory, computer-readable medium comprises a set of instructions stored therein which, when executed by a processor, causes the processor to: identify a message that is input via a user interface of a device; provide at least a portion of the message to a machine learning network; receive from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and output a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
  • each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
  • automated refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.”
  • Non-volatile media includes, for example, NVRAM, or magnetic or optical disks.
  • Volatile media includes dynamic memory, such as main memory.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid-state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • a digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium.
  • in aspects in which the computer-readable media is configured as a database, the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
  • a “computer readable signal” medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • the terms “determine,” “analyze,” “process,” “execute,” “manage,” “calculate,” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.
  • the term “manage” includes any one or more of the terms determine, recommend, configure, organize, show (e.g., display), hide, update, revise, edit, and delete, and includes other means of implementing actions (including variations thereof).
  • FIG. 1 illustrates an example of a system that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIG. 2 illustrates an example of a system that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIG. 3 illustrates an example of a messaging window that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIGS. 4 A-B illustrate another example of a window that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIGS. 5 A-B illustrate an example of a process flow that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • Some electronic devices may support various communication modalities, for example, using email applications, messaging applications, or social networking applications on the devices.
  • the email applications, messaging applications, and social networking applications may support text and multimedia communications between a user and one or more recipients.
  • a user may send a message, and may be unaware that not all intended recipients were included.
  • the user may inadvertently omit a recipient (e.g., intended recipient not included), may be unaware of additional recipients (e.g., other department/team members), or may not know contact information for all recipients (e.g., all attendees of a meeting).
  • the present disclosure may be implemented on the client and/or server side.
  • data on the client side may be processed to make determinations of additional and/or alternative recipients.
  • server-side data (e.g., organizational data) may additionally or alternatively be processed to make such determinations.
  • when using a messaging application (e.g., text messaging, instant messaging, e-mail), the user may accidentally enter recipient information which differs from that of an intended recipient (e.g., the message is meant for a client named Christine, but the sender frequently emails a colleague also named Christine).
  • Some techniques may support analysis of syntactic content of a message (e.g., an email communication) to identify syntactical errors associated with the message. Some other techniques may support analysis of syntactic content of a message to determine whether an attachment is to accompany the message prior to transmission. However, such techniques do not support detection and analysis of content in a message, such as contextual information with respect to a recipient.
  • the techniques described herein support analyzing the content of a communication (e.g., a message, a social media message, a social media posting) before the communication is sent.
  • the techniques may include building a profile of contacts and type of messages exchanged with the contacts (e.g., based on communication histories).
  • the techniques may include correlating recipients (e.g., if emails from a particular sender on a particular topic are often sent to a particular combination of recipients, and the current message is being sent to only four of the five usual recipients, the sender may be prompted regarding adding the fifth recipient).
  • in an example, a message from a sender in the accounting department may be addressed to “All Staff”; if not all employees in the accounting department are listed as recipients, the other unlisted accounting employees may be identified and suggested to the sender prior to transmitting the message.
  • a user profile may list a preferred communication type(s) (e.g., email, text, audio call, etc.). The sender may be prompted regarding a recipient's preferred communication type.
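A minimal illustration of this kind of check follows; the profile dictionary and the `preferred_modality` field are assumptions made for the example only, not structures defined by the disclosure.

```python
# Sketch: warn the sender when a recipient's profile lists a preferred communication
# type that differs from the modality currently being used. Profile data is illustrative.
PROFILES = {
    "jane@example.com": {"preferred_modality": "text"},
    "john@example.com": {"preferred_modality": "email"},
}


def modality_warnings(recipients, chosen_modality="email"):
    """Return prompts for recipients whose preferred modality differs from the chosen one."""
    warnings = []
    for recipient in recipients:
        preferred = PROFILES.get(recipient, {}).get("preferred_modality")
        if preferred and preferred != chosen_modality:
            warnings.append(f"{recipient} prefers {preferred}; send via {preferred} instead?")
    return warnings


print(modality_warnings(["jane@example.com", "john@example.com"]))
```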
  • a device may alert the user based on the confidence level, thresholds (e.g., detection thresholds, probability thresholds, confidence thresholds), and/or user configuration.
  • a device may analyze a message (e.g., using a text analyzer), prior to sending the message, to determine or identify additional recipients (e.g., based on a probability score and/or a confidence score), and the device may output a notification to alert the user of the same.
  • the system and/or device may refer to a detection threshold when detecting the possibility of an additional recipient.
  • the device may refer to a detection threshold when outputting the notification to alert the user.
  • the device may perform different actions based on the confidence level of the detection.
  • the system and/or device may refer to various criteria when determining additional recipients for a message.
  • the device may check the content of a message (e.g., whether the content implies that the message is being sent to a single user or to multiple users). For example, a message including text such as “I'm working with John and Alice” may imply the message is meant for two recipients, or “Hi Matt” may imply that the message is being sent to a single user. If the device analyzes the text and detects a discrepancy (e.g., multiple recipients mentioned, only one recipient added), the device may notify the user of the discrepancy. In some examples, the device may detect whether a message contains a targeted name. Based on detecting the targeted name, the device may notify the user that the message is for a specific recipient associated with the targeted name but that the specific recipient is not included.
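A simplified sketch of this discrepancy check is shown below. It assumes a small contact book keyed by first name and plain word matching; a real deployment would rely on the machine learning network and richer profile data rather than this illustrative rule.

```python
# Sketch: find contact names mentioned in the message body whose addresses are
# missing from the recipient list. The contact book and matching rule are assumptions.
import re

CONTACTS = {"John": "john@example.com", "Alice": "alice@example.com", "Matt": "matt@example.com"}


def missing_mentions(body: str, recipients: set) -> list:
    mentioned = [name for name in CONTACTS if re.search(rf"\b{re.escape(name)}\b", body)]
    return [CONTACTS[name] for name in mentioned if CONTACTS[name] not in recipients]


body = "Hi Matt, I'm working with John and Alice on the report."
print(missing_mentions(body, {"matt@example.com", "john@example.com"}))
# -> ['alice@example.com']  (Alice is mentioned but not included as a recipient)
```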
  • the system and/or device may use an Artificial Intelligence (AI) algorithm to group multiple threads (e.g., multiple separate emails) for a given user. The AI algorithm(s) may also analyze the contents of the multiple threads and, based on that analysis, suggest/prompt additional and/or alternative recipients to be added when composing a new thread (e.g., a new email).
  • for example, a user may have multiple email chains related to the same issue, where each chain has a different set of participants.
  • if the user desires to draft a summary/conclusion email for the issue, the system and/or device can process the different sets of participants to identify all unique participants and prompt the user about any missing participants/recipients (see the sketch following this discussion).
  • the implementation may be done using client-side data (e.g., providing suggestions based on emails in the user's inbox). Additionally, or alternatively, the implementation may be performed on the server side (e.g., providing the suggestions based on organization-level data, considering all emails company-wide).
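The thread-union idea can be sketched as follows; the hard-coded thread data stands in for whatever a client-side inbox or server-side organizational store would supply, and the structure is an assumption for illustration only.

```python
# Sketch: compile all unique participants across related email threads and prompt
# for any that are missing from a new summary message. Thread data is illustrative.
threads = [
    {"participants": {"ann@example.com", "bo@example.com", "cy@example.com"}},
    {"participants": {"ann@example.com", "dee@example.com"}},
]

draft_recipients = {"ann@example.com", "bo@example.com"}

all_participants = set().union(*(t["participants"] for t in threads))
missing = sorted(all_participants - draft_recipients)
if missing:
    print(f"Participants from earlier threads not on the summary email: {missing}")
```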
  • the system and/or device may support forward learning based on training data.
  • the device may support forward learning based on past actions of the user (e.g., in response to past notifications provided by the device).
  • the device may improve the accuracy associated with message analysis and/or notifications provided by the device.
  • the device may support a combination of artificial intelligence (e.g., machine learning) and natural language processing for determining contextual information associated with a message.
  • the device may support the detection of content and/or contextual information from multimedia data (e.g., video, images, audio, etc.) included with a message to be sent.
  • the device may support data models that are tunable according to user parameters (e.g., user requirements) associated with different users, different enterprise applications, etc.
  • FIG. 1 illustrates an example of a system 100 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • the system 100 may include communication devices 105 (e.g., communication device 105 - a through communication device 105 - h ), a server 110 , a database 115 , and a communication network 120 .
  • the communication network 120 may facilitate machine-to-machine communications between any of the communication device 105 (or multiple communication devices 105 ), the server 110 , or one or more databases (e.g., database 115 ).
  • the communication network 120 may include any type of known communication medium or collection of communication media and may use any type of protocols to transport messages between endpoints.
  • the communication network 120 may include wired communications technologies, wireless communications technologies, or any combination thereof.
  • a communication device 105 may transmit or receive data packets to one or more other devices (e.g., another communication device 105 , the server 110 ) via the communication network 120 and/or via the server 110 .
  • the communication device 105 - a may communicate (e.g., exchange data packets) with the communication device 105 - b via the communications network 120 .
  • the communication device 105 - a may communicate with another device (e.g., communication device 105 - e , database 115 ) via the communications network 120 and the server 110 .
  • Non-limiting examples of the communication devices 105 may include, for example, personal computing devices or mobile computing devices (e.g., laptop computers, mobile phones, smart phones, smart devices, wearable devices, tablets, etc.).
  • the communication devices 105 may be operable by or carried by a human user.
  • the communication devices 105 may perform one or more operations autonomously or in combination with an input by the user.
  • the Internet is an example of the communication network 120 that constitutes an Internet Protocol (IP) network consisting of multiple computers, computing networks, and other communication devices located in multiple locations, and components in the communication network 120 (e.g., computers, computing networks, communication devices) may be connected through one or more telephone systems and other means.
  • the communication network 120 may include, without limitation, a standard Plain Old Telephone System (POTS), an Integrated Services Digital Network (ISDN), the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a wireless LAN (WLAN), a Session Initiation Protocol (SIP) network, a Voice over Internet Protocol (VoIP) network, a cellular network, and any other type of packet-switched or circuit-switched network known in the art.
  • the communication network 120 may include any combination of networks or network types.
  • the communication network 120 may include any combination of communication mediums such as coaxial cable, copper cable/wire, fiber-optic cable, or antennas for communicating data (e.g., transmitting/receiving data).
  • in some aspects, a machine learning network (e.g., a machine learning network included in the communication device 105 , a machine learning network included in the server 110 ) may analyze the content of a communication and generate an output.
  • the output may include, for example, a probability score and/or a confidence score.
  • the communication device 105 may compare the content of the communication with profile information associated with the recipient and/or sender.
  • the communication device 105 may output a notification to alert a sender of identified additional recipients, and the communication device 105 may refrain from completing the communication until further input (e.g., acceptance) regarding the additional recipients is received from the sender. For example, the communication device 105 may determine that an additional recipient for a message is suggested, and the communication device 105 may refrain from transmitting the message to the recipient. In some aspects, the communication device 105 may output a notification suggesting an additional recipient for the message.
  • the communication device 105 may identify a message that is input via a user interface of the communication device 105 .
  • the message may include text, multimedia data, or both.
  • the communication device 105 may provide at least a portion of the message to a machine learning network (e.g., a machine learning network included in or implemented by the communication device 105 or the server 110 ).
  • the communication device 105 may receive an output from the machine learning network in response to the machine learning network processing at least the portion of the message.
  • the output may include one or more additional suggested recipient(s) for the message, probability information corresponding to the message and the one or more additional suggested recipient(s) for the message, confidence information associated with the probability information, or a combination thereof.
  • the probability information may include a set of probability scores respectively corresponding to the one or more additional suggested recipient(s), and the confidence information may include a set of confidence scores respectively corresponding to the set of probability scores.
  • the one or more additional suggested recipient(s) may include, for example, one or more intended recipients associated with the message, one or more additional recipients different from the one or more intended recipients, or both.
  • the communication device 105 may suggest the one or more intended recipients based on the output received from the machine learning network.
  • the communication device 105 may select the one or more additional recipients based on the output received from the machine learning network.
  • the communication device 105 may output, via the user interface of the communication device 105 , a notification associated with the message based on the output received from the machine learning network. In some aspects, outputting the notification may be based on a comparison of the set of probability scores to a probability threshold, a comparison of the set of confidence scores to a threshold, or both. In some cases, the communication device 105 may output the notification, transmit the message, or both based on the suggestion of the one or more additional suggested recipient(s). In some cases, the communication device 105 may output the notification, transmit the message, or both based on the selection of one or more additional recipients.
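The threshold comparison described above can be illustrated with a minimal sketch; the threshold values and the tuple layout of the machine learning output are assumptions made for the example, not values or formats specified by the disclosure.

```python
# Sketch: only surface a notification for suggestions whose probability and confidence
# scores meet assumed thresholds.
PROBABILITY_THRESHOLD = 0.7
CONFIDENCE_THRESHOLD = 0.6


def recipients_to_notify(suggestions):
    """suggestions: list of (recipient, probability_score, confidence_score) tuples."""
    return [
        recipient
        for recipient, probability, confidence in suggestions
        if probability >= PROBABILITY_THRESHOLD and confidence >= CONFIDENCE_THRESHOLD
    ]


output = [("jane@example.com", 0.91, 0.80), ("bob@example.com", 0.40, 0.95)]
print(recipients_to_notify(output))  # -> ['jane@example.com']
```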
  • the probability information, the confidence information, or both may be determined (e.g., by the machine learning network) based on a comparison of at least the portion of the message to profile information of the recipients/sender.
  • the communication device 105 may assign category information to the recipients/sender, where at least the portion of the message is compared (e.g., by the machine learning network) to the profile information of the recipient/sender based on the category information.
  • the communication device 105 may extract contextual information associated with content included in at least the portion of the message, where the additional suggested recipient is selected based at least in part on the contextual information.
  • the communication device 105 may train the machine learning network based on a communication history, and the machine learning network may provide the output based on the training. In some examples, the communication device 105 (or server 110 ) may train the machine learning network based on a set of actions associated with a user profile, and the machine learning network may provide the output based on the training.
  • the set of actions may be associated with one or more previous messages provided by the communication device 105 (or the server 110 ) to the machine learning network, one or more previous outputs received by the communication device 105 (or server 110 ) from the machine learning network, one or more previously output notifications by the communication device 105 (or another communication device 105 ), one or more previously transmitted messages by the communication device 105 (or another communication device 105 ), or a combination thereof.
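One plausible way to accumulate such past actions for later retraining is sketched below; the data layout and function name are illustrative assumptions rather than structures defined by the disclosure.

```python
# Sketch: record whether the user accepted or dismissed each suggestion so the
# examples can feed a future training pass of the machine learning network.
training_data = []  # list of (message_text, suggested_recipient, label) tuples


def record_feedback(message_text: str, suggested_recipient: str, accepted: bool) -> None:
    """Store the user's action on a suggestion as a labeled training example."""
    training_data.append((message_text, suggested_recipient, 1 if accepted else 0))


record_feedback("budget review notes", "accounting@example.com", accepted=True)
record_feedback("lunch plans", "accounting@example.com", accepted=False)
print(len(training_data))  # 2 labeled examples available for the next training pass
```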
  • Example aspects of components and functionalities of the communication devices 105 , the server 110 , the database 115 , and the communication network 120 are provided with reference to FIG. 2 .
  • While the illustrative aspects, embodiments, and/or configurations illustrated herein show the various components of the system 100 collocated, certain components of the system 100 can be located remotely, at distant portions of a distributed network, such as a Local Area Network (LAN) and/or the Internet, or within a dedicated system.
  • the components of the system 100 can be combined into one or more devices or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network.
  • FIG. 2 illustrates an example of a system 200 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • the system 200 may be implemented by aspects of the system 100 described with reference to FIG. 1 .
  • the system 200 may include communication devices 205 (e.g., communication device 205 - a through communication device 205 - e ), a server 210 , a database 215 , a communication network 220 , and a content engine 270 .
  • the communication devices 205 , the server 210 , the database 215 , and the communication network 220 may be implemented, for example, by aspects of the communication devices 105 , the server 110 , the database 115 , and the communication network 120 described with reference to FIG. 1 .
  • the communication network 220 may facilitate machine-to-machine communications between any of the communication device 205 (or multiple communication devices 205 ), the server 210 , one or more databases (e.g., database 215 ), and the content engine 270 .
  • the communication network 220 may include any type of known communication medium or collection of communication media and may use any type of protocols to transport messages between endpoints.
  • the communication network 220 may include wired communications technologies, wireless communications technologies, or any combination thereof.
  • the communication devices 205 , the server 210 , and the content engine 270 may support communications over the communications network 220 between multiple entities (e.g., users, such as a sender and a recipient).
  • the system 200 may include any number of communication devices 205 , and each of the communication devices 205 may be associated with a respective entity.
  • settings of any of the communication devices 205 , the server 210 , or the content engine 270 may be configured and modified by any user and/or administrator of the system 200 .
  • Settings may include thresholds described herein, as well as settings related to how content is managed.
  • Settings may be configured to be personalized for one or more communication devices 205 , users of the communication devices 205 , and/or other groups of entities, and may be referred to herein as profile settings, user settings, or organization settings.
  • rules and settings may be used in addition to, or instead of, thresholds described herein.
  • the rules and/or settings may be personalized by a user and/or administrator for any variable, threshold, user (user profile), communication device 205 , entity, or groups thereof.
  • a communication device 205 may include a processor 230 , a network interface 235 , a memory 240 , and a user interface 245 .
  • components of the communication device 205 (e.g., processor 230 , network interface 235 , memory 240 , user interface 245 ) may communicate over a system bus (e.g., control busses, address busses, data busses) included in the communication device 205 .
  • the communication device 205 may be referred to as a computing resource.
  • the communication device 205 may transmit or receive packets to one or more other devices (e.g., another communication device 205 , the server 210 , the database 215 , the content engine 270 ) via the communication network 220 , using the network interface 235 .
  • the network interface 235 may include, for example, any combination of network interface cards (NICs), network ports, associated drivers, or the like.
  • Communications between components (e.g., processor 230 , memory 240 ) of the communication device 205 and one or more other devices (e.g., another communication device 205 , the database 215 , the content engine 270 ) connected to the communication network 220 may, for example, flow through the network interface 235 .
  • the processor 230 may correspond to one or many computer processing devices.
  • the processor 230 may include a silicon chip, such as a Field Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), any other type of Integrated Circuit (IC) chip, a collection of IC chips, or the like.
  • the processor 230 may include a microprocessor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a plurality of microprocessors configured to execute the instruction sets stored in a corresponding memory (e.g., memory 240 of the communication device 205 ). For example, upon executing the instruction sets stored in memory 240 , the processor 230 may enable or perform one or more functions of the communication device 205 .
  • the processor 230 may utilize data stored in the memory 240 as a neural network.
  • the neural network may include a machine learning architecture.
  • the neural network may be or include an artificial neural network (ANN).
  • the neural network may be or include any machine learning network such as, for example, a deep learning network, a convolutional neural network, or the like.
  • Some elements stored in memory 240 may be described as or referred to as instructions or instruction sets, and some functions of the communication device 205 may be implemented using machine learning techniques.
  • the memory 240 may include one or multiple computer memory devices.
  • the memory 240 may include, for example, Random Access Memory (RAM) devices, Read Only Memory (ROM) devices, flash memory devices, magnetic disk storage media, optical storage media, solid-state storage devices, core memory, buffer memory devices, combinations thereof, and the like.
  • the memory 240 in some examples, may correspond to a computer-readable storage media. In some aspects, the memory 240 may be internal or external to the communication device 205 .
  • the memory 240 may be configured to store instruction sets, neural networks, and other data structures (e.g., depicted herein) in addition to temporarily storing data for the processor 230 to execute various types of routines or functions.
  • the memory 240 may be configured to store program instructions (instruction sets) that are executable by the processor 230 and provide functionality of a content engine 241 described herein.
  • the memory 240 may also be configured to store data or information that is useable or capable of being called by the instructions stored in memory 240 .
  • One example of data that may be stored in memory 240 for use by components thereof is a data model(s) 242 (also referred to herein as a neural network model) and/or training data 243 (also referred to herein as training data and feedback).
  • the content engine 241 may include a single or multiple engines.
  • the communication device 205 (e.g., the content engine 241 ) may utilize one or more data models 242 for recognizing and processing information obtained from other communication devices 205 , the server 210 , and the database 215 .
  • the content engine 241 and the data models 242 may support forward learning based on the training data 243 .
  • the content engine 241 may have access to and use one or more data models 242 .
  • the data model(s) 242 may be built and updated by the content engine 241 based on the training data 243 .
  • the data model(s) 242 may be provided in any number of formats or forms.
  • Non-limiting examples of the data model(s) 242 include Decision Trees, Support Vector Machines (SVMs), Nearest Neighbor, and/or Bayesian classifiers.
  • the training data 243 may include communication inputs such as communication information associated with the communication device 205 .
  • the communication information may include communication histories between the communication device 205 and other communication devices 205 (e.g., any of communication device 205 - b through communication device 205 - e ), real-time communication data between the communication device 205 and other communication devices 205 , data transmissions between the communication device 205 and the server 210 , etc.
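As a purely illustrative stand-in for the data model(s) 242 and training data 243, the sketch below fits a small text classifier (TF-IDF features with a Naive Bayes model, using scikit-learn) on past (message text, recipient) pairs and scores a draft message. The example data, the choice of model, and the requirement of scikit-learn are assumptions, not elements of the disclosure.

```python
# Sketch: a simple data model learned from communication history that suggests a
# likely recipient (and a probability score) for a draft message. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

history_texts = [
    "quarterly budget review and expense report",
    "invoice approval for vendor payment",
    "sprint planning and code review schedule",
    "deployment checklist for the release",
]
history_recipients = ["accounting@example.com", "accounting@example.com",
                      "engineering@example.com", "engineering@example.com"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(history_texts, history_recipients)

draft = "please approve the attached invoice before payment"
print(model.predict([draft])[0])            # most likely recipient for this kind of message
print(model.predict_proba([draft]).max())   # a probability score for that suggestion
```

In a deployment matching the disclosure, an equivalent model would be built and refit from the communication histories and user feedback described here as training data 243.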
  • the content engine 241 may be configured to analyze content, which may be any type of information, including information that is historical or in real-time.
  • the content engine 241 may be configured to receive information from other communication devices 205 and/or the server 210 .
  • the content engine 241 may be configured to analyze profile information associated with one or more users, groups, etc.
  • the profile information can include any type of information, including audio and visual information.
  • the content engine 241 may build any number of user profiles using automatic processing, using artificial intelligence and/or using input from one or more users associated with the communication devices 205 .
  • the content engine 241 may use automatic processing, artificial intelligence, and/or inputs from one or more users of the communication devices 205 to determine, manage, and/or combine information relevant to a user profile.
  • the content engine 241 may determine user profile information based on a user's interactions with information.
  • the content engine 241 may update (e.g., continuously, periodically) user profiles based on new information that is relevant to the user profiles.
  • the content engine 241 may receive new information from any communication device 205 , the server 210 , the database 215 , etc.
  • Profile information may be organized and classified in various manners. In some aspects, the organization and classification of profile information may be determined by automatic processing, by artificial intelligence and/or by one or more users of the communication devices 205 .
  • the content engine 241 may create, select, and execute appropriate processing decisions. Processing decisions may include content management, content extraction, and content analysis associated with communications (e.g., messages) created by a communication device 205 . Illustrative examples of content management include rearranging content, modifying content, changing formatting of content, and showing and/or hiding content. Processing decisions, including content analysis, may be handled automatically by the content engine 241 , with or without human input.
  • the content engine 241 may store, in the memory 240 (e.g., in a database included in the memory 240 ), historical information (e.g., communication histories) between the communication device 205 and other devices (e.g., other communication devices 205 , the server 210 , etc.). Data within the database of the memory 240 may be updated, revised, edited, or deleted by the content engine 241 .
  • the content engine 241 may support continuous, periodic, and/or batch fetching of content (e.g., content referenced within a communication, content related to a user, content related to a recipient, etc.) and content aggregation.
  • Information stored in the database included in the memory 240 may include and is not limited to communication information, user information, historical analysis information, processing information including historical processing information, key words, configurations, settings, variables, and properties. Further, information regarding the relevance of different types of content, as well as how to determine relevance (e.g., rules, settings, source(s) of content, rankings of content, location of key words/phrases, repetition of key words/phrases, definitions of relevance, etc.) or contextual information associated with content may be stored in the database included in the memory 240 .
  • the communication device 205 may render a presentation (e.g., visually, audibly, using haptic feedback, etc.) of an application 244 (e.g., a browser application 244 - a , a messaging application 244 - b , a social media application 244 - c ).
  • the communication device 205 may render the presentation via the user interface 245 .
  • the user interface 245 may include, for example, a display (e.g., a touchscreen display), an audio output device (e.g., a speaker, a headphone connector), or any combination thereof.
  • the applications 244 may be stored on the memory 240 .
  • the applications 244 may include cloud-based applications or server-based applications (e.g., supported and/or hosted by the server 210 ).
  • Settings of the user interface 245 may be partially or entirely customizable and may be managed by one or more users, by automatic processing, and/or by artificial intelligence.
  • any of the applications 244 may be configured to receive data in an electronic format and present content of data via the user interface 245 .
  • the applications 244 may receive data from another communication device 205 , the server 210 , or the content engine 270 via the communications network 220 , and the communication device 205 may display the content via the user interface 245 .
  • the database 215 may include a relational database, a centralized database, a distributed database, an operational database, a hierarchical database, a network database, an object-oriented database, a graph database, a NoSQL (non-relational) database, etc.
  • the database 215 may store and provide access to, for example, any of the stored data described herein.
  • the server 210 may include a processor 250 , a network interface 255 , database interface instructions 260 , and a memory 265 .
  • components of the server 210 (e.g., processor 250 , network interface 255 , database interface 260 , memory 265 ) may communicate over a system bus included in the server 210 .
  • the processor 250 , network interface 255 , and memory 265 of the server 210 may include examples of aspects of the processor 230 , network interface 235 , and memory 240 of the communication device 205 described herein.
  • the processor 250 may be configured to execute instruction sets stored in memory 265 , upon which the processor 250 may enable or perform one or more functions of the server 210 .
  • the processor 250 may utilize data stored in the memory 265 as a neural network.
  • the server 210 may transmit or receive packets to one or more other devices (e.g., a communication device 205 , the database 215 , another server 210 , the content engine 270 ) via the communication network 220 , using the network interface 255 .
  • Communications between components (e.g., processor 250 , memory 265 ) of the server 210 and one or more other devices (e.g., a communication device 205 , the database 215 , the content engine 270 ) connected to the communication network 220 may, for example, flow through the network interface 255 .
  • the database interface instructions 260 when executed by the processor 250 , may enable the server 210 to send data to and receive data from the database 215 .
  • the database interface instructions 260 when executed by the processor 250 , may enable the server 210 to generate database queries, provide one or more interfaces for system administrators to define database queries, transmit database queries to one or more databases (e.g., database 215 ), receive responses to database queries, access data associated with the database queries, and format responses received from the databases for processing by other components of the server 210 .
  • the memory 265 may be configured to store instruction sets, neural networks, and other data structures (e.g., depicted herein) in addition to temporarily storing data for the processor 250 to execute various types of routines or functions.
  • the memory 265 may be configured to store program instructions (instruction sets) that are executable by the processor 250 and provide functionality of the content engine 266 described herein.
  • One example of data that may be stored in memory 265 for use by components thereof is a data model(s) 267 (also referred to herein as a neural network model) and/or training data 268 .
  • the data model(s) 267 and the training data 268 may include examples of aspects of the data model(s) 242 and the training data 243 described with reference to the communication device 205 .
  • the server 210 (e.g., the content engine 266 ) may utilize one or more data models 267 for recognizing and processing information obtained from communication devices 205 , another server 210 , and the database 215 .
  • components of the content engine 266 may be provided in a separate engine (e.g., the content engine 270 ) in communication with the server 210 .
  • the content engine 270 may include a processor 275 , a network interface 280 , and a memory 285 .
  • components of the content engine 270 (e.g., processor 275 , network interface 280 , memory 285 ) may communicate over a system bus included in the content engine 270 .
  • the processor 275 , network interface 280 , and memory 285 may include examples of aspects of the processor 250 , network interface 255 , and memory 265 of the server 210 described herein.
  • the memory 285 may be configured to store program instructions (instruction sets) that are executable by the processor 275 and provide functionality of the content engine 270 described herein.
  • One example of data that may be stored in memory 285 for use by components thereof is a data model(s) 286 (also referred to herein as a neural network model) and/or training data 287 .
  • the data model(s) 286 and the training data 287 may include examples of aspects of the data model(s) 267 and the training data 268 described with reference to the content engine 266 .
  • a communication device 205 may support one or more operations or procedures associated with analyzing the content of a communication (e.g., a message, a social media posting) via the communication device 205 .
  • the communication device 205 may identify a communication input by a user (also referred to herein as a sender) of the communication device 205 , via the messaging application 244 - b or the email application 244 - c .
  • the messaging application 244 - b may include a text messaging application, an instant messaging application, an electronic messaging board application, an email application, or the like.
  • the communication may be a message including text and/or multimedia data (e.g., video, images, audio, etc.).
  • the message may include a recipient (also referred to herein as an intended recipient or an addressee) or a set of recipients associated with the message.
  • the communication may be an email, message, or social media posting including text and/or multimedia data (e.g., video, images, audio, etc.).
  • the message may include a recipient (also referred to herein as an intended recipient or an addressee) or a set of recipients associated with the communication.
  • the social media posting may include a contact “tagged” in the social media posting. Examples of the aspects described herein may be applied to messages communicated via the messaging application 244 - b or the email application 244 - c , social media posts created and published, or the like.
  • the recipient(s) may be a recipient selected by the communication device 205 (e.g., based on a user input, autonomously by the communication device 205 ) from a contact list associated with a user profile of the user.
  • the contact list may be stored in the memory 240 of the communication device 205 , the memory 265 of the server 210 , the database 215 , the content engine 270 , and/or any cloud-based storage.
  • the contact list may include profile information (e.g., image-based, text-based, etc.) associated with contacts included in the contact list.
  • the communication device 205 may provide at least a portion of the message (e.g., body, recipient list, sender information, etc.) to a machine learning network (e.g., a machine learning network implemented by the content engine 241 , the content engine 266 , or the content engine 270 ) to identify/suggest an additional recipient, for example, prior to sending the message.
  • the communication device 205 may provide the message (or message portion) to the machine learning network based on receiving a user input (e.g., via the user interface 245 ) for sending the message.
  • the communication device 205 may provide portions of the message in real-time, for example, as the message is input to the communication device 205 , when adding an attachment, when adding recipients, etc.
  • the machine learning network may generate an output based on an analysis of the content of the message (or message portion). For example, the machine learning network may analyze the content of the message to determine whether the message is for a specific recipient (e.g., recipient's name in message). The machine learning network may extract and analyze contextual information associated with the message (e.g., keywords in the subject or body of the message). In an example, the machine learning network may determine the contextual information from text (e.g., language used, text strings, etc.) included with the message. In another example, the machine learning network may determine the message is part of an email thread, and compile the suggested recipients based on the recipients included in other messages of the email thread. In yet another example, the machine learning network may use information (e.g., an organizational chart) to determine additional recipients to suggest.
  • the machine learning network may determine the contextual information using natural language processing techniques such as linguistic context recognition, word recognition, etc.
  • the machine learning network may support text scanning, searching, parsing, and/or analysis (including performing semantic and/or syntactic analysis).
  • the machine learning network may support recognition of keywords (e.g., names) associated with the contextual information.
  • the machine learning network may divide and/or subdivide text content into portions based on paragraphs, sentences, sections, types of content, etc.
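One way, among many, to perform the keyword/name recognition mentioned above is off-the-shelf named-entity recognition. The sketch below uses spaCy and assumes the `en_core_web_sm` model has been downloaded; this is an implementation choice for illustration, not a technique mandated by the disclosure.

```python
# Sketch: extract person names from a message body as contextual keywords using
# spaCy named-entity recognition (requires spaCy and the en_core_web_sm model).
import spacy

nlp = spacy.load("en_core_web_sm")


def person_names(message_body: str) -> list:
    doc = nlp(message_body)
    return [ent.text for ent in doc.ents if ent.label_ == "PERSON"]


print(person_names("Dear John and Jane, please review the attached draft."))
```

The extracted names could then be compared to the recipient list, as in the earlier discrepancy-check sketch, to decide whether to suggest an additional recipient.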
  • the machine learning network may determine the contextual information from multimedia data (e.g., video, images, audio, etc.) included with the message (or portion of the message). For example, the machine learning network may determine the contextual information using techniques such as object detection and recognition (e.g., facial detection and recognition, landmark detection and recognition, etc.) to video images or still images. In some examples, the machine learning network may apply techniques such as audio-based context recognition to audio data associated with the video images. In some other cases, the machine learning network may apply audio-based context recognition to audio data (e.g., an audio file, a voice recording) included with the message. In another example, metadata from an attachment may be used to determine contextual information (e.g., suggest including the authors of an attached document as recipients).
  • the machine learning network may compare the contextual information to profile information associated with the recipient.
  • In an example, Sally may be on vacation and may designate a co-worker to handle communications while she is out. If the message includes Sally as a recipient, the co-worker may be suggested as a recipient. In another example, if a message includes contextual information indicating that the sender is following up on a previous message, the recipient's supervisor and/or colleague may be suggested as a recipient.
  • the suggested recipients may be anyone that indicated they are attending/attended the meeting (e.g., sending a meeting agenda or other meeting content prior to a meeting, or sending meeting notes after a meeting has concluded).
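  • The meeting-based suggestion above could be sketched as follows, assuming access to calendar data in a simple, hypothetical event format (the schema and helper name are assumptions, not a required interface).

```python
from datetime import date

def attendees_for_meeting(calendar_events: list[dict], subject: str, on: date) -> list[str]:
    """Return addresses of people who accepted a meeting whose title appears
    in the message subject. The event schema is an illustrative assumption."""
    suggestions = []
    for event in calendar_events:
        if event["date"] == on and event["title"].lower() in subject.lower():
            suggestions.extend(a["email"] for a in event["attendees"] if a["accepted"])
    return suggestions

# Example (hypothetical calendar data):
events = [{
    "title": "End of year accounting meeting",
    "date": date(2021, 11, 18),
    "attendees": [{"email": "bill@example.com", "accepted": True},
                  {"email": "sally@example.com", "accepted": False}],
}]
print(attendees_for_meeting(events, "Notes: End of year accounting meeting", date(2021, 11, 18)))
# -> ['bill@example.com']
```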
  • the machine learning network may determine a strength of a relationship, also referred to herein as a relevancy or a contextual relationship, between the content of the message (e.g., contextual information, subject matter) and the suggested/intended recipient of the message.
  • the strength of the relationship may be indicated by probability information (e.g., a probability score) and/or confidence information (e.g., a confidence score) described herein.
  • the communication device 205 may determine whether the content matches the profile information associated with the suggested recipient (e.g., based on a probability score and/or confidence score), and the communication device 205 may perform one or more operations based on the probability score and/or confidence score. For example, the communication device 205 may output a notification to alert a sender that the suggested recipient should be included, and the communication device 205 may refrain from completing the communication. In some aspects, the communication device 205 may output a notification suggesting an alternative recipient for the message.
  • the machine learning network may generate an output inclusive of probability information and confidence information corresponding to the message and the suggested recipient.
  • the probability information may include a probability score (e.g., from 0.00 to 1.00) of whether the content of the message matches profile information associated with the suggested recipient.
  • the confidence information may include a confidence score (e.g., from 0.00 to 1.00) corresponding to the probability score.
  • the recipient may be associated with (e.g., classified with) a specific department (e.g., accounting, legal, engineering, human resources, etc.) of a company, and the machine learning network may determine the probability score based on a correlation between the content (e.g., context information) of the message and a classification associated with the sender/recipient.
  • the machine learning network may output a relatively low probability score for cases in which the machine learning network identifies a relatively low correlation (e.g., below a threshold) between the content of the message and the classification associated with the suggested recipient.
  • the machine learning network may output a relatively high probability score for cases in which the machine learning network identifies a relatively high correlation (e.g., above a threshold) between the content of the message and the classification associated with the suggested recipient.
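  • One simple way to realize the probability/confidence scoring described above is sketched below; the overlap-based probability and evidence-count confidence are illustrative assumptions rather than the claimed scoring method.

```python
def classification_probability(message_keywords: set[str],
                               classification_keywords: set[str]) -> tuple[float, float]:
    """Toy probability/confidence pair: probability is the overlap between the
    message's keywords and the keywords associated with a recipient's
    classification (e.g., 'accounting'); confidence grows with the amount of
    evidence available. Both formulas are assumptions for illustration."""
    if not message_keywords:
        return 0.0, 0.0
    overlap = message_keywords & classification_keywords
    probability = len(overlap) / len(message_keywords)
    confidence = min(1.0, len(message_keywords) / 10)  # more keywords -> more confident
    return round(probability, 2), round(confidence, 2)

# Example: a message about an accounting meeting scored against the 'accounting' classification.
prob, conf = classification_probability(
    {"accounting", "meeting", "notes", "year"},
    {"accounting", "invoice", "budget", "year"},
)
print(prob, conf)  # 0.5 0.4
```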
  • the machine learning network may generate an output inclusive of probability information and confidence information corresponding to the message and any or all of the recipients. For example, the machine learning network may compare the contextual information to respective profile information associated with the recipients.
  • the probability information may include respective probability scores corresponding to any or all of the recipients, determined, for example, based on a correlation between the content (e.g., context information) of the message and the respective profile information associated with the recipients.
  • the confidence information may include respective confidence scores corresponding to the probability scores.
  • the machine learning network may determine the probability scores based on an analysis of communication histories (e.g., email threads, past communications, email subject, messages within a specified timeframe, etc.) between the sender of the message and suggested recipients.
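  • A hedged sketch of the history-based scoring above, assuming a simplified record layout for past messages; real communication histories would carry more fields (timestamps, threads, etc.).

```python
from collections import Counter

def history_scores(history: list[dict], subject: str) -> dict[str, float]:
    """Score candidate recipients by how often they appeared on past messages
    with a related subject. The history record layout is an assumption."""
    related = [m for m in history if subject.lower() in m["subject"].lower()]
    if not related:
        return {}
    counts = Counter(addr for m in related for addr in m["recipients"])
    return {addr: count / len(related) for addr, count in counts.items()}

# Example (hypothetical communication history):
history = [
    {"subject": "Accounting meeting", "recipients": ["bill@example.com", "sally@example.com"]},
    {"subject": "Re: Accounting meeting", "recipients": ["bill@example.com"]},
]
print(history_scores(history, "accounting meeting"))
# {'bill@example.com': 1.0, 'sally@example.com': 0.5}
```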
  • the communication device 205 may output a notification associated with the message. For example, the communication device 205 may suggest an additional recipient associated with the message. The communication device 205 may render or output one or more notifications for suggesting the additional recipient and/or suggesting alternative recipients. In some embodiments, suggesting recipients may include suggesting that an indicated recipient be removed from the message.
  • the communication device 205 may output a notification suggesting an additional recipient (e.g., an intended recipient, an alternative recipient, an additional recipient, etc.) based on a probability score corresponding to the recipient being equal to or greater than a probability score threshold.
  • the communication device 205 may output the notification suggesting the additional recipient based on a confidence score (corresponding to the probability score) being equal to or greater than a confidence score threshold.
  • the thresholds described herein may be set based on various criteria, and multiple thresholds may be set or configured for different types of content.
  • the thresholds may be pre-set (e.g., previously configured, pre-determined, or determined before the methods described herein are applied to new content), and may change based on any criteria.
  • the rules and thresholds may be set by a user, automatically, and/or by artificial intelligence before a communication to which they are to be applied is received.
  • thresholds may be set automatically and changed automatically (for example based on other thresholds), by artificial intelligence, and/or they may be defined by a user.
  • each threshold may be associated with a respective weighting factor.
  • the machine learning network may assign a probability score based on a first threshold associated with detecting or identifying a set of key words according to a first criterion (e.g., names included in the message text compared to names/addresses included in a recipient list).
  • the machine learning network may modify (e.g., increase) the probability score based on a second threshold associated with detecting or identifying a set of key words according to a second criterion (e.g., a correlation between the content of the communication and a classification associated with the recipient).
  • the machine learning network may modify (e.g., increase) the probability score based on a third threshold associated with detecting or identifying a set of key words according to a third criterion (e.g., whether the content of the message matches content associated with a communication history, such as an email thread or subject line).
  • the machine learning network may support any combination of criteria, thresholds, and/or weighting factors, and the machine learning network may apply the same in any order when calculating and/or assigning a probability score.
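  • The combination of criteria, thresholds, and weighting factors described above might look like the following sketch; the specific criteria names, weights, and per-criterion thresholds ("floors") are assumptions for illustration.

```python
def combined_probability(criteria_scores: dict[str, float],
                         weights: dict[str, float],
                         floors: dict[str, float]) -> float:
    """Combine several per-criterion scores (name match, classification
    correlation, history match) into one probability. A criterion only
    contributes if it clears its own threshold ('floor'); weights are then
    applied and the result is clipped to [0, 1]. All values are illustrative."""
    total = 0.0
    for name, score in criteria_scores.items():
        if score >= floors.get(name, 0.0):
            total += weights.get(name, 0.0) * score
    return min(1.0, round(total, 2))

# Example with assumed criteria, weights, and per-criterion thresholds:
scores = {"name_match": 1.0, "classification": 0.5, "history": 0.2}
weights = {"name_match": 0.5, "classification": 0.3, "history": 0.2}
floors = {"name_match": 0.5, "classification": 0.3, "history": 0.3}
print(combined_probability(scores, weights, floors))  # 0.65
```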
  • the notification may include any combination of visual, audio, and/or haptic notifications via the user interface 245 (e.g., a display) and/or an external device (e.g., wireless headphones, a wearable device such as smart glasses or a smartwatch, etc.) coupled to the communication device 205 .
  • the communication device 205 may apply visual differences (e.g., different highlighting, different colors, different font types, etc.) to content within the communication and/or filter the content within the communication to show less content (e.g., content having a relevance above a threshold).
  • the notification may include a pop-up with the suggested recipients listed and a method (e.g., radio button) of selecting from the list of recipients.
  • the communication device 205 may transmit the message based on a user input. For example, based on the output notification provided by the communication device 205 , the user input may select one or more of the suggested recipients for a message, content of the message, a messaging window for sending the message, and an application for sending the message.
  • the communication device 205 may transmit the message based on the user input.
  • the communication device 205 may autonomously transmit the message based on learned information included in training data (e.g., training data 243 , training data 268 , training data 287 ).
  • the training data may include, for example, previous actions by a user with respect to probability scores and confidence scores calculated by the machine learning network.
  • the training data may include, for example, previous actions by a user with respect to notifications previously output by the communication device 205 .
  • the probability score thresholds and/or confidence score thresholds described herein may be configurable via the communication device 205 (e.g., based on user settings).
  • the probability score threshold and/or confidence score threshold may be autonomously configured by the communication device 205 or a machine learning network described herein.
  • the communication device 205 or machine learning network may autonomously configure the probability score threshold and/or confidence score threshold based on learned information included in training data (e.g., training data 243 , training data 268 , training data 287 ).
  • the training data may include, for example, previous decisions by a user (e.g., selecting/rejecting suggested recipients) with respect to predictions by the machine learning network.
  • the training data may include data communications created, transmitted, or received by the communication device 205 over the communication network 220 .
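  • A minimal sketch of threshold adaptation from such training data, using a deliberately simple accept/reject update rule; a production system could instead retrain a model on the stored decisions.

```python
def adjust_threshold(threshold: float, accepted: bool, step: float = 0.02) -> float:
    """Nudge the notification threshold using a user's decision on a suggestion:
    accepted suggestions lower the threshold slightly (surface more suggestions),
    rejected ones raise it (surface fewer). The step size and bounds are
    illustrative assumptions."""
    threshold += -step if accepted else step
    return min(0.95, max(0.05, round(threshold, 2)))

# Example: a run of user decisions recorded as training data.
threshold = 0.60
for decision in [True, True, False, True]:
    threshold = adjust_threshold(threshold, decision)
print(threshold)  # 0.56
```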
  • the communication device 205 may identify a communication input by a user (also referred to herein as a sender or author) of the communication device 205 .
  • the communication may include text, multimedia data (e.g., video, images, audio, etc.), or both.
  • the communication may include a recipient (or addressee) in the case of a message.
  • the communication may include a “tagged” contact in the case of a social media posting.
  • FIG. 3 illustrates an example of a messaging window 300, displayed on an example communication device 305, that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • the messaging window 300 may be an example of a messaging window described herein.
  • the communication device 305 may include examples of aspects of a communication device 105 or a communication device 205 described with reference to FIGS. 1 and 2 .
  • the communication device 305 may display a user interface 330 (e.g., an on-screen keyboard) for inputting messages.
  • the messaging window 300 may include a header 310 indicating a contact/recipient (e.g., Bill Smith) associated with the messaging window 300 .
  • the header 310 may include the name 311 (e.g., Bill Smith) of the contact and classification information 313 (e.g., accounting) associated with the contact.
  • the messaging window 300 may include a communication history associated with a user of the communication device 305 and the contact. For example, the communication history may include messages previously sent by the user and messages previously received from the contact.
  • the user may have input a message 321 stating, “Hello all, Please find attached the meeting notes for the end of year accounting meeting held yesterday.”
  • the user may have included an attachment 322 (e.g., “Meeting Notes”) to the message 321 .
  • a machine learning network (e.g., integrated or implemented by the communication device 305, a server, or a content engine described herein) may analyze the message 321 and the attachment 322. For example, the machine learning network may identify additional suggested recipients based on the content of the message 321 (e.g., contextual information associated with the text "Hello all" and "accounting meeting"). The machine learning network may also identify additional suggested recipients based on the attachment 322 (e.g., based on contextual information associated with the title of the attachment 322, content included in the attachment 322, metadata associated with the attachment 322, etc.).
  • the communication device 305 may output the notification (!) 312 , the notification 323 , and/or apply visual differences described herein in real-time as the user inputs the message 321 , includes the attachment 322 , and/or adds recipients. In some other aspects, the communication device 305 may output the notification 312 , the notification 323 , and/or apply visual differences based on detecting a user input selecting the “send” button 324 . In some examples, based on a user input selecting the notification 323 , the communication device 305 may display additional suggested recipients. In some other examples, based on the user input selecting the notification 323 , the communication device 305 may automatically add the suggested recipients.
  • FIGS. 4A-B illustrate another example of a window 400, displayed on an example communication device 405, that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • the window 400 may be an example of a messaging window described herein.
  • the communication device 405 may include examples of aspects of a communication device 105 , a communication device 205 , or a communication device 305 described with reference to FIGS. 1 through 3 .
  • the communication device 405 may display a user interface 430 (e.g., an on-screen keyboard) for inputting messages.
  • the window 400 may include an input area 410 for recipients of the message 421, an input area 414 for a subject of the message, and an attachment 422 (e.g., a document, image, video, etc.).
  • the communication device 405 may determine that a “recipient” mentioned (e.g., Pragati) is not included in the input area 410 for recipients.
  • other email messages may be identified, and the recipients of the other messages may be analyzed to identify additional recipients.
  • the communication device 405 may refrain from completing (e.g., sending) the message 421 .
  • a machine learning network (e.g., integrated or implemented by the communication device 405, a server, or a content engine described herein) may identify the additional recipients and indicate them to the communication device 405 (e.g., for output via the notification 423).
  • the communication device 405 may output the notification 423 suggesting the additional recipients before transmitting/sending the message 421 .
  • FIG. 4 B illustrates the user's selection of Pragati Dhumal as an additional recipient of the message 421 .
  • FIGS. 5A-B illustrate an example of a process flow 500 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • process flow 500 may implement aspects of a communication device 105, a server 110, a communication device 205, a server 210, a content engine 270, or a communication device 305 described with reference to FIGS. 1-4.
  • the operations may be performed in a different order than the order shown, or the operations may be performed in different orders or at different times. Certain operations may also be left out of the process flow 500 , or other operations may be added to the process flow 500 . It is to be understood that while the communication device 105 is described as performing a number of the operations of process flow 500 , any device (e.g., another communication device 105 , a combination of a communication device 105 and a server 110 ) may perform the operations shown.
  • a device may identify a message (or other communication) that is input via a user interface of the device.
  • the message may include text, multimedia data, or both.
  • the device may provide at least a portion of the message (e.g., content, subject, recipient information, etc.) to a machine learning network (e.g., a machine learning network included in or implemented by the communication device 105 or the server 110 ).
  • the portion provided to the machine learning network may comprise a portion of multiple emails and/or a portion of multiple email threads.
  • the device may receive an output from the machine learning network in response to the machine learning network processing at least the portion of the message.
  • the output may include one or more additional suggested recipients for the message, probability information corresponding to the message and the suggested recipients, confidence information associated with the probability information, or a combination thereof.
  • the communication device 105 may extract contextual information associated with content included in at least the portion of the message, where at least the portion of the message is compared (e.g., by the machine learning network) to other information and the suggested recipients are identified based on the contextual information.
  • the device may output, via the user interface of the device, a notification (e.g., notification 323 / 423 ) associated with the message based on the output received from the machine learning network.
  • if no additional recipients are selected (No), the process 500 proceeds to transmit the message (at 514). If additional recipients are selected (Yes), at 512 the selected recipient(s) are added to the message, and at 514 the message is transmitted.
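  • The decision portion of process flow 500 could be expressed as in the following sketch, where `suggest`, `notify`, and `transmit` are stand-in callables rather than the actual device or network interfaces.

```python
def send_flow(message: dict, suggest, notify, transmit) -> None:
    """Sketch of the end of process flow 500: obtain suggestions, notify the
    sender, add any selected recipients, then transmit."""
    suggestions = suggest(message)                 # output from the machine learning network (step 504)
    if suggestions:
        selected = notify(message, suggestions)    # sender selects zero or more suggestions
        if selected:
            message["recipients"].extend(selected) # add the selected recipient(s) (step 512)
    transmit(message)                              # transmit the message (step 514)

# Example wiring with trivial stand-ins:
msg = {"recipients": ["bill@example.com"], "body": "Hi Bill and Pragati"}
send_flow(
    msg,
    suggest=lambda m: ["pragati@example.com"],
    notify=lambda m, s: s,          # pretend the user accepts every suggestion
    transmit=lambda m: print("sent to", m["recipients"]),
)
# sent to ['bill@example.com', 'pragati@example.com']
```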
  • FIG. 5 B illustrates a more detailed process flow of step 504 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • the operations may be performed in a different order than the order shown, or the operations may be performed in different orders or at different times. Certain operations may also be left out of the process flow 504 , or other operations may be added to the process flow 504 . It is to be understood that while the communication device 105 is described as performing a number of the operations of process flow 504 , any device (e.g., another communication device 105 , a combination of a communication device 105 and a server 110 ) may perform the operations shown.
  • a portion of the message is analyzed (e.g., subject, metadata, keywords, etc.).
  • the portion provided to the machine learning network may comprise a portion of multiple emails and/or a portion of multiple email threads.
  • contextual information is determined based on the analysis. For example, other messages may be identified based on the subject. In another example, suggested recipients may be identified based on names, keywords, etc. included in the message. In yet another example, a rule may be identified based on a recipient. In another example, the contextual information may be based at least in part on an attachment included in the message. Additionally, contextual information may be determined based on a user profile associated with a recipient/sender of the message.
  • additional recipients are identified based on the contextual information. For example, if the email includes a name of an individual, the address for that individual may be suggested to the sender. In another example, if the attachment was produced by multiple authors (e.g., indicated by metadata), the authors of the attachment may be the suggested recipients. In yet another example, if the contextual information indicates a topic, the suggested recipients may be identified based on their relation/relevance to the identified topic. At 504 confidence/probability information associated with the suggested recipients is identified/provided.
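  • One of the signals in the sub-flow above is a rule identified based on a recipient (for example, the out-of-office delegation described earlier). A minimal sketch, assuming a hypothetical per-recipient rule table:

```python
def apply_recipient_rules(recipients: list[str], rules: dict[str, dict]) -> list[str]:
    """Suggest delegates configured by the indicated recipients, e.g., an
    out-of-office recipient who designated a co-worker. The rule schema is an
    illustrative assumption."""
    suggestions = []
    for addr in recipients:
        rule = rules.get(addr)
        if rule and rule.get("out_of_office"):
            suggestions.append(rule["delegate"])
    return [s for s in suggestions if s not in recipients]

# Example: Sally is out and has designated a co-worker.
rules = {"sally@example.com": {"out_of_office": True, "delegate": "coworker@example.com"}}
print(apply_recipient_rules(["sally@example.com"], rules))  # ['coworker@example.com']
```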
  • the methods, devices, and systems described herein may be applied to email clients, instant messaging clients, other types of messaging clients, and social media platforms.
  • Artificial intelligence including the utilization of machine learning, can be used in various aspects disclosed herein. For example, as discussed, various levels of content can be analyzed and classified, and this can be configurable by artificial intelligence and/or by user preference. Artificial intelligence, as used herein, includes machine learning. Artificial intelligence and/or user preference can configure information that is used to analyze content and identify relationships between content. For example, artificial intelligence and/or user preference can determine which information is compared to content in order to analyze the content.
  • Artificial intelligence and/or user preference may also be used to configure user profile(s), which may be used to determine relevance to a user (e.g., the user associated with the user profile), a communication (e.g., message, social media posting), a messaging window, and/or an application by comparing the content to information contained within the user profile.
  • Some embodiments utilize natural language processing in the methods and systems disclosed herein.
  • machine learning models can be trained to learn what information is relevant to a user or different users.
  • Machine learning models can have access to resources on a network and access to additional tools to perform the systems and methods disclosed herein.
  • data mining and machine learning tools and techniques may be used to discover information used to determine content and/or contextual information. For example, data mining and machine learning tools and techniques may discover user information, user preferences, relevance of content, levels of relevance, contextual information, key word(s) and/or phrases, thresholds, comparison(s) to threshold(s), configuration(s) of content organization, and configuration(s) of a user interface, among other things, to provide improved content analysis and/or contextual analysis.
  • Machine learning may manage one or more types of information (e.g., user profile information, communication information, etc.), types of content (including portions of content within communication information), comparisons of information, levels of relevance, and organization (including formatting of content).
  • Machine learning may utilize all different types of information.
  • the information can include various types of visual information, documents (including markup languages like Hypertext Markup Language (HTML)), and audio information (e.g., using natural language processing).
  • Inputs and outputs, as described herein, may be managed by machine learning.
  • Machine learning may determine variables associated with information, and compare information (including variables within the information) with thresholds to determine relevance.
  • the relevance may determine content organization based on rules associated with the relevance.
  • Machine learning may manage properties (including formatting, hiding and/or reorganizing) and configurations of the organized content. Any of the information and/or outputs may be modified and act as feedback to the system.
  • methods and systems disclosed herein use information to analyze content and contextual information. Relevance, and variations thereof, can refer to a determination of how closely a piece of information relates to another piece of information.
  • information may be information that is related to a user, including, but not limited to, user profile information, associations with groups, and information from interactions of the user.
  • Information that is related to a user may be obtained from groups or entities associated with the user.
  • Artificial intelligence and/or user preferences may determine the information that is relevant to a user. For example, artificial intelligence and/or user preference may determine what information is relevant (e.g., appropriate, inappropriate, related) to one or more users, communications, messaging windows, messaging applications, social media applications, etc.
  • Artificial intelligence may use the information to build a profile associated with one or more users, where the profile defines what information (or variables within the information) is relevant to the user, previous communications with the user, current communications generated by the user, and other properties of the relevance (e.g., type of relevance (importance, priority, rank, etc.), level of relevance, etc.).
  • Artificial intelligence and/or user preference may use the profile to compare information with content of a communication to determine relevance between the content (including any differences in relevance of various portions of the content), contacts indicated in the communication, contacts of a user generating the communication, messaging windows associated with the communication, messaging applications associated with the communication, and/or social media applications associated with the communication. Relevance of information is used in various embodiments herein to determine rules for how the content may be extracted and/or analyzed for contextual information.
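  • A sketch of profile building along the lines described above, assuming profiles are simple word-frequency counters per contact; real profiles would carry richer information (classification, preferred modality, etc.).

```python
from collections import Counter, defaultdict

def build_contact_profiles(history: list[dict]) -> dict[str, Counter]:
    """Build a simple per-contact profile: the words most often exchanged with
    each contact. Comparing a new message's words against these profiles gives
    a crude relevance signal. The data layout and scoring are assumptions."""
    profiles: dict[str, Counter] = defaultdict(Counter)
    for msg in history:
        words = msg["body"].lower().split()
        for contact in msg["recipients"]:
            profiles[contact].update(words)
    return profiles

def relevance(profile: Counter, message_body: str) -> float:
    """Fraction of the new message's words already seen in the contact's profile."""
    words = set(message_body.lower().split())
    hits = sum(1 for w in words if w in profile)
    return round(hits / max(1, len(words)), 2)

# Example (hypothetical history):
history = [{"recipients": ["bill@example.com"], "body": "budget review for accounting"}]
profiles = build_contact_profiles(history)
print(relevance(profiles["bill@example.com"], "accounting budget question"))  # 0.67
```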
  • determining the relevance of information includes determining how closely related the information is. The determination may be made by comparison, and may use any criteria, such as thresholds and/or one or more key words.
  • the information may be content within a communication, where a communication can contain one or more pieces (also referred to herein as portions, sections, or items) of content, and may include content that ranges from being not related at all (e.g., not relevant) to content that is directly related (e.g., highly relevant) to contacts indicated in the communication, contacts of a user generating the communication, messaging windows associated with the communication, messaging applications associated with the communication, and/or social media applications associated with the communication.
  • Portions of content may also be referred to herein as simply “content.” There may be two levels of relevance (e.g., relevant and not relevant), or any number of levels of relevance. Any information may be compared to determine relevance.
  • user information may be used to determine relevance.
  • the user information may include profile information, such as one or more user profiles, that is compared to content.
  • Various types of relevance may be determined for one or more pieces of information, including relevance, priority, importance, precedence, weight, rank, etc. For example, relevance may determine how related content is to a user, contacts indicated in the communication, contacts of a user generating the communication, messaging windows associated with the communication, messaging applications associated with the communication, and/or social media applications associated with the communication, while priority ranks how important the content is.
  • priority may be based on how relevant a piece of content is. Any combination of one or more types of relevance may be configured and used. For example, content may have high relevance with a low priority and a low rank, content may have high relevance with a high importance and high rank, content may have low relevance with a high importance and a low priority, etc. Content may have any combinations of types of relevance.
  • information includes communications, messages, electronic records, content, visual content including text and images, audio content, rich media, data, and/or data structures.
  • Communications include emails, messages, documents, files, etc.
  • Information includes content, and communications include content (also referred to herein as data).
  • Content may be one type or multiple types (e.g., text, images, hyperlinks, etc.) and there may be multiple pieces of content within a communication or a piece of information, regardless of content type.
  • Content may contain one or more variables.
  • Communications may be data that is stored on a storage/memory device, and/or transmitted from one communication device to another communication device via a communication network.
  • Communications include messages.
  • a message may be transmitted via one or more data packets.
  • the formatting of such data packets may be based on the messaging protocol used for transmitting the electronic records over the communication network.
  • Communication information can include any type of data related to communications of a user and/or entity (e.g., information being sent to user(s), received from user(s), created by user(s), accessed by user(s), viewed by user(s), etc.).
  • Content of a communication can include information associated with the communication as well as information contained within the communication.
  • Content of a communication may include information not only that is sent and received, but also other information such as information that a user does not necessarily send or receive.
  • Content of communications may be classified in various ways, such as by a timing of the content, items the content is related to, users the content is related to, key words or other data within fields of the communication (e.g., to field, from field, subject, body, etc.), among other ways of classifying the content.
  • the content may be analyzed based on information associated with the content and/or variable(s), including the location of the content and/or variables as it relates to the communication (e.g., a field, sender, recipient, title, or body location within the communication).
  • a data model may correspond to a data set that is useable in an artificial neural network and that has been trained by one or more data sets that describe communications between two or more entities.
  • the data model may be stored as a model data file or any other data structure that is useable within a neural network or an artificial intelligence system.
  • aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Any combination of one or more computer readable medium(s) may be utilized.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements.
  • These wired or wireless links can also be secure links and may be capable of communicating encrypted information.
  • Transmission media used as links can be any suitable carrier for electrical signals, including coaxial cables, copper wire and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as discrete element circuit, a programmable logic device or gate array such as PLD, PLA, FPGA, PAL, special purpose computer, any comparable means, or the like.
  • any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure.
  • Illustrative hardware that can be used for the disclosed embodiments, configurations, and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices.
  • alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
  • Examples of the processors as described herein may include, but are not limited to, at least one of Qualcomm® Snapdragon® 800 and 801, Qualcomm® Snapdragon® 610 and 615 with 4G LTE Integration and 64-bit computing, Apple® A7 processor with 64-bit architecture, Apple® M7 motion coprocessors, Samsung® Exynos® series, the Intel® Core™ family of processors, the Intel® Xeon® family of processors, the Intel® Atom™ family of processors, the Intel Itanium® family of processors, Intel® Core® i5-4670K and i7-4770K 22 nm Haswell, Intel® Core® i5-3570K 22 nm Ivy Bridge, the AMD® FX™ family of processors, AMD® FX-4300, FX-6300, and FX-8350 32 nm Vishera, AMD® Kaveri processors, Texas Instruments® Jacinto C6000™ automotive infotainment processors, Texas Instruments® OMAP™ automotive-grade mobile processors, ARM® Cor
  • the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms.
  • the disclosed system may be implemented partially or fully in hardware using standard logic circuits or very large-scale integration (VLSI) design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
  • the disclosed methods may be partially implemented in software that can be stored on a storage medium, executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like.
  • the systems and methods of this disclosure can be implemented as a program embedded on a personal computer such as an applet, JAVA® or Common Gateway Interface (CGI) script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like.
  • the system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
  • Methods described or claimed herein can be performed with traditional executable instruction sets that are finite and operate on a fixed set of inputs to provide one or more defined outputs.
  • methods described or claimed herein can be performed using artificial intelligence, machine learning, neural networks, or the like.
  • a system is contemplated to include finite instruction sets and/or artificial intelligence-based models/neural networks to perform some or all of the steps described herein.
  • the present disclosure in various aspects, embodiments, and/or configurations, includes components, methods, processes, systems, and/or apparatus substantially as depicted and described herein, including various aspects, embodiments, configurations, subcombinations, and/or subsets thereof.
  • the present disclosure in various aspects, embodiments, and/or configurations, includes providing devices and processes in the absence of items not depicted and/or described herein or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease and/or reducing cost of implementation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A device may analyze the content of a communication input via the device and suggest a recipient for the communication based on the analysis. A machine learning network associated with the device may analyze the content and generate a probability score and a confidence score indicating an association between the communication and the recipient. Based on the probability score or confidence score provided by the machine learning network, the device may verify whether the content of the communication matches profile information associated with the recipient. In some cases, the device may refrain from transmitting the message to the recipient or output a notification suggesting an additional recipient(s) for the message. The device may output a notification to modify content of the communication, modify recipients for the communication, select a different messaging window for the communication, or select a different application for the communication.

Description

    FIELD
  • The present disclosure relates to communication methods and specifically to identifying an additional suggested recipient(s) based on content of a message and prompting a user regarding the identified additional suggested recipient(s).
  • BACKGROUND
  • Some devices may support various communication modalities, for example, using email applications. In some cases, email applications may support any combination of text or multimedia communications between a user and one or more recipients.
  • BRIEF SUMMARY
  • According to example aspects of the present disclosure, techniques are described for analyzing the content of a communication (e.g., a message, a social media posting, etc.) and identifying a recipient(s) associated with the communication based on the analysis (e.g., based on context information associated with the communication). In some aspects, the contents of multiple threads are analyzed to identify the recipients. In some aspects, a machine learning network may analyze the content and generate an output (e.g., a probability score and a confidence score) indicative of an association between the communication and the suggested recipient(s). In an example, based on the output provided by the machine learning network, the device may prompt a user/sender regarding the identified recipient(s). The probability/confidence score may be determined based on a comparison of the content of the communication to profile information associated with the recipient(s) and/or sender.
  • In an example case, the device/system may determine that a specific user is mentioned in the message but not included as a recipient (e.g., the email greeting is “Dear John and Jane,” but Jane is not included as a recipient). In such a case, the system/device may output a notification to alert the sender of the identified/suggested recipient. In another example, a user profile associated with the sender and/or other recipients may be analyzed to identify additional recipients (e.g., if the email is sent to multiple recipients on a team, other members of the team may be identified as suggested recipients). In yet another example, information from other applications may be used to identify the additional recipients (e.g., if the communication is related to a meeting, information from a calendar application may be used to determine other meeting attendees as additional recipients to the communication). In another example, an intended recipient may have a rule set (e.g., the intended recipient is out of office and indicates another person to receive/respond to communications while they are out). The device may display or otherwise notify the sender of the additional recipients prior to transmitting the communication. If the user/sender selects to add a suggested recipient, the selected recipient is added to the communication and the communication is transmitted.
  • In another example, the device may determine additional users to tag in a social media posting (e.g., analyze an image and identify untagged users). In some examples, the device may output a notification suggesting to un-tag the recipient and/or tag a different recipient before completing the social media posting. In some other aspects, the device may output a notification suggesting to modify the content (e.g., text, videos, images) before completing the social media posting.
  • In some aspects, the device may alert the sender that a communication modality (e.g., a messaging window, a messaging application, a social media application) associated with a recipient may be preferred, and the device may output a notification suggesting or indicating that the communication be sent via the preferred communication modality for the recipient.
  • In one aspect, a method is provided that includes: identifying a message that is input via a user interface of a device; providing at least a portion of the message to a machine learning network; receiving from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and outputting a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
  • Examples may include one of the following features, or any combination thereof.
  • In an example, the portion of the message comprises a body of the message, processing at least the portion of the message comprises extracting contextual information associated with content included in the body of the message, and the additional suggested recipient is selected based at least in part on the contextual information.
  • In some examples, the contextual information comprises a mention of a name of the additional suggested recipient.
  • In an example, the portion of the message comprises an attachment to the message, processing at least the portion of the message comprises analyzing the attachment and determining other users associated with the attachment, and the additional suggested recipient is selected based at least in part on the other users associated with the attachment.
  • In an example, the portion of the message comprises one or more recipients of the message, and wherein processing at least the portion of the message comprises determining the additional suggested recipient based on the one or more recipients of the message.
  • In some examples, a recipient sets a rule for messages to be sent to the additional suggested recipient.
  • In an example, the portion of the message comprises a user profile associated with the device, and wherein processing at least the portion of the message comprises determining the additional suggested recipient based on the user profile associated with the device.
  • In some aspects, the method may include receiving via the user interface of the device, a selection of the additional suggested recipient; adding the additional suggested recipient to the message; and transmitting the message.
  • In some aspects, the method may include receiving from the machine learning network confidence and/or probability information corresponding to the additional suggested recipient.
  • In an example, the probability information, the confidence information, or both is determined based at least in part on a comparison of at least the portion of the message to profile information of one or more contacts associated with a user profile associated with the device.
  • In some aspects, the message includes text, multimedia data, or both.
  • In another aspect, a device is provided that includes: a processor; memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: identify a message that is input via a user interface of a device; provide at least a portion of the message to a machine learning network; receive from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and output a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
  • In another aspect, a non-transitory, computer-readable medium comprising a set of instructions stored therein which, when executed by a processor, causes the processor to: identify a message that is input via a user interface of a device; provide at least a portion of the message to a machine learning network; receive from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and output a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
  • These and other needs are addressed by the various embodiments and configurations of the present disclosure. The present disclosure can provide a number of advantages depending on the particular configuration. These and other advantages will be apparent from the disclosure contained herein.
  • As used herein, the phrases “at least one,” “one or more,” “or,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
  • The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising,” “including,” and “having” can be used interchangeably.
  • The term “automatic” and variations thereof, as used herein, refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.”
  • The term “computer-readable medium” as used herein refers to any tangible storage and/or transmission medium that participate in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, a solid-state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
  • A “computer readable signal” medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • The terms “determine,” “analyze,” “process,” “execute,” “manage,” “calculate,” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique. The term “manage” includes any one or more of the terms determine, recommend, configure, organize, show (e.g., display), hide, update, revise, edit, and delete, and includes other means of implementing actions (including variations thereof).
  • It shall be understood that the term “means” as used herein shall be given its broadest possible interpretation in accordance with 35 U.S.C., Section 112, Paragraph 6. Accordingly, a claim incorporating the term “means” shall cover all structures, materials, or acts set forth herein, and all of the equivalents thereof. Further, the structures, materials or acts and the equivalents thereof shall include all those described in the summary of the disclosure, brief description of the drawings, detailed description, abstract, and claims themselves.
  • The preceding is a simplified summary to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various embodiments. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below. Also, while the disclosure is presented in terms of exemplary embodiments, it should be appreciated that individual aspects of the disclosure can be separately claimed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of a system that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIG. 2 illustrates an example of a system that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIG. 3 illustrates an example of a messaging window that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIGS. 4A-B illustrate another example of a window that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • FIGS. 5A-B illustrate an example of a process flow that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • DETAILED DESCRIPTION
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various embodiments disclosed herein. It will be apparent, however, to one skilled in the art that various embodiments of the present disclosure may be practiced without some of these specific details. The ensuing description provides illustrative embodiments only, and is not intended to limit the scope or applicability of the disclosure. Furthermore, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claims. Rather, the ensuing description of the illustrative embodiments will provide those skilled in the art with an enabling description for implementing an illustrative embodiment. It should, however, be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific details set forth herein.
  • Some electronic devices (e.g., smartphones, personal computing devices) may support various communication modalities, for example, using email applications, messaging applications, or social networking applications on the devices. In some examples, the email applications, messaging applications, and social networking applications may support text and multimedia communications between a user and one or more recipients. In some cases, a user may send a message, and may be unaware that not all intended recipients were included. For example, the user may inadvertently omit a recipient (e.g., intended recipient not included), may be unaware of additional recipients (e.g., other department/team members), or may not know contact information for all recipients (e.g., all attendees of a meeting). The present disclosure may be implemented on the client and/or server side. For example, data on the client side (e.g., data on the user's device and/or data associated with the user's account) may be processed to make determinations of additional and/or alternative recipients. Additionally, or alternatively, server-side data (e.g., organizational data) may be processed to make determinations of additional and/or alternative recipients.
  • Additionally, when creating and sending a message via a messaging application (e.g., text messaging, instant messaging, e-mail) on a device, the user may accidentally enter recipient information which differs from that of an intended recipient (e.g., message is meant for a client named Christine, and the sender frequently emails a colleague Christine).
  • Some techniques may support analysis of syntactic content of a message (e.g., an email communication) to identify syntactical errors associated with the message. Some other techniques may support analysis of syntactic content of a message to determine whether an attachment is to accompany the message prior to transmission. However, such techniques do not support detection and analysis of content in a message, such as contextual information with respect to a recipient.
  • According to example aspects of the present disclosure, techniques are described for analyzing the content of a communication (e.g., a message, a social media message, a social media posting) input or created via a communication device and identifying/suggesting additional recipients for the communication based on the analysis.
  • The techniques may include building a profile of contacts and the types of messages exchanged with the contacts (e.g., based on communication histories). The techniques may include correlating recipients (e.g., emails from a particular sender on a particular topic are often sent to a particular combination of recipients; if the current message is being sent to four of the five recipients, the sender may be prompted regarding adding the fifth recipient). In another example, a message from a sender in the accounting department may be addressed to "All Staff"; however, if not all employees in the accounting department are listed as recipients, the other unlisted accounting employees may be identified and suggested to the sender prior to transmitting the message.
  • Additionally, a user profile may list one or more preferred communication types (e.g., email, text, audio call, etc.). The sender may be prompted regarding a recipient's preferred communication type.
  • Accordingly, once a communication history is established and analyzed, if a device detects that a user is attempting to send or forward a message which does not match a determined profile or classification associated with a contact, the device may alert the user based on the confidence level, thresholds (e.g., detection thresholds, probability thresholds, confidence thresholds), and/or user configuration.
  • In an example with respect to messaging applications (e.g., conference chats, group instant messaging, email), the techniques described herein may be applied to messages prior to the messages being sent. For example, a device may analyze a message (e.g., using a text analyzer), prior to sending the message, to determine or identify additional recipients (e.g., based on a probability score and/or a confidence score), and the device may output a notification to alert the user of the same.
  • In some aspects, the system and/or device may refer to a detection threshold when detecting the possibility of an additional recipient. In some examples, the device may refer to a detection threshold when outputting the notification to alert the user. In some aspects, based on configured settings, the device may perform different actions based on the confidence level of the detection.
  • The system and/or device may refer to various criteria when determining additional recipients for a message. In an example, the device may check the content of a message (e.g., message implies that the message is being sent to a single user or multiple users). For example, a message including text such as “I'm working with John and Alice” may imply the message is meant for two recipients, or “Hi Matt” may imply that the message is being sent to a single user. If the device analyzes the text and detects a discrepancy (e.g., multiple recipients mentioned, only one recipient added), the device may notify the user of the discrepancy. In some examples, the device may detect whether a message contains a targeted name. Based on detecting the targeted name, the device may notify the user that the message is for a specific recipient associated with the targeted name, but the specific recipient is not included.
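  • As an illustrative, non-limiting sketch of the discrepancy check described above (the contact names, helper functions, and regular expression below are hypothetical and not part of any claimed implementation), a device might compare capitalized names detected in the message body against the first names on the recipient list:

    import re

    def find_mentioned_names(body: str, known_contacts: set[str]) -> set[str]:
        """Return known contact first names that appear in the message body."""
        # Keep capitalized word tokens and intersect them with the known contact names.
        tokens = set(re.findall(r"\b[A-Z][a-z]+\b", body))
        return {name for name in tokens if name in known_contacts}

    def detect_recipient_discrepancy(body: str, recipients: list[str],
                                     known_contacts: set[str]) -> set[str]:
        """Return mentioned contacts that are not on the recipient list."""
        recipient_first_names = {r.split()[0] for r in recipients}
        mentioned = find_mentioned_names(body, known_contacts)
        return mentioned - recipient_first_names

    # Example: "John" and "Alice" are mentioned in the body, but only Matt is addressed.
    contacts = {"John", "Alice", "Matt", "Christine"}
    missing = detect_recipient_discrepancy(
        "Hi Matt, I'm working with John and Alice on the quarterly report.",
        ["Matt Jones"],
        contacts,
    )
    print(missing)  # e.g., {'Alice', 'John'} -> candidates for a "did you mean to add...?" notification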
  • In another example, the system and/or device may use an Artificial Intelligence (AI) algorithm to group multiple threads (e.g., multiple separate emails) for a given user. The AI algorithm(s) may also analyze the contents of the multiple threads and, based on the analysis of the multiple threads, suggest/prompt additional and/or alternative recipients to be added when composing a new thread (e.g., a new email). For example, a user may have multiple email chains related to the same issue, where each chain has a different set of participants. If the user desires to draft a summary/conclusion email for the issue, the system and/or device can process the different sets of participants to identify all unique participants and prompt the user regarding any missing participants/recipients, as sketched below. The implementation may be done using client-side data (e.g., provide suggestions based on emails in the user's inbox). Additionally, or alternatively, the implementation may be performed on the server side (e.g., provide the suggestions based on organization-level data, considering all emails company-wide).
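  • A minimal sketch of this thread-grouping step follows, assuming (hypothetically) that related threads have already been identified and that participant addresses are available per thread; the data structures are illustrative only:

    from dataclasses import dataclass

    @dataclass
    class EmailThread:
        subject: str
        participants: set[str]  # addresses seen across the thread

    def suggest_missing_recipients(related_threads: list[EmailThread],
                                   draft_recipients: set[str],
                                   sender: str) -> set[str]:
        """Union all participants from related threads, minus the sender and
        anyone already on the draft, to produce candidate additions."""
        seen: set[str] = set()
        for thread in related_threads:
            seen |= thread.participants
        return seen - draft_recipients - {sender}

    # Example: three chains about the same issue, each with a different participant set.
    threads = [
        EmailThread("Issue 42 - triage", {"a@example.com", "b@example.com"}),
        EmailThread("Issue 42 - root cause", {"b@example.com", "c@example.com"}),
        EmailThread("Issue 42 - fix plan", {"c@example.com", "d@example.com"}),
    ]
    print(suggest_missing_recipients(threads, {"a@example.com"}, "me@example.com"))
    # e.g., {'b@example.com', 'c@example.com', 'd@example.com'}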
  • The system and/or device may support forward learning based on training data. For example, the device may support forward learning based on past actions of the user (e.g., in response to past notifications provided by the device). In some aspects, the device may improve the accuracy associated with message analysis and/or notifications provided by the device. In some aspects, the device may support a combination of artificial intelligence (e.g., machine learning) and natural language processing for determining contextual information associated with a message. In some cases, the device may support the detection of content and/or contextual information from multimedia data (e.g., video, images, audio, etc.) included with a message to be sent. In some aspects, the device may support data models that are tunable according to user parameters (e.g., user requirements) associated with different users, different enterprise applications, etc.
  • Various additional details of embodiments of the present disclosure will be described below with reference to the figures. While the flowcharts will be discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configuration, and aspects.
  • FIG. 1 illustrates an example of a system 100 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • The system 100 may include communication devices 105 (e.g., communication device 105-a through communication device 105-h), a server 110, a database 115, and a communication network 120. The communication network 120 may facilitate machine-to-machine communications between any of the communication device 105 (or multiple communication devices 105), the server 110, or one or more databases (e.g., database 115). The communication network 120 may include any type of known communication medium or collection of communication media and may use any type of protocols to transport messages between endpoints. The communication network 120 may include wired communications technologies, wireless communications technologies, or any combination thereof.
  • A communication device 105 may transmit or receive data packets to one or more other devices (e.g., another communication device 105, the server 110) via the communication network 120 and/or via the server 110. For example, the communication device 105-a may communicate (e.g., exchange data packets) with the communication device 105-b via the communications network 120. In another example, the communication device 105-a may communicate with another device (e.g., communication device 105-e, database 115) via the communications network 120 and the server 110.
  • Non-limiting examples of the communication devices 105 may include, for example, personal computing devices or mobile computing devices (e.g., laptop computers, mobile phones, smart phones, smart devices, wearable devices, tablets, etc.). In some examples, the communication devices 105 may be operable by or carried by a human user. In some aspects, the communication devices 105 may perform one or more operations autonomously or in combination with an input by the user.
  • The Internet is an example of the communication network 120 that constitutes an Internet Protocol (IP) network consisting of multiple computers, computing networks, and other communication devices located in multiple locations, and components in the communication network 120 (e.g., computers, computing networks, communication devices) may be connected through one or more telephone systems and other means. Other examples of the communication network 120 may include, without limitation, a standard Plain Old Telephone System (POTS), an Integrated Services Digital Network (ISDN), the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a wireless LAN (WLAN), a Session Initiation Protocol (SIP) network, a Voice over Internet Protocol (VoIP) network, a cellular network, and any other type of packet-switched or circuit-switched network known in the art. In some cases, the communication network 120 may include any combination of networks or network types. In some aspects, the communication network 120 may include any combination of communication mediums such as coaxial cable, copper cable/wire, fiber-optic cable, or antennas for communicating data (e.g., transmitting/receiving data).
  • According to example aspects of the present disclosure, techniques are described for analyzing the content of a communication (e.g., a message, a social media posting) input or created via a communication device 105 and identifying additional recipients for the communication based on the analysis. In some aspects, a machine learning network (e.g., a machine learning network included in the communication device 105, a machine learning network included in the server 110) may analyze the content and generate an output indicative of an association between the communication and the additional recipient(s). The output may include, for example, a probability score and/or a confidence score. In an example, based on the output provided by the machine learning network, the communication device 105 may compare the content of the communication with profile information associated with the recipient and/or sender.
  • For example, the communication device 105 may output a notification to alert a sender of identified additional recipients, and the communication device 105 may refrain from completing the communication until further input (e.g., accept/reject) regarding the additional recipients is received from the sender. For example, the communication device 105 may determine that an additional recipient for a message is suggested, and the communication device 105 may refrain from transmitting the message to the recipient. In some aspects, the communication device 105 may output a notification suggesting an additional recipient for the message.
  • According to example aspects of the present disclosure, the communication device 105 may identify a message that is input via a user interface of the communication device 105. In some aspects, the message may include text, multimedia data, or both. The communication device 105 may provide at least a portion of the message to a machine learning network (e.g., a machine learning network included in or implemented by the communication device 105 or the server 110). In an example, the communication device 105 may receive an output from the machine learning network in response to the machine learning network processing at least the portion of the message. In some examples, the output may include one or more additional suggested recipient(s) for the message, probability information corresponding to the message and the one or more additional suggested recipient(s) for the message, confidence information associated with the probability information, or a combination thereof. In some aspects, the probability information may include a set of probability scores respectively corresponding to the one or more additional suggested recipient(s), and the confidence information may include a set of confidence scores respectively corresponding to the set of probability scores.
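  • As one hedged illustration of how such an output might be represented in software (the field names and values below are assumptions for exposition, not a required format), the suggested recipients, probability scores, and confidence scores could be bundled as follows:

    from dataclasses import dataclass

    @dataclass
    class RecipientSuggestion:
        address: str        # suggested recipient
        probability: float  # probability score for the message/recipient association (0.00-1.00)
        confidence: float   # confidence score associated with the probability score (0.00-1.00)

    @dataclass
    class NetworkOutput:
        message_id: str
        suggestions: list[RecipientSuggestion]

    # Example output for a draft that mentions two colleagues not yet addressed.
    output = NetworkOutput(
        message_id="draft-001",
        suggestions=[
            RecipientSuggestion("pragati@example.com", probability=0.91, confidence=0.84),
            RecipientSuggestion("navanath@example.com", probability=0.62, confidence=0.55),
        ],
    )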
  • The one or more additional suggested recipient(s) may include, for example, one or more intended recipients associated with the message, one or more additional recipients different from the one or more intended recipients, or both. In some examples, the communication device 105 may suggest the one or more intended recipients based on the output received from the machine learning network. In some aspects, the communication device 105 may select the one or more additional recipients based on the output received from the machine learning network.
  • In an example, the communication device 105 may output, via the user interface of the communication device 105, a notification associated with the message based on the output received from the machine learning network. In some aspects, outputting the notification may be based on a comparison of the set of probability scores to a probability threshold, a comparison of the set of confidence scores to a confidence threshold, or both. In some cases, the communication device 105 may output the notification, transmit the message, or both based on the suggestion of the one or more additional suggested recipient(s). In some cases, the communication device 105 may output the notification, transmit the message, or both based on the selection of one or more additional recipients.
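  • The threshold comparison described above could, for example, be sketched as follows; the default threshold values and the returned action fields are illustrative assumptions:

    def decide_action(suggestions, prob_threshold=0.75, conf_threshold=0.60):
        """suggestions: iterable of (address, probability_score, confidence_score) tuples.
        Returns which suggestions warrant a notification and whether to hold the message."""
        notify = [addr for addr, p, c in suggestions
                  if p >= prob_threshold and c >= conf_threshold]
        # Hold (refrain from transmitting) only when at least one suggestion clears both thresholds.
        return {"notify": notify, "hold_message": bool(notify)}

    print(decide_action([("pragati@example.com", 0.91, 0.84),
                         ("navanath@example.com", 0.62, 0.55)]))
    # {'notify': ['pragati@example.com'], 'hold_message': True}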
  • In some examples, the probability information, the confidence information, or both may be determined (e.g., by the machine learning network) based on a comparison of at least the portion of the message to profile information of the recipients/sender. In some aspects, the communication device 105 may assign category information to the recipients/sender, where at least the portion of the message is compared (e.g., by the machine learning network) to the profile information of the recipient/sender based on the category information.
  • In some examples, the communication device 105 may extract contextual information associated with content included in at least the portion of the message, where the additional suggested recipient is selected based at least in part on the contextual information.
  • In some examples, the communication device 105 (or server 110) may train the machine learning network based on a communication history, and the machine learning network may provide the output based on the training. In some examples, the communication device 105 (or server 110) may train the machine learning network based on a set of actions associated with a user profile, and the machine learning network may provide the output based on the training. In some aspects, the set of actions may be associated with one or more previous messages provided by the communication device 105 (or the server 110) to the machine learning network, one or more previous outputs received by the communication device 105 (or server 110) from the machine learning network, one or more previously output notifications by the communication device 105 (or another communication device 105), one or more previously transmitted messages by the communication device 105 (or another communication device 105), or a combination thereof.
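  • A simplified, hypothetical sketch of assembling such training data from a communication history and from the sender's past accept/reject responses to notifications is shown below; the record structures are assumptions for illustration only:

    from dataclasses import dataclass

    @dataclass
    class TrainingExample:
        message_text: str
        candidate_recipient: str
        label: int  # 1 if the candidate was (or was later accepted as) a recipient, else 0

    def examples_from_history(history):
        """history: iterable of (message_text, recipients) pairs from past communications."""
        examples = []
        for text, recipients in history:
            for r in recipients:
                examples.append(TrainingExample(text, r, 1))
        return examples

    def examples_from_feedback(feedback):
        """feedback: iterable of (message_text, suggested_recipient, accepted) tuples
        collected from past notifications output to the sender."""
        return [TrainingExample(text, r, int(accepted)) for text, r, accepted in feedback]

    history = [("End of year accounting meeting notes attached.", ["bill@example.com"])]
    feedback = [("End of year accounting meeting notes attached.", "laura@example.com", True),
                ("Lunch on Friday?", "legal@example.com", False)]
    training_data = examples_from_history(history) + examples_from_feedback(feedback)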
  • Example aspects of components and functionalities of the communication devices 105, the server 110, the database 115, and the communication network 120 are provided with reference to FIG. 2 .
  • While the illustrative aspects, embodiments, and/or configurations illustrated herein show the various components of the system 100 collocated, certain components of the system 100 can be located remotely, at distant portions of a distributed network, such as a Local Area Network (LAN) and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system 100 can be combined into one or more devices or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the following description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system.
  • FIG. 2 illustrates an example of a system 200 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure. In some examples, the system 200 may be implemented by aspects of the system 100 described with reference to FIG. 1. The system 200 may include communication devices 205 (e.g., communication device 205-a through communication device 205-e), a server 210, a database 215, a communication network 220, and a content engine 270. The communication devices 205, the server 210, the database 215, and the communications network 220 may be implemented, for example, by aspects of the communication devices 105, the server 110, the database 115, and the communications network 120 described with reference to FIG. 1.
  • The communication network 220 may facilitate machine-to-machine communications between any of the communication device 205 (or multiple communication devices 205), the server 210, one or more databases (e.g., database 215), and the content engine 270. The communication network 220 may include any type of known communication medium or collection of communication media and may use any type of protocols to transport messages between endpoints. In some aspects, the communication network 220 may include wired communications technologies, wireless communications technologies, or any combination thereof. In an example, the communication devices 205, the server 210, and the content engine 270 may support communications over the communications network 220 between multiple entities (e.g., users, such as a sender and a recipient). In some cases, the system 200 may include any number of communication devices 205, and each of the communication devices 205 may be associated with a respective entity.
  • In various aspects, settings of any of the communication devices 205, the server 210, or the content engine 270 may be configured and modified by any user and/or administrator of the system 200. Settings may include the thresholds described herein, as well as settings related to how content is managed. Settings may be configured to be personalized for one or more communication devices 205, users of the communication devices 205, and/or other groups of entities, and may be referred to herein as profile settings, user settings, or organization settings. In some aspects, rules and settings may be used in addition to, or instead of, the thresholds described herein. In some examples, the rules and/or settings may be personalized by a user and/or administrator for any variable, threshold, user (user profile), communication device 205, entity, or groups thereof.
  • A communication device 205 (e.g., communication device 205-a) may include a processor 230, a network interface 235, a memory 240, and a user interface 245. In some examples, components of the communication device 205 (e.g., processor 230, network interface 235, memory 240, user interface 245) may communicate over a system bus (e.g., control busses, address busses, data busses) included in the communication device 205. In some cases, the communication device 205 may be referred to as a computing resource.
  • In some cases, the communication device 205 (e.g., communication device 205-a) may transmit or receive packets to one or more other devices (e.g., another communication device 205, the server 210, the database 215, the content engine 270) via the communication network 220, using the network interface 235. The network interface 235 may include, for example, any combination of network interface cards (NICs), network ports, associated drivers, or the like. Communications between components (e.g., processor 230, memory 240) of the communication device 205 and one or more other devices (e.g., another communication device 205, the database 215, the content engine 270) connected to the communication network 220 may, for example, flow through the network interface 235.
  • The processor 230 may correspond to one or many computer processing devices. For example, the processor 230 may include a silicon chip, such as a Field Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), any other type of Integrated Circuit (IC) chip, a collection of IC chips, or the like. In some aspects, the processor 230 may include a microprocessor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a plurality of microprocessors configured to execute the instruction sets stored in a corresponding memory (e.g., memory 240 of the communication device 205). For example, upon executing the instruction sets stored in memory 240, the processor 230 may enable or perform one or more functions of the communication device 205.
  • The processor 230 may utilize data stored in the memory 240 as a neural network. The neural network may include a machine learning architecture. In some aspects, the neural network may be or include an artificial neural network (ANN). In some other aspects, the neural network may be or include any machine learning network such as, for example, a deep learning network, a convolutional neural network, or the like. Some elements stored in memory 240 may be described as or referred to as instructions or instruction sets, and some functions of the communication device 205 may be implemented using machine learning techniques.
  • The memory 240 may include one or multiple computer memory devices. The memory 240 may include, for example, Random Access Memory (RAM) devices, Read Only Memory (ROM) devices, flash memory devices, magnetic disk storage media, optical storage media, solid-state storage devices, core memory, buffer memory devices, combinations thereof, and the like. The memory 240, in some examples, may correspond to a computer-readable storage media. In some aspects, the memory 240 may be internal or external to the communication device 205.
  • The memory 240 may be configured to store instruction sets, neural networks, and other data structures (e.g., depicted herein) in addition to temporarily storing data for the processor 230 to execute various types of routines or functions. For example, the memory 240 may be configured to store program instructions (instruction sets) that are executable by the processor 230 and provide functionality of a content engine 241 described herein. The memory 240 may also be configured to store data or information that is useable or capable of being called by the instructions stored in memory 240. One example of data that may be stored in memory 240 for use by components thereof is a data model(s) 242 (also referred to herein as a neural network model) and/or training data 243 (also referred to herein as training data and feedback).
  • The content engine 241 may include a single or multiple engines. The communication device 205 (e.g., the content engine 241) may utilize one or more data models 242 for recognizing and processing information obtained from other communication devices 205, the server 210, and the database 215. In some aspects, the communication device 205 (e.g., the content engine 241) may update one or more data models 242 based on learned information included in the training data 243. In some aspects, the content engine 241 and the data models 242 may support forward learning based on the training data 243. The content engine 241 may have access to and use one or more data models 242. For example, the data model(s) 242 may be built and updated by the content engine 241 based on the training data 243. The data model(s) 242 may be provided in any number of formats or forms. Non-limiting examples of the data model(s) 242 include Decision Trees, Support Vector Machines (SVMs), Nearest Neighbor, and/or Bayesian classifiers.
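  • As a non-limiting sketch of one such data model 242, a Bayesian text classifier (illustrated here with the scikit-learn library, which is an implementation choice and not required by the disclosure) could be trained on a hypothetical communication history to rank candidate recipients for a draft message:

    # A minimal sketch of a possible data model 242: a Bayesian text classifier that
    # maps message text to likely recipients, trained on a hypothetical communication history.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    past_messages = [
        "Q4 accounting close checklist attached",
        "Budget variance report for review",
        "Sprint planning notes and backlog",
        "Deployment schedule for the new build",
    ]
    past_recipients = ["bill@example.com", "bill@example.com",
                       "navanath@example.com", "navanath@example.com"]

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(past_messages, past_recipients)

    draft = ["Please find attached the meeting notes for the end of year accounting meeting"]
    probs = model.predict_proba(draft)[0]
    for recipient, p in zip(model.classes_, probs):
        print(recipient, round(p, 2))  # probability scores used to rank suggested recipients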
  • In some examples, the training data 243 may include communication inputs such as communication information associated with the communication device 205. In some cases, the communication information may include communication histories between the communication device 205 and other communication devices 205 (e.g., any of communication device 205-b through communication device 205-e), real-time communication data between the communication device 205 and other communication devices 205, data transmissions between the communication device 205 and the server 210, etc. In some aspects, communication histories between a communication device 205 (e.g., communication device 205-a) and another communication device 205 (e.g., communication device 205-b) may include communication between a user of the communication device 205 and a user of the other communication device 205.
  • The content engine 241 may be configured to analyze content, which may be any type of information, including information that is historical or in real-time. The content engine 241 may be configured to receive information from other communication devices 205 and/or the server 210. The content engine 241 may be configured to analyze profile information associated with one or more users, groups, etc. The profile information can include any type of information, including audio and visual information. The content engine 241 may build any number of user profiles using automatic processing, using artificial intelligence and/or using input from one or more users associated with the communication devices 205. The content engine 241 may use automatic processing, artificial intelligence, and/or inputs from one or more users of the communication devices 205 to determine, manage, and/or combine information relevant to a user profile.
  • The content engine 241 may determine user profile information based on a user's interactions with information. The content engine 241 may update (e.g., continuously, periodically) user profiles based on new information that is relevant to the user profiles. The content engine 241 may receive new information from any communication device 205, the server 210, the database 215, etc. Profile information may be organized and classified in various manners. In some aspects, the organization and classification of profile information may be determined by automatic processing, by artificial intelligence and/or by one or more users of the communication devices 205.
  • The content engine 241 may create, select, and execute appropriate processing decisions. Processing decisions may include content management, content extraction, and content analysis associated with communications (e.g., messages) created by a communication device 205. Illustrative examples of content management include rearranging content, modifying content, changing formatting of content, and showing and/or hiding content. Processing decisions, including content analysis, may be handled automatically by the content engine 241, with or without human input.
  • The content engine 241 may store, in the memory 240 (e.g., in a database included in the memory 240), historical information (e.g., communication histories) between the communication device 205 and other devices (e.g., other communication devices 205, the server 210, etc.). Data within the database of the memory 240 may be updated, revised, edited, or deleted by the content engine 241. In some aspects, the content engine 241 may support continuous, periodic, and/or batch fetching of content (e.g., content referenced within a communication, content related to a user, content related to a recipient, etc.) and content aggregation.
  • Information stored in the database included in the memory 240 may include, but is not limited to, communication information, user information, historical analysis information, processing information including historical processing information, key words, configurations, settings, variables, and properties. Further, information regarding the relevance of different types of content, as well as how to determine relevance (e.g., rules, settings, source(s) of content, rankings of content, location of key words/phrases, repetition of key words/phrases, definitions of relevance, etc.) or contextual information associated with content may be stored in the database included in the memory 240.
  • The communication device 205 may render a presentation (e.g., visually, audibly, using haptic feedback, etc.) of an application 244 (e.g., a browser application 244-a, a messaging application 244-b, a social media application 244-c). In an example, the communication device 205 may render the presentation via the user interface 245. The user interface 245 may include, for example, a display (e.g., a touchscreen display), an audio output device (e.g., a speaker, a headphone connector), or any combination thereof. In some aspects, the applications 244 may be stored on the memory 240. In some cases, the applications 244 may include cloud-based applications or server-based applications (e.g., supported and/or hosted by the server 210). Settings of the user interface 245 may be partially or entirely customizable and may be managed by one or more users, by automatic processing, and/or by artificial intelligence.
  • In an example, any of the applications 244 (e.g., browser application 244-a, messaging application 244-b, social media application 244-c) may be configured to receive data in an electronic format and present content of data via the user interface 245. For example, the applications 244 may receive data from another communication device 205, the server 210, or the content engine 270 via the communications network 220, and the communication device 205 may display the content via the user interface 245.
  • The database 215 may include a relational database, a centralized database, a distributed database, an operational database, a hierarchical database, a network database, an object-oriented database, a graph database, a NoSQL (non-relational) database, etc. In some aspects, the database 215 may store and provide access to, for example, any of the stored data described herein.
  • The server 210 may include a processor 250, a network interface 255, database interface instructions 260, and a memory 265. In some examples, components of the server 210 (e.g., processor 250, network interface 255, database interface 260, memory 265) may communicate over a system bus (e.g., control busses, address busses, data busses) included in the server 210. The processor 250, network interface 255, and memory 265 of the server 210 may include examples of aspects of the processor 230, network interface 235, and memory 240 of the communication device 205 described herein.
  • For example, the processor 250 may be configured to execute instruction sets stored in memory 265, upon which the processor 250 may enable or perform one or more functions of the server 210. In some aspects, the processor 250 may utilize data stored in the memory 265 as a neural network. In some examples, the server 210 may transmit or receive packets to one or more other devices (e.g., a communication device 205, the database 215, another server 210, the content engine 270) via the communication network 220, using the network interface 255. Communications between components (e.g., processor 250, memory 265) of the server 210 and one or more other devices (e.g., a communication device 205, the database 215, the content engine 270) connected to the communication network 220 may, for example, flow through the network interface 255.
  • In some examples, the database interface instructions 260 (also referred to herein as database interface 260), when executed by the processor 250, may enable the server 210 to send data to and receive data from the database 215. For example, the database interface instructions 260, when executed by the processor 250, may enable the server 210 to generate database queries, provide one or more interfaces for system administrators to define database queries, transmit database queries to one or more databases (e.g., database 215), receive responses to database queries, access data associated with the database queries, and format responses received from the databases for processing by other components of the server 210.
  • The memory 265 may be configured to store instruction sets, neural networks, and other data structures (e.g., depicted herein) in addition to temporarily storing data for the processor 250 to execute various types of routines or functions. For example, the memory 265 may be configured to store program instructions (instruction sets) that are executable by the processor 250 and provide functionality of the content engine 266 described herein. One example of data that may be stored in memory 265 for use by components thereof is a data model(s) 267 (also referred to herein as a neural network model) and/or training data 268. The data model(s) 267 and the training data 268 may include examples of aspects of the data model(s) 242 and the training data 243 described with reference to the communication device 205. For example, the server 210 (e.g., the content engine 266) may utilize one or more data models 267 for recognizing and processing information obtained from communication devices 205, another server 210, and the database 215. In some aspects, the server 210 (e.g., the content engine 266) may update one or more data models 267 based on learned information included in the training data 268.
  • In some aspects, components of the content engine 266 may be provided in a separate engine (e.g., the content engine 270) in communication with the server 210. In an example, the content engine 270 may include a processor 275, a network interface 280, and a memory 285. In some examples, components of the content engine 270 (e.g., processor 275, network interface 280, memory 285) may communicate over a system bus (e.g., control busses, address busses, data busses) included in the content engine 270. The processor 275, network interface 280, memory 285 may include examples of aspects of the processor 250, network interface 255, and memory 265 of the server 210 described herein.
  • For example, the memory 285 may be configured to store program instructions (instruction sets) that are executable by the processor 275 and provide functionality of the content engine 270 described herein. One example of data that may be stored in memory 285 for use by components thereof is a data model(s) 286 (also referred to herein as a neural network model) and/or training data 287. The data model(s) 286 and the training data 287 may include examples of aspects of the data model(s) 267 and the training data 268 described with reference to the content engine 266.
  • According to example aspects of the present disclosure, a communication device 205 (e.g., communication device 205-a) may support one or more operations or procedures associated with analyzing the content of a communication (e.g., a message, a social media posting) via the communication device 205. For example, the communication device 205 may identify a communication input by a user (also referred to herein as a sender) of the communication device 205, via the messaging application 244-b or the social media application 244-c. In some cases, the messaging application 244-b may include a text messaging application, an instant messaging application, an electronic messaging board application, an email application, or the like. The communication may be a message including text and/or multimedia data (e.g., video, images, audio, etc.). The message may include a recipient (also referred to herein as an intended recipient or an addressee) or a set of recipients associated with the message.
  • The communication may be an email, a message, or a social media posting including text and/or multimedia data (e.g., video, images, audio, etc.). In an example of a message communicated via a messaging function of the social media application 244-c, the message may include a recipient (also referred to herein as an intended recipient or an addressee) or a set of recipients associated with the communication. In an example of a social media posting via the social media application 244-c, the social media posting may include a contact "tagged" in the social media posting. Examples of the aspects described herein may be applied to messages communicated via the messaging application 244-b, social media posts created and published via the social media application 244-c, or the like.
  • In an example, the recipient(s) may be a recipient selected by the communication device 205 (e.g., based on a user input, autonomously by the communication device 205) from a contact list associated with a user profile of the user. In some aspects, the contact list may be stored in the memory 240 of the communication device 205, the memory 265 of the server 210, the database 215, the content engine 270, and/or any cloud-based storage. The contact list may include profile information (e.g., image-based, text-based, etc.) associated with contacts included in the contact list.
  • The communication device 205 may provide at least a portion of the message (e.g., body, recipient list, sender information, etc.) to a machine learning network (e.g., a machine learning network implemented by the content engine 241, the content engine 266, or the content engine 270) to identify/suggest an additional recipient, for example, prior to sending the message. In an example, the communication device 205 may provide the message (or message portion) to the machine learning network based on receiving a user input (e.g., via the user interface 245) for sending the message. Alternatively, or additionally, the communication device 205 may provide portions of the message in real-time, for example, as the message is input to the communication device 205, when adding an attachment, when adding recipients, etc.
  • In an example, the machine learning network may generate an output based on an analysis of the content of the message (or message portion). For example, the machine learning network may analyze the content of the message to determine whether the message is for a specific recipient (e.g., recipient's name in message). The machine learning network may extract and analyze contextual information associated with the message (e.g., keywords in the subject or body of the message). In an example, the machine learning network may determine the contextual information from text (e.g., language used, text strings, etc.) included with the message. In another example, the machine learning network may determine the message is part of an email thread, and compile the suggested recipients based on the recipients included in other messages of the email thread. In yet another example, the machine learning network may use information (e.g., an organizational chart) to determine additional recipients to suggest.
  • For example, the machine learning network may determine the contextual information using natural language processing techniques such as linguistic context recognition, word recognition, etc. In some aspects, the machine learning network may support text scanning, searching, parsing, and/or analysis (including performing semantic and/or syntactic analysis). In an example, the machine learning network may support recognition of keywords (e.g., names) associated with the contextual information. In some examples, the machine learning network may divide and/or subdivide text content into portions based on paragraphs, sentences, sections, types of content, etc.
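  • For illustration only, a simple keyword-spotting routine (standing in for the richer natural language processing described above; the vocabulary and regular expressions are assumptions) might divide the text into sentence-sized portions and recognize configured keywords:

    import re

    def split_sentences(text: str) -> list[str]:
        """Divide text content into sentence-sized portions for analysis."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def extract_keywords(text: str, vocabulary: set[str]) -> set[str]:
        """Recognize configured keywords (e.g., contact names, project terms) in the text."""
        words = {w.lower() for w in re.findall(r"[A-Za-z']+", text)}
        return {kw for kw in vocabulary if kw.lower() in words}

    body = "Hello all. Please find attached the meeting notes. Loop in Pragati if needed!"
    for sentence in split_sentences(body):
        print(sentence, "->", extract_keywords(sentence, {"Pragati", "meeting", "invoice"}))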
  • In some aspects, the machine learning network may determine the contextual information from multimedia data (e.g., video, images, audio, etc.) included with the message (or portion of the message). For example, the machine learning network may determine the contextual information by applying techniques such as object detection and recognition (e.g., facial detection and recognition, landmark detection and recognition, etc.) to video images or still images. In some examples, the machine learning network may apply techniques such as audio-based context recognition to audio data associated with the video images. In some other cases, the machine learning network may apply audio-based context recognition to audio data (e.g., an audio file, a voice recording) included with the message. In another example, metadata from an attachment may be used to determine contextual information (e.g., suggest including the authors of an attached document as recipients).
  • In an example of analyzing the content, the machine learning network (e.g., implemented by the content engine 241, the content engine 266, or the content engine 270) may compare the contextual information to profile information associated with the recipient. In an example, Sally may be on vacation and may have designated a co-worker to handle communications while she is out. If the message includes Sally as a recipient, the co-worker may be suggested as a recipient. In another example, if a message includes contextual information indicating that the sender is following up on a previous message, the recipient's supervisor and/or colleague may be suggested as a recipient. In yet another example, if the message is sent before/after a meeting, the suggested recipients may be anyone who indicated they are attending/attended the meeting (e.g., sending a meeting agenda or other meeting content prior to a meeting, or sending meeting notes after a meeting has concluded). A rule-based sketch of such profile comparisons is provided below.
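  • In the following hypothetical sketch, the profile fields (e.g., an out-of-office delegate) and context keys are assumptions used only to illustrate the profile comparison; they are not part of the claimed system:

    def rule_based_suggestions(recipients, profiles, context):
        """profiles: dict mapping address -> profile info (e.g., out-of-office delegate).
        context: dict of extracted contextual hints (e.g., related meeting attendees)."""
        suggestions = set()
        # If an addressed recipient is out of office, suggest their designated delegate.
        for addr in recipients:
            delegate = profiles.get(addr, {}).get("out_of_office_delegate")
            if delegate:
                suggestions.add(delegate)
        # If the message relates to a meeting, suggest attendees not already addressed.
        suggestions |= set(context.get("meeting_attendees", [])) - set(recipients)
        return suggestions

    profiles = {"sally@example.com": {"out_of_office_delegate": "coworker@example.com"}}
    context = {"meeting_attendees": ["sally@example.com", "bill@example.com"]}
    print(rule_based_suggestions(["sally@example.com"], profiles, context))
    # e.g., {'coworker@example.com', 'bill@example.com'}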
  • In some cases, based on the comparison, the machine learning network may determine a strength of a relationship, also referred to herein as a relevancy or a contextual relationship, between the content of the message (e.g., contextual information, subject matter) and the suggested/intended recipient of the message. In some aspects, the strength of the relationship may be indicated by probability information (e.g., a probability score) and/or confidence information (e.g., a confidence score) described herein.
  • In an example case, the communication device 205 may determine that the content matches the profile information associated with the suggested recipient (e.g., based on a probability score and/or confidence score), and the communication device 205 may perform one or more operations based on the probability score and/or confidence score. For example, the communication device 205 may output a notification to alert the sender that the suggested recipient should be included, and the communication device 205 may refrain from completing the communication. In some aspects, the communication device 205 may output a notification suggesting an alternative recipient for the message.
  • In an example, for a message, the machine learning network may generate an output inclusive of probability information and confidence information corresponding to the message and the suggested recipient. The probability information, for example, may include a probability score (e.g., from 0.00 to 1.00) of whether the content of the message matches profile information associated with the suggested recipient. The confidence information, for example, may include a confidence score (e.g., from 0.00 to 1.00) corresponding to the probability score.
  • For example, the recipient may be associated with (e.g., classified with) a specific department (e.g., accounting, legal, engineering, human resources, etc.) of a company, and the machine learning network may determine the probability score based on a correlation between the content (e.g., context information) of the message and a classification associated with the sender/recipient. In some examples, the machine learning network may output a relatively low probability score for cases in which the machine learning network identifies a relatively low correlation (e.g., below a threshold) between the content of the message and the classification associated with the suggested recipient. In some other examples, the machine learning network may output a relatively high probability score for cases in which the machine learning network identifies a relatively high correlation (e.g., above a threshold) between the content of the message and the classification associated with the suggested recipient.
  • In another example, for a message addressed to multiple recipients, the machine learning network may generate an output inclusive of probability information and confidence information corresponding to the message and any or all of the recipients. For example, the machine learning network may compare the contextual information to respective profile information associated with the recipients. The probability information may include respective probability scores corresponding to any or all of the recipients, determined, for example, based on a correlation between the content (e.g., context information) of the message and the respective profile information. In some aspects, the confidence information may include respective confidence scores corresponding to the probability scores. In some cases, the machine learning network may determine the probability scores based on an analysis of communication histories (e.g., email threads, past communications, email subject, messages within a specified timeframe, etc.) between the sender of the message and suggested recipients.
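  • One simple, hypothetical way to derive such per-recipient probability scores from communication histories is a keyword-overlap (Jaccard-style) correlation, sketched below; a deployed machine learning network would typically use a richer model:

    def history_based_scores(draft_keywords, histories):
        """histories: dict mapping candidate recipient -> set of keywords seen in past
        messages exchanged with that recipient. Returns a simple overlap-based
        probability score per candidate as one stand-in for the correlation above."""
        scores = {}
        for recipient, past_keywords in histories.items():
            union = draft_keywords | past_keywords
            scores[recipient] = len(draft_keywords & past_keywords) / len(union) if union else 0.0
        return scores

    draft = {"accounting", "meeting", "notes", "year-end"}
    histories = {
        "bill@example.com": {"accounting", "meeting", "budget", "year-end"},
        "dev@example.com": {"deployment", "sprint", "backlog"},
    }
    print(history_based_scores(draft, histories))
    # {'bill@example.com': 0.6, 'dev@example.com': 0.0}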
  • Based on the output (e.g., analysis, probability scores, confidence scores) generated by the machine learning network with respect to a message, the communication device 205 may output a notification associated with the message. For example, the communication device 205 may suggest an additional recipient associated with the message. The communication device 205 may render or output one or more notifications for suggesting the additional recipient and/or suggesting alternative recipients. In some embodiments, suggesting recipients may include suggesting that an indicated recipient be removed from the message.
  • For example, the communication device 205 may output a notification suggesting an additional recipient (e.g., an intended recipient, an alternative recipient, an additional recipient, etc.) based on a probability score corresponding to the recipient being equal to or greater than a probability score threshold. In an example, the communication device 205 may output the notification suggesting the additional recipient based on a confidence score (corresponding to the probability score) being equal to or greater than a confidence score threshold.
  • The thresholds described herein may be set based on various criteria, and multiple thresholds may be set or configured for different types of content. In some cases, the thresholds may be pre-set (e.g., previously configured, pre-determined, or determined before the techniques described herein are applied to new content), and may change based on any criteria. For example, the rules and thresholds may be set by a user, automatically, and/or by artificial intelligence before a communication to which content is to be applied is received. In addition, thresholds may be set automatically and changed automatically (for example, based on other thresholds), by artificial intelligence, and/or they may be defined by a user.
  • In some aspects, each threshold may be associated with a respective weighting factor. For example, when determining contextual information associated with a communication (e.g., a message, a social media posting), the machine learning network may assign a probability score based on a first threshold associated with detecting or identifying a set of key words according to a first criterion (e.g., names included in the message text compared to names/addresses included in a recipient list). The machine learning network may modify (e.g., increase) the probability score based on a second threshold associated with detecting or identifying a set of key words according to a second criterion (e.g., a correlation between the content of the communication and a classification associated with the recipient). The machine learning network may modify (e.g., increase) the probability score based on a third threshold associated with detecting or identifying a set of key words according to a third criterion (e.g., whether the content of the message matches content associated with a communication history (e.g., an email thread, a subject line, etc.)). The machine learning network may support any combination of criteria, thresholds, and/or weighting factors, and the machine learning network may apply the same in any order when calculating and/or assigning a probability score.
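  • The weighted combination of criteria described above might be sketched as follows; the particular thresholds, weights, and clamping to 1.0 are illustrative assumptions:

    def weighted_probability(criteria):
        """criteria: list of (raw_score, threshold, weight) tuples, one per criterion
        (e.g., name match, classification correlation, history match). A criterion only
        contributes its weight when its raw score meets its threshold; the result is
        clamped to 1.0 so it can be compared against a single probability threshold."""
        score = 0.0
        for raw, threshold, weight in criteria:
            if raw >= threshold:
                score += weight
        return min(score, 1.0)

    p = weighted_probability([
        (0.90, 0.50, 0.4),  # first criterion: names in the body vs. the recipient list
        (0.70, 0.60, 0.3),  # second criterion: correlation with recipient classification
        (0.20, 0.60, 0.3),  # third criterion: match with the communication history
    ])
    print(p)  # 0.7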
  • In some aspects, the notification may include any combination of visual, audio, and/or haptic notifications via the user interface 245 (e.g., a display) and/or an external device (e.g., wireless headphones, a wearable device such as smart glasses or a smartwatch, etc.) coupled to the communication device 205. In some examples, when outputting notifications associated with a communication, the communication device 205 may apply visual differences (e.g., different highlighting, different colors, different font types, etc.) to content within the communication and/or filter the content within the communication to show less content (e.g., content having a relevance above a threshold). In an example, the notification may include a pop-up with the suggested recipients listed and a method (e.g., radio button) of selecting from the list of recipients.
  • The communication device 205 may transmit the message based on a user input. For example, based on the output notification provided by the communication device 205, the user input may select one or more of the suggested recipients for a message, content of the message, a messaging window for sending the message, and an application for sending the message. The communication device 205 may transmit the message based on the user input. In some cases, the communication device 205 may autonomously transmit the message based on learned information included in training data (e.g., training data 243, training data 268, training data 287). The training data may include, for example, previous actions by a user with respect to probability scores and confidence scores calculated by the machine learning network. The training data may include, for example, previous actions by a user with respect to notifications previously output by the communication device 205.
  • In some aspects, the probability score thresholds and/or confidence score thresholds described herein may be configurable via the communication device 205 (e.g., based on user settings). In some other aspects, the probability score threshold and/or confidence score threshold may be autonomously configured by the communication device 205 or a machine learning network described herein. For example, the communication device 205 or machine learning network may autonomously configure the probability score threshold and/or confidence score threshold based on learned information included in training data (e.g., training data 243, training data 268, training data 287). The training data may include, for example, previous decisions by a user (e.g., selecting/rejecting suggested recipients) with respect to predictions by the machine learning network. In some aspects, the training data may include data communications created, transmitted, or received by the communication device 205 over the communication network 220.
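  • A hypothetical sketch of autonomously tuning such a threshold from the sender's past selections/rejections is shown below; the step size, bounds, and target acceptance rate are assumptions:

    def tune_threshold(current_threshold, past_decisions, step=0.05,
                       low=0.5, high=0.95, target_accept_rate=0.7):
        """past_decisions: list of booleans, True where the sender accepted a suggestion.
        If most suggestions are rejected, raise the threshold (fewer, higher-confidence
        prompts); if most are accepted, lower it slightly to surface more suggestions."""
        if not past_decisions:
            return current_threshold
        accept_rate = sum(past_decisions) / len(past_decisions)
        if accept_rate < target_accept_rate:
            current_threshold += step
        else:
            current_threshold -= step
        return min(max(current_threshold, low), high)

    print(tune_threshold(0.75, [True, False, False, False]))  # 0.8
    print(tune_threshold(0.75, [True, True, True, False]))    # 0.7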
  • In another example supportive of operations or procedures associated with analyzing the content of a communication via the communication device 205, the communication device 205 may identify a communication input by a user (also referred to herein as a sender or author) of the communication device 205. The communication may include text, multimedia data (e.g., video, images, audio, etc.), or both. The communication may include a recipient (or addressee) in the case of a message. The communication may include a "tagged" contact in the case of a social media posting.
  • FIG. 3 illustrates an example of a messaging window 300 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure. FIG. 3 illustrates an example communication device 305, and the messaging window 300 may be an example of a messaging window described herein. The communication device 305 may include examples of aspects of a communication device 105 or a communication device 205 described with reference to FIGS. 1 and 2. The communication device 305 may display a user interface 330 (e.g., an on-screen keyboard) for inputting messages.
  • The messaging window 300 may include a header 310 indicating a contact/recipient (e.g., Bill Smith) associated with the messaging window 300. The header 310 may include the name 311 (e.g., Bill Smith) of the contact and classification information 313 (e.g., accounting) associated with the contact. The messaging window 300 may include a communication history associated with a user of the communication device 305 and the contact. For example, the communication history may include messages previously sent by the user and messages previously received from the contact.
  • In an example, the user may have input a message 321 stating, "Hello all, Please find attached the meeting notes for the end of year accounting meeting held yesterday." The user may have included an attachment 322 (e.g., "Meeting Notes") with the message 321. A machine learning network (e.g., integrated in or implemented by the communication device 305, a server, or a content engine described herein) may identify, from the content of the message 321 (e.g., contextual information associated with the text "Hello all" and "accounting meeting"), that additional recipients may be intended. In an example, the machine learning network may identify additional suggested recipients based on the attachment 322 (e.g., based on contextual information associated with the title of the attachment 322, content included in the attachment 322, metadata associated with the attachment 322, etc.).
  • In some aspects, the communication device 305 may output the notification (!) 312, the notification 323, and/or apply visual differences described herein in real-time as the user inputs the message 321, includes the attachment 322, and/or adds recipients. In some other aspects, the communication device 305 may output the notification 312, the notification 323, and/or apply visual differences based on detecting a user input selecting the “send” button 324. In some examples, based on a user input selecting the notification 323, the communication device 305 may display additional suggested recipients. In some other examples, based on the user input selecting the notification 323, the communication device 305 may automatically add the suggested recipients.
  • FIGS. 4A-B illustrate another example of a window 400 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure. FIGS. 4A-B illustrate an example communication device 405, and the window 400 may be an example of a messaging window described herein. The communication device 405 may include examples of aspects of a communication device 105, a communication device 205, or a communication device 305 described with reference to FIGS. 1 through 3. The communication device 405 may display a user interface 430 (e.g., an on-screen keyboard) for inputting messages.
  • The window 400 may include an input area 410 for recipients of the message 421, an input area 414 for a subject of the message 421, and an attachment 422 (e.g., a document, an image, a video, etc.). In an example, the communication device 405 may determine that a "recipient" mentioned in the message (e.g., Pragati) is not included in the input area 410 for recipients. Additionally, or alternatively, based on an analysis of the subject 414, other email messages may be identified, and the recipients of the other messages may be analyzed to identify additional recipients. Additionally, or alternatively, based on an analysis of the attachment 422 (including metadata), additional recipients may be identified. In some examples, based on the determination, the communication device 405 may refrain from completing (e.g., sending) the message 421.
  • For example, a machine learning network (e.g., integrated in or implemented by the communication device 405, a server, or a content engine described herein) may identify (e.g., via the notification 423) that the sender may want to include Pragati Dhumal, Navanath Navaskar, Jane M, and/or Laura Smith as recipients of the message 421. In an example, the communication device 405 may output the notification 423 suggesting the additional recipients before transmitting/sending the message 421. FIG. 4B illustrates the user's selection of Pragati Dhumal as an additional recipient of the message 421.
  • FIGS. 5A-B illustrate an example of a process flow 500 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure. In some examples, the process flow 500 may implement aspects of a communication device 105, a server 110, a communication device 205, a server 210, a content engine 270, or a communication device 305 described with reference to FIGS. 1-4 .
  • In the following description of the process flow 500, the operations may be performed in a different order than the order shown, or at different times. Certain operations may also be left out of the process flow 500, or other operations may be added to the process flow 500. It is to be understood that while the communication device 105 is described as performing a number of the operations of the process flow 500, any device (e.g., another communication device 105, or a combination of a communication device 105 and a server 110) may perform the operations shown.
  • According to example aspects of the present disclosure, at 502, a device (e.g., communication device 105, 205, 305, 405, etc.) may identify a message (or other communication) that is input via a user interface of the device. In some aspects, the message may include text, multimedia data, or both.
  • At 504, the device may provide at least a portion of the message (e.g., content, subject, recipient information, etc.) to a machine learning network (e.g., a machine learning network included in or implemented by the communication device 105 or the server 110). The portion provided to the machine learning network may comprise a portion of multiple emails and/or a portion of multiple email threads.
  • At 506, the device may receive an output from the machine learning network in response to the machine learning network processing at least the portion of the message. In some examples, the output may include one or more additional suggested recipients for the message, probability information corresponding to the message and the suggested recipients, confidence information associated with the probability information, or a combination thereof. In some examples, the communication device 105 may extract contextual information associated with content included in at least the portion of the message, where at least the portion of the message is compared (e.g., by the machine learning network) to other information and the suggested recipients are identified based on the contextual information.
  • At 508, the device may output, via the user interface of the device, a notification (e.g., notification 323/423) associated with the message based on the output received from the machine learning network.
  • At 510, if no additional recipients are selected (No), the process flow 500 proceeds to transmit the message (at 514). If additional recipients are selected (Yes), the selected recipient(s) are added to the message at 512, and the message is transmitted at 514.
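  • By way of a non-limiting illustration, the overall flow of FIG. 5A may be summarized by the following Python sketch; the suggest_recipients(), notify_user(), and transmit() stand-ins, the confidence threshold, and the example addresses are hypothetical placeholders rather than an actual messaging or machine learning interface.

# Minimal sketch of the flow illustrated by FIG. 5A, under stated assumptions about
# the interfaces: the helpers below are hypothetical stand-ins, not a real API.
from dataclasses import dataclass, field

@dataclass
class Draft:
    body: str
    recipients: list = field(default_factory=list)

def suggest_recipients(portion: str) -> list[tuple[str, float]]:
    # Stand-in for steps 504/506: the machine learning network returns suggested
    # recipients together with confidence/probability information.
    return [("pragati@example.com", 0.91), ("laura.smith@example.com", 0.42)]

def notify_user(suggestions: list[tuple[str, float]]) -> list[str]:
    # Stand-in for steps 508/510: in practice the user selects from the notification;
    # here, suggestions above a confidence threshold stand in for the user's selection.
    return [addr for addr, confidence in suggestions if confidence >= 0.8]

def transmit(draft: Draft) -> None:
    # Stand-in for step 514.
    print(f"Sending to {draft.recipients}: {draft.body[:40]}...")

def send_with_suggestions(draft: Draft) -> None:
    suggestions = suggest_recipients(draft.body)   # steps 504/506
    selected = notify_user(suggestions)            # steps 508/510
    draft.recipients.extend(selected)              # step 512
    transmit(draft)                                # step 514

send_with_suggestions(Draft(body="Hello all, please find attached the meeting notes.",
                            recipients=["bill.smith@example.com"]))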
  • FIG. 5B illustrates a more detailed process flow of step 504 that supports identifying and alerting a user when sending a message to an additional suggested recipient in accordance with aspects of the present disclosure.
  • In the following description of the process flow 504, the operations may be performed in a different order than the order shown, or at different times. Certain operations may also be left out of the process flow 504, or other operations may be added to the process flow 504. It is to be understood that while the communication device 105 is described as performing a number of the operations of the process flow 504, any device (e.g., another communication device 105, or a combination of a communication device 105 and a server 110) may perform the operations shown.
  • According to example aspects of the present disclosure, at 504 a, a portion of the message is analyzed (e.g., subject, metadata, keywords, etc.). The portion provided to the machine learning network may comprise a portion of multiple emails and/or a portion of multiple email threads. At 504 b, contextual information is determined based on the analysis. For example, other messages may be identified based on the subject. In another example, suggested recipients may be identified based on names, keywords, etc. included in the message. In yet another example, a rule may be identified based on a recipient. In another example, the contextual information may be based at least in part on an attachment included in the message. Additionally, contextual information may be determined based on a user profile associated with a recipient/sender of the message.
  • At 504 c, additional recipients are identified based on the contextual information. For example, if the email includes a name of an individual, the address for that individual may be suggested to the sender. In another example, if the attachment was produced by multiple authors (e.g., as indicated by metadata), the authors of the attachment may be the suggested recipients. In yet another example, if the contextual information indicates a topic, the suggested recipients may be identified based on their relation/relevance to the identified topic. At 504 d, confidence/probability information associated with the suggested recipients is identified/provided.
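  • By way of a non-limiting example, the sub-steps of FIG. 5B may be approximated by the following Python sketch, which reduces the contextual information to attachment author metadata and a topic keyword; the score_candidates() helper and the confidence values shown are hypothetical and illustrative only.

# Minimal sketch of the sub-steps of FIG. 5B under stated assumptions: contextual
# information is limited to attachment author metadata and a topic found in the
# subject, and the confidence values are illustrative placeholders.
def score_candidates(subject: str, attachment_authors: list[str],
                     topic_contacts: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Identify candidate recipients (504c) and attach rough confidence values (504d)."""
    candidates: dict[str, float] = {}
    # 504b/504c: authors recorded in the attachment metadata become candidates.
    for author in attachment_authors:
        candidates[author] = max(candidates.get(author, 0.0), 0.9)
    # 504b/504c: contacts previously associated with a topic found in the subject.
    for topic, contacts in topic_contacts.items():
        if topic.lower() in subject.lower():
            for contact in contacts:
                candidates[contact] = max(candidates.get(contact, 0.0), 0.6)
    # 504d: return candidates ranked by confidence.
    return sorted(candidates.items(), key=lambda item: item[1], reverse=True)

print(score_candidates(
    subject="End of year accounting meeting notes",
    attachment_authors=["jane.m@example.com"],
    topic_contacts={"accounting": ["laura.smith@example.com", "jane.m@example.com"]},
))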
  • In some aspects, the methods, devices, and systems described herein may be applied to email clients, instant messaging clients, other types of messaging clients, and social media platforms.
  • Artificial intelligence, including the utilization of machine learning, can be used in various aspects disclosed herein. For example, as discussed, various levels of content can be analyzed and classified, and this can be configurable by artificial intelligence and/or by user preference. Artificial intelligence, as used herein, includes machine learning. Artificial intelligence and/or user preference can configure information that is used to analyze content and identify relationships between content. For example, artificial intelligence and/or user preference can determine which information is compared to content in order to analyze the content. Artificial intelligence and/or user preference may also be used to configure user profile(s), which may be used to determine relevance to a user (e.g., the user associated with the user profile), a communication (e.g., message, social media posting), a messaging window, and/or an application by comparing the content to information contained within the user profile.
  • Some embodiments utilize natural language processing in the methods and systems disclosed herein. For example, machine learning models can be trained to learn what information is relevant to a user or different users. Machine learning models can have access to resources on a network and access to additional tools to perform the systems and methods disclosed herein.
  • In certain embodiments, data mining and machine learning tools and techniques may discover information used to determine content and/or contextual information. For example, data mining and machine learning tools and techniques may discover user information, user preferences, relevance of content, levels of relevance, contextual information, key word(s) and/or phrases, thresholds, comparison(s) to threshold(s), configuration(s) of content, organization of content, and configuration(s) of a user interface, among other embodiments, to provide improved content analysis and/or contextual analysis.
  • Machine learning may manage one or more types of information (e.g., user profile information, communication information, etc.), types of content (including portions of content within communication information), comparisons of information, levels of relevance, and organization (including formatting of content). Machine learning may utilize all different types of information. The information can include various types of visual information, documents (including markup languages like Hypertext Markup Language (HTML)), and audio information (e.g., using natural language processing). Inputs and outputs, as described herein, may be managed by machine learning. Machine learning may determine variables associated with information, and compare information (including variables within the information) with thresholds to determine relevance. The relevance may determine content organization based on rules associated with the relevance. Machine learning may manage properties (including formatting, hiding and/or reorganizing) and configurations of the organized content. Any of the information and/or outputs may be modified and act as feedback to the system.
  • In some aspects, methods and systems disclosed herein use information to analyze content and contextual information. Relevance, and variations thereof, can refer to a determination of how closely a piece of information relates to another piece of information. For example, the information may be information that is related to a user, and includes, but is not limited to, user profile information, association with groups, and information from interactions of the user. Information that is related to a user may be obtained from groups or entities associated with the user. Artificial intelligence and/or user preferences may determine the information that is relevant to a user. For example, artificial intelligence and/or user preference may determine what information is relevant (e.g., appropriate, inappropriate, related) to one or more users, communications, messaging windows, messaging applications, social media applications, etc. Artificial intelligence may use the information to build a profile associated with one or more users, where the profile defines what information (or variables within the information) is relevant to the user, previous communications with the user, current communications generated by the user, and other properties of the relevance (e.g., type of relevance (importance, priority, rank, etc.), level of relevance, etc.). Artificial intelligence and/or user preference may use the profile to compare information with content of a communication to determine relevance between the content (including any differences in relevance of various portions of the content), contacts indicated in the communication, contacts of a user generating the communication, messaging windows associated with the communication, messaging applications associated with the communication, and/or social media applications associated with the communication. Relevance of information is used in various embodiments herein to determine rules for how the content may be extracted and/or analyzed for contextual information.
  • Relevance, as described herein, includes determining how closely related information is. The determination may be by comparison, and may use any criteria, such as thresholds and/or one or more key words. The information may be content within a communication, where a communication can contain one or more pieces (also referred to herein as portions, sections, or items) of content, and may include content that ranges from being not related at all (e.g., not relevant) to content that is directly related (e.g., highly relevant) to contacts indicated in the communication, contacts of a user generating the communication, messaging windows associated with the communication, messaging applications associated with the communication, and/or social media applications associated with the communication. Portions of content may also be referred to herein as simply “content.” There may be two levels of relevance (e.g., relevant and not relevant), or any number of levels of relevance. Any information may be compared to determine relevance. In various embodiments, user information may be used to determine relevance. The user information may include profile information, such as one or more user profiles, that is compared to content. Various types of relevance may be determined for one or more pieces of information, including relevance, priority, importance, precedence, weight, rank, etc. For example, relevance may determine how related content is to a user, contacts indicated in the communication, contacts of a user generating the communication, messaging windows associated with the communication, messaging applications associated with the communication, and/or social media applications associated with the communication, while priority ranks how important the content is. In various embodiments, priority may be based on how relevant a piece of content is. Any combination of one or more types of relevance may be configured and used. For example, content may have high relevance with a low priority and a low rank, content may have high relevance with a high importance and high rank, content may have low relevance with a high importance and a low priority, etc. Content may have any combinations of types of relevance.
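  • By way of a non-limiting illustration, one simple way to compare content against criteria such as key words and thresholds, and to map the result onto discrete relevance levels, is sketched below in Python; the relevance_score() and relevance_level() helpers and the threshold values are hypothetical, and the number of levels and the criteria may be configured as described above.

# Minimal sketch, under assumed thresholds: a keyword-overlap relevance score mapped
# onto discrete relevance levels. The helpers and thresholds are hypothetical.
def relevance_score(content: str, profile_keywords: set[str]) -> float:
    """Fraction of profile keywords that appear in the content (a crude comparison)."""
    words = {w.strip(".,!?").lower() for w in content.split()}
    if not profile_keywords:
        return 0.0
    return len(words & {k.lower() for k in profile_keywords}) / len(profile_keywords)

def relevance_level(score: float) -> str:
    """Map a score onto one of several configurable relevance levels."""
    if score >= 0.6:
        return "highly relevant"
    if score >= 0.2:
        return "relevant"
    return "not relevant"

score = relevance_score("Attached are the accounting meeting notes and budget figures.",
                        {"accounting", "budget", "audit"})
print(score, relevance_level(score))   # e.g. 0.66..., 'highly relevant'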
  • As used herein, information includes communications, messages, electronic records, content, visual content including text and images, audio content, rich media, data, and/or data structures. Communications include emails, messages, documents, files, etc. Information includes content, and communications include content (also referred to herein as data). Content may be one type or multiple types (e.g., text, images, hyperlinks, etc.) and there may be multiple pieces of content within a communication or a piece of information, regardless of content type. Content may contain one or more variables. Communications may be data that is stored on a storage/memory device, and/or transmitted from one communication device to another communication device via a communication network.
  • Communications include messages. A message may be transmitted via one or more data packets. The formatting of such data packets may be based on the messaging protocol used for transmitting the electronic records over the communication network.
  • Information related to communications may be referred to herein as communication information, communication data, or communication content, and variations of these terms. Communication information can include any type of data related to communications of a user and/or entity (e.g., information being sent to user(s), received from user(s), created by user(s), accessed by user(s), viewed by user(s), etc.). Content of a communication can include information associated with the communication as well as information contained within the communication. Content of a communication may include information not only that is sent and received, but also other information such as information that a user does not necessarily send or receive. Content of communications may be classified in various ways, such as by a timing of the content, items the content is related to, users the content is related to, key words or other data within fields of the communication (e.g., to field, from field, subject, body, etc.), among other ways of classifying the content. The content may be analyzed based on information associated with the content and/or variable(s), including the location of the content and/or variables as it relates to the communication (e.g., a field, sender, recipient, title, or body location within the communication).
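  • By way of a non-limiting example, the following Python sketch shows one possible data shape in which content of a communication is exposed by the field in which it appears (sender, recipients, subject, body, attachments) so that analysis can account for the location of the content; the Communication class and its fields are hypothetical.

# Minimal sketch, under an assumed data shape: a communication whose content can be
# analyzed per field (sender, recipients, subject, body, attachments).
from dataclasses import dataclass, field

@dataclass
class Communication:
    sender: str
    recipients: list[str]
    subject: str
    body: str
    attachments: list[dict] = field(default_factory=list)

    def content_by_field(self) -> dict[str, str]:
        """Expose each field separately so analysis can weight content by its location."""
        return {
            "sender": self.sender,
            "recipients": ", ".join(self.recipients),
            "subject": self.subject,
            "body": self.body,
            "attachments": ", ".join(a.get("title", "") for a in self.attachments),
        }

msg = Communication(sender="tanvi@example.com",
                    recipients=["bill.smith@example.com"],
                    subject="Accounting meeting notes",
                    body="Hello all, please find the notes attached.",
                    attachments=[{"title": "Meeting Notes"}])
print(msg.content_by_field())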
  • As used herein, a data model may correspond to a data set that is useable in an artificial neural network and that has been trained by one or more data sets that describe communications between two or more entities. The data model may be stored as a model data file or any other data structure that is useable within a neural network or an artificial intelligence system.
  • Aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Also, while the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosure.
  • A number of variations and modifications of the disclosure can be used. It would be possible to provide for some features of the disclosure without providing others.
  • In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as discrete element circuit, a programmable logic device or gate array such as PLD, PLA, FPGA, PAL, special purpose computer, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Illustrative hardware that can be used for the disclosed embodiments, configurations, and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
  • Examples of the processors as described herein may include, but are not limited to, at least one of Qualcomm® Snapdragon® 800 and 801, Qualcomm® Snapdragon® 610 and 615 with 4G LTE Integration and 64-bit computing, Apple® A7 processor with 64-bit architecture, Apple® M7 motion coprocessors, Samsung® Exynos® series, the Intel® Core™ family of processors, the Intel® Xeon® family of processors, the Intel® Atom™ family of processors, the Intel Itanium® family of processors, Intel® Core® i5-4670K and i7-4770K 22 nm Haswell, Intel® Core® i5-3570K 22 nm Ivy Bridge, the AMD® FX™ family of processors, AMD® FX-4300, FX-6300, and FX-8350 32 nm Vishera, AMD® Kaveri processors, Texas Instruments® Jacinto C6000™ automotive infotainment processors, Texas Instruments® OMAP™ automotive-grade mobile processors, ARM® Cortex™-M processors, ARM® Cortex-A and ARM926EJ-S™ processors, other industry-equivalent processors, and may perform computational functions using any known or future-developed standard, instruction set, libraries, and/or architecture.
  • Any of the steps, functions, and operations discussed herein can be performed continuously and automatically.
  • In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or very large-scale integration (VLSI) design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
  • In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium, executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer such as an applet, JAVA® or Common Gateway Interface (CGI) script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
  • Methods described or claimed herein can be performed with traditional executable instruction sets that are finite and operate on a fixed set of inputs to provide one or more defined outputs. Alternatively, or additionally, methods described or claimed herein can be performed using artificial intelligence, machine learning, neural networks, or the like. In other words, a system is contemplated to include finite instruction sets and/or artificial intelligence-based models/neural networks to perform some or all of the steps described herein.
  • The present disclosure, in various aspects, embodiments, and/or configurations, includes components, methods, processes, systems, and/or apparatus substantially as depicted and described herein, including various aspects, embodiments, configurations, subcombinations, and/or subsets thereof. Those of skill in the art will understand how to make and use the disclosed aspects, embodiments, and/or configurations after understanding the present disclosure. The present disclosure, in various aspects, embodiments, and/or configurations, includes providing devices and processes in the absence of items not depicted and/or described herein or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease and/or reducing cost of implementation.
  • The foregoing discussion has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
  • Moreover, though the description has included description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims (20)

What is claimed is:
1. A method comprising:
identifying a message that is input via a user interface of a device;
providing at least a portion of the message to a machine learning network;
receiving from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and
outputting a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
2. The method of claim 1, wherein the portion of the message comprises a body of the message, wherein processing at least the portion of the message comprises extracting contextual information associated with content included in the body of the message, and wherein the additional suggested recipient is selected based at least in part on the contextual information.
3. The method of claim 2, wherein the contextual information comprises a mention of a name of the additional suggested recipient.
4. The method of claim 1, wherein the portion of the message comprises an attachment to the message, wherein processing at least the portion of the message comprises analyzing the attachment and determining other users associated with the attachment, and wherein the additional suggested recipient is selected based at least in part on the other users associated with the attachment.
5. The method of claim 1, wherein the portion of the message comprises one or more recipients of the message, and wherein processing at least the portion of the message comprises determining the additional suggested recipient based on the one or more recipients of the message.
6. The method of claim 5, wherein one of the one or more recipients of the message has set a rule for messages to be sent to the additional suggested recipient.
7. The method of claim 1, wherein the portion of the message comprises a user profile associated with the device, and wherein processing at least the portion of the message comprises determining the additional suggested recipient based on the user profile associated with the device.
8. The method of claim 1, further comprising:
receiving, via the user interface of the device, a selection of the additional suggested recipient;
adding the additional suggested recipient to the message; and
transmitting the message.
9. The method of claim 1, further comprising:
receiving from the machine learning network confidence and/or probability information corresponding to the additional suggested recipient.
10. The method of claim 9, wherein the probability information, the confidence information, or both is determined based at least in part on a comparison of at least the portion of the message to profile information of one or more contacts associated with a user profile associated with the device.
11. A device comprising:
a processor;
memory in electronic communication with the processor; and
instructions stored in the memory, the instructions being executable by the processor to:
identify a message that is input via a user interface of a device;
provide at least a portion of the message to a machine learning network;
receive from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and
output a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
12. The device of claim 11, wherein the portion of the message comprises a body of the message, wherein processing at least the portion of the message comprises extracting contextual information associated with content included in the message, and wherein the additional suggested recipient is selected based at least in part on the contextual information.
13. The device of claim 12, wherein the contextual information comprises a mention of a name of the additional suggested recipient.
14. The device of claim 11, wherein the portion of the message comprises an attachment to the message, wherein processing at least the portion of the message comprises analyzing the attachment and determining other users associated with the attachment, and wherein the additional suggested recipient is selected based at least in part on the other users associated with the attachment.
15. The device of claim 11, wherein the portion of the message comprises one or more recipients of the message, and wherein processing at least the portion of the message comprises determining the additional suggested recipient based on the one or more recipients of the message.
16. The device of claim 15, wherein one of the one or more recipients of the message has set a rule for messages to be sent to the additional suggested recipient.
17. The device of claim 11, wherein the portion of the message comprises a user profile associated with the device, and wherein processing at least the portion of the message comprises determining the additional suggested recipient based on the user profile associated with the device.
18. The device of claim 11, wherein the instructions are further executable by the processor to:
receive, via the user interface of the device, a selection of the additional suggested recipient;
add the additional suggested recipient to the message; and
transmit the message.
19. The device of claim 11, wherein the instructions are further executable by the processor to:
receive, from the machine learning network, confidence and/or probability information corresponding to the additional suggested recipient.
20. A non-transitory, computer-readable medium comprising a set of instructions stored therein which, when executed by a processor, causes the processor to:
identify a message that is input via a user interface of a device;
provide at least a portion of the message to a machine learning network;
receive from the machine learning network, in response to the machine learning network processing at least the portion of the message, an additional suggested recipient for the message; and
output a notification associated with the message, wherein the notification allows the additional suggested recipient to be added as a recipient of the message before the message is transmitted.
US17/530,959 2021-11-19 2021-11-19 Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message Pending US20230162057A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/530,959 US20230162057A1 (en) 2021-11-19 2021-11-19 Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/530,959 US20230162057A1 (en) 2021-11-19 2021-11-19 Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message

Publications (1)

Publication Number Publication Date
US20230162057A1 true US20230162057A1 (en) 2023-05-25

Family

ID=86383984

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/530,959 Pending US20230162057A1 (en) 2021-11-19 2021-11-19 Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message

Country Status (1)

Country Link
US (1) US20230162057A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11947902B1 (en) 2023-03-03 2024-04-02 Microsoft Technology Licensing, Llc Efficient multi-turn generative AI model suggested message generation
US11962546B1 (en) * 2023-03-03 2024-04-16 Microsoft Technology Licensing, Llc Leveraging inferred context to improve suggested messages

Similar Documents

Publication Publication Date Title
US20220329556A1 (en) Detect and alert user when sending message to incorrect recipient or sending inappropriate content to a recipient
US11394674B2 (en) System for annotation of electronic messages with contextual information
US10904200B2 (en) Systems, apparatus, and methods for platform-agnostic message processing
US11442950B2 (en) Dynamic presentation of searchable contextual actions and data
US10552544B2 (en) Methods and systems of automated assistant implementation and management
US10846526B2 (en) Content based transformation for digital documents
US10970349B1 (en) Workflow relationship management and contextualization
EP4029204A1 (en) Composing rich content messages assisted by digital conversational assistant
US20160117624A1 (en) Intelligent meeting enhancement system
RU2698423C2 (en) Filling user contact records
US11409820B1 (en) Workflow relationship management and contextualization
US20230162057A1 (en) Identify recipient(s) based on context and prompt/suggest sender to add identified recipient(s) before sending message
US11962560B2 (en) Techniques for supervising communications from multiple communication modalities
US20180285775A1 (en) Systems and methods for machine learning classifiers for support-based group
US11314692B1 (en) Workflow relationship management and contextualization
US11567649B2 (en) Group-based communication system and apparatus configured to manage channel titles associated with group-based communication channels
US11665010B2 (en) Intelligent meeting recording using artificial intelligence algorithms
US20210406270A1 (en) Leveraging Interlinking Between Information Resources to Determine Shared Knowledge
WO2019156897A1 (en) Suggesting people qualified to provide assistance with regard to an issue identified in a file
US20220027859A1 (en) Smart user interface for calendar organization to track meetings and access consolidated data
US11526567B2 (en) Contextualizing searches in a collaborative session
CN112748828B (en) Information processing method, device, terminal equipment and medium
US20140149405A1 (en) Automated generation of networks based on text analytics and semantic analytics
US11734499B2 (en) Smart content indicator based on relevance to user
US20220383089A1 (en) Intelligent meeting hosting using artificial intelligence algorithms

Legal Events

Date Code Title Description
AS Assignment

Owner name: AVAYA MANAGEMENT L.P., NORTH CAROLINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHARMA, TANVI;DHUMAL, PRAGATI;NAVASKAR, NAVANATH;REEL/FRAME:058165/0174

Effective date: 20211118

AS Assignment

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, DELAWARE

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:AVAYA INC.;INTELLISIST, INC.;AVAYA MANAGEMENT L.P.;AND OTHERS;REEL/FRAME:061087/0386

Effective date: 20220712

AS Assignment

Owner name: WILMINGTON SAVINGS FUND SOCIETY, FSB (COLLATERAL AGENT), DELAWARE

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:AVAYA MANAGEMENT L.P.;AVAYA INC.;INTELLISIST, INC.;AND OTHERS;REEL/FRAME:063742/0001

Effective date: 20230501

AS Assignment

Owner name: CITIBANK, N.A., AS COLLATERAL AGENT, NEW YORK

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:AVAYA INC.;AVAYA MANAGEMENT L.P.;INTELLISIST, INC.;REEL/FRAME:063542/0662

Effective date: 20230501

AS Assignment

Owner name: AVAYA INTEGRATED CABINET SOLUTIONS LLC, NEW JERSEY

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 61087/0386);ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT;REEL/FRAME:063690/0359

Effective date: 20230501

Owner name: INTELLISIST, INC., NEW JERSEY

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 61087/0386);ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT;REEL/FRAME:063690/0359

Effective date: 20230501

Owner name: AVAYA INC., NEW JERSEY

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 61087/0386);ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT;REEL/FRAME:063690/0359

Effective date: 20230501

Owner name: AVAYA MANAGEMENT L.P., NEW JERSEY

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 61087/0386);ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT;REEL/FRAME:063690/0359

Effective date: 20230501