US20230169279A1 - System for providing dialogue guidance - Google Patents

System for providing dialogue guidance

Info

Publication number
US20230169279A1
US20230169279A1 (application US 17/897,749)
Authority
US
United States
Prior art keywords
dialogue
excerpt
sentiment
participants
content
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/897,749
Inventor
Daniel L. Coffing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Individual
Priority to US 17/897,749
Publication of US20230169279A1
Legal status: Pending

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06F — ELECTRIC DIGITAL DATA PROCESSING
            • G06F40/00 Handling natural language data › G06F40/40 Processing or translation of natural language › G06F40/55 Rule-based translation › G06F40/56 Natural language generation
            • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor › G06F16/30 Information retrieval of unstructured textual data › G06F16/33 Querying › G06F16/332 Query formulation › G06F16/3329 Natural language query formulation or dialogue systems
            • G06F40/00 Handling natural language data › G06F40/20 Natural language analysis
            • G06F40/00 Handling natural language data › G06F40/30 Semantic analysis › G06F40/35 Discourse or dialogue representation
        • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
            • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data › G06V40/20 Movements or behaviour, e.g. gesture recognition


Abstract

Various aspects of the subject technology relate to a dialogue guidance system. The dialogue guidance system is configured to receive input data captured from a communication event among at least a first participant and a second participant. The input data may include one or more of text data, audio data, or video data. The dialogue guidance system is configured to identify, based on the input data, one of a sentiment or a disposition corresponding to the communication event, determine dialogue guidance for the first participant based on one of the sentiment or the disposition, and provide the dialogue guidance to the first participant.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation and claims the priority benefit of U.S. patent application Ser. No. 16/563,461 filed Sep. 6, 2019, now U.S. Pat. No. 11,429,794, which claims the priority benefit of U.S. provisional patent application 62/727,965 filed on Sep. 6, 2018, the contents of which are hereby expressly incorporated by reference in their entirety.
  • 1. Field of the Invention
  • The present invention relates to sentiment detection in dialogue guidance systems.
  • 2. Description of the Related Art
  • Humans constantly engage in persuasive discourse across various media of interaction. It is often the case that parties engaged in persuasive discourse are unaware of the internal motivations of other parties participating in the discourse. In many cases, a party may not even be entirely aware of their own internal motivations. This unawareness of baseline motivations may cause participants to “talk past each other” and thus greatly reduce the efficiency of communication.
  • People often find it difficult to ascertain a sentiment or disposition of listeners during presentations, arguments, and other types of discourse. While training and practice can allow people to improve their ability to ascertain sentiment and/or dispositions, human-based methodologies are notoriously unreliable and often result in incorrect assessments. A presenter, speaker, or debater and the like incorrectly assessing sentiments or dispositions of other participants to a dialogue can result in ineffective framing and/or presenting of arguments, points, references, and other information.
  • It is with these observations in mind, among others, that aspects of the present disclosure were conceived and developed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1A-B illustrate exemplary operating environments in which systems and methods of the disclosure may be deployed, according to some embodiments of the subject technology;
  • FIG. 2 is a block diagram depicting a system for sentiment detection and dialogue guidance, according to some embodiments of the subject technology;
  • FIG. 3 is a flowchart of a method for detecting sentiment and guiding dialogue, according to some embodiments of the subject technology; and
  • FIG. 4 is a system diagram of an example computing system that may implement various systems and methods discussed herein, in accordance with various embodiments of the subject technology.
  • DETAILED DESCRIPTION
  • Aspects of the present disclosure involve systems and methods for detecting a sentiment of, for example, an audience and providing sentiment-based guidance for discourse such as argument or debate.
  • Dialogue participants, such as an audience or other dialogue recipient, may receive information (e.g., a presentation or dialogue) differently based on either or both of individual and group sentiment and disposition. Generally, a presenter may realize increased success (e.g., convincing an audience of a stance, informing an audience, etc.) when made aware of the sentiment and disposition of other dialogue participants. The presenter can adjust aspects of how ideas are presented in response to participant sentiment and disposition. Further, the sentiment and disposition can be used to automatically adjust dialogue submitted by the presenter (e.g., via a text-based medium such as email or a message board, etc.) to conform to reader sentiment on either an individual (e.g., each reader receives a respectively adjusted dialogue) or group basis (e.g., all readers receive a tonally optimized dialogue).
  • For example, some audiences may be sympathetic (or antagonistic or apathetic) to certain group interests (e.g., social justice, economic freedom, etc.), contextual frameworks, and the like. Those in discourse with such audiences may find it advantageous to adjust word choice, framing references, pace, duration, rhetorical elements, illustrations, reasoning support models, and other aspects of a respective dialogue. In some cases, for example, it may be advantageous to engage in an inquisitive or deliberative form of dialogue, whereas in other cases (e.g., before other audiences) the same ideas and points may be more likely to be successfully conveyed in a persuasive or negotiation form of dialogue.
  • However, it is often difficult for a human to accurately determine the sentiment or disposition of an audience. In some cases, a person may be too emotionally invested in the content being conveyed. In other cases, it may be difficult to gauge sentiment and disposition due to audience size or physical characteristics of the space where the dialogue is occurring (e.g., the speaker may be at an angle to the audience, etc.). A speaker may also simply be a poor judge of audience sentiment and disposition, for whatever reason, and thus likely to misjudge or fail to ascertain it.
  • A three-phase process can be enacted to alleviate the above issues as well as augment intra-human persuasion (e.g., dialogue, presentation, etc.). Premises and their reasoning interrelationships may first be identified and, in some cases, communicated to a user. In a second phase, a user or users may be guided toward compliance with particular persuasive forms (e.g., avoidance of fallacies, non-sequiturs, ineffective or detrimental analogies, definition creep or over-broadening, etc.). In some examples, guidance can occur in real-time such as in a presentational setting or keyed-in messaging and the like. Further, in a third phase, guiding information can be augmented and/or supplemented with visual and/or audio cues and other information, such as social media and/or social network information, regarding members to a dialogue (e.g., audience members at a presentation and the like). It is with the second and third phases that the systems and methods disclosed herein are primarily concerned.
  • In some examples, static information such as, without imputing limitation, demographic, location, education, work history, relationship status, life event history, group membership, cultural heritage, and other information can be used to guide dialogue. In some examples, dynamic information may also be used to determine dialogue guidance, such as, without imputing limitation, interaction history (e.g., with the user/communicator, regarding the topic, with the service or organization associated with the dialogue, over the Internet generally, etc.), speed of interaction, sentiment of interaction, mental state during interaction (e.g., sobriety, etc.), limitations of the medium of dialogue (e.g., screen size, auditorium seating, etc.), sophistication of participants to the dialogue, various personality traits (e.g., aggressive, passive, defensive, victimized, etc.), search and/or purchase histories, errors and/or argument ratings or histories within the corresponding service or organization, evidence cited in the past by dialogue participants, and various other dynamic factors.
  • In particular, the above information may be brought to bear in a micro-sculpted real-time communication by, for example and without imputing limitation, determining changes to be made in colloquialisms, idioms, reasoning forms, evidence types or source, vocabulary or illustration choices, or sentiment language. The determined changes can be provided to a user (e.g., a speaker, communicator, etc.) to increase persuasiveness of dialogue by indicating more effective paths of communication to achieving understanding by other dialogue participants (e.g., by avoiding triggers or pitfalls based on the above information).
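  • By way of a non-limiting illustration, the static and dynamic factors above might be collected into a simple participant profile that feeds an adjustment routine. The following Python sketch is an assumption-laden example: every field name and rule is invented for illustration and is not specified by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ParticipantProfile:
    """Hypothetical container for the static and dynamic factors above."""
    # Static information (demographics, education, group membership, ...)
    demographics: Dict[str, str] = field(default_factory=dict)
    group_memberships: List[str] = field(default_factory=list)
    # Dynamic information (interaction history, traits, cited evidence, ...)
    interaction_history: List[str] = field(default_factory=list)
    personality_traits: List[str] = field(default_factory=list)
    cited_evidence: List[str] = field(default_factory=list)

def suggest_adjustments(profile: ParticipantProfile) -> List[str]:
    """Illustrative mapping from profile factors to micro-sculpted changes
    in colloquialisms, reasoning forms, evidence types, and sentiment language."""
    adjustments: List[str] = []
    if "engineer" in profile.demographics.get("occupation", ""):
        adjustments.append("prefer domain-specific, credentialed evidence sources")
    if "defensive" in profile.personality_traits:
        adjustments.append("soften sentiment language; avoid confrontational idioms")
    return adjustments
```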
  • In one example, visual and audio data of an audience can be processed during and throughout a dialogue. The visual and audio data may be used by Natural Language Processing (NLP) and/or Computer Vision (CV) systems and services in order to identify audience sentiment and/or disposition. CV/NLP processed data can be processed by a sentiment identifying service (e.g., a trained deep network, a rules based system, a probabilistic system, some combination of the aforementioned, or the like) which may receive analytic support by a group psychological deep learning system to identify sentiment and/or disposition of audience members. In particular, the system can provide consistent and unbiased sentiment identification based on large volumes of reference data.
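  • A minimal sketch of how such a CV/NLP front end could feed a sentiment identifying service is shown below. The cue vocabulary and polarity scoring are placeholders; an actual service would be a trained deep network, rules-based system, probabilistic system, or some combination, as described above.

```python
from typing import List, Protocol

class VisionModel(Protocol):
    def cues(self, frames: List[bytes]) -> List[str]: ...  # e.g., "furrowed brows"

class LanguageModel(Protocol):
    def cues(self, audio: bytes) -> List[str]: ...         # e.g., "audible sighs"

NEGATIVE_CUES = {"furrowed brows", "audible sighs", "crossed arms"}

def identify_sentiment(frames: List[bytes], audio: bytes,
                       cv: VisionModel, nlp: LanguageModel) -> str:
    """Combine CV and NLP cues into a single coarse sentiment label by
    tallying cue polarity; a stand-in for the deep-learning-backed service."""
    cues = cv.cues(frames) + nlp.cues(audio)
    score = sum(-1 if cue in NEGATIVE_CUES else 1 for cue in cues)
    return "antagonistic" if score < 0 else "sympathetic"
```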
  • Identified sentiments and/or dispositions can be used to select dialogue forms. For example, and without imputing limitation, dialogue forms can be generally categorized as forms for sentiment-based dialogue and forms for objective-based dialogue. Sentiment-based dialogue forms can include rules, lexicons, styles, and the like for engaging in dialogue (e.g., presenting to) particular sentiments. Likewise, objective-based dialogue forms may include rules, lexicons, styles, and the like for engaging in dialogue in order to achieve certain specified objectives (e.g., persuade, inform, etc.). Further, multiple dialogue forms can be selected and exert more or less influence based on respective sentiment and/or objectives or corresponding weights and the like.
  • Selected dialogue forms may be used to provide dialogue guidance to one or more users (e.g., speakers or participants). For example, dialogue guidance may include restrictions (e.g., words, phrases, metaphors, arguments, references, and such that should not be used), suggestions (e.g., words, phrases, metaphors, arguments, references, and such that should be used), or other guidance. Dialogue forms may include, for example and without imputing limitation, persuasion, negotiation, inquiry, deliberation, information seeking, Eristics, and others.
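  • The forms and weighted blending described in the last two paragraphs might be structured roughly as follows; the field names, weight threshold, and blending rule are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DialogueForm:
    """Sketch of a dialogue form: rules, lexicon, and style guidance."""
    name: str                                               # e.g., "persuasion"
    restrictions: List[str] = field(default_factory=list)   # should not be used
    suggestions: List[str] = field(default_factory=list)    # should be used
    lexicon: Dict[str, str] = field(default_factory=dict)   # preferred phrasings

def blend_guidance(weighted_forms: List[Tuple[DialogueForm, float]]) -> List[str]:
    """Let multiple selected forms exert more or less influence via weights."""
    guidance: List[str] = []
    for form, weight in sorted(weighted_forms, key=lambda fw: -fw[1]):
        if weight > 0.5:  # heavier forms contribute their restrictions too
            guidance += [f"avoid: {phrase}" for phrase in form.restrictions]
        guidance += [f"prefer: {phrase}" for phrase in form.suggestions]
    return guidance
```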
  • In some examples, dialogue forms may also include evidence standards. For example, a persuasive form may be associated with a heightened standard of evidence. At the same time, certain detected sentiments or dispositions may be associated with particular standards of evidence or source preferences. For example, a dialogue participant employed in a highly technical domain, such as an engineer or the like, may be disposed towards (e.g., find more persuasive) sources associated with a particular credential (e.g., a professor from an alma mater), a particular domain (e.g., an electrical engineering textbook), a particular domain source (e.g., an IEEE publication), and the like. In some examples, a disposition or sentiment may be associated with heightened receptiveness to particular cultural references and the like. Further, in cases where multiple dialogue forms interact or are otherwise simultaneously active (e.g., where a speaker is attempting to persuade an audience determined by the sentiment identification system to be disposed towards believing the speaker), an evidence standard based on both of these forms may be suggested to the speaker.
  • Likewise, dialogue forms may also include premise interrelationship standards. For example, threshold values, empirical support, substantiation, and other characteristics of premise interrelationships may be included in dialogue forms. The premise interrelationship standards can be included directly within or associated with dialogue forms as rules, or may be included in a probabilistic fashion (e.g., increasing likelihoods of standards, etc.), or via some combination of the two.
  • Dialogue forms can also include burden of proof standards. For example, and without imputing limitation, null hypothesis requirements, references to tradition, “common sense”, principles based on parsimony and/or complexity, popularity appeals, default reasoning, extension and/or abstractions of chains of reasoning (in some examples, including ratings and such), probabilistic falsification, pre-requisite premises, and other rules and/or standards related to burden of proof may be included in or be associated with particular dialogue forms.
  • Once one or more dialogue forms have been selected based on identified sentiment and/or disposition, the forms can be presented to a user (e.g., a speaker) via a user device or some such. In some examples, the dialogue forms can be applied to preexisting information such as a written speech and the like. The dialogue forms can also enable strategy and/or coaching of the user.
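  • Applying a selected form to preexisting information such as a written speech could amount to scanning the text for restricted phrases and proposing replacements from the form's lexicon, as in this sketch (reusing the hypothetical DialogueForm structure above):

```python
import re
from typing import List, Tuple

def apply_form_to_speech(speech: str, form: "DialogueForm") -> List[Tuple[int, str, str]]:
    """Flag passages of a prepared speech that violate a selected dialogue form.
    Returns (position, offending phrase, suggested replacement) tuples."""
    findings: List[Tuple[int, str, str]] = []
    for phrase in form.restrictions:
        for match in re.finditer(re.escape(phrase), speech, re.IGNORECASE):
            replacement = form.lexicon.get(phrase.lower(), "<rephrase>")
            findings.append((match.start(), match.group(), replacement))
    return findings
```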
  • FIG. 1A depicts an example of an operational environment 100 for a sentiment detection and dialogue guidance system. A speaker 102 presents to an audience 104 while receiving automated and dynamic presentation coaching provided by the sentiment detection and dialogue guidance system.
  • As speaker 102 presents to audience 104, an input capture system 112 retrieves visual and audio data from members of audience 104 within a capture range. Here, the capture range is denoted by a dotted line triangle. While the range is depicted as static, it is understood that in some examples, such as where tracking of a particular audience member or some such is needed, the range may be dynamic and/or include other systems and subsystems to capture relevant input data.
  • Input capture system 112 includes a video capture device 108 and an audio capture device 110. Input capture system 112 may be a dedicated device or, in some examples, a mobile device such as a smartphone and the like with video and audio capture capability.
  • Audio and visual data captured by input capture system 112 can be provided to a processing device 106. Processing device 106 may be a local computer or may be a remotely hosted application or server.
  • Processing device 106 processes visual and audio data in order to provide dialogue coaching data to speaker 102. In effect, real-time presentation (e.g., dialogue) coaching can be provided to speaker 102. This real-time coaching can dynamically change in response to sentiment and disposition changes of audience 104, either on a per member basis or as a whole, detected by input capture system 112.
  • FIG. 1B depicts an example of an operational environment 150 for dialogue guidance system 156. In comparison to operational environment 100, operational environment 150 of FIG. 1B can be asynchronous and includes guidance bespoke to individual participant sentiment and disposition. For example, dialogue in operational environment 150 may take place over email, a message board, instant message, voice over internet protocol (VoIP), video conferencing, or other network communication.
  • Presenter dialogue is transmitted from a computer 152 over network 155 (e.g., the Internet, etc.) so that it can be received by participant A 160A and/or participant B 160B. During transmission, dialogue guidance system 156 can determine sentiments and dispositions for participant A 160A and participant B 160B and apply respective dialogue guidance to versions of presenter dialogue corresponding to each participant. Further, dialogue guidance system 156 can provide participant sentiment and disposition information back to computer 152 for a presenter to review. In some examples, dialogue guidance system 156 can additionally, or instead, provide information related to dialogue guidance for respective participants 160A-B in order to provide a presenter with a robust understanding of each dialogue participant's mental state.
  • Here, dialogue participant A 160A and dialogue participant B 160B are depicted as including a single person 154A and 154B respectively. However, it is understood that multiple people may be included as a dialogue participant and that either or both of individual sentiments and dispositions or aggregated group sentiments and dispositions can be determined and accounted for by dialogue guidance system 156.
  • Visual and audio data retrieved by computers 156A-B associated with respective participants 160A-B can be processed by dialogue guidance system 156 in determining participant sentiment. Additionally, in some examples, dialogue guidance system 156 can retrieve supplemental information related to participating people 154A-B over network 155 such as social media, social network, message board history (or histories), and the like. Dialogue guidance system 156 may then utilize the visual and audio data along with any supplemental information to determine sentiments and dispositions, determine guidance, and apply the guidance automatically to the presenter dialogue to generate bespoke guided dialogue respective to each participant 160A-B and based on respective sentiments and dispositions. This can be performed asynchronously or, in other words, provided to participants 160A-B at different times (e.g., as a participant logs into a forum account, checks an email account, opens an instant message client, etc.).
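  • The asynchronous, per-recipient tailoring described in this paragraph might look like the sketch below; the guidance_system methods are assumed interfaces standing in for dialogue guidance system 156, not APIs defined by the disclosure.

```python
import time
from typing import Dict, List, Tuple

def deliver(message: str, recipients: List[str],
            guidance_system) -> Dict[str, Tuple[float, str]]:
    """Tailor one presenter message per recipient at the moment of receipt
    (e.g., when a participant logs into a forum or opens an email client)."""
    delivered = {}
    for recipient in recipients:
        sentiment = guidance_system.current_sentiment(recipient)  # assumed API
        guided = guidance_system.adjust(message, sentiment)       # assumed API
        delivered[recipient] = (time.time(), guided)
    return delivered
```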
  • FIG. 2 depicts a sentiment detection and dialogue guidance system 200. System 200 may be implemented as an application on a computing device such as processing device 106. System 200 receives visual and audio input in order to provide dialogue data (e.g., coaching data) to a user via a smartphone, tablet, desktop, laptop, or other device.
  • A computer vision engine 202 receives visual data while a natural language processing engine 204 receives audio data. In some examples, visual and audio data are transmitted directly from video and/or audio devices. In other examples, visual and audio data can be preprocessed, provided remotely, retrieved from stored files, or drawn from other sources.
  • Computer vision engine 202 and natural language processing engine 204 respectively transmit prepared visual and audio data to a sentiment identifier service 206. Prepared visual and audio data may, for example, include flags at various portions of the visual and audio data, clips or snapshots, isolated or extracted sources (e.g., for tracking a particular audience member and the like), multiple channels based on one or more feeds, or other transformations as may be used by sentiment identifier 206 to identify audience sentiment and/or dispositions.
  • Sentiment identifier service 206 can determine a sentiment or disposition of an audience at individual levels and/or at an aggregated level based on the audio and visual data. In some examples, sentiment identifier 206 can exchange data with a psychological deep learning system 214. Psychological deep learning system 214 may interconnect with social networks and media 216 to retrieve additional information on an audience and/or individuals within the audience. For example, psychological deep learning system 214 can derive and/or explore a social graph (e.g., generate a social topology and the like) associated with one or more audience members to supplement or complement information used by psychological deep learning system 214 in creation of various profiles.
  • Psychological deep learning system 214 can include general, specific, or mixed profiles generated by deep learning systems and the like. The profiles may assist sentiment identifier service 206 in determining audience sentiment and disposition based on visual cues (e.g., facial expressions, etc.), audio cues (e.g., audible responses, etc.), and the like.
  • Sentiment identifier service 206 transmits sentiment data to a dialogue form selector service 208. Dialogue form selector service 208 processes received sentiment data to retrieve rules, metrics, guidance, and/or restrictions as discussed above. In some examples, dialogue form selector service 208 retrieves stored dialogue data (e.g., prepared speeches, etc.) for applying selected dialogue forms.
  • Dialogue form selector service 208 transmits dialogue coaching data to a user device 210. User device 210 may be a computer, mobile device, smartphone, tablet, or the like. In some examples, rather than, or in addition to, user device 210, dialogue coaching data may be transmitted to downstream processes or services. For example, application programming interface (API) endpoints may receive dialogue coaching data for storage, model training, and other processes.
  • In some examples, dialogue coaching data includes prepared statements. In other examples, dialogue coaching data may provide rules, guidance, restrictions, metrics, and the like.
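  • Dialogue coaching data sent to user device 210 or to downstream API endpoints could be serialized along these lines; the wire format and field names are illustrative assumptions, since the disclosure does not specify one.

```python
import json
from typing import List, Optional

def coaching_payload(sentiment: str, guidance: List[str],
                     prepared_statement: Optional[str] = None) -> str:
    """Serialize coaching data (rules, restrictions, metrics, and optionally
    a prepared statement) for a user device or downstream service."""
    return json.dumps({
        "sentiment": sentiment,
        "guidance": guidance,
        "prepared_statement": prepared_statement,
    })
```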
  • FIG. 3 depicts a method 300 for processing audio and visual data to generate guidance information for a speaker. Audio and visual data for a participant to a dialogue are received and supplemental information relating to the participant is retrieved (operation 302). In some examples, the audio and visual data may be provided as individual streams such as from one or more respective cameras and one or more respective microphones. In other examples, a single system may provide a multi-layered data stream including both audio and visual data. The supplemental information can be retrieved, via APIs and the like, from social media and/or social network platforms such as Twitter® or Facebook®.
  • Audio and/or visual cues within the received audio and visual data, along with the retrieved supplemental information, are then used to identify participant sentiment and/or disposition in response to a presenter dialogue (operation 304). For example, audience gaze, respiratory response (e.g., gasps, sighs, etc.), and the like may be associated with a sentiment. Machine learning models such as deep neural networks, regressions, support vector machines (SVMs), and other techniques may be used to identify sentiments.
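  • As one concrete instance of the SVM option, a small scikit-learn classifier over hand-engineered cue features might look like the following; the feature encoding, toy data, and labels are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Toy feature vectors: [gaze_on_speaker, sigh_count, gasp_count, smile_score]
X = np.array([
    [0.9, 0, 1, 0.8],   # engaged, pleasantly surprised
    [0.2, 3, 0, 0.1],   # disengaged, sighing
    [0.8, 0, 0, 0.6],
    [0.1, 2, 0, 0.2],
])
y = ["sympathetic", "antagonistic", "sympathetic", "antagonistic"]

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.7, 1, 0, 0.5]]))  # e.g., ['sympathetic']
```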
  • The identified audience sentiment is then used to determine dialogue guidance (operation 306). Dialogue guidance can include restrictions, recommendations, and the like as discussed above. In some examples, dialogue guidance may include prepared statements, etc. The determined guidance is then provided to the presenter (operation 308). In some examples, such as where the dialogue takes place over a text medium such as a forum, the determined guidance can be automatically applied to the dialogue in addition to, or rather than, providing the guidance to the presenter.
  • As seen in FIG. 3, method 300 may repeat to provide continuous and/or streaming dialogue guidance to a speaker. In some examples, dialogue guidance may include recommendations regarding semantic framing, references, lexicon, and the like. In other examples, dialogue guidance may include prepared comments to be read by the speaker (e.g., via semantic transforms and other NLP processes). Additionally, where the dialogue is text based and different participants receive the dialogue individually and independently, the dialogue may be automatically modified according to guidance determined from each recipient's sentiment and/or disposition at the time of receipt. In this way, one message from the presenting user could be customized for each individual reader according to that reader's state or sentiment at the time of receipt, even if those receipt times and recipient sentiments differ across recipients, and even though all the received messages might be deemed to have an equivalent persuasive effect (EPE). EPE can include anticipated levels of impact upon or deflection to a belief held by a dialogue participant, tested responses to a corresponding subject matter of the dialogue participant (e.g., using before and after testing, A/B testing, etc.), physiological response tests (e.g., via brain scans, etc.), and the like, which may provide further information to, for example, dialogue guidance system 156 for optimizing dialogue guidance.
  • Various aspects of the subject technology relate to a dialogue guidance system. The dialogue guidance system is configured to receive input data captured from a communication event among at least a first participant and a second participant. The communication event may include a presentation or other message or communication. The participants may be, for example, one or more presenters and one or more audience members or recipients of the communication.
  • The input data may include one or more of text data, audio data, or video data. The dialogue guidance system is configured to identify, based on the input data, one of a sentiment or a disposition corresponding to the communication event, determine dialogue guidance for the first participant based on one of the sentiment or the disposition, and provide the dialogue guidance to one or more of the participants.
  • The dialogue guidance system may also be configured to retrieve supplemental information corresponding to at least one of the first participant and the second participant, the supplemental information including one or more of social media information, social network information, or web platform information and the sentiment or disposition may further be identified based on the supplemental information.
  • According to some aspects, the dialogue guidance system may include one or more processors and at least one non-transitory computer-readable medium having stored therein instructions which, when executed by the one or more processors, cause the dialogue guidance system to perform operations. The operations may include receiving, from an input capture system, input data associated with a presentation, identifying a sentiment or a disposition corresponding to the presentation based on the input data, determining dialogue guidance for a presenter of the presentation, and providing the dialogue guidance to the presenter. The input data may be associated with the presenter or with one or more members of an audience of the presentation. Furthermore, the dialogue guidance may be provided to the presenter during the presentation or after it.
  • A sentiment identifier service, a dialogue form selector service, or other services may also be leveraged by the dialogue guidance system. For example, the dialogue guidance system may transmit, over a network to a sentiment identifier service, a query for the sentiment or disposition or transmit, over a network to a dialogue form selector service, a query for the dialogue guidance.
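  • One plausible realization of these network queries is sketched below; the endpoint URLs and JSON payload shapes are assumptions made for illustration only, as the disclosure does not specify a wire protocol.

    # The endpoint URLs and JSON payload shapes below are assumptions for
    # illustration; the disclosure does not specify a wire protocol.
    import json
    import urllib.request

    SENTIMENT_URL = "https://example.com/sentiment"          # hypothetical
    FORM_SELECTOR_URL = "https://example.com/dialogue-form"  # hypothetical

    def post_json(url: str, payload: dict) -> dict:
        """POST a JSON payload and decode the JSON response."""
        request = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    def query_guidance(input_data: dict) -> dict:
        """Query the sentiment identifier, then the dialogue form selector."""
        sentiment = post_json(SENTIMENT_URL, {"input": input_data})
        return post_json(FORM_SELECTOR_URL, {"sentiment": sentiment})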
  • Aspects of the subject technology also relate to a method for providing dialogue guidance. The method may include receiving input data associated with a dialogue participant, the input data comprising one or more of text data, audio data, or video data; identifying one of a sentiment or a disposition corresponding to the dialogue participant based on the input data; determining, based on one of the sentiment or the disposition, dialogue guidance for a presenter; and providing the dialogue guidance to the presenter. The dialogue participant may be a member of an audience or the presenter. The identifying of the sentiment or disposition may be based on a deep learning system.
  • The determining of the dialogue guidance may include selecting at least one dialogue form comprising a rule for communicating, wherein the at least one dialogue form corresponds to the identified sentiment or disposition. The rule for communicating may include restrictions, suggestions, or standards. The method may also include processing the input data using at least one of a Natural Language Processing (NLP) system and a Computer Vision (CV) system. One such form selection is sketched below.
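  • The following sketch illustrates one way a dialogue form, bundling restrictions, suggestions, and standards, might be keyed to an identified sentiment or disposition; the form names and rules are invented examples rather than forms defined by the disclosure.

    # The form names and rules below are invented examples of restrictions,
    # suggestions, and standards bundled into a dialogue form.
    from dataclasses import dataclass, field

    @dataclass
    class DialogueForm:
        name: str
        restrictions: list = field(default_factory=list)
        suggestions: list = field(default_factory=list)
        standards: list = field(default_factory=list)

    FORMS = {
        "hostile": DialogueForm(
            name="de-escalation",
            restrictions=["avoid absolute claims"],
            suggestions=["lead with points of agreement"],
            standards=["cite a neutral source for each claim"],
        ),
        "curious": DialogueForm(
            name="exploration",
            suggestions=["invite follow-up questions"],
            standards=["define technical terms on first use"],
        ),
    }

    def select_dialogue_form(sentiment: str) -> DialogueForm:
        """Map an identified sentiment or disposition to a rule set."""
        return FORMS.get(sentiment, DialogueForm(name="default"))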
  • FIG. 4 is an example computing system 400 that may implement various systems and methods discussed herein. The computer system 400 includes one or more computing components in communication via a bus 402. In one implementation, the computing system 400 includes one or more processors 404. The processor 404 can include one or more internal levels of cache 406 and a bus controller or bus interface unit to direct interaction with the bus 402. The processor 404 may specifically implement the various methods discussed herein. Main memory 408 may include one or more memory cards and a control circuit (not depicted), or other forms of removable memory, and may store various software applications including computer-executable instructions that, when run on the processor 404, implement the methods and systems set out herein. Other forms of memory, such as a storage device 410 and a mass storage device 418, may also be included and accessible by the processor (or processors) 404 via the bus 402. The storage device 410 and mass storage device 418 can each contain any or all of the methods and systems discussed herein.
  • The computer system 400 can further include a communications interface 412 by way of which the computer system 400 can connect to networks and receive data useful in executing the methods and systems set out herein, as well as transmit information to other devices. The computer system 400 can also include an input device 416 by which information is input. Input device 416 can be a scanner, keyboard, and/or other input devices as will be apparent to a person of ordinary skill in the art. An output device 414 can be a monitor, speaker, and/or other output devices as will be apparent to a person of ordinary skill in the art.
  • The system set forth in FIG. 4 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure. It will be appreciated that other non-transitory tangible computer-readable storage media storing computer-executable instructions for implementing the presently disclosed technology on a computing system may be utilized.
  • In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed is an instance of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods can be rearranged while remaining within the disclosed subject matter. The accompanying method claims present elements of the various steps in a sample order and are not necessarily meant to be limited to the specific order or hierarchy presented.
  • The described disclosure may be provided as a computer program product, or software, that may include a computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A computer-readable storage medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a computer. The computer-readable storage medium may include, but is not limited to, optical storage medium (e.g., CD-ROM), magneto-optical storage medium, read only memory (ROM), random access memory (RAM), erasable programmable memory (e.g., EPROM and EEPROM), flash memory, or other types of medium suitable for storing electronic instructions.
  • The description above includes example systems, methods, techniques, instruction sequences, and/or computer program products that embody techniques of the present disclosure. However, it is understood that the described disclosure may be practiced without these specific details.
  • While the present disclosure has been described with reference to various implementations, it will be understood that these implementations are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, implementations in accordance with the present disclosure have been described in the context of particular implementations. Functionality may be separated or combined differently in blocks in various embodiments of the disclosure, or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.

Claims (22)

1. (canceled)
2. An apparatus for dialogue guidance, the apparatus comprising:
at least one memory; and
at least one processor, the at least one processor configured to:
store a history of interactions involving a plurality of participants of the interactions;
analyze the history of interactions to identify at least one sentiment expressed by at least a subset of the plurality of participants through at least a subset of the interactions;
edit content of an excerpt of dialogue based on the at least one sentiment expressed by at least a subset of the plurality of participants to generate edited content for the excerpt of dialogue; and
provide an indication of the edited content for the excerpt of dialogue.
3. The apparatus of claim 2, wherein the content of the excerpt of dialogue includes a string of text, and wherein, to edit the content to generate the edited content for the excerpt of dialogue, the at least one processor is configured to edit the string of text based on the at least one sentiment expressed by at least the subset of the plurality of participants to generate an edited string of text.
4. The apparatus of claim 2, wherein the content of the excerpt of dialogue includes a spoken dialogue, and wherein, to edit the content to generate the edited content for the excerpt of dialogue, the at least one processor is configured to edit the spoken dialogue based on the at least one sentiment expressed by at least the subset of the plurality of participants to generate an edited spoken dialogue.
5. The apparatus of claim 2, the at least one processor configured to:
edit the content of the excerpt of dialogue to increase a likelihood of persuasiveness to at least the subset of the plurality of participants based on the at least one sentiment to generate the edited content for the excerpt of dialogue.
6. The apparatus of claim 2, the at least one processor configured to:
edit the content of the excerpt of dialogue based on the at least one sentiment without changing a topic of the excerpt of dialogue to generate the edited content for the excerpt of dialogue.
7. The apparatus of claim 2, the at least one processor configured to:
transmit the edited content for the excerpt of dialogue to a recipient device at a specified time to provide the edited content for the excerpt of dialogue.
8. The apparatus of claim 2, wherein, to analyze the history of interactions to identify at least one sentiment, the at least one processor is configured to analyze at least one of a gaze of at least one of the plurality of participants, a respiratory response of the at least one of the plurality of participants, an audible response from the at least one of the plurality of participants, a disposition of the at least one of the plurality of participants, or a facial expression of the at least one of the plurality of participants.
9. The apparatus of claim 2, wherein the at least one sentiment includes at least one disposition.
10. The apparatus of claim 2, wherein, to analyze the history of interactions to identify at least one sentiment, the at least one processor is configured to analyze the history of interactions according to at least one rule for communication.
11. The apparatus of claim 2, the at least one processor configured to:
provide the edited content for the excerpt of dialogue as an audio clip to provide the edited content for the excerpt of dialogue.
12. The apparatus of claim 2, the at least one processor configured to:
provide the edited content for the excerpt of dialogue as a text string to provide the edited content for the excerpt of dialogue.
13. The apparatus of claim 2, the at least one processor configured to:
use at least one machine learning model to analyze the history of interactions to identify the at least one sentiment expressed by at least the subset of the plurality of participants through at least a subset of the interactions.
14. The apparatus of claim 13, the at least one processor configured to:
identify at least one reaction to the edited content for the excerpt of dialogue from at least one of the plurality of participants; and
update the machine learning model based on the at least one reaction and on the edited content for the excerpt of dialogue.
15. The apparatus of claim 2, the at least one processor configured to:
identify at least one reaction to the edited content for the excerpt of dialogue from at least one of the plurality of participants;
edit the edited content of the excerpt of dialogue further based on the at least one reaction to generate secondary edited content for the excerpt of dialogue; and
provide the secondary edited content for the excerpt of dialogue.
16. A method for dialogue guidance, the method comprising:
storing a history of interactions involving a plurality of participants of the interactions;
analyzing the history of interactions to identify at least one sentiment expressed by at least a subset of the plurality of participants through at least a subset of the interactions;
editing content of an excerpt of dialogue based on the at least one sentiment expressed by at least a subset of the plurality of participants to generate edited content for the excerpt of dialogue; and
providing an indication of the edited content for the excerpt of dialogue.
17. The method of claim 16, wherein editing the content of the excerpt of dialogue based on the at least one sentiment to generate the edited content for the excerpt of dialogue includes editing the content of the excerpt of dialogue to increase a likelihood of persuasiveness to at least the subset of the plurality of participants based on the at least one sentiment.
18. The method of claim 16, wherein editing the content of the excerpt of dialogue based on the at least one sentiment to generate the edited content for the excerpt of dialogue includes editing the content of the excerpt of dialogue based on the at least one sentiment without changing a topic of the excerpt of dialogue.
19. The method of claim 16, wherein analyzing the history of interactions to identify the at least one sentiment includes using at least one machine learning model to analyze the history of interactions to identify the at least one sentiment.
20. The method of claim 19, further comprising:
identifying at least one reaction to the edited content for the excerpt of dialogue from at least one of the plurality of participants; and
updating the at least one machine learning model based on the at least one reaction and on the edited content for the excerpt of dialogue.
21. The method of claim 16, further comprising:
identifying at least one reaction to the edited content for the excerpt of dialogue from at least one of the plurality of participants;
editing the edited content of the excerpt of dialogue further based on the at least one reaction to generate secondary edited content for the excerpt of dialogue; and
providing the secondary edited content for the excerpt of dialogue.
22. A non-transitory, computer-readable storage medium, having embodied thereon instructions executable by one or more processors to perform a method for dialogue guidance, the method comprising:
storing a history of interactions involving a plurality of participants of the interactions;
analyzing the history of interactions to identify at least one sentiment expressed by at least a subset of the plurality of participants through at least a subset of the interactions;
editing content of an excerpt of dialogue based on the at least one sentiment expressed by at least a subset of the plurality of participants to generate edited content for the excerpt of dialogue; and
providing an indication of the edited content for the excerpt of dialogue.
US17/897,749 2018-09-06 2022-08-29 System for providing dialogue guidance Pending US20230169279A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/897,749 US20230169279A1 (en) 2018-09-06 2022-08-29 System for providing dialogue guidance

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201862727965P 2018-09-06 2018-09-06
US16/563,461 US11429794B2 (en) 2018-09-06 2019-09-06 System for providing dialogue guidance
US17/897,749 US20230169279A1 (en) 2018-09-06 2022-08-29 System for providing dialogue guidance

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/563,461 Continuation US11429794B2 (en) 2018-09-06 2019-09-06 System for providing dialogue guidance

Publications (1)

Publication Number Publication Date
US20230169279A1 true US20230169279A1 (en) 2023-06-01

Family

ID=69719530

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/563,461 Active 2039-11-21 US11429794B2 (en) 2018-09-06 2019-09-06 System for providing dialogue guidance
US17/897,749 Pending US20230169279A1 (en) 2018-09-06 2022-08-29 System for providing dialogue guidance

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/563,461 Active 2039-11-21 US11429794B2 (en) 2018-09-06 2019-09-06 System for providing dialogue guidance

Country Status (3)

Country Link
US (2) US11429794B2 (en)
EP (1) EP3847643A4 (en)
WO (1) WO2020051500A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230244874A1 (en) * 2022-01-20 2023-08-03 Zoom Video Communications, Inc. Sentiment scoring for remote communication sessions

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9861346B2 (en) 2003-07-14 2018-01-09 W. L. Gore & Associates, Inc. Patent foramen ovale (PFO) closure device with linearly elongating petals
EP3769238A4 (en) 2018-03-19 2022-01-26 Coffing, Daniel L. Processing natural language arguments and propositions
EP3847643A4 (en) 2018-09-06 2022-04-20 Coffing, Daniel L. System for providing dialogue guidance
WO2020056409A1 (en) 2018-09-14 2020-03-19 Coffing Daniel L Fact management system
CN114969282B (en) * 2022-05-05 2024-02-06 迈吉客科技(北京)有限公司 Intelligent interaction method based on rich media knowledge graph multi-modal emotion analysis model

Family Cites Families (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7904187B2 (en) 1999-02-01 2011-03-08 Hoffberg Steven M Internet appliance system and method
US7509572B1 (en) 1999-07-16 2009-03-24 Oracle International Corporation Automatic generation of document summaries through use of structured text
US6347332B1 (en) 1999-12-30 2002-02-12 Edwin I. Malet System for network-based debates
AU2001261506A1 (en) 2000-05-11 2001-11-20 University Of Southern California Discourse parsing and summarization
US7100082B2 (en) 2000-08-04 2006-08-29 Sun Microsystems, Inc. Check creation and maintenance for product knowledge management
US7069547B2 (en) 2001-10-30 2006-06-27 International Business Machines Corporation Method, system, and program for utilizing impact analysis metadata of program statements in a development environment
US20030088783A1 (en) 2001-11-06 2003-05-08 Dipierro Massimo Systems, methods and devices for secure computing
US7707066B2 (en) 2002-05-15 2010-04-27 Navio Systems, Inc. Methods of facilitating merchant transactions using a computerized system including a set of titles
US6678828B1 (en) 2002-07-22 2004-01-13 Vormetric, Inc. Secure network file access control system
US9818136B1 (en) 2003-02-05 2017-11-14 Steven M. Hoffberg System and method for determining contingent relevance
US8155951B2 (en) 2003-06-12 2012-04-10 Patrick William Jamieson Process for constructing a semantic knowledge base using a document corpus
US7813916B2 (en) 2003-11-18 2010-10-12 University Of Utah Acquisition and application of contextual role knowledge for coreference resolution
US9407963B2 (en) 2004-02-27 2016-08-02 Yahoo! Inc. Method and system for managing digital content including streaming media
CA2563121A1 (en) 2004-04-05 2005-10-20 Peter Jeremy Baldwin Web application for argument maps
US7549171B2 (en) 2004-06-10 2009-06-16 Hitachi, Ltd. Method and apparatus for validation of application data on a storage system
US20060122834A1 (en) 2004-12-03 2006-06-08 Bennett Ian M Emotion detection device & method for use in distributed systems
US10002325B2 (en) 2005-03-30 2018-06-19 Primal Fusion Inc. Knowledge representation systems and methods incorporating inference rules
US9104779B2 (en) 2005-03-30 2015-08-11 Primal Fusion Inc. Systems and methods for analyzing and synthesizing complex knowledge representations
EP1867133A1 (en) 2005-04-04 2007-12-19 British Telecommunications Public Limited Company A system for processing context data
US8438142B2 (en) 2005-05-04 2013-05-07 Google Inc. Suggesting and refining user input based on original user input
US20090117883A1 (en) 2006-07-20 2009-05-07 Dan Coffing Transaction system for business and social networking
KR101254247B1 (en) 2007-01-18 2013-04-12 중앙대학교 산학협력단 Apparatus and method for detecting program plagiarism through memory access log analysis
US20080222279A1 (en) 2007-03-09 2008-09-11 Lucas Cioffi System for creating collective intelligence through multi-linear discussion over an electronic network
US8538743B2 (en) 2007-03-21 2013-09-17 Nuance Communications, Inc. Disambiguating text that is to be converted to speech using configurable lexeme based rules
US8838659B2 (en) 2007-10-04 2014-09-16 Amazon Technologies, Inc. Enhanced knowledge repository
US7890539B2 (en) 2007-10-10 2011-02-15 Raytheon Bbn Technologies Corp. Semantic matching using predicate-argument structure
US20100088262A1 (en) 2008-09-29 2010-04-08 Neuric Technologies, Llc Emulated brain
US20130179386A1 (en) 2009-04-09 2013-07-11 Sigram Schindler Innovation expert system, ies, and its ptr data structure, ptr-ds
US8595166B2 (en) 2009-09-24 2013-11-26 Pacific Metrics Corporation System, method, and computer-readable medium for plagiarism detection
US9047283B1 (en) 2010-01-29 2015-06-02 Guangsheng Zhang Automated topic discovery in documents and content categorization
US8670018B2 (en) * 2010-05-27 2014-03-11 Microsoft Corporation Detecting reactions and providing feedback to an interaction
US9798822B2 (en) 2010-06-29 2017-10-24 Apple Inc. Location based grouping of browsing histories
WO2012015988A1 (en) 2010-07-27 2012-02-02 Globalytica, Llc Collaborative structured analysis system and method
US8751795B2 (en) 2010-09-14 2014-06-10 Mo-Dv, Inc. Secure transfer and tracking of data using removable non-volatile memory devices
US9064238B2 (en) 2011-03-04 2015-06-23 Factify Method and apparatus for certification of facts
US10095848B2 (en) 2011-06-16 2018-10-09 Pasafeshare Llc System, method and apparatus for securely distributing content
US11195057B2 (en) 2014-03-18 2021-12-07 Z Advanced Computing, Inc. System and method for extremely efficient image and pattern recognition and artificial intelligence platform
US8311973B1 (en) 2011-09-24 2012-11-13 Zadeh Lotfi A Methods and systems for applications for Z-numbers
US11074495B2 (en) 2013-02-28 2021-07-27 Z Advanced Computing, Inc. (Zac) System and method for extremely efficient image and pattern recognition and artificial intelligence platform
US9916538B2 (en) 2012-09-15 2018-03-13 Z Advanced Computing, Inc. Method and system for feature detection
US8978094B2 (en) 2012-02-03 2015-03-10 Apple Inc. Centralized operation management
WO2013155619A1 (en) * 2012-04-20 2013-10-24 Sam Pasupalak Conversational agent
US8973124B2 (en) 2012-04-30 2015-03-03 General Electric Company Systems and methods for secure operation of an industrial controller
US8984582B2 (en) 2012-08-14 2015-03-17 Confidela Ltd. System and method for secure synchronization of data across multiple computing devices
US10346542B2 (en) * 2012-08-31 2019-07-09 Verint Americas Inc. Human-to-human conversation analysis
EP2929461A2 (en) 2012-12-06 2015-10-14 Raytheon BBN Technologies Corp. Active error detection and resolution for linguistic translation
US9678949B2 (en) 2012-12-16 2017-06-13 Cloud 9 Llc Vital text analytics system for the enhancement of requirements engineering documents and other documents
US20140343984A1 (en) 2013-03-14 2014-11-20 University Of Southern California Spatial crowdsourcing with trustworthy query answering
US10395216B2 (en) 2013-03-15 2019-08-27 Dan Coffing Computer-based method and system of analyzing, editing and improving content
US9413891B2 (en) * 2014-01-08 2016-08-09 Callminer, Inc. Real-time conversational analytics facility
US9565175B1 (en) 2014-01-16 2017-02-07 Microstrategy Incorporated Sharing document information
US9389852B2 (en) 2014-02-13 2016-07-12 Infosys Limited Technique for plagiarism detection in program source code files based on design pattern
US20160180238A1 (en) 2014-12-23 2016-06-23 Invent.ly LLC Biasing effects on the contextualization of a proposition by like-minded subjects considered in a quantum representation
US9643722B1 (en) 2014-02-28 2017-05-09 Lucas J. Myslinski Drone device security system
US9972055B2 (en) 2014-02-28 2018-05-15 Lucas J. Myslinski Fact checking method and system utilizing social networking information
US10043029B2 (en) 2014-04-04 2018-08-07 Zettaset, Inc. Cloud storage encryption
US10298555B2 (en) 2014-04-04 2019-05-21 Zettaset, Inc. Securing files under the semi-trusted user threat model using per-file key encryption
WO2015183583A1 (en) 2014-05-28 2015-12-03 Open Garden, Inc. App distribution over the air
US9632998B2 (en) * 2014-06-19 2017-04-25 International Business Machines Corporation Claim polarity identification
US10134072B2 (en) 2014-06-26 2018-11-20 Ericsson Ab Management of an electronic content catalog based on bandwidth or connected display capabilities
WO2016014097A1 (en) 2014-07-22 2016-01-28 Hewlett-Packard Development Company, L.P. Ensuring data integrity of a retained file upon replication
US9978362B2 (en) * 2014-09-02 2018-05-22 Microsoft Technology Licensing, Llc Facet recommendations from sentiment-bearing content
BR112017003893A8 (en) * 2014-09-12 2017-12-26 Microsoft Corp DNN STUDENT APPRENTICE NETWORK VIA OUTPUT DISTRIBUTION
US20160196342A1 (en) 2015-01-06 2016-07-07 Inha-Industry Partnership Plagiarism Document Detection System Based on Synonym Dictionary and Automatic Reference Citation Mark Attaching System
US10185777B2 (en) 2015-04-01 2019-01-22 Microsoft Technology Licensing, Llc Merged and actionable history feed
WO2016167424A1 (en) 2015-04-16 2016-10-20 주식회사 플런티코리아 Answer recommendation device, and automatic sentence completion system and method
CN106209488B (en) 2015-04-28 2021-01-29 北京瀚思安信科技有限公司 Method and device for detecting website attack
US20170094364A1 (en) 2015-09-28 2017-03-30 Mobdub, Llc Social news aggregation and distribution
US10075439B1 (en) 2015-11-06 2018-09-11 Cisco Technology, Inc. Programmable format for securely configuring remote devices
US10007720B2 (en) * 2015-11-10 2018-06-26 Hipmunk, Inc. Automatic conversation analysis and participation
US10679298B2 (en) 2015-12-03 2020-06-09 Aon Singapore Centre For Innovation Strategy And Management Pte., Ltd. Dashboard interface, platform, and environment for automated negotiation, benchmarking, compliance, and auditing
EP3391587A4 (en) 2015-12-16 2019-06-12 Newvoicemedia US Inc. System and methods for tamper proof interaction recording and timestamping
US9849364B2 (en) 2016-02-02 2017-12-26 Bao Tran Smart device
US20170277993A1 (en) * 2016-03-22 2017-09-28 Next It Corporation Virtual assistant escalation
US20170289120A1 (en) 2016-04-04 2017-10-05 Mastercard International Incorporated Systems and methods for authenticating user for secure data access using multi-party authentication system
US11036716B2 (en) 2016-06-19 2021-06-15 Data World, Inc. Layered data generation and data remediation to facilitate formation of interrelated data in a system of networked collaborative datasets
US10606952B2 (en) 2016-06-24 2020-03-31 Elemental Cognition Llc Architecture and processes for computer learning and understanding
US10789310B2 (en) * 2016-06-30 2020-09-29 Oath Inc. Fact machine for user generated content
CN109074295B (en) 2016-07-29 2022-07-05 惠普发展公司,有限责任合伙企业 Data recovery with authenticity
JP7441650B2 (en) * 2016-09-16 2024-03-01 オラクル・インターナショナル・コーポレイション Internet cloud-hosted natural language interactive messaging system with entity-based communication
US9652113B1 (en) * 2016-10-06 2017-05-16 International Business Machines Corporation Managing multiple overlapped or missed meetings
US10754323B2 (en) 2016-12-20 2020-08-25 General Electric Company Methods and systems for implementing distributed ledger manufacturing history
US11580350B2 (en) * 2016-12-21 2023-02-14 Microsoft Technology Licensing, Llc Systems and methods for an emotionally intelligent chat bot
EP3340145A1 (en) 2016-12-22 2018-06-27 Mastercard International Incorporated Method of determining crowd dynamics
CN106611055A (en) 2016-12-27 2017-05-03 大连理工大学 Chinese hedge scope detection method based on stacked neural network
US10395047B2 (en) 2016-12-31 2019-08-27 Entefy Inc. System and method of applying multiple adaptive privacy control layers to single-layered media file types
US10438170B2 (en) 2017-01-05 2019-10-08 International Business Machines Corporation Blockchain for program code credit and programmer contribution in a collective
US10366168B2 (en) * 2017-01-12 2019-07-30 Microsoft Technology Licensing, Llc Systems and methods for a multiple topic chat bot
AU2018230763A1 (en) 2017-03-08 2019-10-31 Ip Oversight Corporation System and method for creating commodity asset-secured tokens from reserves
US10592612B2 (en) * 2017-04-07 2020-03-17 International Business Machines Corporation Selective topics guidance in in-person conversations
WO2018195364A1 (en) 2017-04-19 2018-10-25 Baton Systems, Inc. Time stamping systems and methods
US10224032B2 (en) * 2017-04-19 2019-03-05 International Business Machines Corporation Determining an impact of a proposed dialog act using model-based textual analysis
WO2019010250A1 (en) * 2017-07-05 2019-01-10 Interactions Llc Real-time privacy filter
WO2019040496A1 (en) 2017-08-22 2019-02-28 Observepoint, Inc. Identifying analytic element execution paths
US11449603B2 (en) 2017-08-31 2022-09-20 Proofpoint, Inc. Managing data exfiltration risk
US20190073914A1 (en) 2017-09-01 2019-03-07 International Business Machines Corporation Cognitive content laboratory
JP7013178B2 (en) 2017-09-08 2022-01-31 株式会社日立製作所 Data analysis system, data analysis method, and data analysis program
JP6604672B2 (en) 2017-10-31 2019-11-13 デルタ ピーディーエス カンパニー,リミテッド Folder-based file management device
US11809823B2 (en) * 2017-12-07 2023-11-07 International Business Machines Corporation Dynamic operating room scheduler using machine learning
US20190180255A1 (en) 2017-12-12 2019-06-13 Capital One Services, Llc Utilizing machine learning to generate recommendations for a transaction based on loyalty credits and stored-value cards
US11170092B1 (en) 2017-12-14 2021-11-09 United Services Automobile Association (Usaa) Document authentication certification with blockchain and distributed ledger techniques
US10635861B2 (en) * 2017-12-29 2020-04-28 Facebook, Inc. Analyzing language units for opinions
US10762225B2 (en) 2018-01-11 2020-09-01 Microsoft Technology Licensing, Llc Note and file sharing with a locked device
US10943022B2 (en) 2018-03-05 2021-03-09 Microsoft Technology Licensing, Llc System for automatic classification and protection unified to both cloud and on-premise environments
EP3769238A4 (en) 2018-03-19 2022-01-26 Coffing, Daniel L. Processing natural language arguments and propositions
US10546088B2 (en) 2018-04-03 2020-01-28 International Business Machines Corporation Document implementation tool for PCB refinement
US11023601B2 (en) 2018-04-20 2021-06-01 Rohde & Schwarz Gmbh & Co. Kg System and method for secure data handling
US10719345B2 (en) 2018-05-16 2020-07-21 International Business Machines Corporation Container image building
US10839104B2 (en) * 2018-06-08 2020-11-17 Microsoft Technology Licensing, Llc Obfuscating information related to personally identifiable information (PII)
US10929545B2 (en) 2018-07-31 2021-02-23 Bank Of America Corporation System for providing access to data stored in a distributed trust computing network
US20200042864A1 (en) 2018-08-02 2020-02-06 Veritone, Inc. Neural network orchestration
WO2020086155A1 (en) 2018-08-31 2020-04-30 Coffing Daniel L System and method for vocabulary alignment
US11301590B2 (en) 2018-09-05 2022-04-12 International Business Machines Corporation Unfalsifiable audit logs for a blockchain
EP3847643A4 (en) 2018-09-06 2022-04-20 Coffing, Daniel L. System for providing dialogue guidance
WO2020056409A1 (en) 2018-09-14 2020-03-19 Coffing Daniel L Fact management system
US11170108B2 (en) 2018-11-19 2021-11-09 International Business Machines Corporation Blockchain technique for immutable source control
US10936741B2 (en) 2018-11-19 2021-03-02 Bank Of America Corporation Management of access to data stored on a distributed ledger
US11170761B2 (en) * 2018-12-04 2021-11-09 Sorenson Ip Holdings, Llc Training of speech recognition systems
US20200192872A1 (en) 2018-12-13 2020-06-18 Zoox, Inc. Device message framework
US10826705B2 (en) 2018-12-13 2020-11-03 International Business Machines Corporation Compact state database system
US11025430B2 (en) 2018-12-20 2021-06-01 International Business Machines Corporation File provenance database system

Also Published As

Publication number Publication date
WO2020051500A1 (en) 2020-03-12
EP3847643A1 (en) 2021-07-14
US20200081987A1 (en) 2020-03-12
EP3847643A4 (en) 2022-04-20
US11429794B2 (en) 2022-08-30

Similar Documents

Publication Publication Date Title
US20230169279A1 (en) System for providing dialogue guidance
US11551804B2 (en) Assisting psychological cure in automated chatting
JP6776462B2 (en) Automatic assistant with meeting ability
US10268990B2 (en) Electronic meeting intelligence
US10791078B2 (en) Assistance during audio and video calls
US10257241B2 (en) Multimodal stream processing-based cognitive collaboration system
US11006077B1 (en) Systems and methods for dynamically concealing sensitive information
JP2018170009A (en) Electronic conference system
US20190379742A1 (en) Session-based information exchange
US10528674B2 (en) Cognitive agent for capturing referential information during conversation muting
CN110493019B (en) Automatic generation method, device, equipment and storage medium of conference summary
Traum et al. Incremental dialogue understanding and feedback for multiparty, multimodal conversation
US20230080660A1 (en) Systems and method for visual-audio processing for real-time feedback
US11546392B2 (en) In-conference question summary system
US11086907B2 (en) Generating stories from segments classified with real-time feedback data
Ijuin et al. Difference in eye gaze for floor apportionment in native-and second-language conversations
US11776546B1 (en) Intelligent agent for interactive service environments
Nakano et al. Implementation and evaluation of a multimodal addressee identification mechanism for multiparty conversation systems
US20220114200A1 (en) System and method for developing a common inquiry response
Palinko et al. How should a robot interrupt a conversation between multiple humans
WO2022197938A1 (en) Automated customization media content based on insights about a consumer of the media content
US11277362B2 (en) Content post delay system and method thereof
JP2021081983A (en) Information processor
Samrose Automated Collaboration Coach for Video-conferencing based Group Discussions
CN113468297B (en) Dialogue data processing method and device, electronic equipment and storage equipment

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION