US20170069340A1 - Emotion, mood and personality inference in real-time environments - Google Patents

Emotion, mood and personality inference in real-time environments

Info

Publication number
US20170069340A1
Authority
US
United States
Prior art keywords
nodes
mood
emotion
personality
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/845,528
Other versions
US10025775B2
Inventor
Scott P. Nowson
Julien J. Perez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Conduent Business Services LLC
Original Assignee
Conduent Business Services LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Assigned to XEROX CORPORATION (assignment of assignors interest). Assignors: NOWSON, SCOTT P., PEREZ, JULIEN J.
Priority to US14/845,528
Application filed by Conduent Business Services LLC filed Critical Conduent Business Services LLC
Assigned to CONDUENT BUSINESS SERVICES, LLC (assignment of assignors interest). Assignors: XEROX CORPORATION
Publication of US20170069340A1
Publication of US10025775B2
Application granted
Assigned to JPMORGAN CHASE BANK, N.A. (security agreement). Assignors: CONDUENT BUSINESS SERVICES, LLC
Release by secured party to CONDUENT TRANSPORT SOLUTIONS, INC., CONDUENT HEALTH ASSESSMENTS, LLC, CONDUENT BUSINESS SERVICES, LLC, CONDUENT CASUALTY CLAIMS SOLUTIONS, LLC, CONDUENT STATE & LOCAL SOLUTIONS, INC., ADVECTIS, INC., CONDUENT COMMERCIAL SOLUTIONS, LLC, and CONDUENT BUSINESS SOLUTIONS, LLC. Assignors: JPMORGAN CHASE BANK, N.A.
Assigned to BANK OF AMERICA, N.A. (security interest). Assignors: CONDUENT BUSINESS SERVICES, LLC
Assigned to U.S. BANK, NATIONAL ASSOCIATION (security interest). Assignors: CONDUENT BUSINESS SERVICES, LLC
Legal status: Active
Expiration: Adjusted

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G06F17/2765
    • G06F17/2785
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/45 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of analysis window
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding


Abstract

Methods and systems monitor communications between users and analyze the communications to simultaneously determine, for a current time period, mental state variables of one of the users. Such mental state variables include the emotion of the user, the mood of the user, and the personality of the user. Additionally, such methods aggregate the emotion, the mood, and the personality using a hierarchical probabilistic graphical model that determines the highest probability path through a directed probabilistic graph to infer the mental state of the user. The directed probabilistic graph maintains a single state for the personality for the time period, and maintains multiple states for the emotion and the mood for the time period. These methods and systems output the mental state of the user.

Description

    BACKGROUND
  • Systems and methods herein generally relate to using specialized machines to monitor communications between users, and processes that output and constantly revise the mental state of one or more of the users.
  • Customer modeling (e.g., understanding who a customer is) is fundamental to any notion of personalization and is a particular issue for Virtual Agent processes. For successful communication, a useful component of the customer to model is their mental state: their personality, mood, and emotions.
  • SUMMARY
  • Exemplary methods herein automatically monitor text and/or speech communications between users using a specialized language processor, and automatically analyze the communications using the specialized language processor to simultaneously determine, for a current time period, mental state variables of a user. These mental state variables can include, for example, the emotion of the user, the mood of the user, and the personality of the user. The method then automatically aggregates the emotion, mood, and personality using a hierarchical probabilistic graphical model to determine the highest probability path through a directed probabilistic graph to infer the mental state of the user. The method then outputs the mental state of the user from the specialized language processor, by displaying the emotion, mood, and personality on the graphic user interface of the processor or by outputting the mental state to a different process.
  • The directed probabilistic graph maintains a single state for personality for the time period, and maintains multiple states for the emotion and the mood for the time period. Therefore, this directed probabilistic graph has a single personality node, multiple mood nodes, multiple emotion nodes, and multiple evidence nodes. The directed probabilistic graph has edges connecting the personality node, the mood nodes, the emotion nodes, and the evidence nodes; and the edges themselves have probability values. A path through the directed probabilistic graph is a series of adjacent nodes, and the probability of the path is formed from an aggregation of the probabilities of the edges between those adjacent nodes. The highest probability path has an aggregation of edge probabilities that is higher than that of all other possible paths through the directed probabilistic graph.
  • Each of the mood nodes, the emotion nodes, and the evidence nodes is for a different time portion of the time period. The evidence nodes can include different dialogue variables used by the personality node, the mood nodes, and the emotion nodes. The emotion nodes can be, for example, happy-for, satisfaction, anger, or distress states; the mood nodes can be, for example, positive, neutral, or negative; and the personality nodes can be, for example, neuroticism, extraversion, openness to experience, agreeableness, or conscientiousness.
  • Exemplary systems herein include a specialized language processor and any form of interface (e.g., a graphic user interface) connected to the specialized language processor. The specialized language processor automatically monitors text communications between users, and the specialized language processor automatically analyzes the text communications to simultaneously determine, for a current time period, the mental state variables of a user. These mental state variables include the emotion, personality and mood of the user. The specialized language processor automatically aggregates the emotion, mood, and personality using a hierarchical probabilistic graphical model that determines the highest probability path through the graph to infer the mental state of the user. The graphic user interface then outputs the mental state of the user from the specialized language processor, for example by displaying the emotion, mood, and personality status or by outputting the mental state to a different process.
  • The directed probabilistic graph maintains a single state for personality for the time period, and maintains multiple states for the emotion and the mood during the same time period. Therefore, the directed probabilistic graph described above includes a single personality node, multiple mood nodes, multiple emotion nodes, and multiple evidence nodes. The edges of the directed probabilistic graph connect the personality node, the mood nodes, the emotion nodes, and the evidence nodes and the edges contain probability values. A path through the directed probabilistic graph is made of a series of adjacent nodes, and the probability of the path is determined by an aggregation of the probabilities of the edges of the series of adjacent nodes. Thus, the highest probability path has an aggregation of the probabilities of the edges that is higher than all other possible paths through the graph.
  • Furthermore, each of the mood nodes, emotion nodes, and evidence nodes is for a different time portion of the time period, and the evidence nodes include different dialogue variables used by the personality node, mood nodes, and emotion nodes. The emotion variables include, for example, happy-for, satisfaction, anger, or distress; mood variables include, for example, positive, neutral, or negative; and the personality variables include, for example, neuroticism, extraversion, openness to experience, agreeableness, or conscientiousness. These and other features are described in, or are apparent from, the following detailed description.
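  • Purely as a non-limiting illustration of the graph structure and path scoring summarized above, the following sketch represents the single personality node, the per-turn mood and emotion nodes, and the probability-weighted edges in software, and scores candidate paths by aggregating edge probabilities. The node labels, the edge-probability lookup, and the exhaustive search are assumptions made for clarity, not the disclosed implementation.

        # Hypothetical sketch of the directed probabilistic graph described
        # above: one personality node, one mood node and one emotion node
        # per turn, and probability-weighted edges. Labels and the edge_prob
        # table are illustrative assumptions.
        import itertools

        PERSONALITIES = ["neuroticism", "extraversion", "openness",
                         "agreeableness", "conscientiousness"]
        MOODS = ["positive", "neutral", "negative"]
        EMOTIONS = ["happy-for", "satisfaction", "anger", "distress"]

        def path_probability(edge_prob, personality, moods, emotions):
            # Aggregate edge probabilities along one path:
            # personality -> mood_1, mood_(t-1) -> mood_t, mood_t -> emotion_t.
            p = edge_prob[("personality", personality, moods[0])]
            for t in range(1, len(moods)):
                p *= edge_prob[("mood", moods[t - 1], moods[t])]
            for mood, emotion in zip(moods, emotions):
                p *= edge_prob[("emotion", mood, emotion)]
            return p

        def highest_probability_path(edge_prob, turns):
            # Score every candidate path; the inferred mental state is the
            # path whose aggregated edge probability exceeds all others.
            best, best_p = None, -1.0
            for pers in PERSONALITIES:
                for moods in itertools.product(MOODS, repeat=turns):
                    for emos in itertools.product(EMOTIONS, repeat=turns):
                        p = path_probability(edge_prob, pers, moods, emos)
                        if p > best_p:
                            best, best_p = (pers, moods, emos), p
            return best, best_p

  • Here, edge_prob would be a dictionary mapping (edge type, source state, target state) triples to probabilities. Exhaustive enumeration grows exponentially with the number of turns, so it is workable only for short dialogues; the approximate inference described in the detailed description scales better.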
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various exemplary systems and methods are described in detail below, with reference to the attached drawing figures, in which:
  • FIG. 1 is a table diagram illustrating methods herein;
  • FIG. 2 is a hierarchical probabilistic graphical model of various methods herein;
  • FIG. 3 is a schematic diagram illustrating output produced by systems and methods herein;
  • FIG. 4 is a schematic diagram illustrating output produced by systems and methods herein;
  • FIG. 5 is a schematic diagram illustrating output produced by systems and methods herein;
  • FIG. 6 is a schematic diagram illustrating a system herein;
  • FIG. 7 is a flowchart diagram illustrating methods herein;
  • FIG. 8 is a schematic diagram illustrating a system herein; and
  • FIG. 9 is a schematic diagram illustrating devices herein.
  • DETAILED DESCRIPTION
  • As mentioned above, one advance of customer modeling is to understand who the individual customer is, and the systems and methods herein provide a probabilistic approach to tracking the mental state of the customer at each of three levels (e.g., personality, mood, and emotion) during the sequential set of turns that compose a conversation. This can be done on a number of levels, from external attributes (e.g., the products/services that they own and use) through personal demographics (e.g., location, age, gender) to internal mental states and beliefs (e.g., personality, sentiment).
  • Just as humans form impressions of one another, the systems and methods herein aim to automatically determine the personality of a customer. Personality traits are generally considered temporally stable, and thus the modeling ability of this disclosure is enriched by the acquisition of more data over time.
  • Further, an individual's mood and emotions will cloud the ability to determine personality. Indeed, implicit personality theory considers that there are many factors that affect the impressions one forms of people, including mood. The systems and methods described herein provide an approach that enables a user to statistically infer mental states at several levels of temporal stability. For purposes herein the “mental state” includes three distinct, yet connected levels: personality, mood and emotion. More specifically, the systems and methods herein infer the three levels by hierarchically connecting models together in a coherent probabilistic graphical model (PGM). The systems and methods herein provide a formal PGM that infers an individual's mental states (the latent variables) at the personality, mood and emotion levels from evidence (observed variables).
  • The systems and methods apply this PGM in a temporally dynamic situation: namely, conversational dialogue data. In practice, this data could be drawn from a direct one-on-one dialogue (for example, a web chat) or an asynchronous series of communications via social media (e.g., a forum thread). The important considerations with the data are that there is at least one conversational partner who provides external utterances to the individual, which could affect the individual's mental state, and that the communication is relatively time-bound, such that it is realistic to infer a connection between short-term emotional states.
  • Firstly, the systems and methods herein present a dynamic and hierarchical mental state model. One broad concept herein is to aggregate, in a hierarchical probabilistic framework, several approaches to mental state modeling, specifying the necessary conditional dependencies between them. In the second part, the systems and methods address the inference procedure associated with the model. Thus, as shown in the table 100 in FIG. 1, when considering personality, mood, and emotion, various observations can be made. A user's personality is typically considered as a set of characteristics possessed by a person that uniquely influences their behavior, moderated by context. While many models can be used to determine personality with systems and methods herein, one exemplary model is the Five Factor Model (FFM) of neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. Further, as shown in FIG. 1, for purposes herein, personality is considered to be stable with respect to temporal stability.
  • As also shown in FIG. 1, a user's mood is a less specific and less intense subjective state of mind than emotions, though typically more prolonged. Various models can calculate mood with the systems and methods herein; one uses three states: positive/good, neutral, and negative/bad. Further, as shown in FIG. 1, for purposes herein, the mood is considered to be short-to-midterm with respect to temporal stability.
  • FIG. 1 also shows that a user's emotion is a subjective state of mind experienced by an individual, most often triggered by a specific event, which expresses itself in many psycho-physiological ways. While many models can be used to determine emotion with systems and methods herein, one exemplary model is the OCC model (Ortony, Clore, and Collins), which defines 22 categories including happy-for, satisfaction, anger, and distress. These can be mapped to six high-order, universally recognized categories.
  • FIG. 2 illustrates the Dynamic and Hierarchical Mental State Model (DHMSM) 102 that the systems and methods provide in order to infer hidden mental state variables from dialogue data with T turns of utterances exchanged. In the model, dialogue variables are defined and referred to as evidence. In this directed probabilistic graphical model, the observed variables are shaded and the hidden variables that are inferred are left blank.
  • As shown in FIG. 2, the permanent aspect of the personality of a considered user u is defined by a random variable $\lambda^u$ drawn from, for example, a Dirichlet distribution of parameter $\theta_\lambda \in \mathbb{R}^5$. The dimensionality of size 5 of this variable, in this instance, aims at modeling the so-called Five Factor Model of personality (neuroticism, extraversion, openness, agreeableness, and conscientiousness) referenced in FIG. 1; however, those skilled in the art would understand that different personality models could have different dimensionalities. For a given user, though the degree of personality can vary with context, this variable is considered stable through the time of the interactions of a given dialog.
  • Further, in FIG. 2, the mood $\beta^u$ is modeled as a temporal type of random variable representing the three exemplary cardinal moods (good, neutral, and bad), drawn from a multinomial distribution of parameter $\theta_\beta \in \mathbb{R}^3$. This example uses a Markovian dependency between the overall personality variable $\lambda^u$, a mood state at time t, $\beta^u_t$, and the previous mood state at time t−1, $\beta^u_{t-1}$; although those skilled in the art would understand that other known dependencies could be used with systems and methods herein. Thus, FIG. 2 illustrates that mood change is a phenomenon conditioned by the past state of the variable but also by the personality type of the user. For example, high scorers of neuroticism are more prone to negative moods and dramatic mood changes than low scorers.
  • Then, as further shown in FIG. 2, the instantaneous emotion state $\alpha^u$ is a random variable drawn from a multinomial distribution of parameter $\theta_\alpha \in \mathbb{R}^6$ that aims at representing six exemplary high-level emotional states: hope, fear, relief, satisfaction, joy, and distress (while those skilled in the art would understand that other emotional states could be used by systems and methods herein). Note that this could be extended to the full OCC model by using the parameter distribution $\theta_\alpha \in \mathbb{R}^{22}$.
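  • As a hedged illustration of this generative story, the sketch below draws the personality vector from a Dirichlet distribution, a mood chain conditioned on both the personality and the previous mood, and a per-turn emotion conditioned on the current mood. The conditional tables, including the bias of high neuroticism toward negative moods noted above, are invented placeholders rather than parameters from this disclosure.

        # Illustrative generative draws for one dialogue of T turns. The
        # conditional tables are placeholder assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        T = 5

        # Personality: a 5-dimensional Dirichlet draw (Five Factor Model).
        theta_lambda = np.ones(5)                  # uniform Dirichlet parameter
        personality = rng.dirichlet(theta_lambda)  # lambda^u

        # Mood: 3 cardinal states (good, neutral, bad), conditioned on the
        # previous mood and on the personality vector.
        def mood_distribution(prev_mood, personality):
            base = np.full(3, 1.0 / 3.0)
            if prev_mood is not None:
                base[prev_mood] += 0.5             # sticky moods (assumption)
            base[2] += personality[0]              # neuroticism -> negative mood
            return base / base.sum()

        # Emotion: 6 high-level states conditioned on the current mood; a
        # full OCC variant would use 22 columns instead of 6.
        emotion_given_mood = rng.dirichlet(np.ones(6), size=3)

        moods, emotions, prev = [], [], None
        for t in range(T):
            mood = rng.choice(3, p=mood_distribution(prev, personality))
            emotion = rng.choice(6, p=emotion_given_mood[mood])
            moods.append(int(mood))
            emotions.append(int(emotion))
            prev = mood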
  • Additionally, in FIG. 2, $e^u$ and $e^m$ are, respectively, the evidence produced by the user u and the dialogue partner m. The evidence is something that is written by a user, something that is said by a user, something in the voice signal of the user, etc. In this model, the partner could be either a human interlocutor or a system producing automated responses. These evidential variables can be decomposed into linguistic and statistical features including, but not limited to, bag-of-words n-grams, part-of-speech tags and parse-tree features, dialogue acts associated with each utterance, message length, time of response, frequency of multiple utterances in the same turn, number of messages of a given turn, etc.
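  • For concreteness, the sketch below turns one utterance into a few of the evidential features listed above: bag-of-words n-grams plus simple statistical features. The whitespace tokenizer and feature names are assumptions; part-of-speech tags, parse-tree features, and dialogue acts would come from external analyzers and are omitted.

        # Hedged sketch of evidence feature extraction for one utterance.
        from collections import Counter

        def utterance_features(text, response_time_s, messages_in_turn):
            tokens = text.lower().split()
            features = Counter()
            for n in (1, 2):                       # unigrams and bigrams
                for i in range(len(tokens) - n + 1):
                    features["ngram=" + " ".join(tokens[i:i + n])] += 1
            features["length"] = len(tokens)
            features["response_time_s"] = response_time_s
            features["messages_in_turn"] = messages_in_turn
            return features

        print(utterance_features("The internet doesn't work on my phone", 4.2, 1))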
  • In this example, the partner m holds the initiative of the dialogue, such that each observed utterance produced by the user u is statistically conditioned by both the current instantaneous emotional state and the last utterance of the dialogue partner, $e^m_t$. See Equation (1) below:

  • $$p(\lambda^u,\beta^u_{1:T},\alpha^u_{1:T},e^u_{1:T},e^m_{1:T}) = p(\lambda^u)\,p(\beta^u_1 \mid \lambda^u)\prod_{t=2}^{T} p(\beta^u_t \mid \lambda^u,\beta^u_{t-1})\prod_{t=1}^{T} p(\alpha^u_t \mid \beta^u_t)\prod_{t=1}^{T} p(e^u_t \mid e^m_t,\alpha^u_t)\,p(e^m_t)$$
  • Equation (1) (above) defines the closed-form expression of the joint probability of the graphical model of the systems and methods herein. During the inference phase (see below), the parameters of the mental state model $\{\lambda^u,\beta^u_{1:T},\alpha^u_{1:T}\}$ are inferred with respect to the observed variables $\{e^u_{1:T},e^m_{1:T}\}$ (the evidence utterances and derived features) as expressed in Equation (2) (see below), which defines the corresponding maximum a posteriori query:
  • $$\operatorname*{argmax}_{\lambda^u,\beta^u_{1:T},\alpha^u_{1:T}} p(\lambda^u,\beta^u_{1:T},\alpha^u_{1:T} \mid e^u_{1:T},e^m_{1:T}) = \frac{p(e^u_{1:T},e^m_{1:T} \mid \lambda^u,\beta^u_{1:T},\alpha^u_{1:T})\,p(\lambda^u,\beta^u_{1:T},\alpha^u_{1:T})}{p(e^u_{1:T},e^m_{1:T})} \propto p(e^u_{1:T},e^m_{1:T} \mid \lambda^u,\beta^u_{1:T},\alpha^u_{1:T})\,p(\lambda^u,\beta^u_{1:T},\alpha^u_{1:T})$$
  • According to Equation (2), two situations can be considered. The first starts with a uniform (i.e., non-informative) prior over the marginal distribution of the parameters $p(\lambda^u,\beta^u_{1:T},\alpha^u_{1:T})$. Alternatively, it can be assumed that a given prior distribution of these variables, for a specific user u, has already been inferred in a previous dialogue session analysis or by any other means. Concerning the second part of the equation, the likelihood of the evidence with respect to the model's parameters, $p(e^u_{1:T},e^m_{1:T} \mid \lambda^u,\beta^u_{1:T},\alpha^u_{1:T})$, will be maximized by, for example, Markov Chain Monte Carlo sampling.
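  • A minimal sketch of Equations (1) and (2) in code follows, working in log space to avoid numerical underflow. The conditional distributions are passed in as lookup functions whose parameterization is deliberately left open; the point is only that the maximum a posteriori query of Equation (2) can score candidate latent configurations using the joint of Equation (1), since the evidence marginal is constant in the latents.

        # Sketch of Equation (1) as a log-joint and Equation (2) as an
        # unnormalized log-posterior. The dists entries are assumed,
        # model-specific probability functions.
        import math

        def log_joint(lam, beta, alpha, e_u, e_m, dists):
            # log p(lambda, beta_{1:T}, alpha_{1:T}, e^u_{1:T}, e^m_{1:T})
            T = len(beta)
            lp = math.log(dists["p_lambda"](lam))
            lp += math.log(dists["p_beta1"](beta[0], lam))
            for t in range(1, T):
                lp += math.log(dists["p_beta"](beta[t], lam, beta[t - 1]))
            for t in range(T):
                lp += math.log(dists["p_alpha"](alpha[t], beta[t]))
                lp += math.log(dists["p_e_u"](e_u[t], e_m[t], alpha[t]))
                lp += math.log(dists["p_e_m"](e_m[t]))
            return lp

        def log_unnormalized_posterior(lam, beta, alpha, e_u, e_m, dists):
            # Equation (2): the posterior over the latents is proportional
            # to the joint, so the MAP query can drop p(e^u, e^m).
            return log_joint(lam, beta, alpha, e_u, e_m, dists)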
  • Thus, with the systems and methods herein, the task of inferring the parameters of the model from data is also called learning. In this context, one can assume the existence of an annotated corpus of dialogues where each turn-level variable of the hierarchical model, $\alpha^u$ and $\beta^u$, is informed at each turn; concerning $\lambda^u$, the variable is informed at the level of each dialogue. In fact, the computational challenge in latent variable modeling is to compute the posterior distribution of the latent variables conditioned on available observations. Except in rudimentary models, exact posterior inference is known to be intractable, and practical data analysis relies on efficient approximate alternatives.
  • As noted above, in one example the systems and methods can apply Markov Chain Monte Carlo (MCMC) as a general technique for parameter inference in graphical models. MCMC sampling is the most widely used method of approximate inference. The idea behind MCMC is to approximate a distribution by forming an empirical estimate from samples. One can construct a Markov chain with the appropriate stationary distribution, and collect the samples from a chain that has converged. One exemplary MCMC process used with the systems and methods herein is the Gibbs sampler, in which the Markov chain is defined by iteratively sampling, in a sweep manner, each variable conditional on previously sampled values of the other variables. This is a form of the Metropolis-Hastings process, and thus yields a chain with the desired stationary distribution. In the modeling mentioned in the previous paragraph, every variable is sampled according to its corresponding conditional distribution.
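  • The following is a schematic Gibbs sweep for this model, assuming helper functions that return each latent variable's full conditional given the current values of the variables in its Markov blanket; those conditionals are model-specific and hypothetical here, and the state encodings match the exemplary dimensionalities above (5 personality factors, 3 moods, 6 emotions).

        # Schematic Gibbs sampler; the conditionals dict is an assumed
        # interface, not part of the disclosure:
        #   conditionals["lambda"](beta)              -> 5-vector over personality
        #   conditionals["beta"](t, lam, beta, alpha) -> 3-vector over moods
        #   conditionals["alpha"](t, beta, e_u, e_m)  -> 6-vector over emotions
        import numpy as np

        def gibbs_sweeps(e_u, e_m, conditionals, n_iter=1000, seed=0):
            rng = np.random.default_rng(seed)
            T = len(e_u)
            lam = rng.integers(5)             # crude random initialization
            beta = rng.integers(3, size=T)
            alpha = rng.integers(6, size=T)
            samples = []
            for _ in range(n_iter):
                lam = rng.choice(5, p=conditionals["lambda"](beta))
                for t in range(T):            # sweep the temporal chain
                    beta[t] = rng.choice(3, p=conditionals["beta"](t, lam, beta, alpha))
                    alpha[t] = rng.choice(6, p=conditionals["alpha"](t, beta, e_u, e_m))
                samples.append((int(lam), beta.copy(), alpha.copy()))
            return samples                    # empirical posterior estimate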
  • Finally, the proposed generative model can also be used in a prior-knowledge-equipped setting. Indeed, assuming a customer can be identified throughout a series of dialogues, it is possible to set an informative prior on the $\lambda^u$ parameter of the model.
  • Humans are very good at forming impressions of one another's personality and mood. However, in a text-based chat dialogue, there is minimal extra-linguistic information (e.g., voice, facial expressions, body language) upon which one can form an impression. One embodiment of the systems and methods is as part of an interface to support a human agent in understanding who their customer is (in this example case, in terms of personality and mood).
  • This is shown in an example presented in FIGS. 3-5. In FIG. 3, an example chat interface (item 116) is shown between users, in this case an agent and a customer. The chat interface also includes additional boxes depicting the CRM (customer resource management) data as shown in item 118 and the customer information shown in item 120.
  • More specifically, in the dialogue shown in item 104, the agent states to the customer, "Hello and welcome to our customer service line. What can I help you with today?" These statements can be manually generated by a human agent or automatically generated by a virtual (computer-generated) agent. The customer responds, "The internet doesn't work on my phone." From the vagueness of this statement, the related methods determine that the customer has an expertise level of "novice," as shown in item 110.
  • With the emotional state tracking ability of this system and method, the assessment of the customer will change over time. For example, in the interaction shown in FIG. 3, there is not enough information in the opening turn of the dialogue (as seen in the dialogue text in item 104) to make any determination of mood or personality (nor is there any prior knowledge of the personality of this customer). However, the customer information section in item 120 is able to show, using the dialogue in item 104, that in this example the customer is a novice with regard to the technology, as shown by item 110.
  • However, as the dialogue progresses, the systems and methods track the mental state of the customer and update the reporting. This can be seen in FIG. 4, where the additional dialogue between the two users (item 106) has enabled the systems and methods to determine the mood and personality of the customer, as seen in items 112 and 114. More specifically, the virtual or real agent states, "I'm very sorry to hear that the internet doesn't work, let me try to help you with that"; and the customer responds with a very negative statement: "The wifi works as expected but the 4G service I'm paying for does not. The config is as I was told it should be. There is no error in the proxy or IP address so I need you to fix this." Because of the very negative statement of the customer, the systems and methods herein automatically determine that the customer's mood is "negative" and display the same in the customer information section 120, shown in FIG. 4 as item 112. In addition, the systems and methods herein analyze the customer's response 106 and automatically determine that the customer's personality is "direct, immediate," as shown by item 114 in FIG. 4.
  • As shown in FIG. 5, the third turn in the dialogue 108 includes the virtual or real agent stating, "Okay, give me a second. Try this—Settings>Data>Network . . . " and the customer happily responds, "OK, that seems to have worked. Thank you so much for the help." Because of the positive statement of the customer in 108, the systems and methods herein automatically determine that the customer's mood is "positive" and display the same in the customer information section 120, shown in FIG. 5 as item 122.
  • Thus, as shown in the example in FIGS. 3-5, the models used by the systems and methods herein are able to determine that, though the personality 114 is stable, the mood of the customer changes from 112 to 122 with the resolution of the dialogue, as can be seen in FIGS. 4 and 5. The systems and methods again use the additional dialogue between the two users, shown in item 108, to update the mental state variables of the customer. In addition to having solved the customer's issue, the inferred change in their mood from negative to positive, as shown in items 112 and 122, can be seen as a secondary successful outcome of the interaction.
  • Customer modeling is a component of various automation projects. As shown in FIG. 6, using the systems and methods described herein, the customer models act as an observer of the dialogue between the human customer 140 and the agent (VA) 142. Note that agent 142 is not necessarily a virtual agent and could be a human agent who interacts with the system for other reasons. In one example, the virtual agent could be the combination of 130, 132, and 138. Also, element 134 could be included in the virtual agent in some situations. The systems and methods herein replicate human impression formation for the agent 142, allowing the agent 142 to adapt to an increasing understanding of the personality and mood of the customer 140. By knowing the mental state parameters of the customer 140, the systems and methods herein bias the selection of the dialogue act in the dialogue manager 132. Similarly, the systems and methods herein influence the surface realization (e.g., the choice of words) in the natural language generation component 138 of the agent 142.
  • For example, in FIG. 6, input from the user 140 generates understanding output from the understanding model in item 130, using semantic parser and dialog act recognizer elements. The understanding model 130 feeds its output into the dialogue manager 132, which can include exemplary modules Otto, Optimus, Otto v2, etc., and which provides output to the generation model 138 (e.g., using a SimpleNLG model and a generation rules identifier). A knowledge base is also used, as shown in item 134, which includes a semantic enrichment engine and predictive queries. A customer model 136 is also used, which includes a skills identifier. The agent 142 in FIG. 6 can also use various tools such as an apprentice module, annotation server, and dialog explorer.
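  • As a rough, hypothetical sketch of this observer arrangement in Python (the class names, cue lists, and heuristics below are assumptions of this illustration, not the actual Otto/Optimus or SimpleNLG modules of FIG. 6):

    NEGATIVE_WORDS = {"not", "doesn't", "error", "fix", "broken"}
    POSITIVE_WORDS = {"thanks", "thank", "worked", "great"}

    class CustomerModel:
        """Observer that tracks the customer's inferred mental state (cf. item 136)."""
        def __init__(self):
            self.mood = "unknown"
            self.personality = "unknown"

        def observe(self, utterance):
            tokens = set(utterance.lower().split())
            if tokens & POSITIVE_WORDS:
                self.mood = "positive"
            elif tokens & NEGATIVE_WORDS:
                self.mood = "negative"

    class DialogueManager:
        """Selects the next dialogue act, biased by the customer model (cf. item 132)."""
        def next_act(self, model):
            return "apologize_and_help" if model.mood == "negative" else "inform"

    class Generator:
        """Surface realization: word choice adapts to the selected act (cf. item 138)."""
        def realize(self, act):
            if act == "apologize_and_help":
                return "I'm very sorry to hear that; let me help you right away."
            return "Here is the information you asked for."

    model, dm, gen = CustomerModel(), DialogueManager(), Generator()
    model.observe("The 4G service I'm paying for does not work. Fix this.")
    print(gen.realize(dm.next_act(model)))  # apologetic, direct reply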
  • Thus, the systems and methods herein provide the ability to understand customers at a psychological level, and can be utilized in a number of ways on various social media platforms. For example, the systems and methods herein can be used in outward engagement and can help determine which customers are most likely to be open to receiving a targeted marketing campaign. The systems and methods herein also can be used to determine when a targeted marketing campaign would be appropriate based on the mood of the customer. At the same time, the systems and methods are able to personalize the campaign so that it resonates best with different types of customers. The systems and methods herein also can be used to provide personalized product/service recommendations.
  • FIG. 7 is a flowchart illustrating exemplary methods herein. In item 150, these methods automatically monitor text and/or speech communications between users (e.g., using a specialized language processor). Thus, the communications between the users that are monitored include, but are not limited to, evidence that can be extracted from the actual text of a dialogue, a speech signal, or features of a speech signal of a given dialogue, etc. Such methods then automatically analyze the communications using the specialized language processor to simultaneously determine, for a current time period, mental state variables of the user, as shown in item 152. These mental state variables can include the emotion, mood, and personality of the user.
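  • A toy illustration of items 150-152 follows; the cue lexicons and scoring below are assumptions of this sketch, since the actual analysis is performed by the specialized language processor.

    from collections import Counter

    EMOTION_CUES = {
        "anger": {"fix", "error", "unacceptable"},
        "distress": {"doesn't", "problem", "help"},
        "happy-for-satisfaction": {"thanks", "thank", "worked"},
    }
    MOOD_CUES = {
        "negative": {"not", "doesn't", "error"},
        "positive": {"thanks", "worked", "great"},
    }

    def analyze_turn(utterance):
        """Simultaneously score the mental state variables for one dialogue turn."""
        tokens = Counter(utterance.lower().replace(".", "").split())
        def score(cues):
            # Count how many cue words appear in this turn, per label.
            return {label: sum(tokens[w] for w in words) for label, words in cues.items()}
        return {
            "emotion": score(EMOTION_CUES),
            "mood": score(MOOD_CUES),
            # Personality evidence accumulates across turns; a "direct, immediate"
            # style might be cued by imperatives such as "fix" and "need".
            "personality_evidence": {"direct": tokens["fix"] + tokens["need"]},
        }

    print(analyze_turn("There is no error in the proxy so I need you to fix this."))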
  • These methods then automatically aggregate the emotion, mood, and personality using a hierarchical probabilistic graphical model (e.g., a directed probabilistic graph (DPG)), as shown in item 154. When aggregating the emotion, mood, and personality in item 154, these methods can, for example, maintain a single state for personality for the time period, and can maintain multiple states for the emotion and the mood for the time period. Thus, if the personality is known accurately, it is just one value across the interaction. However, if there is no prior knowledge of personality and a determination is made at one point in the dialogue, the methods herein may revise this value at a later stage. This does not, however, mean multiple personality nodes; it means the first value for the node was incorrect, so it was overwritten.
  • For example, the directed probabilistic graph can include a single personality node, multiple mood nodes, multiple emotion nodes, and multiple evidence nodes. Each of the mood nodes, the emotion nodes, and the evidence nodes can be for a different time portion of the time period. The evidence nodes can include different dialogue variables used by the personality node, the mood nodes, and the emotion nodes. The emotion nodes can represent, for example, happy-for-satisfaction, anger, or distress states; the mood nodes can be, for example, positive, neutral, or negative; and the personality node can be, for example, neuroticism, extraversion, openness to experience, agreeableness, or conscientiousness.
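  • By way of a hedged illustration, such a graph might be assembled as follows; the edge probabilities below are placeholders for this sketch, not learned values from the model described herein.

    def build_dpg(num_turns):
        """Toy DPG: one personality node, plus mood, emotion, and evidence
        nodes per time portion. Returns (nodes, edges)."""
        nodes = ["P"]                          # single personality node
        edges = {}                             # (source, target) -> probability
        for t in range(num_turns):
            m, e, v = "M%d" % t, "E%d" % t, "V%d" % t
            nodes += [m, e, v]
            edges[("P", m)] = 0.8              # personality conditions mood
            edges[(m, e)] = 0.7                # mood conditions emotion
            edges[(v, m)] = 0.9                # evidence informs mood
            edges[(v, e)] = 0.9                # evidence informs emotion
            if t > 0:
                edges[("M%d" % (t - 1), m)] = 0.6   # mood persists across turns
        return nodes, edges

    nodes, edges = build_dpg(num_turns=3)
    print(len(nodes), "nodes,", len(edges), "edges")   # 10 nodes, 14 edges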
  • The directed probabilistic graph has edges connecting the personality node, the mood nodes, the emotion nodes, and the evidence nodes; and the edges themselves have probability values. Therefore, as shown in item 156, these methods also determine the highest probability path through the directed probabilistic graph to infer the mental state of the user. When processing the paths through the directed probabilistic graph in item 156, these methods aggregate the probabilities of the edges of the series of adjacent nodes, and the highest probability path is the path that has an aggregation of the probabilities of the edges that is higher than all other possible paths through the directed probabilistic graph.
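  • Under the simplifying assumption that a path's probability is the product of its edge probabilities, the path search of item 156 might be sketched as follows (an exhaustive toy stand-in, not the inference procedure of the actual system):

    def highest_probability_path(edges, start, goal):
        """Enumerate acyclic paths from start to goal and return the one whose
        product of edge probabilities is highest."""
        best = (0.0, None)
        def dfs(node, path, p):
            nonlocal best
            if node == goal:
                if p > best[0]:
                    best = (p, path)
                return
            for (u, v), w in edges.items():
                if u == node and v not in path:
                    dfs(v, path + [v], p * w)
        dfs(start, [start], 1.0)
        return best

    edges = {("P", "M0"): 0.8, ("M0", "E0"): 0.7, ("P", "E0"): 0.3}
    print(highest_probability_path(edges, "P", "E0"))
    # (0.56, ['P', 'M0', 'E0']): the indirect path through the mood node wins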
  • As seen in item 158, the methods output the mental state of the user from the specialized language processor, either by displaying the emotion, mood, and personality on the graphic user interface of the processor or by providing the mental state to a separate process, such as a virtual agent. As shown in item 160, these methods can also output any change in the mental state variables of the user as the conversation progresses.
  • The hardware described herein plays a significant part in permitting the foregoing methods to be performed, rather than functioning solely as a mechanism for permitting a solution to be achieved more quickly (i.e., through the utilization of a computer for performing calculations). As would be understood by one ordinarily skilled in the art, the processes described herein cannot be performed by a human alone (or one operating with a pen and a pad of paper) and instead can only be performed by a machine. Specifically, processes such as automatically monitoring text communications between users using a specialized language processor; automatically analyzing the text communications using the specialized language processor to simultaneously determine, for a current time period, mental state variables of a user; and automatically aggregating the emotion, mood, and personality using a hierarchical probabilistic graphical model to determine the highest probability path through a directed probabilistic graph to infer the mental state of the user use different specialized machines and cannot be performed by humans alone.
  • Additionally, the methods herein solve many highly complex technological problems. For example, as mentioned above, it is difficult for automated or real customer service agents to know the mental state of the individual with whom they are conducting a text chat. Therefore, the systems and methods herein provide the ability to determine the mental state of a user and display the mental state or output the mental state to another process, such as a virtual agent.
  • As shown in FIG. 8, exemplary systems and methods herein include various computerized devices 200, 204 located at various different physical locations 206. The computerized devices 200, 204 can include print servers, printing devices, personal computers, etc., and are in communication (operatively connected to one another) by way of a local or wide area (wired or wireless) network 202.
  • FIG. 9 illustrates a computerized device 200, which can be used with systems and methods herein and can comprise, for example, a print server, a personal computer, a portable computing device, etc. The computerized device 200 includes a controller/tangible processor 216 and a communications port (input/output) 214 operatively connected to the tangible processor 216 and to the computerized network 202 external to the computerized device 200. Also, the computerized device 200 can include at least one accessory functional component, such as a graphical user interface (GUI) assembly 212. The user may receive messages, instructions, and menu options from, and enter instructions through, the graphical user interface or control panel 212.
  • The input/output device 214 is used for communications to and from the computerized device 200 and comprises a wired device or wireless device (of any form, whether currently known or developed in the future). The tangible processor 216 controls the various actions of the computerized device. A non-transitory, tangible, computer storage medium device 210 (which can be optical, magnetic, capacitor-based, etc., and is different from a transitory signal) is readable by the tangible processor 216 and stores instructions that the tangible processor 216 executes to allow the computerized device to perform its various functions, such as those described herein. Thus, as shown in FIG. 9, a body housing has one or more functional components that operate on power supplied from an alternating current (AC) source 220 by the power supply 218. The power supply 218 can comprise a common power conversion unit, power storage element (e.g., a battery), etc.
  • While some exemplary structures are illustrated in the attached drawings, those ordinarily skilled in the art would understand that the drawings are simplified schematic illustrations and that the claims presented below encompass many more features that are not illustrated (or potentially many fewer) but that are commonly utilized with such devices and systems. Therefore, Applicants do not intend for the claims presented below to be limited by the attached drawings, but instead the attached drawings are merely provided to illustrate a few ways in which the claimed features can be implemented.
  • Many computerized devices are discussed above. Computerized devices that include chip-based central processing units (CPUs), input/output devices (including graphic user interfaces (GUIs), memories, comparators, tangible processors, etc.) are well-known and readily available devices produced by manufacturers such as Dell Computers, Round Rock, Tex., USA, and Apple Computer Co., Cupertino, Calif., USA. Such computerized devices commonly include input/output devices, power supplies, tangible processors, electronic storage memories, wiring, etc., the details of which are omitted herefrom to allow the reader to focus on the salient aspects of the systems and methods described herein. Similarly, printers, copiers, scanners, and other similar peripheral equipment are available from Xerox Corporation, Norwalk, Conn., USA, and the details of such devices are not discussed herein for purposes of brevity and reader focus.
  • In addition, terms such as "right", "left", "vertical", "horizontal", "top", "bottom", "upper", "lower", "under", "below", "underlying", "over", "overlying", "parallel", "perpendicular", etc., used herein are understood to be relative locations as they are oriented and illustrated in the drawings (unless otherwise indicated). Terms such as "touching", "on", "in direct contact", "abutting", "directly adjacent to", etc., mean that at least one element physically contacts another element (without other elements separating the described elements). Further, the terms "automated" or "automatically" mean that once a process is started (by a machine or a user), one or more machines perform the process without further input from any user.
  • It will be appreciated that the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims. Unless specifically defined in a specific claim itself, steps or components of the systems and methods herein cannot be implied or imported from any above example as limitations to any particular order, number, position, size, shape, angle, color, or material.

Claims (20)

What is claimed is:
1. A method comprising:
automatically monitoring communications between users using a specialized language processor;
automatically analyzing said communications using said specialized language processor to simultaneously determine, for a current time period, mental state variables of a user of said users, said mental state variables comprising:
an emotion of said user;
a mood of said user; and
a personality of said user; and
automatically aggregating said emotion, said mood, and said personality using a hierarchical probabilistic graphical model that determines a highest probability path through a directed probabilistic graph to infer the mental state of said user, using said specialized language processor;
outputting said mental state of said user from said specialized language processor,
said directed probabilistic graph maintaining a single state for said personality for said time period, and maintaining multiple states for said emotion and said mood for said time period.
2. The method according to claim 1, said directed probabilistic graph comprising a single personality node, multiple mood nodes, multiple emotion nodes, and multiple evidence nodes.
3. The method according to claim 2, said directed probabilistic graph comprising edges connecting said personality node, said mood nodes, said emotion nodes, and said evidence nodes, and
said edges comprising probability values.
4. The method according to claim 3, a path through said directed probabilistic graph comprising a series of adjacent nodes,
a probability of said path comprising an aggregation of said probabilities of said edges of said series of adjacent nodes, and
said highest probability path having an aggregation of said probabilities of said edges that is higher than all other possible paths through said directed probabilistic graph.
5. The method according to claim 2, each of said mood nodes, said emotion nodes, and said evidence nodes being for a different time portion of said time period.
6. The method according to claim 2, said evidence nodes comprising different dialogue variables used by said personality node, said mood nodes, and said emotion nodes.
7. The method according to claim 1, said emotion comprising happy-for-satisfaction, anger, or distress,
said mood comprising positive, neutral, or negative, and
said personality comprising neuroticism, extraversion, openness to experience, agreeableness, or conscientiousness.
8. A method comprising:
automatically monitoring text communications between users using a specialized language processor;
automatically analyzing said text communications using said specialized language processor to simultaneously determine, for a current time period, mental state variables of a user of said users, said mental state variables comprising:
an emotion of said user;
a mood of said user; and
a personality of said user; and
automatically aggregating said emotion, said mood, and said personality using a hierarchical probabilistic graphical model that determines a highest probability path through a directed probabilistic graph to infer the mental state of said user, using said specialized language processor;
outputting said mental state of said user from said specialized language processor by displaying said emotion, said mood, and said personality on a graphic user interface operatively connected to said specialized language processor,
said directed probabilistic graph maintaining a single state for said personality for said time period, and maintaining multiple states for said emotion and said mood for said time period.
9. The method according to claim 8, said directed probabilistic graph comprising a single personality node, multiple mood nodes, multiple emotion nodes, and multiple evidence nodes.
10. The method according to claim 9, said directed probabilistic graph comprising edges connecting said personality node, said mood nodes, said emotion nodes, and said evidence nodes, and
said edges comprising probability values.
11. The method according to claim 10, a path through said directed probabilistic graph comprising a series of adjacent nodes,
a probability of said path comprising an aggregation of said probabilities of said edges of said series of adjacent nodes, and
said highest probability path having an aggregation of said probabilities of said edges that is higher than all other possible paths through said directed probabilistic graph.
12. The method according to claim 9, each of said mood nodes, said emotion nodes, and said evidence nodes being for a different time portion of said time period.
13. The method according to claim 9, said evidence nodes comprising different dialogue variables used by said personality node, said mood nodes, and said emotion nodes.
14. The method according to claim 8, said emotion comprising happy-for-satisfaction, anger, or distress,
said mood comprising positive, neutral, or negative, and
said personality comprising neuroticism, extraversion, openness to experience, agreeableness, or conscientiousness.
15. A system comprising:
a specialized language processor; and
a graphic user interface operatively connected to said specialized language processor,
said specialized language processor automatically monitoring communications between users,
said specialized language processor automatically analyzing said communications to simultaneously determine, for a current time period, mental state variables of a user of said users, said mental state variables comprising:
an emotion of said user;
a mood of said user; and
a personality of said user,
said specialized language processor automatically aggregating said emotion, said mood, and said personality using a hierarchical probabilistic graphical model that determines a highest probability path through a directed probabilistic graph to infer the mental state of said user,
said graphic user interface outputting said mental state of said user from said specialized language processor by displaying said emotion, said mood, and said personality, and
said directed probabilistic graph maintaining a single state for said personality for said time period, and maintaining multiple states for said emotion and said mood for said time period.
16. The system according to claim 15, said directed probabilistic graph comprising a single personality node, multiple mood nodes, multiple emotion nodes, and multiple evidence nodes.
17. The system according to claim 16, said directed probabilistic graph comprising edges connecting said personality node, said mood nodes, said emotion nodes, and said evidence nodes, and
said edges comprising probability values.
18. The system according to claim 17, a path through said directed probabilistic graph comprising a series of adjacent nodes,
a probability of said path comprising an aggregation of said probabilities of said edges of said series of adjacent nodes, and
said highest probability path having an aggregation of said probabilities of said edges that is higher than all other possible paths through said directed probabilistic graph.
19. The system according to claim 16, each of said mood nodes, said emotion nodes, and said evidence nodes being for a different time portion of said time period,
said evidence nodes comprising different dialogue variables used by said personality node, said mood nodes, and said emotion nodes.
20. The system according to claim 15, said emotion comprising happy-for-satisfaction, anger, or distress,
said mood comprising positive, neutral, or negative, and
said personality comprising neuroticism, extraversion, openness to experience, agreeableness, or conscientiousness.
US14/845,528 2015-09-04 2015-09-04 Emotion, mood and personality inference in real-time environments Active 2035-09-16 US10025775B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/845,528 US10025775B2 (en) 2015-09-04 2015-09-04 Emotion, mood and personality inference in real-time environments


Publications (2)

Publication Number Publication Date
US20170069340A1 true US20170069340A1 (en) 2017-03-09
US10025775B2 US10025775B2 (en) 2018-07-17

Family

ID=58190165

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/845,528 Active 2035-09-16 US10025775B2 (en) 2015-09-04 2015-09-04 Emotion, mood and personality inference in real-time environments

Country Status (1)

Country Link
US (1) US10025775B2 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860805B1 (en) * 2017-06-15 2020-12-08 Qntfy Corp. Computerized analysis of team behavior and communication to quantify and optimize team function
US20190385711A1 (en) 2018-06-19 2019-12-19 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11055668B2 (en) * 2018-06-26 2021-07-06 Microsoft Technology Licensing, Llc Machine-learning-based application for improving digital content delivery
US10592609B1 (en) * 2019-04-26 2020-03-17 Tucknologies Holdings, Inc. Human emotion detection
US10812656B1 (en) * 2019-06-13 2020-10-20 Salesboost, Llc System, device, and method of performing data analytics for advising a sales representative during a voice call


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6526395B1 (en) 1999-12-31 2003-02-25 Intel Corporation Application of personality models and interaction with synthetic characters in a computing system
US6728679B1 (en) 2000-10-30 2004-04-27 Koninklijke Philips Electronics N.V. Self-updating user interface/entertainment device that simulates personal interaction
US20080096533A1 (en) 2006-10-24 2008-04-24 Kallideas Spa Virtual Assistant With Real-Time Emotions
KR20110002757A (en) 2009-07-02 2011-01-10 삼성전자주식회사 Emotion model device, apparatus and method for adaptive learning personality of emotion model

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987415A (en) * 1998-03-23 1999-11-16 Microsoft Corporation Modeling a user's emotion and personality in a computer user interface
US20020194002A1 (en) * 1999-08-31 2002-12-19 Accenture Llp Detecting emotions using voice signal analysis
US6731307B1 (en) * 2000-10-30 2004-05-04 Koninklije Philips Electronics N.V. User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality
US20060262920A1 (en) * 2005-05-18 2006-11-23 Kelly Conway Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto
US20070071206A1 (en) * 2005-06-24 2007-03-29 Gainsboro Jay L Multi-party conversation analyzer & logger
US20130173264A1 (en) * 2012-01-03 2013-07-04 Nokia Corporation Methods, apparatuses and computer program products for implementing automatic speech recognition and sentiment detection on a device
US20160098480A1 (en) * 2014-10-01 2016-04-07 Xerox Corporation Author moderated sentiment classification method and system
US20160227036A1 (en) * 2015-01-30 2016-08-04 Mattersight Corporation Distress analysis of mono-recording system and methods

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180012230A1 (en) * 2016-07-11 2018-01-11 International Business Machines Corporation Emotion detection over social media
US10891442B2 (en) * 2016-09-20 2021-01-12 International Business Machines Corporation Message tone evaluation between entities in an organization
US10891443B2 (en) * 2016-09-20 2021-01-12 International Business Machines Corporation Message tone evaluation between entities in an organization
US20180176168A1 (en) * 2016-11-30 2018-06-21 Fujitsu Limited Visual feedback system
US11188809B2 (en) * 2017-06-27 2021-11-30 International Business Machines Corporation Optimizing personality traits of virtual agents
US11189302B2 (en) 2017-08-22 2021-11-30 Tencent Technology (Shenzhen) Company Limited Speech emotion detection method and apparatus, computer device, and storage medium
EP3605537A4 (en) * 2017-08-22 2020-07-01 Tencent Technology (Shenzhen) Company Limited Speech emotion detection method and apparatus, computer device, and storage medium
WO2019037700A1 (en) * 2017-08-22 2019-02-28 腾讯科技(深圳)有限公司 Speech emotion detection method and apparatus, computer device, and storage medium
US11922969B2 (en) 2017-08-22 2024-03-05 Tencent Technology (Shenzhen) Company Limited Speech emotion detection method and apparatus, computer device, and storage medium
US10817316B1 (en) 2017-10-30 2020-10-27 Wells Fargo Bank, N.A. Virtual assistant mood tracking and adaptive responses
US10572585B2 (en) * 2017-11-30 2020-02-25 International Business Machines Coporation Context-based linguistic analytics in dialogues
US20190189148A1 (en) * 2017-12-14 2019-06-20 Beyond Verbal Communication Ltd. Means and methods of categorizing physiological state via speech analysis in predetermined settings
CN109587360A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Electronic device should talk with art recommended method and computer readable storage medium
WO2021011139A1 (en) * 2019-07-18 2021-01-21 Sri International The conversational assistant for conversational engagement
US20220164194A1 (en) * 2020-11-20 2022-05-26 Sap Se Unified semantic model of user intentions
US11775318B2 (en) * 2020-11-20 2023-10-03 Sap Se Unified semantic model of user intentions
US20220230740A1 (en) * 2021-01-21 2022-07-21 Rfcamp Ltd. Method and computer program to determine user's mental state by using user's behavior data or input data
US11824819B2 (en) 2022-01-26 2023-11-21 International Business Machines Corporation Assertiveness module for developing mental model

Also Published As

Publication number Publication date
US10025775B2 (en) 2018-07-17


Legal Events

Date Code Title Description
AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NOWSON, SCOTT P.;PEREZ, JULIEN J.;REEL/FRAME:036494/0216

Effective date: 20150708

AS Assignment

Owner name: CONDUENT BUSINESS SERVICES, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:041542/0022

Effective date: 20170112

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:050326/0511

Effective date: 20190423

AS Assignment

Owner name: CONDUENT HEALTH ASSESSMENTS, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: CONDUENT CASUALTY CLAIMS SOLUTIONS, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: CONDUENT BUSINESS SOLUTIONS, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: CONDUENT COMMERCIAL SOLUTIONS, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: ADVECTIS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: CONDUENT TRANSPORT SOLUTIONS, INC., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: CONDUENT STATE & LOCAL SOLUTIONS, INC., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

Owner name: CONDUENT BUSINESS SERVICES, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180

Effective date: 20211015

AS Assignment

Owner name: BANK OF AMERICA, N.A., NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057970/0001

Effective date: 20211015

Owner name: U.S. BANK, NATIONAL ASSOCIATION, CONNECTICUT

Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057969/0445

Effective date: 20211015

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4