WO2023191695A1 - System and method for interpretation of human interpersonal interaction - Google Patents

System and method for interpretation of human interpersonal interaction

Info

Publication number
WO2023191695A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
verbal
visual
cues
session
Prior art date
Application number
PCT/SE2023/050279
Other languages
English (en)
Inventor
Lennart Högman
Original Assignee
Emotion Comparator Systems Sweden AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EP22165071.6A external-priority patent/EP4252643A1/fr
Application filed by Emotion Comparator Systems Sweden AB filed Critical Emotion Comparator Systems Sweden AB
Publication of WO2023191695A1 publication Critical patent/WO2023191695A1/fr

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7246 Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/162 Testing reaction times
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Definitions

  • the present disclosure relates to a system and method for interpretation of human interaction by using first and second audio-visual stream generating devices, wherein each audio-visual stream generating device is arranged to capture an audio-visual stream relating to at least one person during a session.
  • WO 2017/216758 relates to a computer-implemented method of digital image analysis.
  • the method includes obtaining first digital video of a human subject that indicates facial expressions of the human subject; performing micro-expression analysis on the human subject using the first digital video; comparing results of the performed micro-expression analysis with content of a presentation determined to have been provided to the human subject at the same time that particular portions of the digital video were initially captured; and modifying a manner of performing interaction with the human subject or other human subjects based on the comparing of results.
  • the present disclosure relates to analysis of interpersonal communication that will be applied in physical as well as digital meetings at a level of detail that has not previously been achieved.
  • interpersonal communication is crucial for reaching goals.
  • analysis of communication is extremely complex, being multimodal, including conscious as well as unconscious communicative elements that take place on different time scales.
  • Those communicative elements may comprise temporal shifts, for instance turn taking, mimicry patterns, shifting roles from actor to reactor, signs of dominance, benevolence, trust, distrust, etc.
  • These communicative elements may be manifest or subtle as for instance including non-conscious micro-expressions.
  • Even for a trained person like a psychotherapist it is not possible to pick up all information while communication is ongoing; this is also true for post-session video analyses.
  • An object of the present invention is to alleviate at least some of the problems as described above.
  • the system comprises first and second audio-visual stream generating devices each arranged to capture an audio-visual stream relating to at least one person during a session or a series of sessions, wherein the first and second audio-visual stream generating devices are synchronized.
  • the system further comprises a processor arranged to process each audio-visual stream to identify non-verbal cues in the respective audio visual stream.
  • the processor is further arranged to compare the audio-visual streams to map identified non-verbal cues in the first one of the audio-visual streams to corresponding, reactive non-verbal cues in the second audio-visual stream and to map identified non-verbal cues in the second one of the audio-visual streams to corresponding, reactive non-verbal cues in the first audio-visual stream, to thereby identify a non-verbal communication pattern.
  • synchronized first and second audio-visual streams are obtained and processed to identify non-verbal cues, such as facial, head and body movements, pupil size changes, and tone of voice, and a non-verbal communication pattern is identified by comparing the audio-visual streams and mapping the identified non-verbal cues in one of the audio-visual streams to corresponding, reactive non-verbal cues in the other audio-visual stream.
  • This advantage is achieved at least by the obtaining and processing of synchronized audio-visual streams, and by the identifying of non-verbal cues and mapping of the non-verbal cues between the audio-visual streams to identify patterns in the human interpersonal interactions, thereby improving the capacity of the computer in interpreting the human interactions.
  • non-verbal languages can be systematically integrated to interpret human interactions.
  • the non-verbal cues comprise for example facial, head and body movements, pupil size changes, and tone of voice.
  • the non-verbal cues may have one of the following relations to a verbal message.
  • the mapping of identified non-verbal cues in the first one of the audio-visual streams to corresponding, reactive non-verbal cues in the second audio-visual stream and the mapping of identified non-verbal cues in the second one of the audio-visual streams to corresponding, reactive non-verbal cues in the first audio-visual stream may comprise analyzing how much of the variance in non-verbal cues from person A to B, and vice versa, can be explained, and based thereon determining whether a non-verbal cue is a reactive non-verbal cue or a non-reactive non-verbal cue.
  • different time windows and time lags may be used.
  • a time lag is characteristically a time delay used for identifying mirroring.
  • the time lag is characteristically longer than 200 ms. Due to the fact that emotional expressions may be activated by different brain networks with different processing speeds, the time lag for a spontaneous mirroring action is smaller than for acted or social mirroring. For example, a spontaneous mirroring of a smile is characteristically about 200 - 400 ms faster than an acted or social smile.
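  • As an illustration only of the explained-variance analysis with time lags, the sketch below regresses person B's cue intensity on person A's cue intensity shifted by candidate lags and keeps the lag with the highest R²; the sampling rate, the lag range and the synthetic signals are assumptions, not values from the disclosure.

```python
import numpy as np

def lagged_r2(cue_a, cue_b, lag_samples):
    """R^2 of predicting person B's cue intensity from person A's cue
    intensity shifted by lag_samples (simple one-predictor regression)."""
    x, y = cue_a[:-lag_samples], cue_b[lag_samples:]
    x = np.column_stack([np.ones_like(x), x])        # intercept + predictor
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)     # least-squares fit
    residual = y - x @ beta
    return 1.0 - np.sum(residual ** 2) / np.sum((y - y.mean()) ** 2)

# Toy example: 10 s of data at 50 Hz; B mirrors A after ~300 ms (15 samples).
fs = 50
rng = np.random.default_rng(0)
a = rng.normal(size=10 * fs)
b = np.roll(a, 15) * 0.8 + rng.normal(scale=0.3, size=10 * fs)

# Scan lags between 200 ms and 1 s and keep the one explaining most variance.
lags_ms = range(200, 1000, 20)
best_lag = max(lags_ms, key=lambda ms: lagged_r2(a, b, int(ms * fs / 1000)))
best_r2 = lagged_r2(a, b, int(best_lag * fs / 1000))
print(f"best lag {best_lag} ms, R^2 = {best_r2:.2f}")
# A high R^2 at a short lag would suggest a reactive (mirroring) cue.
```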
  • time window represents a time window, sequence or time segment during which the analysis takes place.
  • the time window may be selected as 5-10 seconds. Further, or instead, the selection of the time window may depend on what is going on in the interaction between the persons.
  • the time window can be selected manually or determined by an algorithm, such as an algorithm based on Machine Learning.
  • the processor is arranged to monitor a plurality of predefined action units in the first and second audio-visual stream, the respective action unit corresponding to a part of the face or a part of the body or a characteristic in the voice.
  • the processor is then arranged to identify the non-verbal cues based on characteristics identified in the respective predefined action unit.
  • the mapping of identified non-verbal cues in the first one of the audio-visual streams to corresponding, reactive non-verbal cues in the second audio-visual stream and the mapping of identified non-verbal cues in the second one of the audio-visual streams to corresponding, reactive non-verbal cues in the first audio-visual stream may comprise performing the mapping for each predefined action unit and then determining a degree of dependence between different action units.
  • the determination of a degree of dependence between different action units comprises determining how much the pattern of non-verbal cues for the first and second audio-visual streams for one action unit correlates to the corresponding pattern for other action units.
  • This may, as discussed above, comprise analyzing how much of the variance in non-verbal cues from person A to B, and vice versa, can be explained, and based thereon determining whether a non-verbal cue is a reactive non-verbal cue or a non-reactive non-verbal cue.
  • different time windows and time lags may be used.
  • time series analysis may first be made of all pairs of non-verbal cues on corresponding action units (for instance facial action unit AU 23 from A to B and vice versa), and then time series analysis may be made on all possible combinations of action units to determine the presence of reactive non-verbal cues in any action unit or in preselected action units.
  • this can be explained as follows: identified non-verbal cues in each of the n action units in one of the audio-visual streams are mapped to reactive non-verbal cues in the n action units in the other audio-visual stream.
  • Granger causality is a statistical concept of causality that is based on prediction. According to Granger causality, if a signal X1 "Granger-causes" (or "G-causes") a signal X2, then past values of X1 should contain information that helps predict X2 above and beyond the information contained in past values of X2 alone. Its mathematical formulation is based on linear regression modeling of stochastic processes (Granger 1969). For example, bidirectional long short-term memory Granger causality (bi-LSTM-GC) calculations may be used.
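  • As a hedged sketch of a linear Granger-causality test between two cue-intensity series, the example below uses grangercausalitytests from the statsmodels package; the synthetic data, sampling rate, maximum lag and significance reading are assumptions, and the bi-LSTM-GC variant mentioned above would replace this linear test with a recurrent model.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 600                                   # e.g. 60 s of cue intensities at 10 Hz
cue_a = rng.normal(size=n)
cue_b = np.zeros(n)
for t in range(3, n):                     # B reacts to A with a lag of ~3 samples
    cue_b[t] = 0.6 * cue_a[t - 3] + 0.2 * cue_b[t - 1] + rng.normal(scale=0.5)

# grangercausalitytests checks whether the series in the SECOND column
# Granger-causes the series in the FIRST column.
data = np.column_stack([cue_b, cue_a])    # question asked: does A G-cause B?
results = grangercausalitytests(data, maxlag=5)

for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag {lag}: F = {f_stat:.1f}, p = {p_value:.3g}")
# Small p-values (here around lag 3) would support treating B's cue as reactive to A's.
```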
  • the algorithms used for determining reactive non-verbal cues may be implemented with a recurrent neural network, RNN.
  • the action units may comprise at least one facial action unit corresponding to a predetermined part of the face, wherein the facial action unit is defined by a set of coordinates or a relation between coordinates of the predetermined part of the face, wherein the non-verbal cues are determined based on a temporary change in the set of coordinates or a temporary change in the relation between coordinates.
  • the coordinate system is head centred.
  • the action units may for example be defined using the Facial Action Coding System, FACS.
  • movements of individual facial muscles are encoded by FACS from slight, instantaneous changes in facial appearance.
  • FACS is used to systematically categorize the physical expression of emotions. The system has proven useful to psychologists and to animators.
  • a computer-automated FACS implementation detects faces in videos, extracts the geometrical features of the faces, and then produces temporal profiles of each facial movement.
  • facial action units may relate to facial movements such as eye movements and also blinking rate and changes of pupil size.
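  • A minimal sketch, assuming head-centred 2D landmark coordinates per frame are already available from a face-landmark detector, of how a temporary change in the relation between coordinates of a facial action unit could be turned into a binary cue activation; the landmark choice (lip corners as a rough AU12 proxy), the rolling baseline and the 5 % threshold are illustrative assumptions.

```python
import numpy as np

def au_activation(left_corner, right_corner, fps=25, baseline_s=2.0, rel_threshold=0.05):
    """Flag frames where the lip-corner distance (a rough AU12/smile proxy)
    temporarily exceeds a rolling baseline by more than rel_threshold.

    left_corner, right_corner: arrays of shape (n_frames, 2) with head-centred
    (x, y) coordinates of the two mouth corners.
    """
    dist = np.linalg.norm(left_corner - right_corner, axis=1)
    win = int(baseline_s * fps)
    # Rolling median as a per-person neutral baseline for this action unit.
    baseline = np.array([np.median(dist[max(0, i - win):i + 1]) for i in range(len(dist))])
    return (dist - baseline) / baseline > rel_threshold   # boolean activation series

# Toy usage: 10 s at 25 fps, with a 1 s "smile" widening the mouth by 10 %.
n = 250
left = np.tile([-30.0, 0.0], (n, 1))
right = np.tile([30.0, 0.0], (n, 1))
right[100:125, 0] += 6.0                   # temporary change in the coordinate relation
print(au_activation(left, right)[95:130].astype(int))
```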
  • the action units may comprise at least one body action unit corresponding to a predetermined part of the body, wherein the body action unit comprises a set of coordinates or a relation between coordinates of the predetermined part of the body, wherein the non-verbal cues are determined based on a temporary change in the set of coordinates or relation between coordinates.
  • the body action units may relate to body movements and body posture.
  • the non-verbal cues which can be determined may for example reflect approach avoidance, signs of arousal, etc.
  • the body action units may also comprise heart rate and heart rate variability, HRV, other psychophysiological data and prosody.
  • the processor is arranged to, for each action unit, compare the evolution of the first and second audio-visual streams with regard to activation of non-verbal cues, to determine a time lag between activations of non-verbal cues in the first and second audio-visual streams and, based on the determined time lags, to determine occasions of activations and non-activations of reactive non-verbal cues.
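  • A sketch of how the time lag between activations in the two streams could be estimated for one action unit by cross-correlating the binary activation series; the sampling rate, maximum lag and synthetic data are assumptions.

```python
import numpy as np

def activation_lag_ms(act_a, act_b, fs, max_lag_ms=2000):
    """Estimate the lag (ms) at which B's activation series best follows A's."""
    a = act_a.astype(float) - act_a.mean()
    b = act_b.astype(float) - act_b.mean()
    max_lag = int(max_lag_ms * fs / 1000)
    lags = np.arange(1, max_lag + 1)               # only B-after-A lags
    corr = [np.dot(a[:-k], b[k:]) for k in lags]   # cross-correlation at each lag
    return int(lags[int(np.argmax(corr))] * 1000 / fs)

# Toy usage: 20 s at 50 Hz, B's activations trail A's by ~320 ms (16 samples).
fs = 50
rng = np.random.default_rng(2)
act_a = rng.random(20 * fs) < 0.05
act_b = np.roll(act_a, 16)
print(activation_lag_ms(act_a, act_b, fs))          # expected to print ~320
```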
  • the processor may be arranged to, based on the determined time lags between activations of nonverbal cues in the first and second audio-visual streams for one or a plurality of action units, determine whether the reactive cues are spontaneous or consciously controlled.
  • the processor may be arranged to, based on determined occasions of activations and non-activations of reactive non-verbal cues for one or a plurality of action units, determine whether there is a dynamic in the interaction and/or determine whether any of the persons has a dynamic behaviour.
  • the processor is arranged to analyse the identified communication pattern to categorize psycho-social states of the respective person, said psycho-social states comprising at least one of emotion, attention, pro-social, dominance and mirroring, said analyses being performed in a rolling-window time series, wherein the window is 0.1 s or more, for example in the interval 0.2-10 s.
  • the categorisation of states can be made based on heuristics as well as on Machine Learning, ML, models. At least one of the following heuristic models based on earlier research may be used. These are examples, and other heuristics may be used.
  • Positive mirroring within a time-window of 200-400 ms may be a spontaneous pro-social signal.
  • a spontaneous mirroring is characteristically at least 200-300 ms faster than deliberate or social mirroring.
  • Spontaneous mirroring of a smile is characteristically about 200-400 ms faster than an acted or social smile.
  • A smile activated at the left side before the right side indicates that the smile is genuine.
  • At least some of the models above can be combined with trained ML models based on annotated (ground truth) data from psychotherapy sessions.
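  • The heuristics above could be combined into a simple rule-based labeller such as the sketch below; the latency boundaries and the left-before-right rule follow the listed examples, while the function and parameter names are assumptions.

```python
def classify_smile_mirroring(lag_ms, left_onset_ms=None, right_onset_ms=None):
    """Rule-of-thumb labelling of a mirrored smile, following the heuristics above.

    lag_ms: latency from the other person's smile onset to this person's smile onset.
    left_onset_ms / right_onset_ms: optional onsets of the left/right lip corner.
    """
    labels = []
    if 200 <= lag_ms <= 400:
        labels.append("spontaneous pro-social mirroring")
    elif lag_ms > 400:
        labels.append("possibly acted or social mirroring")   # slower than spontaneous
    if left_onset_ms is not None and right_onset_ms is not None:
        labels.append("genuine smile" if left_onset_ms < right_onset_ms
                      else "no left-before-right asymmetry")
    return labels

print(classify_smile_mirroring(280, left_onset_ms=120, right_onset_ms=160))
# ['spontaneous pro-social mirroring', 'genuine smile']
```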
  • the system may further comprise a presentation device arranged to present information relating to the non-verbal communication pattern, such as the categorized psycho-social states of the respective person during the session.
  • Figure 1 is a block scheme showing an example system for interpreting human interaction.
  • Figure 2 illustrates an example set-up for a session using a system as disclosed in figure 1.
  • Figure 3 illustrates examples of facial action units.
  • Figure 4 illustrates examples of body action units.
  • Figure 5 illustrates an example for finding reactive non-verbal cues.
  • Figure 6 illustrates an example method for interpretation of human interaction.
  • a system 100 for interpreting human interpersonal interaction.
  • the system 100 comprises first and second audio-visual stream generating devices 101a and 101b, each arranged to capture an audio-visual stream relating to at least one person during a session or a series of sessions.
  • the sessions may be sessions where the participants participate in the same room or via an online meeting.
  • the participants may participate via a computer for example using Microsoft Teams or Zoom.
  • the first and second audio-visual streams as discussed above may in this context be the audio-visual stream(s) of the online meeting.
  • Alternatively, a separate device for generating the herein discussed audio-visual stream, arranged at the respective participant's facility, may be used.
  • the first and second audio-visual stream generating devices are synchronized.
  • all video recordings include time stamps.
  • the time stamps may be time stamps from Network Time Protocol, NTP.
  • the time stamps can be used for controlling the synchronization of the audio-visual streams.
  • the differences in time stamps between the audio-visual streams can be used for quality control. Data sections with poor synchronization are then characteristically excluded or weighted down in further analyses.
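  • A minimal sketch of this timestamp-based quality control, assuming per-frame NTP timestamps in seconds are available for both streams; the 40 ms tolerance and the linear down-weighting are illustrative assumptions.

```python
import numpy as np

def sync_weights(ts_stream_1, ts_stream_2, tolerance_s=0.040):
    """Per-frame weights for further analysis: 1.0 when the two streams'
    NTP timestamps agree within tolerance_s, decreasing towards 0 otherwise."""
    offset = np.abs(np.asarray(ts_stream_1) - np.asarray(ts_stream_2))
    # Within tolerance the clip yields 1.0; beyond it the weight falls off linearly.
    return np.clip(1.0 - (offset - tolerance_s) / tolerance_s, 0.0, 1.0)

# Toy usage: stream 2 drifts by 100 ms in the middle of the recording.
t = np.arange(0, 10, 0.04)                       # 25 fps timestamps over 10 s
drift = np.where((t > 4) & (t < 6), 0.1, 0.0)
w = sync_weights(t, t + drift)
print(f"{(w < 1.0).mean():.0%} of frames down-weighted or excluded")
```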
  • the system 100 comprises further at least one processor 102 arranged to process each audio-visual stream to identify non-verbal cues, such as facial, head and body movements, pupil size changes, and tone of voice, and to compare the audio-visual streams to map identified non-verbal cues in the first one of the audio-visual streams to corresponding, reactive non-verbal cues in the second audio-visual stream and map identified non-verbal cues in the second one of the audio-visual streams to corresponding, reactive non-verbal cues in the first audio-visual stream to thereby identify a non-verbal communication pattern.
  • the processor 102 may comprise an AI algorithm arranged to identify the non-verbal communication pattern and/or information relating to the non-verbal communication pattern.
  • the system may further comprise a presentation device 105 arranged to present to a user information relating to the non-verbal communication pattern.
  • the presentation device may be arranged to present a first set of information relating to the non-verbal communication pattern during the session. This first set of information is generally information which can be calculated and presented in near real time.
  • the presentation device may be arranged to present a second set of information relating to the non-verbal communication pattern based on an in-depth analysis after a session.
  • a first, preferably local, processor is arranged to calculate the first set of information.
  • a second, local, remote or cloud based processor may be arranged to calculate the second set of information.
  • the presentation device may include a computer screen associated to the processor.
  • the presentation device may be any type of display arranged to present the first and/or second set of information.
  • the presentation device may be connected to a web-based interface and for example implemented as an app.
  • the information relating to the non-verbal communication pattern may for example include categorized psycho-social states of at least one of the persons during the session.
  • the information relating to the non-verbal communication pattern may comprise:
  • At least one of the above examples of information relating to the non-verbal communication pattern may be included in the first set of information.
  • displayed emotions include intensity, happiness, sadness, anger, surprise, fear, disgust and combinations, such as quietly surprised.
  • mirroring may be displayed by the persons - e.g. when the interactants are showing the same expression.
  • one or more of the following items of information may be included in the second set of information.
  • the above information relates to non-displayed information, i.e. information which is not directly derivable from the audio-visual streams. Instead, a plurality of simultaneous processes may be identified in the minds of the interactants, characteristically having completely different temporal dynamics. It is a challenge to weight these processes together into a whole that has explanatory value. Machine learning is preferably used to perform this processing.
  • the system further comprises a converter 103 arranged to convert verbal content in at least one of the audio-visual streams to text.
  • a transcript of the conversation can be provided.
  • the at least one processor may be arranged to add or associate at least some of the first and/or second set of information as discussed above to the corresponding part of the transcript of the text. Thereby an enriched transcript is provided where all or some of the non-verbal information is added to the text.
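  • A sketch of how such an enriched transcript could be assembled, assuming the converter yields time-stamped transcript segments and the processor yields time-stamped non-verbal annotations; the dictionary keys and the overlap rule are assumptions made for the example.

```python
def enrich_transcript(segments, cues):
    """Attach non-verbal annotations to the transcript segments they overlap.

    segments: [{"start": s, "end": s, "speaker": str, "text": str}, ...]
    cues:     [{"time": s, "person": str, "label": str}, ...]
    """
    enriched = []
    for seg in segments:
        overlapping = [c["label"] for c in cues if seg["start"] <= c["time"] < seg["end"]]
        enriched.append({**seg, "non_verbal": overlapping})
    return enriched

segments = [
    {"start": 0.0, "end": 4.2, "speaker": "client", "text": "I argued with my mother again."},
    {"start": 4.2, "end": 6.0, "speaker": "therapist", "text": "How did that feel?"},
]
cues = [
    {"time": 2.1, "person": "client", "label": "gaze aversion"},
    {"time": 4.8, "person": "client", "label": "sad expression, high intensity"},
]
for row in enrich_transcript(segments, cues):
    print(f'{row["speaker"]}: {row["text"]}  [{", ".join(row["non_verbal"]) or "-"}]')
```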
  • rule-based systems rely on pre-defined rules and dictionaries to identify sentiment, while machine learning techniques are used to learn from data and classify text and non-verbal reactions based on the non-verbal communication patterns.
  • the system can thereby provide analysis regarding certain topics, certain persons or aspects of the interpersonal interaction, for instance what the client has said in a defined time interval regarding, for instance, his mother, partner or work-place, and it can show all emotional and non-verbal reactions that have been shown in relation to these topics.
  • the analysis may be presented as statistics, for instance trends, and also as text summaries based on both verbal and non-verbal data.
  • further analysis may be made, for example further semantic or sentiment analysis, with rule-based systems, and/or machine learning, and/or deep learning techniques.
  • the system can thereby provide further analysis regarding certain topics or certain persons or aspects of the interpersonal interaction.
  • the presentation device 105 may then be arranged to present such enriched transcript of the text.
  • the at least one processor may further be arranged to compare the text and non-verbal information to identify any incoherent non-verbal signals.
  • the at least one processor may further be arranged to analyse the non-verbal information only to identify any incoherence between non-verbal signals.
  • the presentation device 105 may then be arranged to present any such identified incoherences.
  • the presentation of the incoherencies may be made in the enriched transcript of the text or in any other way.
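  • For instance, the comparison could flag segments where the valence of the spoken words and the valence of the displayed emotion point in different directions, as in the sketch below; the precomputed sentiment and valence scores, their [-1, 1] scaling and the threshold are assumptions.

```python
def find_incoherent_segments(rows, threshold=1.0):
    """Flag segments where verbal sentiment and displayed emotional valence
    (both assumed to be scaled to [-1, 1]) point in clearly different directions."""
    flagged = []
    for row in rows:
        gap = abs(row["text_sentiment"] - row["facial_valence"])
        if gap >= threshold:
            flagged.append({**row, "incoherence": round(gap, 2)})
    return flagged

rows = [
    {"text": "Everything is fine at work.", "text_sentiment": 0.6, "facial_valence": -0.7},
    {"text": "The weekend was relaxing.", "text_sentiment": 0.5, "facial_valence": 0.4},
]
for hit in find_incoherent_segments(rows):
    print(f'incoherent: "{hit["text"]}" (gap {hit["incoherence"]})')
```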
  • the text and identified non-verbal information may even be analysed to predict an outcome or to identify critical events in a session.
  • the presentation device 105 may then be arranged to present any such outcome and/or identified critical event.
  • the presentation of the outcome and/or identified critical event may be made in the enriched transcript of the text or in any other way.
  • the system may further comprise a user input interface 104 for user input of at least one of
  • session data such as background data, session type, and script/scheme for session
  • post session data such as interpersonal ratings and task performances.
  • the system comprises an eye tracker 107.
  • the system comprises a pulse meter 108.
  • the system further comprises a database 106.
  • the database may be used to store at least some of the information/data provided using the system as described herein.
  • analysed data from the audio-visual streams such as
  • the database(s) may contain many thousands of therapy sessions.
  • the analyzed data from the audio-visual streams from those sessions may be used to identify common themes or issues that arise with certain types of patients, such as those with depression or anxiety.
  • This database containing many thousands of therapy sessions may be referred to as a general database.
  • the sessions of the database may also relate to other types of data-based interpersonal communication, such as negotiations or interviews. Thereby it is possible to provide and store in the database(s) data on therapeutic outcome measures, for instance symptom reduction and mental well-being.
  • the system can provide insights and advice to the therapist.
  • the system may for instance provide advice regarding the therapist's communication style, for instance a lack of attention or synchrony with the client. Further, the system may give feedback or advice on other areas where the therapist could improve.
  • the system is arranged to analyse the client's verbal and nonverbal behaviour during sessions to provide insights into their emotional states, cognitive processes, and behavioural patterns, which could inform the therapist's treatment plan.
  • the verbal and non-verbal behaviour can result in the inclusion of a new item in a treatment plan.
  • the advice could also concern what to focus on in the next session, signs that the therapist is not attuned or present, signs of mistrust, and what to try to change.
  • in figure 2 an example set-up for a session is illustrated with a first person 1 and a second person 2 participating.
  • the processor of the system as disclosed in relation to figure 1 is arranged to monitor a plurality of predefined action units in the first and second audio-visual stream.
  • the respective action unit corresponding to a part of the face or a part of the body or a characteristic in the voice.
  • the processor is arranged to identify the non-verbal cues based on characteristics identified in the respective predefined action unit.
  • the processor may be arranged to, for each action unit, compare the evolution of the first and second audio-visual streams with regard to activation of non-verbal cues, to determine a time lag between activations of non-verbal cues in the first and second audio-visual streams and, based on the determined time lags, to determine occasions of activations and non-activations of reactive non-verbal cues.
  • the processor may be arranged to, based on the determined time lags between activations of nonverbal cues in the first and second audio-visual streams for one or a plurality of action units, determine whether the reactive cues are spontaneous or consciously controlled.
  • the processor may be arranged to, based on determined occasions of activations and non-activations of reactive non-verbal cues for one or a plurality of action units, determine whether there is a dynamic in the interaction and/or determine whether any of the persons has a dynamic behaviour.
  • the respective action units may in a simple example have an activated and non-activated state.
  • the state of the action units is defined by its coordinate(s) or relative coordinate(s).
  • the number of states of the respective action unit is not limited.
  • the action units comprise at least one facial action unit corresponding to a predetermined part of the face.
  • the facial action unit comprises a set of coordinates or a relation between coordinates of the predetermined part of the face, wherein the non-verbal cues are determined based on a temporary change in the set of coordinates or relation between coordinates.
  • the action units may comprise at least one facial action unit corresponding to a predetermined part of the face, wherein the facial action unit is defined by a set of coordinates or a relation between coordinates of the predetermined part of the face, wherein the non-verbal cues are determined based on a temporary change in the set of coordinates or a temporary change in the relation between coordinates.
  • the coordinate system is head centred.
  • the action units may for example be defined using the Facial Action Coding System, FACS.
  • movements of individual facial muscles are encoded by FACS from slight, instantaneous changes in facial appearance.
  • FACS is used to systematically categorize the physical expression of emotions. The system has proven useful to psychologists and to animators.
  • a computer-automated FACS implementation detects faces in videos, extracts the geometrical features of the faces, and then produces temporal profiles of each facial movement.
  • a movement in the set of coordinates or a change in relation between coordinates of the predetermined part of the face is determined for the respective facial action unit.
  • facial action units may be provided at both the left and right side of the face.
  • Non-verbal information may be provided from a relation between left side and right side action units. For example, a smile activated at the left side before being activated at the right side indicates that the smile is genuine.
  • Other facial action units may relate to facial movements such as eye movements and also blinking rate and changes of pupil size. Data related to those other facial action units may be provided from for example an eye tracker.
  • the action units comprise at least one body action unit corresponding to a predetermined part of the body.
  • the respective body action unit comprises a set of coordinates or a relation between coordinates of the predetermined part of the body, wherein the non-verbal cues are determined based on a temporary change in the set of coordinates or relation between coordinates.
  • Other body action units may be provided from for example a pulse meter.
  • the action units comprise at least one voice characteristic such as
  • in figure 5 an example for finding reactive non-verbal cues is illustrated.
  • the evolution of the first and second audio-visual streams is compared with regard to activation of non-verbal cues, to determine a time lag between activations of non-verbal cues in the first and second audio-visual streams and, based on the determined time lags, to determine occasions of activations and non-activations of reactive non-verbal cues.
  • HNR Harmonics-to-noise ratio
  • Spectral (balance) parameters: Alpha Ratio, the ratio of the summed energy from 50-1000 Hz and 1-5 kHz
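  • A sketch of how the Alpha Ratio spectral-balance parameter could be computed for a short mono audio frame with numpy; the frame length, windowing and dB conversion are assumptions, and a production system might instead rely on an established acoustic toolkit.

```python
import numpy as np

def alpha_ratio_db(frame, sample_rate):
    """Alpha Ratio: summed spectral energy in 50-1000 Hz versus 1-5 kHz, in dB."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    low = spectrum[(freqs >= 50) & (freqs <= 1000)].sum()
    high = spectrum[(freqs > 1000) & (freqs <= 5000)].sum()
    return 10.0 * np.log10(low / high)

# Toy usage: a 25 ms frame of a synthetic vowel-like signal at 16 kHz.
sr = 16000
t = np.arange(int(0.025 * sr)) / sr
frame = np.sin(2 * np.pi * 200 * t) + 0.2 * np.sin(2 * np.pi * 2500 * t)
print(f"alpha ratio: {alpha_ratio_db(frame, sr):.1f} dB")
```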
  • in figure 6 an example method for interpreting human interpersonal interaction is illustrated.
  • the method is characteristically computer-implemented. The method comprises:
  • each audio-visual stream relating to at least one person during a session
  • identifying (S2) non-verbal cues, such as facial, head and body movements, pupil size changes, and tone of voice
  • the method may further comprise a step of analysing the identified communication pattern to categorize (S5) psycho-social states of the respective person, said psycho-social states comprising at least one of emotion, attention, pro-social, dominance and mirroring, said analyses being performed in a rolling-window time series, wherein the window is user set or set by an algorithm.
  • the interval is for example 0.2-10s.
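  • A sketch of the rolling-window analysis of step S5, assuming per-frame cue intensities have already been extracted for both persons; the 2 s window, the column names and the correlation-based mirroring score are assumptions for illustration.

```python
import numpy as np
import pandas as pd

fs = 25                                             # assumed frame rate
rng = np.random.default_rng(3)
n = 60 * fs                                         # one minute of data
smile_a = rng.random(n)                             # person A's smile intensity per frame
smile_b = 0.7 * smile_a + 0.3 * rng.random(n)       # B's smile largely co-varies with A's
df = pd.DataFrame({"smile_a": smile_a, "smile_b": smile_b})

window = 2 * fs                                     # 2 s rolling window (within 0.2-10 s)
# Windowed correlation between the two smile signals as a crude mirroring score.
df["mirroring"] = df["smile_a"].rolling(window).corr(df["smile_b"])
df["state"] = np.where(df["mirroring"] > 0.5, "mirroring", "no mirroring")
print(df[["mirroring", "state"]].iloc[window::5 * fs].head())
```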
  • the method may further comprise a step of presenting (S7) information relating to the non-verbal communication pattern, such as the categorized psycho-social states of the respective person during the session.
  • the method may further comprise receiving S6, via a user input interface for user input, at least one of
  • session data such as background data, session type, and script/scheme for session
  • data associated to at least one of the audio-visual streams such as timestamps or other markers and/or notes and/or predefined tabs
  • post session data such as interpersonal ratings and task performances.
  • the method may further comprise storing S8 in a database at least one of
    a. the first and/or second audio-visual stream,
    b. the identified non-verbal cues in the first and/or second audio-visual stream,
    c. the categorized psycho-social states,
    d. at least a part of the contents of the first and/or second audio-visual stream converted to text,
    e. received user input data,
    f. additional physiological data obtained from additional sensors,
    g. an enriched transcript of a text from the communication during the session, wherein all or some of the non-verbal information is added to the text.
  • the method may further comprise a step of post-processing (S9) data for a plurality of sessions in the database, said post-processing comprising at least one of

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Psychiatry (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • General Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Social Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Signal Processing (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)

Abstract

The present disclosure relates to a system (100) and a method for interpretation of human interpersonal interaction. The system comprises first and second audio-visual stream generating devices (101a and 101b) each arranged to capture an audio-visual stream relating to at least one person during a session or a series of sessions, wherein the first and second audio-visual stream generating devices are synchronized, and a processor (102) arranged to process each audio-visual stream to identify non-verbal cues, such as facial, head and body movements, pupil size changes, and tone of voice, and to map identified non-verbal cues in the first one of the audio-visual streams to a corresponding, reactive non-verbal cue in the second audio-visual stream and to map identified non-verbal cues in the second one of the audio-visual streams to a corresponding, reactive non-verbal cue in the first audio-visual stream, to thereby identify a non-verbal communication pattern.
PCT/SE2023/050279 2022-03-29 2023-03-28 System and method for interpretation of human interpersonal interaction WO2023191695A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP22165071.6 2022-03-29
EP22165071.6A EP4252643A1 (fr) 2022-03-29 2022-03-29 System and method for interpretation of human interpersonal interaction
US17/716,366 2022-04-08
US17/716,366 US20230315810A1 (en) 2022-03-29 2022-04-08 System and method for interpretation of human interpersonal interaction

Publications (1)

Publication Number Publication Date
WO2023191695A1 true WO2023191695A1 (fr) 2023-10-05

Family

ID=85792710

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2023/050279 WO2023191695A1 (fr) 2022-03-29 2023-03-28 System and method for interpretation of human interpersonal interaction

Country Status (1)

Country Link
WO (1) WO2023191695A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140212853A1 (en) * 2013-01-31 2014-07-31 Sri International Multi-modal modeling of temporal interaction sequences
US20170364741A1 (en) * 2016-06-15 2017-12-21 Stockholm University Computer-based micro-expression analysis

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140212853A1 (en) * 2013-01-31 2014-07-31 Sri International Multi-modal modeling of temporal interaction sequences
US20170364741A1 (en) * 2016-06-15 2017-12-21 Stockholm University Computer-based micro-expression analysis
WO2017216758A1 (fr) 2016-06-15 2017-12-21 Hau Stephan Computer-based micro-expression analysis

Similar Documents

Publication Publication Date Title
Celiktutan et al. Multimodal human-human-robot interactions (mhhri) dataset for studying personality and engagement
Grabowski et al. Emotional expression in psychiatric conditions: New technology for clinicians
Gregersen et al. The motion of emotion: Idiodynamic case studies of learners' foreign language anxiety
Hoque et al. Mach: My automated conversation coach
US20220392625A1 (en) Method and system for an interface to provide activity recommendations
Niewiadomski et al. Laugh-aware virtual agent and its impact on user amusement
Drahota et al. The vocal communication of different kinds of smile
Rizzo et al. Detection and computational analysis of psychological signals using a virtual human interviewing agent
Walter et al. Transsituational individual-specific biopsychological classification of emotions
Delaherche et al. Assessment of the communicative and coordination skills of children with Autism Spectrum Disorders and typically developing children using social signal processing
Hammal et al. Interpersonal coordination of head motion in distressed couples
Dupont et al. Laughter research: A review of the ILHAIRE project
EP3897388B1 (fr) System and method for reading and analysing behaviour, including verbal, body and facial expressions, in order to determine a person's congruence
Dávila-Montero et al. Review and challenges of technologies for real-time human behavior monitoring
US11163965B2 (en) Internet of things group discussion coach
Kuhlen et al. Gesturing integrates top-down and bottom-up information: Joint effects of speakers' expectations and addressees' feedback
Nakagawa et al. New telecare approach based on 3D convolutional neural network for estimating quality of life
Le Maitre et al. Self-talk discrimination in human–robot interaction situations for supporting social awareness
CN116419778A (zh) Training system, training apparatus and training with interactive assistance features
EP4252643A1 (fr) System and method for interpretation of human interpersonal interaction
WO2023191695A1 (fr) System and method for interpretation of human interpersonal interaction
Park et al. I can already guess your answer: Predicting respondent reactions during dyadic negotiation
Bechade et al. Behavioral and emotional spoken cues related to mental states in human-robot social interaction
Iyer et al. A Proposal for virtual mental health assistant
Janssen Connecting people through physiosocial technology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23714377

Country of ref document: EP

Kind code of ref document: A1