WO2020104722A1 - System and method for determining a user's emotional state - Google Patents

System and method for determining a user's emotional state

Info

Publication number
WO2020104722A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
emotional state
partial
emotional
portable device
Prior art date
Application number
PCT/ES2019/070797
Other languages
English (en)
Spanish (es)
Inventor
Rosa SAN SEGUNDO MANUEL
Clara SAINZ DE BARANDA
Marian BLANCO RUIZ
David LARRABEITI LOPEZ
Manuel URUEÑA PASCUAL
Jose Carlos ROBLEDO GARCIA
Carmen PELAEZ MORENO
Ascensión GALLARDO ANTOLÍN
Alba MÍNGUEZ SÁNCHEZ
Teresa RIESGO ALCAIDE
Jose Manuel LANZA GUTIÉRREZ
Rodrigo MARINO ANDRÉS
Jose Angel MIRANDA CALERO
Manuel FELIPE CANABAL
Marta PORTELA GARCÍA
Isabel PEREZ GARCILÓPEZ
Jose Antonio GARCÍA SOUTO
Celia LOPEZ ONGIL
Emilio Olías Ruiz
Mario GARCÍA VALDERAS
Original Assignee
Universidad Carlos Iii De Madrid
Universidad Politécnica de Madrid
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universidad Carlos Iii De Madrid, Universidad Politécnica de Madrid
Publication of WO2020104722A1

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165 Evaluating the state of mind, e.g. depression, anxiety

Definitions

  • The present invention refers to the technical field of emotion recognition through multimodal processing of physiological and audio signals, and more specifically to the automatic and portable monitoring of a user's emotional state, with the possibility of communicating it to third parties or establishing, for example, security measures such as sending alarms to a network of contacts or to emergency services in a dangerous situation.
  • The state of the art includes extensive literature that relates the physiological variations measured in human beings to the changes in their emotional states.
  • Emotional detection is carried out by mapping physiological variables of individuals exposed to external stimuli (videos, audio or images) that produce known emotions.
  • The databases and studies available refer to non-portable solutions that include hundreds of classified metrics, generally making use of a two-dimensional "Arousal - Valence" (AV) classification space, where the level of "Arousal" is directly related to the emotional activation and the "Valence" indicates how "positive" or "negative" that emotion is. Additionally, other dimensions can be included in this space, such as dominance or familiarity.
  • AV Arousal - Valence
  • Electronic devices that are integrated and camouflaged in other types of objects, such as clothing, are very common. They are usually called "wearables", an English expression that refers to the set of devices including bracelets, rings, glasses, jackets or pendants, among others, which allow a user to carry an electronic device in a way that is transparent to third parties and even to the user, while still benefiting from certain functionalities through simple interactions with the device.
  • Some state-of-the-art solutions for the detection of emotional states resort to integration with bracelets or other "wearable" garments and incorporate sensors to detect some basic emotions through intelligent algorithms, which can be combined with inertial sensors or accelerometers to clean the physiological signals of noise due to user movement.
  • In these solutions, the classification of emotions is performed exclusively on the basis of physiological signals, which provides a limited robustness that is insufficient for certain applications where a false alarm is unacceptable, such as cases where certain emotions are linked to requesting an ambulance or the security forces.
  • a system for determining a user's emotional state comprising:
  • a first portable device that has sensor means configured for the acquisition of a set of physiological signals from the user, where the first portable device comprises a first processor module configured to determine, from the set of physiological signals, a first partial emotional state with a first level of said emotional state; to determine whether the first partial emotional state coincides with a previously established objective emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; and to transmit, if so, an alarm message to the user's mobile communication device;
  • a second portable device configured to acquire an audio signal upon receiving an activation instruction from the user's mobile communication device;
  • a second processor module configured to determine, from the audio signal, a second partial emotional state with a second level (66) of said emotional state, and to determine whether it coincides with the objective emotional state and whether it exceeds a previously established second threshold;
  • a mobile communication device comprising:
  • a wireless communication module configured to receive the alarm message from the first portable device and send the activation instruction to the second portable device
  • an analyzer module configured to determine, in case the second processor module indicates that the second level has exceeded the second threshold, the presence of the objective emotional state in the user, based on an analysis of the levels of the partial emotional states by a machine learning algorithm.
  • The analyzer module of the present invention, in one of the particular embodiments, is further configured to, once the presence of the target emotion is determined in the user, automatically send a message with information from the analysis of the analyzer module to a remote system through a telecommunication network, where the remote system is selected from: a user contact network, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
  • The sensor means of the first portable device comprise, according to one of the embodiments of the invention: galvanic skin response (GSR) detecting means, configured to obtain a signal with information on the conductivity of the user's skin; blood volume pulse (BVP) detecting means, configured to obtain a signal with information on the user's heart pulse; and temperature detecting means, configured to obtain a signal with the user's skin temperature (SKT) information.
  • GSR galvanic skin response
  • BVP blood volume pulse
  • SKT skin temperature information
  • the present invention contemplates the possibility that the analyzer module is a computational unit configured to determine the emotional state of the user according to training data previously provided.
  • the present invention comprises a controller module configured for an initial training of the system, where the controller module comprises a first database of physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states and a second database of audio signals previously associated with specific emotional states.
  • A particular embodiment of the present invention contemplates incorporating into the second portable device a microphone configured to acquire the audio signal. Additionally, the possibility is contemplated that the second processor module incorporates a voice activity detector configured to determine the presence of silences in the acquired audio signal and to count such silences.
  • At least one of the portable devices of the present invention comprises a button configured to transmit, when pressed by the user, a distress message to the mobile communication device; and where the mobile communication device is further configured to, in response to receiving the distress message, forward the distress message to a previously established group of user contacts.
  • The system comprises a bracelet, with a first casing that houses the first portable device inside; and a pendant, with a second casing that houses the second portable device inside; where the mobile communication device is a smartphone-type mobile phone that integrates the second processor module.
  • the system comprises a bracelet, with a first casing that houses the first portable device inside; a pendant, with a second casing that houses inside it the second portable device and the second processor module; where the mobile communication device is a smartphone-type mobile phone.
  • A second aspect of the invention refers to a method for determining the presence of an emotional state in a user, comprising the steps of: acquiring, by means of sensor means arranged in a first portable device, a set of physiological signals; determining, by a first processor module, a first partial emotional state from the set of physiological signals, with a first level of said first partial emotional state; determining whether the first partial emotional state coincides with a previously established objective emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; if so, transmitting an alarm message to the user's mobile communication device, using a wireless communication module; as a result of receiving the alarm message, acquiring an audio signal through a second portable device in communication with the mobile communication device; determining, in a second processor module, a second partial emotional state from the audio signal, with a second level of said second partial emotional state; and determining whether the second partial emotional state coincides with the previously established objective emotional state and whether the second level of said second partial emotional state exceeds a previously established second threshold.
  • Determining the presence of the user's objective emotional state comprises the steps of: determining a total emotional level based on the levels of the partial emotional states; comparing the total emotional level with a previously established third threshold; and determining that the emotional state is present in the event that said total emotional level exceeds said third threshold.
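  • The decision cascade described above can be summarised, as a non-limiting illustration, in the following Python sketch; the threshold values, the equal fusion weights and the function names are assumptions introduced for the example and do not appear in the source.

```python
# Sketch of the two-stage decision cascade plus the final fusion step.
# Thresholds, weights and names are illustrative assumptions only.

def physiological_stage(level_1: float, matches_target: bool,
                        threshold_1: float = 0.6) -> bool:
    """First stage (bracelet): raise the first alarm only if the partial
    emotional state matches the target emotion and exceeds the first threshold."""
    return matches_target and level_1 > threshold_1


def audio_stage(level_2: float, matches_target: bool,
                threshold_2: float = 0.6) -> bool:
    """Second stage (audio): confirm the target emotion from the audio signal."""
    return matches_target and level_2 > threshold_2


def final_decision(level_1: float, level_2: float,
                   threshold_3: float = 0.65,
                   w1: float = 0.5, w2: float = 0.5) -> bool:
    """Analyzer module: fuse the two partial levels into a total emotional
    level (here a weighted average, one possible rule) and compare it with
    the third threshold."""
    total_level = w1 * level_1 + w2 * level_2
    return total_level > threshold_3
```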
  • A message with information from the analysis of the analyzer module is automatically sent to a remote system through a telecommunication network (7), where the remote system is selected from: a network of user contacts, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
  • The set of physiological signals comprises three physiological signals, and acquiring the set of physiological signals by means of the sensor means comprises acquiring a signal with information on the conductivity of the user's skin (GSR), acquiring a signal with information on the user's heart pulse (BVP), and acquiring a signal with information on the user's skin temperature (SKT).
  • GSR conductivity of the skin
  • BVP heart pulse
  • SKT skin temperature
  • Determining a partial emotional state with a level of said partial emotional state comprises mapping a point in a three-dimensional space that represents all emotional states, based on the numerical values of the three variables "Pleasure/Valence" - "Arousal" - "Dominance", assigned for the set of acquired physiological signals or for certain characteristics of the acquired audio signal.
  • The possibility of adding a prior training stage is contemplated, which includes: feeding a first database with physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states; feeding a second database with audio signals with spectral and/or prosodic characteristics previously associated with specific emotional states; recording a deviation, with respect to the first and second databases, of the physiological signals and the spectral and prosodic characteristics of the audio signal provided by the user; and adapting the first processor module, the second processor module and the analyzer module to the deviations registered for the user.
  • the proposed invention is based on wireless technologies to offer a distributed solution on various devices connected to the user's mobile phone, preferably a bracelet and a pendant.
  • The user can carry the invention without any inconvenience and without it being perceived by third parties.
  • It is completely transparent to an eventual aggressor, reducing the chances that it will be detected and disabled by said aggressor.
  • the present invention monitors, registers and collects events derived from the detection of user emotions, such as panic and stress caused, for example, by a sexual or violent attack. Subsequent to the detection of said events, advantageously, a set of alarms can be triggered automatically towards a network of "guardians", previously configured through a mobile application, or towards emergency / security services.
  • the characteristics of the present invention imply a multitude of advantages for the user.
  • The incorporation of the audio signal in the process of decision, classification and determination of the user's emotional state provides a new approach: together with the learning algorithm applied to it, it adds value to the proposed system by providing greater robustness and precision to the emotional inference and, in addition, by identifying possible sounds external to the user (silences, slamming doors, shots, etc.). Therefore, the present invention provides the user with the ability to react and quickly warn of possible assaults, for example of a sexual nature.
  • Figure 1 schematically represents an embodiment of the complete system of the invention.
  • Figure 2 represents an embodiment of one of the portable pendant devices.
  • Figure 3 represents an embodiment of one of the portable devices in the form of a bracelet.
  • Figure 4 shows in a block diagram the multimodal nature of the present invention.
  • Figure 5 shows by means of a block diagram the treatment of physiological signals in one of the embodiments of the invention.
  • Figure 6 shows by means of a block diagram the treatment of the audio signal in one of the embodiments of the invention.
  • Figure 7 represents a three-dimensional space used by the present invention to divide the space into eight emotional quadrants.
  • Figure 8 represents in a block diagram the training phase and the personalized initial configuration method of the system of the present invention.
  • Figure 9 schematically represents an embodiment of the complete system of the invention.
  • The present invention discloses a method and a distributed system for detecting the emotional state of a user, which may correspond to situations of risk or imminent aggression, or to other emotions that can be used for medical, sports or other purposes. It uses for this an effective multimodal integration of physiological and physical signals, preferably external audio and voice (although other audiovisual signals could also be used), by means of sensors that can be integrated into a portable "wearable" solution camouflaged in clothing and/or accessories, which is capable of alerting the user's circle of trust or the security forces.
  • One of the embodiments of the invention, especially advantageous for detecting an objective emotion associated with states of panic or blockage related to situations of violence or sexual assault, is represented in Figures 1 and 9, where the system is composed of three main devices: two portable devices camouflaged in clothing or "wearables", which in this case are a bracelet 1 and a pendant 2, and a mobile communication device, which in this case is a smartphone-type mobile phone 3 on which a specifically designed application 4 is running.
  • The bracelet acquires and monitors physiological signals, which are captured through biometric sensors 5 over short periods of time, and applies machine learning algorithms in order to provide a first level of alert in case of positive detection of the target emotion, which in this embodiment is the detection of an emotion of panic or blockage in the face of a possible aggression.
  • Said alert is sent by means of a short-range wireless communication module 6, for example Bluetooth, to the user's mobile phone, which evaluates the content and sends, via the wireless communication network, an order to the pendant to activate the next detection layer of the system.
  • The pendant then begins to acquire audio from the user's environment, compresses it and sends it to the mobile phone, which applies machine learning algorithms to detect signs of risk, such as a certain level of stress, using said audio acquired by the microphone arranged on the user's pendant.
  • In that case, a network of trusted contacts 11, previously established by the user during the initial configuration from the software application 4, is notified through a telecommunication network 7, or an emergency service 9 is notified directly.
  • The telecommunication network used can be any network suitable for mobile telephony (GPRS/3G/4G), or it can be based directly on a WiFi connection to the Internet, which sends 10 push notifications to the chosen contact network through a dedicated server 8.
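  • As a purely illustrative sketch of this notification path, the mobile application could deliver an alert to the dedicated server over any of the above connections with a plain HTTPS request; the endpoint URL and the payload fields below are hypothetical, and only the general mechanism (a JSON alert relayed as push notifications by the server) comes from the description.

```python
import json
import urllib.request

# Hypothetical endpoint: the description only states that alerts reach the
# contact network through a dedicated server; the URL and fields are assumed.
ALERT_ENDPOINT = "https://example.org/api/alerts"


def send_alert(user_id: str, emotion: str, confidence: float,
               latitude: float, longitude: float) -> int:
    """POST a JSON alert to the server, which relays push notifications
    to the configured contact network."""
    payload = {
        "user": user_id,
        "emotion": emotion,
        "confidence": confidence,
        "location": {"lat": latitude, "lon": longitude},
    }
    request = urllib.request.Request(
        ALERT_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status
```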
  • FIG. 2 represents an embodiment of one of the portable devices in which the present invention is distributed.
  • the portable device is implemented in a pendant 2, although in other embodiments wearables such as earrings, headbands, piercings or brooches are also contemplated.
  • The pendant comprises a casing that houses inside it the electronic components necessary for acquiring a physical signal, in this case a microphone 20 for capturing an audio signal, as well as for wireless communications with the mobile phone, a battery and a microprocessor.
  • The casing has a microperforation 21 on its outer face, coinciding with the microphone housed inside the casing, to facilitate audio reception.
  • The exterior face of the pendant has a manually operated panic button 22, which immediately sends, via the mobile phone, a distress message to the network of contacts.
  • the panic button is camouflaged in the pendant design.
  • the rear face of the pendant has a small hole that allows access to the electronic components inside the casing by means of an elongated pointer-type object or a pin. This access is limited to a reset button to restart the device.
  • the pendant additionally comprises a camera for the acquisition of the physical signal.
  • It adds functionalities and additional information to the system by acquiring images and video.
  • FIG. 3 depicts an embodiment of another of the portable devices in which the present invention is distributed.
  • The portable device is implemented in a bracelet 1, although in other embodiments wearables such as wristbands or anklets are also contemplated.
  • the bracelet has a casing that houses a microprocessor and the electronic components necessary for the acquisition of the user's physiological signals.
  • A first sensor 31, a galvanic skin response detector, is included, which is preferably arranged outside the casing so that a pair of electrodes makes contact with the user's skin; said first sensor is configured to obtain a signal with information on the user's skin conductivity (GSR). A second sensor 32, a blood volume pulse (BVP) detector, is configured to obtain a signal with information on the user's heart pulse; and a third sensor 33, a temperature detector, is configured to obtain a signal with the user's skin temperature (SKT) information.
  • GSR user skin conductivity
  • BVP blood volume pulse detector
  • SKT skin temperature information
  • the sensors are arranged on the internal face of the bracelet so that, when placed on the user's wrist, they remain in contact with their skin.
  • the interior of the housing also houses a short-range wireless communication module, preferably Bluetooth, a battery, and microprocessor 36.
  • The casing further features a perforation 34 on its outer face, coincident with a reset push button housed inside the casing and accessible with an elongated pointer or pin to reset the device.
  • The outer face of the bracelet features a manually operable panic button 35, which immediately sends, via the mobile phone, a distress message to the network of contacts or to the recipient that has been previously configured.
  • In a preferred embodiment of the present invention, the multimodal emotion recognition system is fed by the following physiological variables: skin conductivity 40 ("Galvanic Skin Response", GSR), blood volume pulse 41 ("Blood Volume Pulse", BVP), and temperature 42 ("Skin Temperature", SKT). On the other hand, it is also fed by a physical variable, which in this case is audio 43 and includes the user's voice along with the sound of the environment.
  • GSR Skin conductivity 40
  • BVP Blood volume pulse 41
  • SKT Skin Temperature
  • Since the communication device is configured to receive both the information processed by the first portable device and that processed by the second portable device, the method of the present invention is carried out in two stages, where the first stage (processing of the physiological signals) acts as a key for the second stage (audio processing): without a first alarm, detected exclusively from the physiological signals, the rest of the communications or processes in the second portable device are not established and, only once a second alarm has occurred after processing the audio, multimodal processing is activated in an analyzer module 47 of the communication device, which merges 45 the data of both classifications, determining 46 the emotion that the user is feeling.
  • Figure 5 represents the first of said blocks, specifically the block in charge of treating physiological signals, where a processor module 50 comprising a microprocessor integrated in the bracelet is in charge of all the processing.
  • The physiological signals obtained by the bracelet's biometric sensors are processed in several phases: first, the raw signals 51 undergo noise removal 52; then, the signals go through a standardization process 53 and a feature extraction process 54, where the features include the mean, the standard deviation, the mean of the absolute values of the first difference, the mean of the absolute values of the first difference of the normalized signal, the mean of the absolute values of the second difference, the mean of the first difference of the smoothed signal, etc.
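  • A minimal sketch of this feature-extraction step, assuming a NumPy implementation and a simple moving-average smoother (the window length is an assumption), could look as follows; it computes the statistics listed above for a single physiological signal window.

```python
import numpy as np


def smooth(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing; the window length is an illustrative choice."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")


def extract_features(signal: np.ndarray) -> np.ndarray:
    """Statistical features of the kind listed above for one physiological
    signal (GSR, BVP or SKT) over a short time window."""
    normalized = (signal - signal.mean()) / (signal.std() + 1e-8)
    d1 = np.diff(signal)                 # first difference
    d1_norm = np.diff(normalized)        # first difference of the normalized signal
    d2 = np.diff(signal, n=2)            # second difference
    d1_smooth = np.diff(smooth(signal))  # first difference of the smoothed signal
    return np.array([
        signal.mean(),
        signal.std(),
        np.abs(d1).mean(),
        np.abs(d1_norm).mean(),
        np.abs(d2).mean(),
        d1_smooth.mean(),
    ])
```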
  • the characteristics are merged to finally classify 56 the results and obtain an output signal with an emotional level 57.
  • Classifier 56 applies a classical low-cost machine learning algorithm to classify the emotion perceived by the system, for example the K-Nearest Neighbors (KNN) method.
  • This algorithm requires training data or space points, which have been previously obtained during the system configuration, which is carried out offline and is detailed later.
  • the emotional level 57 determined at the output of the processor module 50 presents a confidence level that indicates, as a percentage, the probability that the combination of characteristics extracted from the physiological signals corresponds to a specific emotion.
  • the confidence level of each emotion is, therefore, the metric that quantifies the presence of said emotion in the user.
  • The output signal with an emotional level 59 comprises information on whether or not the confidence level of the target emotion is above a previously determined threshold.
  • The threshold is set at the value from which the objective emotion is considered to be predominant over the other emotions; that is, the detection threshold is the point from which the emotion of interest is predominant (in time) with respect to the other emotions detected in the same period.
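  • A possible sketch of this classification stage, assuming a scikit-learn KNN classifier (the patent does not name a library) and illustrative values for K and for the first threshold, is shown below; the confidence of the target emotion is taken as the fraction of the K neighbours that vote for it.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative values: K, the threshold and the target label are assumptions.
K = 5
TARGET_EMOTION = "fear"
FIRST_THRESHOLD = 0.6


def train_classifier(features: np.ndarray, labels: list) -> KNeighborsClassifier:
    """Fit KNN on the user-specific training points gathered offline."""
    model = KNeighborsClassifier(n_neighbors=K)
    model.fit(features, labels)
    return model


def first_stage_alarm(model: KNeighborsClassifier, feature_vector: np.ndarray) -> bool:
    """Confidence = share of the K neighbours voting for the target emotion;
    the first alarm is raised only when it exceeds the first threshold."""
    probabilities = model.predict_proba(feature_vector.reshape(1, -1))[0]
    classes = list(model.classes_)
    confidence = (probabilities[classes.index(TARGET_EMOTION)]
                  if TARGET_EMOTION in classes else 0.0)
    return confidence > FIRST_THRESHOLD
```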
  • In case the threshold established for the confidence level of the target emotion is exceeded, the bracelet establishes communication with the user's phone to transmit the target emotion level detected in a first alarm message, which causes the user's phone to immediately send an activation message to the pendant to start recording audio.
  • Figure 6 represents the second of said blocks to carry out the extraction of the category or type of emotion, specifically the block in charge of processing the physical signal.
  • The physical signals, in this case the audio, begin to be acquired after the pendant receives an order sent from the mobile phone, which sends said order only after receiving an indication, the first alarm message, from the bracelet that the analysis of the physiological signals has classified the target emotion with a confidence above the predetermined threshold.
  • the pendant microphone is activated and starts recording an audio signal 60.
  • This audio signal is processed locally by the pendant microprocessor to compress it and transmit it 61 to the mobile phone via the Bluetooth wireless communication network.
  • In the processor module 62, which in this embodiment is integrated into the mobile phone, the compressed signal is decompressed 63 and the feature extraction 64 is carried out. After extracting the characteristics of the audio signal, it is classified 65 to obtain an output signal with a classification of the target emotion and a confidence level 66 obtained independently of the analysis of the physiological signals.
  • Alternatively, the processor module 62 is integrated in the microprocessor of the second portable device, for example the pendant, so that the output signal with a classification of the target emotion and a confidence level 66 is obtained in the pendant itself before anything is transmitted to the mobile phone.
  • The analyzer module 47 then activates the multimodal processing explained above, which merges 45 the data from both classifications, determining 46 the presence of the target emotion in the user, based on the analysis, by means of a machine learning algorithm, of the output signals of the blocks of Figures 5 and 6.
  • The processor module extracts various spectral and prosodic characteristics from the audio signal that are later classified 65 by applying a classic low-cost machine learning algorithm, in order to classify and calculate the confidence level 66 of the emotion perceived by the system and thus confirm or reject the objective emotion detected by the first portable device.
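  • By way of illustration only, a few generic spectral and prosodic descriptors of the kind this stage could use (short-time energy, zero-crossing rate, spectral centroid and a crude autocorrelation-based pitch estimate) are sketched below with NumPy; the patent does not specify the concrete feature set, so this selection is an assumption.

```python
import numpy as np


def audio_features(frame: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Generic spectral/prosodic descriptors for one audio frame."""
    energy = float(np.mean(frame ** 2))                        # short-time energy
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # zero-crossing rate
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-8))
    # Crude fundamental-frequency estimate via autocorrelation (prosodic cue).
    autocorr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = sample_rate // 400                   # ignore pitches above ~400 Hz
    peak_lag = lag_min + int(np.argmax(autocorr[lag_min:]))
    f0 = float(sample_rate / peak_lag)
    return np.array([energy, zcr, centroid, f0])
```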
  • the distress message is sent to the contacts configured by the user, or according to other embodiments, to an emergency service, a medical center or any other agent that is considered appropriate to act upon the detection of a certain emotion.
  • A voice activity detector ("Voice Activity Detection", VAD) is also used to eliminate and count possible silences within the recorded audio. In this way, it is possible not only to detect the user's voice, but also to detect possible relevant sounds from the environment (silences, slamming doors, shots, etc.).
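  • A minimal energy-based VAD, of the kind that could remove and count the silences mentioned above, is sketched below; the frame length and the energy threshold are illustrative assumptions.

```python
import numpy as np


def detect_silences(audio: np.ndarray, sample_rate: int = 16000,
                    frame_ms: int = 20, energy_threshold: float = 1e-4):
    """Split the signal into frames, discard frames below an energy threshold
    as silence, and return the voiced samples together with the silence count."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    voiced_frames, silence_count = [], 0
    for i in range(n_frames):
        frame = audio[i * frame_len:(i + 1) * frame_len]
        if np.mean(frame ** 2) < energy_threshold:
            silence_count += 1            # silences are counted, not kept
        else:
            voiced_frames.append(frame)
    voiced = np.concatenate(voiced_frames) if voiced_frames else np.array([])
    return voiced, silence_count
```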
  • The outputs of the blocks of Figures 5 and 6, which partially extract the category or type of emotion of the user by separately processing the combination of physiological signals and the audio signal respectively, are fused to determine the emotional state of the user within previously defined emotional quadrants and, specifically, to determine whether or not the target emotion is present in the user.
  • Figure 7 shows a three-dimensional space used by the present invention to divide the space among eight emotional quadrants.
  • This distribution is based on a "PAD-Space" model with three coordinate axes that represent all emotional states based on the numerical values assigned to the variables Pleasure/Valence, Arousal and Dominance ("PAD"), where each of said emotional states is represented at a vertex of the cube depicted in said three-dimensional space.
  • A vertex of the cube is assigned to each of the following eight emotional states: joy 71 (+p, +a, +d), gratitude 72 (+p, +a, -d), submission 73 (+p, -a, -d), anguish 74 (-p, -a, -d), relief 75 (+p, -a, +d), contempt 76 (-p, -a, +d), fear 77 (-p, +a, -d), and anger 78 (-p, +a, +d).
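  • The assignment of sign patterns to these eight emotional quadrants can be written directly as a lookup table, as in the sketch below; thresholding each coordinate at zero is an illustrative choice, since the patent only states that each emotion occupies one vertex of the PAD cube.

```python
# Mapping of (pleasure, arousal, dominance) sign patterns to the eight
# emotional quadrants listed above.
PAD_QUADRANTS = {
    ("+", "+", "+"): "joy",
    ("+", "+", "-"): "gratitude",
    ("+", "-", "-"): "submission",
    ("-", "-", "-"): "anguish",
    ("+", "-", "+"): "relief",
    ("-", "-", "+"): "contempt",
    ("-", "+", "-"): "fear",
    ("-", "+", "+"): "anger",
}


def pad_quadrant(pleasure: float, arousal: float, dominance: float) -> str:
    """Return the emotional quadrant for a point of the PAD space."""
    signs = tuple("+" if value >= 0 else "-"
                  for value in (pleasure, arousal, dominance))
    return PAD_QUADRANTS[signs]
```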
  • The variable "Pleasure/Valence" measures how pleasant the user perceives a certain stimulus to be. Thus, "anger" 78 or "fear" 77, being emotions classified as unpleasant, are located at the negative end (-p). In contrast, "joy" 71, being an emotion classified as pleasant, is situated at the pleasant end (+p).
  • The user's emotional situation is therefore represented in one of the eight emotional quadrants defined by these three dimensions, depending on the results obtained for each of the three variables, Pleasure/Valence - Arousal - Dominance, when analyzing, on the one hand, in classifier 58, the combination of the characteristics extracted from all the physiological signals and, on the other hand, in classifier module 64, the characteristics extracted from the audio signal.
  • the combination 45 of the information provided by the physiological signals and the audio signal results in a single final characterization 46 of the user's emotional state, comprising the target emotion and a confidence level.
  • the present invention can be configured to send this information to a remote location / user through a wireless telecommunication network, where the necessary preventive or decisive actions will be taken that are considered in each of the applications of the invention.
  • the information may also include user geolocation information, which is automatically sent upon determination that the target emotion level has exceeded a preset threshold.
  • Figure 8 represents the training phase and personalized initial configuration method of the system of the present invention, which provides the classic machine learning algorithm of the analyzer module with the feature vectors and labels necessary to carry out the training.
  • A controller module with machine learning algorithms and access to two independent databases is available: one of physiological signals 81 and the other of audio signals 82.
  • Said initial configuration is divided into two different processes based on the signals to be captured, which share the signal conditioning and feature extraction of Figures 5 and 6.
  • the user is stimulated through audiovisual content, previously labeled with a specific emotional quadrant.
  • the physiological variations produced are recorded and stored 83 in the database of physiological signals 81 and, at the end of the process, the controller module obtains a predictive model 84 trained for the particular user.
  • The combination of the selected characteristics of the three physiological variables used in this preferred embodiment, which obviously could be others in alternative embodiments of the invention, is numerically characterized, and the influence of its variations can be directly transferred to each of the three axes of the PAD space used to represent all emotional states, which can be mapped based on the numerical values assigned to the variables Pleasure/Valence - Arousal - Dominance.
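  • One way this per-user training could be implemented is sketched below: the labelled feature vectors come from the stimulus sessions stored in the database, the per-feature mean and standard deviation are kept as the recorded user-specific deviation, and a KNN predictive model is fitted on the normalised data. Interpreting the "deviation" as normalisation statistics, and the value of K, are assumptions of the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def train_user_model(feature_vectors: np.ndarray, quadrant_labels: list):
    """Fit a user-specific predictive model from labelled stimulus sessions."""
    mean = feature_vectors.mean(axis=0)
    std = feature_vectors.std(axis=0) + 1e-8
    normalized = (feature_vectors - mean) / std   # recorded user deviation
    model = KNeighborsClassifier(n_neighbors=5)   # K is an assumed value
    model.fit(normalized, quadrant_labels)
    return model, (mean, std)
```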
  • The process for the audio signals is analogous to that described for the physiological signals, with the difference that the training process is carried out with voice recordings, spoken by the user himself, of texts previously labeled with a specific emotional quadrant.
  • The variations in the spectral and prosodic characteristics of the user's voice are recorded and stored 85 in the audio signal database 82.
  • At the end of the process, the controller module obtains a predictive model 86 trained for the particular user.
  • The combination of the selected characteristics of the audio signal is numerically characterized, and the influence of its variations can be directly transferred to each of the three axes of the PAD space used to represent all the emotional states, which can be mapped based on the numerical values assigned to the variables Pleasure/Valence - Arousal - Dominance.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Developmental Disabilities (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Telephone Function (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention relates to a method and a system for determining a user's emotional state by means of the multimodal integration of signals in a distributed wearable solution comprising: a first portable device with sensor means for acquiring physiological signals from the user, configured to partially determine the presence of an objective emotion; a second portable device for acquiring an audio signal; a processor for partially determining, from the audio signal, the presence of the objective emotion; and a portable wireless communication device, in communication with both portable devices, for determining the presence of the objective emotional state in the user on the basis of an analysis of the partial emotional states by a machine learning algorithm.
PCT/ES2019/070797 2018-11-21 2019-11-21 Système et méthode pour déterminer l'état émotionnel d'un utilisateur WO2020104722A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ES201831130A ES2762277A1 (es) 2018-11-21 2018-11-21 Sistema y metodo para determinar un estado emocional de un usuario
ESP201831130 2018-11-21

Publications (1)

Publication Number Publication Date
WO2020104722A1 true WO2020104722A1 (fr) 2020-05-28

Family

ID=70736838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2019/070797 WO2020104722A1 (fr) 2018-11-21 2019-11-21 Système et méthode pour déterminer l'état émotionnel d'un utilisateur

Country Status (2)

Country Link
ES (1) ES2762277A1 (fr)
WO (1) WO2020104722A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042924A1 (fr) 2020-08-24 2022-03-03 Viele Sara Procédé et dispositif pour déterminer l'état mental d'un utilisateur

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210192536A1 (en) * 2019-12-24 2021-06-24 Avaya Inc. System and method for adaptive agent scripting

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110288379A1 (en) * 2007-08-02 2011-11-24 Wuxi Microsens Co., Ltd. Body sign dynamically monitoring system
US20120308971A1 (en) * 2011-05-31 2012-12-06 Hyun Soon Shin Emotion recognition-based bodyguard system, emotion recognition device, image and sensor control apparatus, personal protection management apparatus, and control methods thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110288379A1 (en) * 2007-08-02 2011-11-24 Wuxi Microsens Co., Ltd. Body sign dynamically monitoring system
US20120308971A1 (en) * 2011-05-31 2012-12-06 Hyun Soon Shin Emotion recognition-based bodyguard system, emotion recognition device, image and sensor control apparatus, personal protection management apparatus, and control methods thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MIRANDA CALERO JOSE ANGEL ET AL.: "Embedded Emotion Recognition within Cyber-Physical Systems using Physiological Signals", 2018 CONFERENCE ON DESIGN OF CIRCUITS AND INTEGRATED SYSTEMS (DCIS), 14 November 2018 (2018-11-14), pages 1 - 6, XP033534676, DOI: 10.1109/DCIS.2018.8681496 *
WIOLETA SZWOCH: "Using physiological signals for emotion recognition", 2013 6TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTIONS (HSI), 6 June 2013 (2013-06-06), pages 556 - 561, XP032475731, ISSN: 2158-2246, ISBN: 978-1-4673-5635-0, DOI: 10.1109/HSI.2013.6577880 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042924A1 (fr) 2020-08-24 2022-03-03 Viele Sara Procédé et dispositif pour déterminer l'état mental d'un utilisateur

Also Published As

Publication number Publication date
ES2762277A1 (es) 2020-05-22

Similar Documents

Publication Publication Date Title
US11158179B2 (en) Method and system to improve accuracy of fall detection using multi-sensor fusion
US11024142B2 (en) Event detector for issuing a notification responsive to occurrence of an event
Erden et al. Sensors in assisted living: A survey of signal and image processing methods
EP3416146B1 (fr) Système de surveillance et d'alarme d'état et de comportement de corps humain
US11308744B1 (en) Wrist-wearable tracking and monitoring device
KR20160054397A (ko) 조기에 위험을 경고하는 방법 및 장치
US20190228633A1 (en) Fall Warning For A User
Shiba et al. Fall detection utilizing frequency distribution trajectory by microwave Doppler sensor
KR101654708B1 (ko) 웨어러블 센서 기반 사용자 안전 시스템 및 방법
WO2020104722A1 (fr) Système et méthode pour déterminer l'état émotionnel d'un utilisateur
Ramachandiran et al. A survey on women safety device using IoT
US20200037904A1 (en) Systems, Devices, and/or Methods for Managing Health
Banjar et al. Fall event detection using the mean absolute deviated local ternary patterns and BiLSTM
KR102386182B1 (ko) 생물학적 고려사항을 이용한 손상 검출
Kulkarni et al. Smart AIOT based woman security system
KR102188076B1 (ko) 노년층 피보호자를 모니터링하기 위한 IoT 기술을 이용하는 방법 및 그 장치
ES1269890U (es) Sistema para determinar un estado emocional de un usuario
Khawandi et al. Applying machine learning algorithm in fall detection monitoring system
Kalaiselvi et al. Emergency Tracking system using Intelligent agent
Khel et al. Technical analysis of fall detection techniques
US20230410636A1 (en) System for managing a network of personal safety accessories
Abraham et al. Pro-Safe: An IoT based Smart Application for Emergency Help
Chatterjee et al. A Novel Approach Towards Identification of Alcohol and Drug Induced People
Salama et al. An intelligent mobile app for fall detection
Khawandi et al. Applying neural network architecture in a multi-sensor monitoring system for the elderly

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19886826

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19886826

Country of ref document: EP

Kind code of ref document: A1