WO2020104722A1 - System and method for determining the emotional state of a user - Google Patents

System and method for determining the emotional state of a user

Info

Publication number
WO2020104722A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
emotional state
partial
emotional
portable device
Prior art date
Application number
PCT/ES2019/070797
Other languages
Spanish (es)
French (fr)
Inventor
Rosa SAN SEGUNDO MANUEL
Clara SAINZ DE BARANDA
Marian BLANCO RUIZ
David LARRABEITI LOPEZ
Manuel URUEÑA PASCUAL
Jose Carlos ROBLEDO GARCIA
Carmen PELAEZ MORENO
Ascensión GALLARDO ANTOLÍN
Alba MÍNGUEZ SÁNCHEZ
Teresa RIESGO ALCAIDE
Jose Manuel LANZA GUTIÉRREZ
Rodrigo MARINO ANDRÉS
Jose Angel MIRANDA CALERO
Manuel FELIPE CANABAL
Marta PORTELA GARCÍA
Isabel PEREZ GARCILÓPEZ
Jose Antonio GARCÍA SOUTO
Celia LOPEZ ONGIL
Emilio Olías Ruiz
Mario GARCÍA VALDERAS
Original Assignee
Universidad Carlos Iii De Madrid
Universidad Politécnica de Madrid
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universidad Carlos Iii De Madrid, Universidad Politécnica de Madrid filed Critical Universidad Carlos Iii De Madrid
Publication of WO2020104722A1 publication Critical patent/WO2020104722A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety

Definitions

  • the present invention refers to the technical field of emotion recognition through multimodal processing of physiological and audio signals, and more specifically to automatic and portable monitoring of a user's emotional state, with the possibility of communicating it to third parties or establishing, for example, security measures such as sending alarms to a network of contacts or emergencies in a dangerous situation.
  • the state of the art includes extensive literature that relates the physiological variations measured in human beings with the changes in their emotional states.
  • emotional detection is carried out by mapping physiological variables of individuals exposed to external stimuli (videos, audios or images) that produce known emotions.
  • the databases and studies available refer to non-portable solutions that include hundreds of classified metrics, generally making use of a two-dimensional classification space "Arousal - Valence” (AV), where the level of "Arousal” is directly related with the emotional activation and the "Valence” indicates how "positive” or “negative” that emotion is. Additionally, other dimensions can be included in this space, such as dominance or familiarity.
  • electronic devices that are integrated and camouflaged in other types of objects, such as clothing, are very common. They are usually called "wearables", a term that refers to the set of devices including bracelets, rings, glasses, jackets or pendants, among others, that allow a user to carry any type of electronic device in a way that is transparent to third parties and even to the user, while still benefiting from certain functionalities through simple interactions with the device.
  • Some state-of-the-art solutions for detecting emotional states rely on integration with bracelets or other wearable garments and incorporate sensors to detect some basic emotions by means of intelligent algorithms, which can be combined with inertial sensors or accelerometers to clean the physiological signals of the noise due to user movement.
  • however, the classification of emotions is performed exclusively on the basis of physiological signals, which provides a limited robustness that is insufficient for certain applications in which a false alarm is unacceptable, such as cases in which certain emotions have been linked to the request of an ambulance or security forces.
  • a system for determining a user's emotional state comprising:
  • a first portable device that has sensor means configured for the acquisition of a set of physiological signals from the user, where the first portable device comprises a first processor module configured to determine, from the set of physiological signals, a first partial emotional state with a first level of said emotional state; determining whether the first partial emotional state coincides with a previously established objective emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; and transmit, if so, an alarm message to a mobile communication device of the user;
  • a second portable device configured to acquire an audio signal upon receiving an activation instruction from a mobile user communication device;
  • a second processor module configured to determine, from the audio signal, a second partial emotional state with a second level (66) of said emotional state, and to determine whether it coincides with the objective emotional state and whether it exceeds a previously established second threshold;
  • a mobile communication device comprising:
  • a wireless communication module configured to receive the alarm message from the first portable device and send the activation instruction to the second portable device
  • an analyzer module configured to determine, in the event that the second processor module indicates that the second level has exceeded the second threshold, the presence of the objective emotional state in the user, based on an analysis of the levels of the partial emotional states by a machine learning algorithm.
  • the analyzer module of the present invention, in one particular embodiment, is further configured to, once the presence of the target emotion in the user has been determined, automatically send a message with information from the analysis of the analyzer module to a remote system, through a telecommunication network, where the remote system is selected from: a network of user contacts, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
  • the sensor means of the first portable device comprise, according to one of the embodiments of the invention: galvanic skin response (GSR) detecting means, configured to obtain a signal with information on the conductivity of the user's skin; pulse blood volume detecting means (BVP) configured to obtain a signal with information on the user's heart pulse; and a temperature detecting means configured to obtain a signal with the user's skin temperature information (SKT).
  • the present invention contemplates the possibility that the analyzer module is a computational unit configured to determine the emotional state of the user according to training data previously provided.
  • the present invention comprises a controller module configured for an initial training of the system, where the controller module comprises a first database of physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states and a second database of audio signals previously associated with specific emotional states.
  • a particular embodiment of the present invention contemplates incorporating into the second portable device a microphone configured to acquire the audio signal. Additionally, the possibility is considered that the second processor module incorporates a voice activity detector configured to determine the presence of silences in the acquired audio signal and to account for such silences.
  • At least one of the portable devices of the present invention comprises a button configured to transmit, when pressed by the user, a distress message to the mobile communication device; and where the mobile communication device is further configured to, in response to receiving the distress message, forward the distress message to a previously established group of contacts of the user.
  • the system comprises a bracelet, with a first casing that houses the first portable device inside; and a pendant, with a second casing that houses the second portable device inside; where the mobile communication device is a smartphone-type mobile phone that integrates the second processor module.
  • the system comprises a bracelet, with a first casing that houses the first portable device inside; a pendant, with a second casing that houses inside it the second portable device and the second processor module; where the mobile communication device is a smartphone-type mobile phone.
  • a second aspect of the invention refers to a method for determining the presence of an emotional state in a user, comprising the steps of: acquiring, by means of sensor means arranged in a first portable device, a set of physiological signals; determining, by a first processor module, a first partial emotional state from the set of physiological signals, with a first level of said first partial emotional state; determining whether the first partial emotional state coincides with a previously established objective emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; if so, transmitting an alarm message to a mobile communication device of the user, by means of a wireless communication module; as a result of receiving the alarm message, acquiring an audio signal by means of a second portable device in communication with the mobile communication device; determining, in a second processor module, a second partial emotional state from the audio signal, with a second level of said second partial emotional state; determining whether the second partial emotional state coincides with the previously established objective emotional state and whether the second level of said second partial emotional state exceeds a previously established second threshold; and, if so, determining, in an analyzer module, the presence of the user's objective emotional state, based on an analysis of the levels of the partial emotional states by a machine learning algorithm.
  • determining the presence of the user's objective emotional state comprises the steps of: determining a total emotional level based on the levels of the partial emotional states; comparing the total emotional level with a previously established third threshold; and determining that the emotional state is present in the event that said total emotional level exceeds said third threshold.
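By way of illustration only, the following minimal Python sketch (not part of the patent text) reproduces the cascaded decision just described: the audio stage is only reached if the physiological stage exceeded its threshold, and the final decision compares a fused total emotional level against a third threshold. The weights and threshold values are hypothetical.

```python
def detect_target_emotion(level_physio: float,
                          level_audio: float,
                          threshold_1: float = 0.6,
                          threshold_2: float = 0.6,
                          threshold_3: float = 0.7,
                          w_physio: float = 0.5,
                          w_audio: float = 0.5) -> bool:
    """Cascaded decision: the audio stage is only evaluated if the
    physiological stage exceeded its threshold, and the final decision
    compares a fused total level against a third threshold."""
    # First stage: partial emotional state from the physiological signals.
    if level_physio < threshold_1:
        return False  # no alarm message is sent, audio is never activated
    # Second stage: partial emotional state from the audio signal.
    if level_audio < threshold_2:
        return False
    # Fusion: total emotional level as a weighted combination (assumed form).
    total_level = w_physio * level_physio + w_audio * level_audio
    return total_level > threshold_3


if __name__ == "__main__":
    print(detect_target_emotion(0.8, 0.75))  # True: both stages and the fusion pass
    print(detect_target_emotion(0.5, 0.9))   # False: the first stage never triggers
```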
  • once the presence of the target emotion in the user has been determined, a message with information from the analysis of the analyzer module is automatically sent to a remote system, through a telecommunication network (7), where the remote system is selected from: a network of user contacts, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
  • the set of physiological signals comprises three physiological signals, and acquiring, by means of the sensor means, the set of physiological signals comprises acquiring a signal with information on the conductivity of the skin (GSR) of the user, acquiring a signal with information on the user's heart pulse (BVP), and acquiring a signal with information on the user's skin temperature (SKT).
  • determining a partial emotional state with a level of said partial emotional state comprises mapping a point in a three-dimensional space that represents all emotional states, based on numerical values of the three variables "Pleasure/Valence", "Arousal" and "Dominance", assigned for the set of acquired physiological signals or for certain characteristics of the acquired audio signal.
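For illustration, the hedged sketch below maps a (pleasure, arousal, dominance) triple onto one of the eight emotional quadrants that the description later assigns to the vertices of the PAD cube (joy, gratitude, submission, anguish, relief, contempt, fear, anger); the sign convention follows that listing, while the numeric values in the example are made up.

```python
# Labels of the eight PAD-space quadrants, keyed by the sign of each axis
# (pleasure, arousal, dominance), following the vertex assignment given in the description.
PAD_QUADRANTS = {
    (+1, +1, +1): "joy",
    (+1, +1, -1): "gratitude",
    (+1, -1, -1): "submission",
    (-1, -1, -1): "anguish",
    (+1, -1, +1): "relief",
    (-1, -1, +1): "contempt",
    (-1, +1, -1): "fear",
    (-1, +1, +1): "anger",
}


def pad_quadrant(pleasure: float, arousal: float, dominance: float) -> str:
    """Map a point of the three-dimensional PAD space to its emotional quadrant."""
    def sign(value: float) -> int:
        return 1 if value >= 0 else -1
    return PAD_QUADRANTS[(sign(pleasure), sign(arousal), sign(dominance))]


if __name__ == "__main__":
    # A low-pleasure, high-arousal, low-dominance point falls in the "fear" quadrant.
    print(pad_quadrant(-0.7, 0.9, -0.4))
```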
  • the possibility of adding a prior training stage is contemplated, which includes: feeding a first database with physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states; feeding a second database with audio signals whose spectral and/or prosodic characteristics have been previously associated with specific emotional states; recording the deviation, with respect to the first and second databases, of the physiological signals and of the spectral and prosodic characteristics of the audio signal provided by the user; and adapting the first processor module, the second processor module and the analyzer module to the deviations registered for the user.
  • the proposed invention is based on wireless technologies to offer a distributed solution on various devices connected to the user's mobile phone, preferably a bracelet and a pendant.
  • the user can transport the invention without any inconvenience and without being perceived by third parties.
  • it is completely transparent to an eventual aggressor, reducing the chances that the system will be detected and disabled by said aggressor.
  • the present invention monitors, registers and collects events derived from the detection of user emotions, such as panic and stress caused, for example, by a sexual or violent attack. Subsequent to the detection of said events, advantageously, a set of alarms can be triggered automatically towards a network of "guardians", previously configured through a mobile application, or towards emergency / security services.
  • the characteristics of the present invention imply a multitude of advantages for the user.
  • the incorporation of the audio signal into the process of decision, classification and determination of the user's emotional state provides a new approach: together with the learning algorithm applied to it, it adds value to the proposed system, providing greater robustness and precision to the emotional inference and, in addition, identifying possible sounds external to the user (silences, slamming doors, shots, etc.). Therefore, the present invention provides the user with the ability to react and quickly warn of possible assaults, for example of a sexual nature.
  • Figure 1 schematically represents an embodiment of the complete system of the invention.
  • Figure 2 represents an embodiment of one of the portable pendant devices.
  • Figure 3 represents an embodiment of one of the portable devices in the form of a bracelet.
  • Figure 4 shows on a block diagram the multimodal nature of the present invention.
  • Figure 5 shows by means of a block diagram the treatment of physiological signals in one of the embodiments of the invention.
  • Figure 6 shows by means of a block diagram the treatment of the audio signal in one of the embodiments of the invention.
  • Figure 7 represents a three-dimensional space used by the present invention to divide the space into eight emotional quadrants.
  • Figure 8 represents in a block diagram the training phase and the personalized initial configuration method of the system of the present invention.
  • Figure 9 schematically represents an embodiment of the complete system of the invention.
  • the present invention discloses a method and a distributed system for detecting the emotional state of a user, which may correspond to situations of risk or imminent aggression, or to other emotions that can be used for medical, sports or similar purposes. To this end it uses an effective multimodal integration of physiological and physical signals, preferably external audio and voice (although other audiovisual signals could also be used), by means of sensors that can be integrated into a portable "wearable" solution camouflaged in clothing and/or accessories, which is capable of alerting a user's circle of trust or the security forces.
  • One of the embodiments of the invention, especially advantageous for detecting an objective emotion associated with states of panic or blockage related to situations of violence or sexual assault, is represented in Figures 1 and 9, where the system is composed of three main devices: two portable devices camouflaged in clothing or "wearables", which in this case are a bracelet 1 and a pendant 2, and a mobile communication device, which in this case is a smartphone-type mobile phone 3 on which a specifically designed application 4 is running.
  • the bracelet acquires and monitors physiological signals, which are captured through biometric sensors 5 over short periods of time, and applies machine learning algorithms in order to provide a first level of alert in case of positive detection of the target emotion, which in this embodiment is the detection of an emotion of panic or blockage in the face of a possible aggression.
  • Said alert is sent by means of a short-range wireless communication module 6, for example Bluetooth, to the user's mobile phone, which evaluates the content and sends, via the wireless communication network, an order to the pendant to activate the following system detection layer.
  • the pendant then begins to acquire audio from the user's environment, compresses it and sends it to the mobile phone, which applies machine learning algorithms to detect signs of risk, such as a certain level of stress, using the audio acquired by the microphone arranged on the user's pendant.
  • a network of trusted contacts 11, previously established by the user during the initial configuration from the software application 4, is notified, or an emergency service 9 is notified directly, through a telecommunication network 7.
  • the telecommunication network used can be any network suitable for mobile telephony (GPRS/3G/4G) or be based directly on a WiFi connection to the Internet; push notifications 10 are sent to the chosen contact network through a dedicated server 8.
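A hedged sketch of how the mobile application might forward an alarm to the dedicated server 8 that relays push notifications 10 to the contact network; the endpoint URL, payload fields and transport details are assumptions, as the patent does not specify them.

```python
import json
import urllib.request


def notify_guardians(server_url: str, user_id: str, emotion: str,
                     confidence: float, location: tuple) -> int:
    """POST an alarm to a dedicated relay server (hypothetical API) that
    pushes notifications to the user's previously configured contacts."""
    payload = {
        "user": user_id,
        "emotion": emotion,        # e.g. "panic"
        "confidence": confidence,  # fused confidence level, 0..1
        "lat": location[0],
        "lon": location[1],
    }
    request = urllib.request.Request(
        server_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status


# Example (would require a real server):
# notify_guardians("https://example.org/alarms", "user-42", "panic", 0.87, (40.33, -3.76))
```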
  • FIG. 2 represents an embodiment of one of the portable devices in which the present invention is distributed.
  • the portable device is implemented in a pendant 2, although in other embodiments wearables such as earrings, headbands, piercings or brooches are also contemplated.
  • the pendant comprises a casing that houses inside it the electronic components necessary for acquiring a physical signal (in this case a microphone 20 for capturing an audio signal), for wireless communication with the mobile phone, a battery and a microprocessor.
  • the casing has a microperforation 21 on its outer face, coinciding with the microphone housed inside the casing, to facilitate audio reception.
  • the exterior face of the pendant has a manually operated panic button 22, which immediately sends, via the mobile phone, a distress message to the network of contacts.
  • the panic button is camouflaged in the pendant design.
  • the rear face of the pendant has a small hole that allows access to the electronic components inside the casing by means of an elongated pointer-type object or a pin. This access is limited to a reset button to restart the device.
  • the pendant additionally comprises a camera for the acquisition of the physical signal.
  • it adds functionality and additional information to the system by acquiring images and video.
  • FIG. 3 depicts an embodiment of another of the portable devices in which the present invention is distributed.
  • the portable device is implemented in a bracelet 1, although in other embodiments wearables such as bracelets or anklets are also contemplated.
  • the bracelet has a casing that houses a microprocessor and the electronic components necessary for the acquisition of the user's physiological signals.
  • a first galvanic skin response detector sensor 31 is included, whose pair of electrodes is preferably arranged outside the casing to facilitate contact with the user's skin, configured to obtain a signal with information on the user's skin conductivity (GSR); a second pulse blood volume detector (BVP) sensor 32 configured to obtain a signal with information on the user's heart pulse; and a third temperature detector sensor 33 configured to obtain a signal with the user's skin temperature information (SKT).
  • the sensors are arranged on the internal face of the bracelet so that, when placed on the user's wrist, they remain in contact with their skin.
  • the interior of the housing also houses a short-range wireless communication module, preferably Bluetooth, a battery, and microprocessor 36.
  • the housing further features a perforation 34 on its outer face, coincident with a reset push button housed inside the casing and accessible with an elongated pointer or pin to reset the device.
  • the outer face of the bracelet features a manually operable panic button 35, which immediately sends, via the mobile phone, a distress message to the network of contacts or to the recipient that has been previously configured.
  • in a preferred embodiment of the present invention, the multimodal emotion recognition system feeds on the following physiological variables: skin conductivity 40 ("Galvanic Skin Response", GSR), blood volume pulse 41 ("Blood Volume Pulse", BVP), and skin temperature 42 ("Skin Temperature", SKT). On the other hand, it is fed by a physical variable, which in this case is audio 43 and includes the user's voice together with the sound of the environment.
  • since the communication device is configured to receive both the information processed by the first portable device and that processed by the second portable device, the method of the present invention is carried out in two stages, where the first stage (processing of physiological signals) acts as a key for the second stage (audio processing), so that without a first alarm, detected exclusively from the physiological signals, the remaining communications or processes are not established in the second portable device; only once a second alarm has occurred after processing the audio is multimodal processing activated in an analyzer module 47 of the communication device, which merges 45 the data of both classifications, determining 46 the emotion that the user is feeling.
  • Figure 5 represents the first of said blocks, specifically the block in charge of treating physiological signals, where a processor module 50 comprising a microprocessor integrated in the bracelet is in charge of all the processing.
  • the physiological signals obtained by the bracelet's biometric sensors are processed in several phases: firstly, the raw signals 51 undergo noise removal 52; then, the signals undergo a standardization process 53 and a feature extraction process 54, computing characteristics such as the mean, standard deviation, mean of the absolute values of the first difference, mean of the absolute values of the first difference of the normalized signal, mean of the absolute values of the second difference, mean of the first difference of the smoothed signal, etc.
  • the characteristics are merged to finally classify 56 the results and obtain an output signal with an emotional level 57.
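As a non-authoritative illustration of this feature-extraction step, the sketch below computes, for one window of a physiological signal, the statistics listed above; the window length and the smoothing filter are assumptions.

```python
import numpy as np


def physiological_features(x: np.ndarray) -> dict:
    """Window-level statistics mentioned in the description for one
    physiological signal (GSR, BVP or SKT) after noise removal."""
    x = np.asarray(x, dtype=float)
    x_norm = (x - x.mean()) / (x.std() + 1e-12)               # standardized copy of the signal
    d1, d1n = np.diff(x), np.diff(x_norm)                     # first differences (raw / normalized)
    d2 = np.diff(x, n=2)                                      # second difference
    x_smooth = np.convolve(x, np.ones(5) / 5, mode="valid")   # simple moving average (assumed)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "mean_abs_first_diff": np.abs(d1).mean(),
        "mean_abs_first_diff_norm": np.abs(d1n).mean(),
        "mean_abs_second_diff": np.abs(d2).mean(),
        "mean_first_diff_smoothed": np.diff(x_smooth).mean(),
    }


if __name__ == "__main__":
    gsr_window = np.random.default_rng(0).normal(2.0, 0.1, size=256)  # synthetic GSR window
    print(physiological_features(gsr_window))
```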
  • Classifier 56 applies a classical low-cost machine learning algorithm to classify the emotion perceived by the system, for example following the K-nearest-neighbors method, abbreviated KNN ("K-Nearest Neighbors").
  • This algorithm requires training data or space points, which have been previously obtained during the system configuration, which is carried out offline and is detailed later.
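A minimal sketch of the kind of low-cost K-nearest-neighbors classification described here, in which the confidence level of an emotion is taken as the fraction of the K nearest training points that vote for it; the feature vectors, labels and value of K are placeholders for the data obtained during the offline configuration.

```python
import numpy as np


def knn_classify(train_x: np.ndarray, train_y: list, sample: np.ndarray, k: int = 5):
    """Classify a feature vector and return (label, confidence), where the
    confidence is the proportion of the k nearest training points sharing
    the winning label."""
    distances = np.linalg.norm(train_x - sample, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = [train_y[i] for i in nearest]
    label = max(set(votes), key=votes.count)
    confidence = votes.count(label) / k
    return label, confidence


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical training points obtained during the offline configuration stage.
    train_x = np.vstack([rng.normal(0, 1, (20, 6)), rng.normal(3, 1, (20, 6))])
    train_y = ["neutral"] * 20 + ["panic"] * 20
    sample = rng.normal(3, 1, 6)                        # features of the current window
    print(knn_classify(train_x, train_y, sample, k=7))  # e.g. ('panic', 1.0)
```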
  • the emotional level 57 determined at the output of the processor module 50 presents a confidence level that indicates, as a percentage, the probability that the combination of characteristics extracted from the physiological signals corresponds to a specific emotion.
  • the confidence level of each emotion is, therefore, the metric that quantifies the presence of said emotion in the user.
  • the output signal with an emotional level 59 comprises information on whether or not the confidence level of the target emotion is above a previously determined threshold.
  • the threshold is located at the value from which the objective emotion is considered to be predominant over the other emotions. This detection threshold is the point from which the emotion of interest is predominant (in time), with respect to the other emotions detected in the same period.
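One possible reading of this time-predominance criterion, sketched below under the assumption that the classifier emits one emotion label per analysis window: the target emotion is considered predominant when it wins in more than a configurable fraction of the windows of the evaluated period.

```python
def target_emotion_predominant(window_labels, target="panic", threshold=0.5) -> bool:
    """Return True if the target emotion was detected in more than `threshold`
    of the analysis windows of the evaluated period (assumed interpretation)."""
    if not window_labels:
        return False
    share = sum(1 for label in window_labels if label == target) / len(window_labels)
    return share > threshold


# Example: the target emotion dominates 4 of 6 consecutive windows.
print(target_emotion_predominant(
    ["panic", "panic", "neutral", "panic", "panic", "fear"], target="panic"))  # True
```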
  • in case the threshold established for the confidence level of the target emotion is exceeded, the bracelet establishes communication with the user's phone to transmit the detected target emotion level in a first alarm message, which causes the user's phone to immediately send an activation message to the pendant to start recording audio.
  • Figure 6 represents the second of said blocks to carry out the extraction of the category or type of emotion, specifically the block in charge of processing the physical signal.
  • the physical signals, in this case the audio, begin to be acquired on the pendant after receiving an order sent from the mobile phone, which sends said order only after receiving an indication from the bracelet, the first alarm message, that the analysis of the physiological signals has classified the target emotion with a confidence above the predetermined threshold.
  • the pendant microphone is activated and starts recording an audio signal 60.
  • This audio signal is processed locally by the pendant microprocessor to compress it and transmit it 61 to the mobile phone via the Bluetooth wireless communication network.
  • in the processor module 62, which in this embodiment is integrated into the mobile phone, the compressed signal is decompressed 63 and feature extraction 64 is carried out. After extracting the characteristics of the audio signal, it is classified 65 and an output signal is obtained with a classification of the target emotion and a confidence level 66, obtained independently of the analysis of the physiological signals.
  • alternatively, the processor module 62 is integrated in the microprocessor of the second portable device (the pendant, for example), so that the output signal with a classification of the target emotion and a confidence level 66 is obtained in the pendant itself before transmitting anything to the mobile phone.
  • the analyzer module 47 activates the multimodal processing explained above, which merges 45 the data from both classifications, determining 46 the presence of the target emotion in the user, based on the analysis by means of an automatic learning algorithm of the output signals of the blocks of Figures 5 and 6.
  • the processor module extracts various spectral and prosodic characteristics from the audio signal that are later classified 65 by applying a classic low-cost machine learning algorithm, in order to calculate the confidence level 66 of the emotion perceived by the system and thus confirm or reject the objective emotion detected by the first portable device.
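As a hedged illustration of the spectral and prosodic characteristics mentioned here (the patent does not fix a specific feature set), the following sketch computes short-term energy, zero-crossing rate and spectral centroid for one audio frame.

```python
import numpy as np


def audio_features(frame: np.ndarray, sample_rate: int = 16000) -> dict:
    """Simple spectral/prosodic descriptors for one audio frame (assumed feature set)."""
    frame = np.asarray(frame, dtype=float)
    energy = float(np.mean(frame ** 2))                        # short-term energy (prosodic cue)
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # zero-crossing rate
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))  # spectral centroid
    return {"energy": energy, "zcr": zcr, "spectral_centroid_hz": centroid}


if __name__ == "__main__":
    t = np.arange(0, 0.025, 1.0 / 16000)             # 25 ms frame
    voiced_like = 0.1 * np.sin(2 * np.pi * 220 * t)  # synthetic tonal frame
    print(audio_features(voiced_like))
```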
  • the distress message is sent to the contacts configured by the user, or according to other embodiments, to an emergency service, a medical center or any other agent that is considered appropriate to act upon the detection of a certain emotion.
  • a voice activity detector (“Voice Activity Detection”, VAD) is also used to eliminate and count possible silences within the recorded audio. In this way, it is not only possible to detect the user's voice, but also to detect possible relevant sounds from the environment (silences, slamming doors, shots, etc.).
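A minimal, energy-based sketch of a voice-activity-detection step that eliminates and counts silences; real VAD algorithms are more elaborate, and the frame size and energy threshold used here are assumptions.

```python
import numpy as np


def count_silences(audio: np.ndarray, sample_rate: int = 16000,
                   frame_ms: int = 25, energy_threshold: float = 1e-4):
    """Split the recording into frames, flag low-energy frames as silence,
    count the silent segments and keep only the voiced frames."""
    audio = np.asarray(audio, dtype=float)
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    silent = (frames ** 2).mean(axis=1) < energy_threshold   # boolean mask per frame
    # A "silence" is a run of consecutive silent frames; count the run starts.
    starts = np.diff(np.concatenate(([0], silent.astype(int)))) == 1
    n_silences = int(starts.sum())
    voiced_frames = frames[~silent]
    return n_silences, voiced_frames


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    speech = rng.normal(0, 0.05, 16000)   # 1 s of synthetic "speech"
    speech[4000:8000] = 0.0               # insert a silent gap
    n, voiced = count_silences(speech)
    print(n, voiced.shape)                # 1 silent segment detected
```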
  • the outputs of the blocks of Figures 5 and 6, which partially extract the category or type of emotion of the user by separately processing the combination of physiological signals and the audio signal respectively, are fused to determine the user's emotional state within previously defined emotional quadrants and, specifically, to determine whether or not the target emotion is present in the user.
  • Figure 7 shows a three-dimensional space used by the present invention to divide the space into eight emotional quadrants.
  • This distribution is based on a "PAD-Space" model with three coordinate axes that represent all emotional states based on the numerical values assigned for the variables Pleasure/Valence, Arousal and Dominance ("Pleasure/Valence - Arousal - Dominance", PAD), where each of said emotional states is represented at a vertex of the cube drawn in said three-dimensional space.
  • a vertex of the cube is assigned to each of the following eight emotional states: joy 71 (+p+a+d), gratitude 72 (+p+a-d), submission 73 (+p-a-d), anguish 74 (-p-a-d), relief 75 (+p-a+d), contempt 76 (-p-a+d), fear 77 (-p+a-d), and anger 78 (-p+a+d).
  • the variable "Pleasure/Valence" measures how pleasant the user perceives a certain stimulus. Thus, "anger" 78 or "fear" 77, being emotions classified as unpleasant, are located at the negative end (-p). In contrast, "joy" 71, being an emotion classified as pleasant, is situated at the pleasant end (+p).
  • the user's emotional situation is therefore represented in one of the eight emotional quadrants defined by these three dimensions, depending on the results obtained for each of the three variables, Pleasure/Valence - Arousal - Dominance, when analyzing, on the one hand, in classifier 58, the combination of the characteristics extracted from all the physiological signals and, on the other hand, in classifier module 64, the characteristics extracted from the audio signal.
  • the combination 45 of the information provided by the physiological signals and the audio signal results in a single final characterization 46 of the user's emotional state, comprising the target emotion and a confidence level.
  • the present invention can be configured to send this information to a remote location / user through a wireless telecommunication network, where the necessary preventive or decisive actions will be taken that are considered in each of the applications of the invention.
  • the information may also include user geolocation information, which is automatically sent upon determination that the target emotion level has exceeded a preset threshold.
  • Figure 8 represents the training phase and personalized initial configuration method of the system of the present invention, which provides the classic machine learning algorithm of the analyzer module with the feature vectors and labels necessary to carry out the training.
  • a controller module with machine learning algorithms and access to two independent databases is available, one of physiological signals 81 and the other of audio signals 82.
  • Said initial configuration is divided into two different processes, based on the signals to be captured, which share the signal conditioning and feature extraction of Figures 5 and 6.
  • the user is stimulated through audiovisual content, previously labeled with a specific emotional quadrant.
  • the physiological variations produced are recorded and stored 83 in the database of physiological signals 81 and, at the end of the process, the controller module obtains a predictive model 84 trained for the particular user.
  • the combination of the selected characteristics of the three physiological variables used in this preferred embodiment (which obviously could be others in alternative embodiments of the invention) is numerically characterized, and the influence of its variations can be directly transferred to each of the three axes of the PAD space used to represent all emotional states, which can be mapped based on the numerical values assigned for the variables Pleasure/Valence - Arousal - Dominance.
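The per-user training stage could be organized as in the following hedged sketch, which simply accumulates feature vectors labelled with the emotional quadrant of each stimulus and hands the resulting set to the KNN-style classifier of the on-line stage; the session handling and storage format are assumptions.

```python
import numpy as np


class EmotionModelTrainer:
    """Accumulates (feature vector, quadrant label) pairs recorded while the
    user is exposed to pre-labelled audiovisual stimuli, then exposes the
    resulting training set as the predictive model for a KNN-style classifier."""

    def __init__(self):
        self.features: list = []   # physiological or audio feature vectors
        self.labels: list = []     # PAD quadrant labels, e.g. "fear", "joy"

    def record_trial(self, feature_vector, quadrant_label: str) -> None:
        # One trial = one labelled stimulus presented during initial configuration.
        self.features.append(np.asarray(feature_vector, dtype=float))
        self.labels.append(quadrant_label)

    def predictive_model(self):
        # The "model" of a KNN-style classifier is simply its training set.
        return np.vstack(self.features), list(self.labels)


if __name__ == "__main__":
    trainer = EmotionModelTrainer()
    trainer.record_trial([0.8, 0.1, 0.3, 0.2, 0.05, 0.4], "joy")   # placeholder features
    trainer.record_trial([0.2, 0.9, 0.7, 0.6, 0.30, 0.1], "fear")
    train_x, train_y = trainer.predictive_model()
    print(train_x.shape, train_y)
```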
  • the process for the audio signals is analogous to that described for the physiological signals, with the difference that the training is carried out with voice recordings spoken by the user himself, of texts previously labeled with a specific emotional quadrant.
  • the variations in the spectral and prosodic characteristics of the user's voice are recorded and stored 85 in the audio signal database 82.
  • the controller module obtains a predictive model 86 trained for the particular user.
  • the combination of the selected characteristics of the audio signal is numerically characterized and the influence of its variations can be directly transferred to each of the three axes of the PAD space, used to represent all the emotional states, which can be mapped based on the numerical values assigned for the variables Pleasure/Valence - Arousal - Dominance.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Developmental Disabilities (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Telephone Function (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention relates to a method and a system for determining the emotional state of a user by means of the multimodal integration of signals in a distributed wearable solution comprising: a first wearable device with sensors for collecting physiological signals from the user, configured to partially determine the presence of a target emotional state; a second wearable device to collect an audio signal; a processor to partially determine, from the audio signal, the presence of the target emotional state; and a wireless mobile communication device, in communication with both wearable devices, to determine the presence of the target emotional state in the user, based on the analysis of the partial emotional states by means of a machine learning algorithm.

Description

SYSTEM AND METHOD FOR DETERMINING AN EMOTIONAL STATE OF A USER

DESCRIPTION

OBJECT OF THE INVENTION

The present invention refers to the technical field of emotion recognition through multimodal processing of physiological and audio signals, and more specifically to the automatic and portable monitoring of a user's emotional state, with the possibility of communicating it to third parties or establishing, for example, security measures such as sending alarms to a network of contacts or to emergency services in a dangerous situation.

BACKGROUND OF THE INVENTION
Today, automatic emotion recognition is a booming and rapidly developing field that benefits from advances in computing and machine learning algorithms. This technology provides a much more efficient human-machine interaction and improves the current one, which can be exploited in a wide variety of cases, ranging from the detection of stress, blockage or fear in users, to neuro-marketing applications, detection of different types of violence, medical applications, detection of the user's state in e-learning applications, or detection of the affective state of people suffering from some type of disorder (e.g., autism).

The state of the art includes extensive literature that relates the physiological variations measured in human beings with the changes in their emotional states. In these works, emotional detection is carried out by mapping physiological variables of individuals exposed to external stimuli (videos, audios or images) that produce known emotions. The available databases and studies refer to non-portable solutions that include hundreds of classified metrics, generally making use of a two-dimensional "Arousal - Valence" (AV) classification space, where the level of "Arousal" is directly related to the emotional activation and the "Valence" indicates how "positive" or "negative" that emotion is. Additionally, other dimensions can be included in this space, such as dominance or familiarity. The solutions that address the combination of variables and opt for a multimodal scenario involve great complexity, a high computational load and sensors of considerable weight and size, wiring, batteries and other elements that inevitably require the immobility of the subject. In contrast, portable alternatives follow a simpler approach based on variables of the same nature, which clearly conditions their applications.

On the other hand, electronic devices that are integrated and camouflaged in other types of objects, such as clothing, are very common. They are usually called "wearables", a term that refers to the set of devices including bracelets, rings, glasses, jackets or pendants, among others, that allow a user to carry any type of electronic device in a way that is transparent to third parties and even to the user, while still benefiting from certain functionalities through simple interactions with the device.

Some state-of-the-art solutions for detecting emotional states rely on integration with bracelets or other wearable garments and incorporate sensors to detect some basic emotions by means of intelligent algorithms, which can be combined with inertial sensors or accelerometers to clean the physiological signals of the noise due to user movement. However, the classification of emotions is performed exclusively on the basis of physiological signals, which provides a limited robustness that is insufficient for certain applications in which a false alarm is unacceptable, such as cases in which certain emotions have been linked to the request of an ambulance or security forces.

Therefore, a solution for determining the emotional states of users is missing in the state of the art: one with a multimodal nature that guarantees adequate precision, but that remains portable, adaptable to each particular user and that can be integrated into garments in a way that is transparent both to the user and to third parties.

DESCRIPTION OF THE INVENTION
In order to achieve the objectives and avoid the aforementioned drawbacks, the present invention describes, in a first aspect, a system for determining a user's emotional state, comprising:

a first portable device that has sensor means configured for the acquisition of a set of physiological signals from the user, where the first portable device comprises a first processor module configured to determine, from the set of physiological signals, a first partial emotional state with a first level of said emotional state; determine whether the first partial emotional state coincides with a previously established objective emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; and transmit, if so, an alarm message to a mobile communication device of the user;

a second portable device configured to acquire an audio signal upon receiving an activation instruction from a mobile communication device of the user; a second processor module configured to determine, from the audio signal, a second partial emotional state with a second level (66) of said emotional state, and to determine whether it coincides with the objective emotional state and whether it exceeds a previously established second threshold; and

a mobile communication device comprising:

a wireless communication module configured to receive the alarm message from the first portable device and send the activation instruction to the second portable device; and

an analyzer module configured to determine, in the event that the second processor module indicates that the second level has exceeded the second threshold, the presence of the objective emotional state in the user, based on an analysis of the levels of the partial emotional states by a machine learning algorithm.

The analyzer module of the present invention, in one particular embodiment, is further configured to, once the presence of the target emotion in the user has been determined, automatically send a message with information from the analysis of the analyzer module to a remote system, through a telecommunication network, where the remote system is selected from: a network of user contacts, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user. The sensor means of the first portable device comprise, according to one of the embodiments of the invention: galvanic skin response (GSR) detecting means, configured to obtain a signal with information on the conductivity of the user's skin; pulse blood volume (BVP) detecting means configured to obtain a signal with information on the user's heart pulse; and temperature detecting means configured to obtain a signal with the user's skin temperature (SKT) information.
The present invention contemplates the possibility that the analyzer module is a computational unit configured to determine the user's emotional state according to training data provided previously.

Optionally, the present invention comprises a controller module configured for an initial training of the system, where the controller module comprises a first database of physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states and a second database of audio signals previously associated with the specific emotional states.

A particular embodiment of the present invention contemplates incorporating into the second portable device a microphone configured to acquire the audio signal. Additionally, the possibility is contemplated that the second processor module incorporates a voice activity detector configured to determine the presence of silences in the acquired audio signal and to count said silences.

Optionally, at least one of the portable devices of the present invention comprises a button configured to transmit, when pressed by the user, a distress message to the mobile communication device; and where the mobile communication device is further configured to, in response to receiving the distress message, forward the distress message to a previously established group of contacts of the user.

In one of the embodiments of the invention, it is contemplated that the system comprises a bracelet, with a first casing that houses the first portable device inside; and a pendant, with a second casing that houses the second portable device inside; where the mobile communication device is a smartphone-type mobile phone that integrates the second processor module. Alternatively, another embodiment of the invention contemplates that the system comprises a bracelet, with a first casing that houses the first portable device inside; a pendant, with a second casing that houses inside it the second portable device and the second processor module; where the mobile communication device is a smartphone-type mobile phone.
Un segundo aspecto de la invención se refiere a un método para determinar la presencia de un estado emocional en un usuario, que comprende los pasos de: adquirir, mediante unos medios sensores dispuestos en un primer dispositivo portátil un conjunto de señales fisiológicas; determinar, por un primer módulo procesador, un primer estado emocional parcial a partir del conjunto de señales fisiológicas, con un primer nivel de dicho primer estado emocional parcial; determinar si el primer estado emocional parcial coincide con un estado emocional objetivo establecido previamente y si el primer nivel del primer estado emocional parcial supera un primer umbral previamente establecido; en caso afirmativo, transmitir un mensaje de alarma a un dispositivo móvil de comunicación del usuario, mediante un módulo de comunicación inalámbrico; como resultado de la recepción del mensaje de alarma, adquirir una señal de audio, mediante un segundo dispositivo portátil en comunicación con el dispositivo móvil de comunicación; determinar, en un segundo módulo procesador un segundo estado emocional parcial, a partir de la señal de audio, con un segundo nivel de dicho segundo estado emocional parcial; determinar si el segundo estado emocional parcial coincide con el estado emocional objetivo establecido previamente y si el segundo nivel de dicho segundo estado emocional parcial supera un segundo umbral previamente establecido; en caso afirmativo, determinar, en un módulo analizador, la presencia del estado emocional objetivo del usuario, basado en un análisis de los niveles de los estados emocionales parciales por un algoritmo de aprendizaje automático. A second aspect of the invention refers to a method for determining the presence of an emotional state in a user, comprising the steps of: acquiring, by means of sensor means arranged in a first portable device, a set of physiological signals; determining, by a first processor module, a first partial emotional state from the set of physiological signals, with a first level of said first partial emotional state; determining whether the first partial emotional state coincides with a previously established objective emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; if so, transmit an alarm message to a mobile user communication device, using a wireless communication module; as a result of receiving the alarm message, acquiring an audio signal, through a second portable device in communication with the mobile communication device; determining, in a second processor module, a second partial emotional state, from the audio signal, with a second level of said second partial emotional state; determining whether the second partial emotional state coincides with the previously established objective emotional state and whether the second level of said second partial emotional state exceeds a previously established second threshold; if so, determine, in an analyzer module, the presence of the user's objective emotional state, based on an analysis of the levels of partial emotional states by a machine learning algorithm.
Additionally, in one embodiment of the invention, determining the presence of the user's target emotional state may comprise the steps of: determining a total emotional level based on the levels of the partial emotional states; comparing the determined total emotional level with a third previously established threshold; and determining that the emotional state is present if said total emotional level exceeds said third threshold. Optionally, in one embodiment of the invention, once the presence of the target emotion in the user has been determined, a message with information from the analysis of the analyzer module is automatically sent to a remote system through a telecommunication network (7), where the remote system is selected from: a network of contacts of the user, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
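By way of illustration only, the following Python sketch outlines this two-stage decision cascade and the optional fusion against a third threshold. The function names, the threshold values and the weighted average used to combine the two partial levels are assumptions for the sketch and are not specified by the description.

```python
# Minimal sketch of the two-stage detection cascade (hypothetical names and thresholds).

def detect_target_emotion(physio_signals, acquire_audio, classify_physio,
                          classify_audio, target="fear",
                          thr1=0.6, thr2=0.6, thr3=0.65, w_physio=0.5):
    """Return True if the target emotional state is considered present."""
    # Stage 1: bracelet-side classification of the physiological signals.
    emotion_1, level_1 = classify_physio(physio_signals)
    if emotion_1 != target or level_1 <= thr1:
        return False                      # no alarm message, audio stays off

    # Stage 2: the alarm message triggers audio acquisition on the pendant.
    audio = acquire_audio()
    emotion_2, level_2 = classify_audio(audio)
    if emotion_2 != target or level_2 <= thr2:
        return False

    # Optional fusion step: combine both partial levels into a total level
    # and compare it against a third threshold.
    total_level = w_physio * level_1 + (1.0 - w_physio) * level_2
    return total_level > thr3
```

The design point reflected here is that the audio stage is only reached once the physiological stage has already raised an alarm, so the pendant's microphone and radio remain idle most of the time.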
The set of physiological signals may comprise three physiological signals, and acquiring the set of physiological signals by means of the sensor means may comprise acquiring a signal with information on the skin conductivity (GSR) of the user, acquiring a signal with information on the heart pulse (BVP) of the user, and acquiring a signal with information on the skin temperature (SKT) of the user.
In one embodiment of the present invention, determining a partial emotional state with a level of said partial emotional state comprises mapping a point in a three-dimensional space that represents all emotional states, based on numerical values of the three variables Pleasure/Valence, Arousal and Dominance, assigned to the set of acquired physiological signals or to certain characteristics of the acquired audio signal.
Additionally, a prior training stage may be added, comprising: feeding a first database with physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states; feeding a second database with audio signals whose spectral and/or prosodic characteristics have previously been associated with the specific emotional states; recording the deviation, with respect to the first and second databases, of the physiological signals and of the spectral and prosodic characteristics of the audio signal provided by the user; and adapting the first processor module, the second processor module and the analyzer module to the deviations recorded for the user.
The proposed invention relies on wireless technologies to offer a solution distributed across several devices connected to the user's mobile phone, preferably a bracelet and a pendant. The user can thus carry the invention without any inconvenience and without it being noticed by third parties. In addition, since it is fully integrated into different wearables, it is completely transparent to a potential aggressor, which reduces the chances of the devices being detected and disabled by said aggressor.
The present invention monitors, registers and collects events derived from the detection of the user's emotions, such as panic and stress caused, for example, by a sexual or violent attack. After such events are detected, a set of alarms can advantageously be triggered automatically towards a network of "guardian" contacts, previously configured through a mobile application, or towards emergency/security services.
In accordance with all of the above, the features of the present invention provide numerous advantages for the user. The incorporation of the audio signal into the process of deciding, classifying and determining the emotional state of the user provides a new approach: thanks to it, and to the learning algorithm applied to it, added value is brought to the proposed system, giving greater robustness and precision to the emotional inference and, in addition, identifying possible sounds external to the user (silences, slamming doors, gunshots, etc.). The present invention therefore provides the user with the ability to react and to give warning quickly in the face of possible assaults, for example of a sexual nature.
BRIEF DESCRIPTION OF THE FIGURES
To complete the description of the invention and to help provide a better understanding of its features, in accordance with a preferred example of embodiment thereof, a set of drawings is attached in which, by way of illustration and not limitation, the following figures are represented:
Figure 1 schematically represents an embodiment of the complete system of the invention.
Figure 2 represents an embodiment of one of the portable devices in the form of a pendant.
Figure 3 represents an embodiment of one of the portable devices in the form of a bracelet.
Figure 4 shows, in a block diagram, the multimodal nature of the present invention.
Figure 5 shows, by means of a block diagram, the processing of the physiological signals in one of the embodiments of the invention.
Figure 6 shows, by means of a block diagram, the processing of the audio signal in one of the embodiments of the invention.
Figure 7 represents a three-dimensional space used by the present invention to divide the space into eight emotional quadrants.
Figure 8 represents, in a block diagram, the training phase and the personalized initial configuration method of the system of the present invention.
Figure 9 schematically represents an embodiment of the complete system of the invention.
DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
The present invention discloses a method and a distributed system for detecting the emotional state of a user, which may correspond to situations of risk or imminent aggression, or to other emotions that can be exploited for medical, sports or other purposes. To this end, it uses an effective multimodal integration of physiological and physical signals, preferably external audio and voice (although other audiovisual signals could also be used), by means of sensors that can be integrated into a wearable solution, portable and concealable in clothing and/or accessories, which is capable of alerting a circle of trust of the user or the security forces.
One embodiment of the invention, especially advantageous for detecting a target emotion associated with states of panic or paralysis in situations of violence or sexual assault, is represented in Figures 1 and 9, where the system is composed of three main devices: two portable devices camouflaged in garments or wearables, which in this case are a bracelet 1 and a pendant 2, and a mobile communication device, which in this case is a smartphone-type mobile phone 3 running a specifically designed application 4.
The bracelet acquires and monitors physiological signals, which are captured through biometric sensors 5 over short periods of time, and applies machine learning algorithms in order to provide a first level of alert in the event of a positive detection of the target emotion, which in this embodiment is the detection of an emotion of panic or paralysis in the face of a possible assault. Said alert is sent by means of a short-range wireless communication module 6, for example Bluetooth, to the user's mobile phone, which evaluates its content and sends, through the wireless communication network, a command to the pendant to activate the next detection layer of the system. The pendant then begins to acquire audio from the user's environment, compresses it and sends it to the mobile phone, which applies machine learning algorithms to detect signs of risk, such as a certain level of stress, using the audio acquired by the microphone arranged in the user's pendant. Both pieces of information, physiological and physical, are combined in the mobile phone to provide a more robust and accurate analysis. Finally, if a positive is detected (an emotional state that exceeds a pre-established risk threshold indicating that violence, or signs of it, exists), a network of trusted contacts 11, previously established by the user during the initial configuration of the software application 4, is notified, or an emergency service 9 is notified directly, through a telecommunication network 7. The telecommunication network used can be any network suitable for mobile telephony (GPRS/3G/4G) or be based directly on a WiFi connection to the Internet, with push notifications 10 being sent to the chosen contacts through a dedicated server 8.
For better operation of the invention, it is advisable to personalize for each user the risk thresholds and the levels to be considered positive, by means of a training or adaptation process prior to putting the system into operation, since each user may have a very different bodily reaction to a given emotion. A personalized system will thus identify emotions more robustly than a generic one.
Figure 2 represents an embodiment of one of the portable devices over which the present invention is distributed. In this case, the portable device is implemented in a pendant 2, although other embodiments also contemplate wearables such as earrings, headbands, piercings or brooches. The pendant comprises a casing that houses the electronic components necessary for the acquisition of a physical signal, in this case a microphone 20 for capturing an audio signal, together with those for wireless communications with the mobile phone, a battery and a microprocessor. The casing has a micro-perforation 21 on its outer face, coinciding with the microphone housed inside the casing, to facilitate audio reception. The outer face of the pendant has a panic button 22 that can be operated manually by the user and that immediately sends, always via the mobile phone, a distress message to the network of contacts. The panic button is camouflaged in the design of the pendant. In one embodiment of the invention, the rear face of the pendant has a small hole that allows access to the electronic components inside the casing by means of an elongated pointer-type object or a pin. This access is limited to a reset button for restarting the device.
In one embodiment of the present invention, the pendant additionally comprises a camera for the acquisition of the physical signal. In this case, in addition to acquiring an audio signal as the preferred option, it adds functionality and additional information to the system by acquiring images and video.
Figure 3 represents an embodiment of another of the portable devices over which the present invention is distributed. In this case, the portable device is implemented in a bracelet 1, although other embodiments also contemplate wearables such as armbands or anklets. The bracelet has a casing that houses a microprocessor and the electronic components necessary for the acquisition of the user's physiological signals. This embodiment includes a first sensor 31 for detecting the galvanic skin response, preferably arranged outside the casing so that a pair of electrodes can make contact with the user's skin, said first sensor being configured to obtain a signal with information on the skin conductivity (GSR) of the user; a second sensor 32 for detecting the blood volume pulse (BVP), configured to obtain a signal with information on the heart pulse of the user; and a third temperature sensor 33 configured to obtain a signal with information on the skin temperature (SKT) of the user. The sensors are arranged on the inner face of the bracelet so that, when it is placed on the user's wrist, they remain in contact with the skin.
The interior of the casing also houses a short-range wireless communication module, preferably Bluetooth, a battery and a microprocessor 36. In one embodiment, the casing also has a perforation 34 on its outer face, coinciding with a reset button housed inside the casing and accessible with an elongated pointer-type object or a pin to restart the device. The outer face of the bracelet has a panic button 35 that can be operated manually by the user and that immediately sends, always via the mobile phone, a distress message to the network of contacts or to the recipient previously configured.
With reference to Figure 4, the multimodal emotion recognition system, in a preferred embodiment of the present invention, is fed by the following physiological variables: skin conductivity 40 (Galvanic Skin Response, GSR), blood volume pulse 41 (Blood Volume Pulse, BVP), and temperature 42 (Skin Temperature, SKT). It is also fed by a physical variable, which in this case is the audio 43 and includes the user's voice together with the sound of the environment.
Once the signals have been acquired, information is extracted 44 individually from each and every one of the variables. Subsequently, the entire set of data or sensed information is fused 45 in order to determine 46 a single emotional state category for the user. Although the communication device is configured to receive the information processed by both the first portable device and the second portable device, the method of the present invention is carried out in two stages, where the first stage (processing of the physiological signals) acts as a key for the second stage (processing of the audio): without a first alarm, detected exclusively from the physiological signals, neither the remaining communications nor the processes in the second portable device are established, and only once a second alarm has been produced after processing the audio is the multimodal processing activated in an analyzer module 47 of the communication device, which fuses 45 the data of both classifications, determining 46 the emotion that the user is feeling.
There are therefore two main, differentiated blocks, the first responsible for the processing of the physiological signals and the second for the audio. These blocks are totally independent, both in functionality and in operation, although the combination of the two provides a multimodal fusion that notably benefits the emotional inference.
Figure 5 represents the first of said blocks, specifically the block responsible for the processing of the physiological signals, where a processor module 50 comprising a microprocessor integrated in the bracelet carries out all the processing. The physiological signals obtained by the biometric sensors of the bracelet are processed in several phases: first, the raw signals 51 undergo noise removal 52; next, the signals are normalized 53 and features are extracted 54, such as the mean, the standard deviation, the mean of the absolute values of the first difference, the mean of the absolute values of the first difference of the normalized signal, the mean of the absolute values of the second difference, the mean of the first difference of the smoothed signal, etc. After extracting the features of each of the physiological signals separately, the features are fused 55 and finally classified 56 to obtain an output signal with an emotional level 57.
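By way of illustration only, the following Python sketch computes statistics of the kind listed above for a single signal window; the window length, the smoothing filter and the normalization used here are assumptions and are not values given in the description.

```python
import numpy as np

def extract_features(signal, smooth_win=5):
    """Compute simple statistical features from one physiological signal window."""
    x = np.asarray(signal, dtype=float)
    x_norm = (x - x.mean()) / (x.std() + 1e-9)           # normalization step
    x_smooth = np.convolve(x, np.ones(smooth_win) / smooth_win, mode="valid")
    d1, d2 = np.diff(x), np.diff(x, n=2)                 # first and second differences
    return np.array([
        x.mean(),                                        # mean
        x.std(),                                         # standard deviation
        np.abs(d1).mean(),                               # mean |first difference|
        np.abs(np.diff(x_norm)).mean(),                  # mean |first diff| of normalized signal
        np.abs(d2).mean(),                               # mean |second difference|
        np.diff(x_smooth).mean(),                        # mean first diff of smoothed signal
    ])

# The per-signal feature vectors can then be concatenated (fused) into one vector:
# feature_vector = np.concatenate([extract_features(gsr), extract_features(bvp), extract_features(skt)])
```

Concatenating the GSR, BVP and SKT feature vectors in this way corresponds to the fusion step 55 prior to classification.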
The classifier 56 applies a classical, computationally inexpensive machine learning algorithm to classify the emotion perceived by the system, for example following the K-Nearest Neighbors method, abbreviated KNN. This algorithm requires training data, or points in the feature space, which have been obtained previously during the configuration of the system, which is carried out offline and is described in more detail below.
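A minimal sketch of such a classifier, assuming the K-Nearest Neighbors method named above and using the scikit-learn library as one possible implementation; the library, the number of neighbors and the placeholder training data are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# X_train: feature vectors obtained during the offline configuration phase;
# y_train: emotion labels (e.g. "fear", "neutral", ...). Both are placeholders here.
X_train = np.random.rand(40, 18)            # 40 training windows, 18 fused features
y_train = np.random.choice(["fear", "neutral"], size=40)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

feature_vector = np.random.rand(1, 18)      # fused features of a new window
probs = knn.predict_proba(feature_vector)[0]
confidence = dict(zip(knn.classes_, probs))  # per-emotion confidence levels
```

In practice the training vectors and labels would come from the personalized configuration phase described with reference to Figure 8, not from random placeholders.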
The emotional level 57 determined at the output of the processor module 50 carries a confidence level that indicates, as a percentage, the probability that the combination of features extracted from the physiological signals corresponds to a specific emotion. The confidence level of each emotion is, therefore, the metric that quantifies the presence of said emotion in the user.
Since the emotional state of the user is represented on a continuum in which emotions overlap, several emotions can be present at the same time with different confidence levels. When a target emotion of particular interest is defined for a specific application, as is the case in this preferred embodiment for states of panic, where the target emotion corresponds to the -p+a-d quadrant of Figure 7, labelled "fear", the output signal with an emotional level 59 comprises information on whether or not the confidence level of the target emotion exceeds a previously determined threshold. The threshold is set at the value above which the target emotion is considered predominant over the other emotions. This detection threshold is the point from which the emotion of interest is predominant (in time) with respect to the other emotions detected in the same period. If the threshold established for the confidence level of the target emotion is exceeded, the bracelet establishes communication with the user's phone in order to transmit the detected level of the target emotion in a first alarm message, which causes the user's phone to immediately send an activation message to the pendant so that it starts recording audio.
Figure 6 represents the second of said blocks for extracting the category or type of emotion, specifically the block responsible for the processing of the physical signal. The physical signals, in this case the audio, begin to be acquired when the pendant receives a command sent from the mobile phone, which sends said command only after receiving an indication, the first alarm message, from the bracelet that the analysis of the physiological signals has classified the target emotion with a confidence above the predetermined threshold. In that case, the microphone of the pendant is activated and starts recording an audio signal 60. This audio signal is processed locally by the microprocessor of the pendant, which compresses it and transmits it 61 to the mobile phone via the Bluetooth wireless communication network. Once received in the processor module 62, which in this embodiment is integrated into the mobile phone, the compressed signal is decompressed 63 and features are extracted 64. After extracting the features of the audio signal, it is classified 65 and an output signal is obtained with a classification of the target emotion and a confidence level 66, obtained independently of the analysis of the physiological signals.
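The description does not specify how the audio is compressed before the Bluetooth transfer; purely as an illustrative assumption, a generic lossless compressor such as zlib could be applied to the raw PCM buffer on the pendant and reversed on the phone.

```python
import zlib

def compress_audio(pcm_bytes: bytes) -> bytes:
    """Compress a raw PCM buffer before sending it over the Bluetooth link."""
    return zlib.compress(pcm_bytes, level=6)

def decompress_audio(payload: bytes) -> bytes:
    """Recover the PCM buffer on the mobile phone side."""
    return zlib.decompress(payload)
```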
In an alternative embodiment, the processor module 62 is integrated in the microprocessor of the second portable device, for example the pendant, so that the output signal with a classification of the target emotion and a confidence level 66 is obtained in the pendant itself before anything is transmitted to the mobile phone. Only once the presence of the target emotion has been determined is the transmission to the mobile phone activated, so that its analyzer module 47 triggers the multimodal processing explained above, which fuses 45 the data of both classifications, determining 46 the presence of the target emotion in the user, based on the analysis, by means of a machine learning algorithm, of the output signals of the blocks of Figures 5 and 6.
For the classification and the calculation of the confidence level of an emotion, the processor module extracts various spectral and prosodic features from the audio signal, which are subsequently classified 65 by applying a classical, computationally inexpensive machine learning algorithm, in order to classify and calculate the confidence level 66 of the emotion perceived by the system and thus confirm or reject the target emotion detected by the first portable device. Depending on whether the target emotion detected by this second device, the pendant, exceeds a certain pre-established confidence threshold, which does not have to be the same threshold established for the first device, the distress message is sent to the network of contacts configured by the user or, according to other embodiments, to an emergency service, a medical centre or any other agent considered appropriate to act upon the detection of a certain emotion.
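As an illustration only, spectral and prosodic descriptors of this kind could be computed with the librosa library; the specific features, the sampling rate and the use of librosa itself are assumptions rather than choices stated in the description.

```python
import numpy as np
import librosa

def audio_features(wave, sr=16000):
    """Extract simple spectral (MFCC) and prosodic (pitch, energy) descriptors."""
    mfcc = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=13)     # spectral envelope
    f0 = librosa.yin(wave, fmin=60, fmax=400, sr=sr)          # pitch contour (prosody)
    energy = librosa.feature.rms(y=wave)[0]                   # short-term energy
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [np.nanmean(f0), np.nanstd(f0)],
        [energy.mean(), energy.std()],
    ])
```

The resulting vector would then be fed to the same kind of low-cost classifier as the physiological features, with its own confidence threshold.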
In one embodiment, prior to the classification itself, a Voice Activity Detection (VAD) module is also used to remove and count possible silences within the recorded audio. In this way, not only can the user's voice be detected, but possible relevant sounds from the environment (silences, slamming doors, gunshots, etc.) can also be detected.
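The description does not state which VAD technique is used; a minimal energy-based sketch, with an arbitrary relative threshold, could look as follows.

```python
import numpy as np

def voice_activity_mask(wave, sr=16000, frame_ms=30, rel_threshold=0.1):
    """Mark frames as active when their RMS energy exceeds a fraction of the peak energy."""
    frame = int(sr * frame_ms / 1000)
    n = len(wave) // frame
    rms = np.array([np.sqrt(np.mean(wave[i * frame:(i + 1) * frame] ** 2)) for i in range(n)])
    active = rms > rel_threshold * rms.max()
    silence_ratio = 1.0 - active.mean()    # fraction of silent frames, usable as a feature
    return active, silence_ratio
```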
The outputs of the two blocks represented in Figures 5 and 6, which partially extract the category or type of emotion of the user by separately processing the combination of physiological signals and the audio signal respectively, are fused to determine the emotional state of the user within previously defined emotional quadrants and, specifically, to determine whether or not the target emotion is present in the user. Figure 7 shows a three-dimensional space used by the present invention to divide the space into eight emotional quadrants. This distribution is based on a "PAD space" model with three coordinate axes that represent all emotional states as a function of the numerical values assigned to the Pleasure/Valence, Arousal and Dominance variables, where each of said emotional states is represented at a vertex of the cube depicted in said three-dimensional space. Thus, a vertex of the cube is assigned to each of the following eight emotional states: joy 71 (+p+a+d), gratitude 72 (+p+a-d), submission 73 (+p-a-d), distress 74 (-p-a-d), relief 75 (+p-a+d), contempt 76 (-p-a+d), fear 77 (-p+a-d), and anger 78 (-p+a+d). For example, the Pleasure/Valence variable measures how pleasant the user perceives a certain stimulus to be. Thus, "anger" 78 or "fear" 77, being emotions classified as unpleasant, are located at the negative end (-p). In contrast, "joy" 71, being an emotion classified as pleasant, is located at the pleasure end (+p).
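A minimal sketch of how a (Pleasure, Arousal, Dominance) point could be mapped to one of the eight labelled vertices; the sign-based rule and the numeric ranges are assumptions derived only from the quadrant labels above.

```python
OCTANTS = {
    (+1, +1, +1): "joy",        (+1, +1, -1): "gratitude",
    (+1, -1, -1): "submission", (-1, -1, -1): "distress",
    (+1, -1, +1): "relief",     (-1, -1, +1): "contempt",
    (-1, +1, -1): "fear",       (-1, +1, +1): "anger",
}

def pad_to_emotion(pleasure, arousal, dominance):
    """Map a (P, A, D) point to the emotional quadrant given by the sign of each axis."""
    sign = lambda v: +1 if v >= 0 else -1
    return OCTANTS[(sign(pleasure), sign(arousal), sign(dominance))]

# Example: a negative-pleasure, high-arousal, low-dominance point falls in the "fear" quadrant.
assert pad_to_emotion(-0.7, 0.8, -0.4) == "fear"
```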
The emotional situation of the user is therefore represented in one of the eight emotional quadrants defined by these three dimensions, as a function of the results obtained for each of the three variables, Pleasure/Valence, Arousal and Dominance, by analyzing, on the one hand, in the classifier 58, the combination of the features extracted from all the physiological signals and, on the other hand, in the classifier module 64, the features extracted from the audio signal. Finally, as shown in Figure 4, the combination 45 of the information provided by the physiological signals and the audio signal results in a single final characterization 46 of the emotional state of the user, comprising the target emotion and a confidence level.
Once the present invention has automatically detected the emotional quadrant in which the user is located, it can be configured to send this information to a remote location/user by means of a wireless telecommunication network, where the preventive or remedial actions considered necessary for each application of the invention will be taken. The information can also include geolocation information of the user, which is sent automatically upon determining that the level of the target emotion has exceeded a pre-established threshold.
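For illustration only, such a notification could be delivered as a small JSON payload to a push-notification server; the endpoint URL, the field names and the use of the requests library are hypothetical and not part of the description.

```python
import json
import time
import requests

def send_alert(emotion, confidence, lat, lon,
               server_url="https://example.org/api/alerts"):   # hypothetical endpoint
    """Notify the configured contacts/emergency service with the detected state and location."""
    payload = {
        "timestamp": int(time.time()),
        "emotion": emotion,             # e.g. "fear"
        "confidence": confidence,       # fused confidence level, 0..1
        "location": {"lat": lat, "lon": lon},
    }
    return requests.post(server_url, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=5)
```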
Figure 8 represents the training phase and the personalized initial configuration method of the system of the present invention, which provides the classical machine learning algorithm of the analyzer module with the feature vectors and the labels needed to carry out the training. For this purpose, a controller module is provided with machine learning algorithms and access to two independent databases, one of physiological signals 81 and another of audio signals 82. Said initial configuration is divided into two different processes according to the signals to be captured, which share the signal conditioning and feature extraction of Figures 5 and 6.
In the case of the physiological signals, the user is stimulated with audiovisual content previously labelled with a specific emotional quadrant. The physiological variations produced are recorded and stored 83 in the physiological signal database 81 and, at the end of the process, the controller module obtains a predictive model 84 trained for the particular user. In this way, the combination of the selected features of the three physiological variables used in this preferred embodiment, which could obviously be others in alternative embodiments of the invention, is numerically characterized, and the influence of their variations can be transferred directly to each of the three axes of the PAD space used to represent all emotional states, which can be mapped as a function of the numerical values assigned to the Pleasure/Valence, Arousal and Dominance variables.
The case of the audio signals is analogous to that described for the physiological signals, with the difference that the training process is carried out with voice recordings, spoken by the user, of texts previously labelled with a specific emotional quadrant. Thus, the variations in the spectral and prosodic characteristics of the user's voice are recorded and stored 85 in the audio signal database 82. At the end of the process, the controller module obtains a predictive model 86 trained for the particular user. In this way, the combination of the selected features of the audio signal is numerically characterized, and the influence of its variations can be transferred directly to each of the three axes of the PAD space used to represent all emotional states, which can be mapped as a function of the numerical values assigned to the Pleasure/Valence, Arousal and Dominance variables.
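A hedged sketch of this per-user calibration, fitting one predictive model per modality from the labelled recordings stored in the two databases; the regression formulation, the model family and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def train_user_model(feature_matrix, pad_labels, k=5):
    """Fit a per-user model that maps feature vectors to (P, A, D) coordinates.

    feature_matrix: one row per labelled stimulus (physiological or audio features).
    pad_labels:     the (pleasure, arousal, dominance) values of the quadrant used as label.
    """
    model = KNeighborsRegressor(n_neighbors=k)
    model.fit(np.asarray(feature_matrix), np.asarray(pad_labels))
    return model

# One model would be trained from database 81 (physiological responses to labelled
# audiovisual stimuli) and another from database 82 (voice recordings of labelled texts).
```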
The present invention should not be regarded as limited to the embodiment described herein. Other configurations may be implemented by those skilled in the art in view of the present description. Consequently, the scope of the invention is defined by the following claims.


CLAIMS
1. System for determining an emotional state of a user, characterized in that it comprises:
a first portable device (1) having sensor means (5) configured for the acquisition of a set of physiological signals (51) from the user, where the first portable device comprises:
a first processor module (50) configured to:
determine, from the set of physiological signals, a first partial emotional state with a first level (57) of said emotional state;
determine whether the first partial emotional state coincides with a previously established target emotional state and whether the first level of the first partial emotional state exceeds a first previously established threshold; and
transmit, if so, an alarm message to a mobile communication device of the user;
a second portable device (2) configured to acquire an audio signal (60) upon receiving an activation instruction from a mobile communication device of the user;
a second processor module (62) configured to determine, from the audio signal, a second partial emotional state with a second level (66) of said emotional state, and to determine whether it coincides with the target emotional state and whether it exceeds a second previously established threshold; and a mobile communication device (3) comprising:
a wireless communication module (6) configured to receive the alarm message from the first portable device and to send the activation instruction to the second portable device; and
an analyzer module (47) configured to determine, in the event that the second analyzer module indicates that the second level has exceeded the second threshold, the presence of the target emotional state in the user, based on an analysis of the levels of the partial emotional states (57, 66) by a machine learning algorithm.
2. System according to claim 1, wherein the analyzer module is further configured to, once the presence of the target emotion in the user has been determined, automatically send a message with information from the analysis of the analyzer module to a remote system, through a telecommunication network (7), where the remote system is selected from: a network of contacts of the user, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
3. System according to any of the preceding claims, wherein the sensor means of the first portable device comprise:
galvanic skin response detecting means (31) configured to obtain a signal with information on the skin conductivity (GSR) of the user;
blood volume pulse (BVP) detecting means (32) configured to obtain a signal with information on the heart pulse of the user;
temperature detecting means (33) configured to obtain a signal with information on the skin temperature (SKT) of the user.
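To make claim 3 concrete, the sketch below extracts a plausible feature vector from one window of GSR, BVP and SKT samples. The specific features (tonic GSR level, phasic peak count, heart rate and its variability, skin-temperature trend) are assumptions chosen for illustration, not the feature set defined by the patent.

```python
# Illustrative feature extraction for the three sensor signals of claim 3.
import numpy as np
from scipy.signal import find_peaks


def physio_features(gsr: np.ndarray, bvp: np.ndarray, skt: np.ndarray,
                    fs: float) -> np.ndarray:
    """Return a small feature vector from one window of raw sensor samples."""
    # GSR: tonic level plus number of phasic responses (peaks) in the window.
    gsr_peaks, _ = find_peaks(gsr, prominence=0.05)
    # BVP: mean heart rate estimated from inter-beat intervals between pulse peaks.
    beats, _ = find_peaks(bvp, distance=int(0.4 * fs))   # refractory period ~0.4 s
    ibi = np.diff(beats) / fs
    heart_rate = 60.0 / np.mean(ibi) if ibi.size else 0.0
    # SKT: mean temperature and its linear trend over the window.
    t = np.arange(skt.size) / fs
    skt_slope = np.polyfit(t, skt, 1)[0] if skt.size > 1 else 0.0
    return np.array([gsr.mean(), gsr_peaks.size, heart_rate,
                     np.std(ibi) if ibi.size else 0.0,
                     skt.mean(), skt_slope])
```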
4. System according to any of the preceding claims, wherein the analyzer module is a computational unit configured to determine the emotional state of the user according to previously provided training data.
5. System according to any of the preceding claims, further comprising a controller module configured for an initial training of the system, where the controller module comprises a first database (81) of physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states, and a second database (82) of audio signals previously associated with the specific emotional states.
6. System according to any of the preceding claims, wherein the second portable device further comprises a microphone (20) configured to acquire the audio signal.
7. System according to claim 6, wherein the second processor module further comprises a voice activity detector configured to determine the presence of silences in the acquired audio signal and to count said silences.
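A voice activity detector such as the one in claim 7 can be approximated with a simple frame-energy gate; the frame length and relative threshold below are assumptions, and a production detector would typically rely on a trained model instead.

```python
# Minimal energy-based sketch of the silence-counting detector of claim 7.
import numpy as np


def count_silences(audio: np.ndarray, fs: int,
                   frame_ms: float = 30.0, rel_threshold: float = 0.1) -> int:
    """Count contiguous low-energy (silent) segments in a mono audio buffer."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(audio) // frame
    if n_frames == 0:
        return 0
    frames = audio[: n_frames * frame].reshape(n_frames, frame)
    energy = (frames ** 2).mean(axis=1)
    silent = energy < rel_threshold * energy.max()        # relative energy gate
    # Count transitions from speech to silence (plus a leading silent frame).
    return int(np.sum(silent[1:] & ~silent[:-1]) + int(silent[0]))
```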
8. System according to any of the preceding claims, wherein at least one of the portable devices comprises a button (22) configured to, upon being pressed by the user, transmit a distress message to the mobile communication device; and where the mobile communication device is further configured to, in response to receiving the distress message, forward the distress message to a previously established group of contacts of the user.
9. System according to any of the preceding claims, further comprising:
a bracelet with a first casing that houses the first portable device inside;
a pendant with a second casing that houses the second portable device inside;
where the mobile communication device is a smartphone-type mobile phone that integrates the second processor module.
10. System according to any of claims 1-8, further comprising:
a bracelet with a first casing that houses the first portable device inside;
a pendant with a second casing that houses the second portable device and the second processor module inside;
where the mobile communication device is a smartphone-type mobile phone.
11. Method for determining the presence of an emotional state in a user, characterized in that it comprises the following steps:
a) acquiring, by means of sensor means (5) arranged in a first portable device (1), a set of physiological signals (51);
b) determining, by a first processor module (50), a first partial emotional state from the set of physiological signals, with a first level (57) of said first partial emotional state;
c) determining whether the first partial emotional state coincides with a previously established target emotional state and whether the first level of the first partial emotional state exceeds a previously established first threshold;
d) if so, transmitting an alarm message to a mobile communication device of the user (3), by means of a wireless communication module (6);
e) as a result of receiving the alarm message, acquiring an audio signal (60) by means of a second portable device (2) in communication with the mobile communication device (3);
f) determining, in a second processor module (62), a second partial emotional state from the audio signal, with a second level (66) of said second partial emotional state;
g) determining whether the second partial emotional state coincides with the previously established target emotional state and whether the second level of said second partial emotional state exceeds a previously established second threshold;
h) if so, determining, in an analyzer module (47), the presence of the target emotional state in the user, based on an analysis of the levels of the partial emotional states (57, 66) by a machine learning algorithm.
12. Method according to claim 11, wherein determining the presence of the target emotional state in the user comprises:
determining a total emotional level based on the levels of the partial emotional states;
comparing the determined total emotional level with a previously established third threshold; and
determining that the emotional state is present in the event that said total emotional level exceeds said third threshold.
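Under the assumption that the total emotional level of claim 12 is a weighted combination of the two partial levels (the actual combination is what the machine learning algorithm learns), the decision rule reduces to the following sketch.

```python
# Sketch of the decision rule of claim 12 with assumed fusion weights.
def emotion_present(level_physio: float, level_audio: float,
                    w_physio: float = 0.6, w_audio: float = 0.4,
                    third_threshold: float = 0.7) -> bool:
    total_level = w_physio * level_physio + w_audio * level_audio
    return total_level > third_threshold
```

For example, with the assumed weights, emotion_present(0.8, 0.75) returns True because 0.6 * 0.8 + 0.4 * 0.75 = 0.78 exceeds the third threshold of 0.7.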
13. Method according to any of claims 11-12, further comprising, once the presence of the target emotion in the user has been determined, automatically sending a message with information from the analysis of the analyzer module to a remote system, through a telecommunication network (7), where the remote system is selected from: a network of contacts of the user, a server accessible by an emergency service, and a private server accessible by a third user authorized by the user.
14. Method according to any of claims 11-12, wherein the set of physiological signals comprises three physiological signals, and wherein acquiring the set of physiological signals by the sensor means comprises acquiring a signal with information on the skin conductivity (GSR) of the user, acquiring a signal with information on the heart pulse (BVP) of the user, and acquiring a signal with information on the skin temperature (SKT) of the user.
15. Method according to any of claims 11-14, wherein determining a partial emotional state with a level of said partial emotional state comprises mapping a point in a three-dimensional space that represents all emotional states, based on numerical values of the three variables "Pleasure/Valence", "Arousal" and "Dominance", assigned for the acquired set of physiological signals or for certain features of the acquired audio signal.
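The mapping of claim 15 can be pictured as placing each sample at a point of the Pleasure/Valence-Arousal-Dominance (PAD) cube and reading off the nearest emotion and its intensity. The prototype coordinates and the level definition in the sketch below are hypothetical values chosen only to illustrate the idea.

```python
# Illustrative mapping onto the three-dimensional PAD space (assumed prototypes).
import math

# Hypothetical prototype points in PAD space, each coordinate in [-1, 1].
PROTOTYPES = {
    "fear":    (-0.6, 0.7, -0.6),
    "anger":   (-0.5, 0.8,  0.4),
    "joy":     ( 0.8, 0.5,  0.4),
    "neutral": ( 0.0, 0.0,  0.0),
}


def classify_pad(valence: float, arousal: float, dominance: float):
    """Return (emotion, level): nearest prototype and distance from the neutral point."""
    point = (valence, arousal, dominance)
    emotion = min(PROTOTYPES, key=lambda e: math.dist(point, PROTOTYPES[e]))
    level = math.dist(point, PROTOTYPES["neutral"]) / math.sqrt(3)  # normalise to [0, 1]
    return emotion, level
```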
16. Method according to any of claims 11-15, further comprising a prior training stage comprising:
feeding a first database (81) with physiological signals quantified according to audiovisual stimuli previously associated with specific emotional states;
feeding a second database (82) with audio signals whose spectral and/or prosodic features have been previously associated with the specific emotional states;
recording a deviation, with respect to the first and second databases, of the physiological signals and of the spectral and prosodic features of the audio signal provided by the user; and
adapting the first processor module, the second processor module and the analyzer module to the deviations recorded for the user.
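One possible reading of the adaptation step of claim 16, sketched below: the user's enrolment features are compared with the population statistics of databases (81) and (82), and the resulting per-feature deviation is used to re-normalise later inputs before they reach the classifiers. The statistics and the normalisation rule are assumptions of this example, not the adaptation procedure fixed by the patent.

```python
# Sketch of per-user calibration against the training databases (assumed rule).
import numpy as np


def fit_user_deviation(db_features: np.ndarray, user_features: np.ndarray):
    """Return per-feature offset and scale of the user relative to the database.

    Both inputs are 2-D arrays of shape (samples, features).
    """
    offset = user_features.mean(axis=0) - db_features.mean(axis=0)
    scale = user_features.std(axis=0) / (db_features.std(axis=0) + 1e-9)
    return offset, scale


def adapt_sample(x: np.ndarray, offset: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Map a new sample of this user back into the space the classifiers were trained on."""
    return (x - offset) / (scale + 1e-9)
```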
PCT/ES2019/070797 2018-11-21 2019-11-21 System and method for determining the emotional state of a user WO2020104722A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ES201831130A ES2762277A1 (en) 2018-11-21 2018-11-21 SYSTEM AND METHOD FOR DETERMINING AN EMOTIONAL STATUS OF A USER (Machine-translation by Google Translate, not legally binding)
ESP201831130 2018-11-21

Publications (1)

Publication Number Publication Date
WO2020104722A1 true WO2020104722A1 (en) 2020-05-28

Family

ID=70736838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2019/070797 WO2020104722A1 (en) 2018-11-21 2019-11-21 System and method for determining the emotional state of a user

Country Status (2)

Country Link
ES (1) ES2762277A1 (en)
WO (1) WO2020104722A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042924A1 (en) 2020-08-24 2022-03-03 Viele Sara Method and device for determining a mental state of a user

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210192536A1 (en) * 2019-12-24 2021-06-24 Avaya Inc. System and method for adaptive agent scripting

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110288379A1 (en) * 2007-08-02 2011-11-24 Wuxi Microsens Co., Ltd. Body sign dynamically monitoring system
US20120308971A1 (en) * 2011-05-31 2012-12-06 Hyun Soon Shin Emotion recognition-based bodyguard system, emotion recognition device, image and sensor control apparatus, personal protection management apparatus, and control methods thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MIRANDA CALERO JOSE ANGEL ET AL.: "Embedded Emotion Recognition within Cyber-Physical Systems using Physiological Signals", 2018 CONFERENCE ON DESIGN OF CIRCUITS AND INTEGRATED SYSTEMS (DCIS), 14 November 2018 (2018-11-14), pages 1-6, XP033534676, DOI: 10.1109/DCIS.2018.8681496 *
WIOLETA SZWOCH: "Using physiological signals for emotion recognition", 2013 6TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTIONS (HSI), 6 June 2013 (2013-06-06), pages 556-561, XP032475731, ISSN: 2158-2246, ISBN: 978-1-4673-5635-0, DOI: 10.1109/HSI.2013.6577880 *

Also Published As

Publication number Publication date
ES2762277A1 (en) 2020-05-22

Similar Documents

Publication Publication Date Title
US11158179B2 (en) Method and system to improve accuracy of fall detection using multi-sensor fusion
US11024142B2 (en) Event detector for issuing a notification responsive to occurrence of an event
Erden et al. Sensors in assisted living: A survey of signal and image processing methods
EP3416146B1 (en) Human body condition and behaviour monitoring and alarm system
US11308744B1 (en) Wrist-wearable tracking and monitoring device
KR20160054397A (en) The method and apparatus for early warning the danger
US20190228633A1 (en) Fall Warning For A User
Shiba et al. Fall detection utilizing frequency distribution trajectory by microwave Doppler sensor
KR101654708B1 (en) Individual safety System based on wearable Sensor and the method thereof
WO2020104722A1 (en) System and method for determining the emotional state of a user
Ramachandiran et al. A survey on women safety device using IoT
US20200037904A1 (en) Systems, Devices, and/or Methods for Managing Health
Banjar et al. Fall event detection using the mean absolute deviated local ternary patterns and BiLSTM
KR102386182B1 (en) Impairment detection with biological considerations
Kulkarni et al. Smart AIOT based woman security system
KR102188076B1 (en) method and apparatus for using IoT technology to monitor elderly caregiver
ES1269890U (en) SYSTEM TO DETERMINE AN EMOTIONAL STATE OF A USER (Machine-translation by Google Translate, not legally binding)
Khawandi et al. Applying machine learning algorithm in fall detection monitoring system
Kalaiselvi et al. Emergency Tracking system using Intelligent agent
Khel et al. Technical analysis of fall detection techniques
US20230410636A1 (en) System for managing a network of personal safety accessories
Abraham et al. Pro-Safe: An IoT based Smart Application for Emergency Help
Chatterjee et al. A Novel Approach Towards Identification of Alcohol and Drug Induced People
Salama et al. An intelligent mobile app for fall detection
Khawandi et al. Applying neural network architecture in a multi-sensor monitoring system for the elderly

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19886826; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 19886826; Country of ref document: EP; Kind code of ref document: A1