GB2572182A - Emotion signals to train AI

Emotion signals to train AI

Info

Publication number
GB2572182A
GB2572182A
Authority
GB
United Kingdom
Prior art keywords
models
data
emotion
signals
emotion signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1804537.7A
Other versions
GB201804537D0 (en)
Inventor
Edward Francis Harper Ross
De Vries Sebastiaan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Limbic Ltd
Original Assignee
Limbic Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Limbic Ltd filed Critical Limbic Ltd
Priority to GB1804537.7A priority Critical patent/GB2572182A/en
Publication of GB201804537D0 publication Critical patent/GB201804537D0/en
Priority to EP19714754.9A priority patent/EP3769306A1/en
Priority to PCT/GB2019/050816 priority patent/WO2019180452A1/en
Priority to US16/982,997 priority patent/US20210015417A1/en
Publication of GB2572182A publication Critical patent/GB2572182A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A computer implemented method for training an artificial intelligence (AI), machine learning or learnt model. Parameters of the model are trained based on emotion signals by optimisation of an objective function based on one or more emotion signals and training data. The emotion signal data may be physiological biometric data, e.g. skin conductance or temperature, body posture, EEG, or heartbeat. The data may be from wearable sensors, audio sensors or image sensors. Machine learning models may comprise regression, classification, regularisation, deep learning or instance-based models. The method aims to fulfil a need for trained models and training data which are readily available to developers on all platforms, thus enabling the inclusion of emotional considerations in communications between humans and technology such as social robots, autonomous cars, etc.

Description

[Figure 1, sheet 1/1: physiological data (102) is processed by a deep learning emotion detection model (104), whose emotion signals are used to train a downstream machine learning model (106).]
EMOTION SIGNALS TO TRAIN AI
Field
The present invention relates to a computer implemented method for training one or more parameters of a model. More particularly, the present invention relates to a computer implemented method for training one or more parameters of a model based on emotion signals.
Background
Emotion-based AI seeks to understand changes in facial expressions, gestures and speech in order to communicate with technology. Applications of emotion-based training range from social robots and autonomous cars to emotion-based digital interactions. Although technology has come far with the combination of user data and sentiment analysis for natural language processing, in most cases physiology can reveal more about the neural activity of humans than written text and online user profiles can. Emotion, as a subconscious and natural mode of communication, provides a non-verbal, unbiased and unfiltered way for humans to interact with their surroundings, including technology. To probe the human connection with technology more deeply and to develop more efficient and effective ways of assisting humans, there is currently a need for trained models and training data which are readily available to developers on all platforms.
Summary of Invention
Aspects and/or embodiments seek to provide a computer implemented method for training AI based on emotion signals.
According to a first aspect, there is provided a computer implemented method for training one or more parameters of a model, the method comprising the steps of: inputting one or more emotion signals; inputting one or more training data; optimising an objective function based on the one or more emotion signals and the one or more training data; and determining the one or more parameters based on the optimised objective function.
Learnt models trained using emotion signals can be used by developers on any platform to integrate emotion-based optimisation into their systems or applications.
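By way of non-limiting illustration, the sketch below shows one way the steps of the first aspect could be realised in PyTorch; the mean-squared-error task term, the per-sample emotion encoding in [-1, 1] and the emotion_weight blending factor are assumptions of the example, not features fixed by this disclosure.

```python
import torch
import torch.nn as nn

def train_parameters(model, train_x, train_y, emotion_signals,
                     emotion_weight=0.1, epochs=100, lr=1e-3):
    """Determine model parameters by optimising an objective function
    built from training data and one or more emotion signals."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    task_loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        predictions = model(train_x).squeeze(-1)
        # Task term: fit the training data as usual.
        task_loss = task_loss_fn(predictions, train_y)
        # Emotion term: reward outputs associated with positive emotion
        # signals and penalise negative ones (assumed encoding in [-1, 1]).
        emotion_loss = -(emotion_signals * predictions).mean()
        loss = task_loss + emotion_weight * emotion_loss
        loss.backward()
        optimiser.step()
    return model
```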
Optionally, further comprising a step of regularisation based on the one or more emotion signals.
Optionally, the step of regularisation comprises adapting a loss function: optionally wherein the objective function comprises the loss function.
The step of regularisation based on the one or more emotion signals can generalise the function to fit data from other sources or other users.
Optionally, further comprising inputting one or more physiological data: optionally wherein the one or more physiological data comprises the one or more emotion signals. Optionally, the one or more emotion signals are determined from one or more physiological data. Optionally, the one or more physiological data is obtained from one or more sources and/or sensors. Optionally, the one or more sensors comprise any one or more of: wearable sensors; audio sensors; and/or image sensors. Optionally, the one or more physiological data comprises one or more biometric data. Optionally, the one or more biometric data comprise any one or more of: skin conductance; skin temperature; actigraphy; body posture; EEG; and/or heartbeat.
The method for training a learnt model can take a variety of input data as physiological data, generalising the scope of application for the optimised emotion signals.
Optionally, one or more data related to the one or more emotion signals over time is extracted from the one or more physiological data.
Optionally, the model comprises one or more machine learning models. Optionally, the one or more data related to the one or more emotion signals over time is input into the one or more machine learning models.
Optionally, the one or more machine learning models comprises any one or more of: regression models; regularisation models; classification models; deep learning models; and/or instance-based models.
Optionally, the one or more emotion signals comprise one or more of: classification and/or category based emotion signals; overall emotion signals; emotional response; predicted emotional response; continuous emotion signals; and/or end emotion signals.
The one or more emotion signals comprising one or more of: classification based emotion signals; overall emotion signals; emotional response; predicted emotional response; continuous emotion signals; and/or end emotion signals, can be used to further optimise the training of the models.
Optionally, the model optimises an outcome of one or more tasks: optionally wherein the one or more tasks is unrelated to detection of emotion. Optionally, the one or more physiological data and/or the one or more emotion signals is stored as training data.
The training data and/or the output of the trained model may be used for the learning of other machine learning classifiers seeking to optimise a task using emotion signals.
According to a second aspect, there is provided one or more learnt models output from the method for training the one or more parameters of the model.
According to a third aspect, there is provided a use of the one or more learnt models.
According to a fourth aspect, there is provided an apparatus operable to perform the method of any preceding feature.
According to a fifth aspect, there is provided a system operable to perform the method of any preceding feature.
According to a sixth aspect, there is provided a computer program operable to perform the method and/or apparatus and/or system of any preceding feature.
Brief Description of Drawings
Embodiments will now be described, by way of example only and with reference to the accompanying drawing having like-reference numerals, in which:
Figure 1 shows an overview of the training process for one or more parameters of a model.
Specific Description
Referring to Figure 1, example embodiments for a computer implemented method for training one or more parameters of a model using emotion signals will now be described.
In this embodiment, physiological data, shown as 102, may consist of multiple varieties of data collected from detection systems. Physiological data may include, but is not limited to, image data, audio data and/or biometric data. Examples of such data include skin conductance, skin temperature, actigraphy, body posture, EEG, heartbeat, muscle tension, skin colour, noise detection, data obtained using eye tracking technology, galvanic skin response, facial expression, body movement, and speech analysis data obtained through speech processing techniques.
In an embodiment, emotion signals may be extracted from physiological data received or collected using a camera, wearable device or microphone, for example by means of a mobile device, personal digital assistant, computer (such as a personal computer or laptop), handheld device or tablet, or a wearable computing device such as a smart watch, all of which may be capable of detecting a physiological characteristic of a particular user of the device.
In an embodiment, physiological data obtained over a period of time for a user is input into a machine learning model such as, but not limited to, a deep learning model, reinforcement learning model or representation learning model. For example, a deep learning model such as a long short-term memory recurrent neural network (LSTM RNN), shown as 104, may be implemented. The implemented deep learning model may be trained to process the input physiological data, for example by extracting temporal data from the physiological data. In an example of a user providing their heartbeat, RR values, i.e. inter-beat intervals, may be extracted from the heartbeat signal obtained via a sensor over the course of time. The RR values are represented as emotion signals which can predict the emotional state or states of the user. In other examples, an emotional time series, i.e. the emotion signal, may be extracted from a physiological time series, i.e. the signal generated from the data received via image, audio or wearable devices/sensors. In such examples, emotion signals can be extracted as appropriate to the type of data received in order to classify and/or predict the emotional state or states of a user. Collected physiological data may be processed within different time frames to capture the emotion experienced by the user.
In this embodiment, biometric information can be collected from a wearable device strapped to a user, or extracted from video footage by measuring minute changes in facial flushing, or via other methods or a combination of methods. Having obtained a biometric time series, an emotion-based time series can be constructed with an emotion detection model, i.e. a deep learning model such as the LSTM.
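A minimal sketch of this step is given below, assuming four emotion categories, a hidden size of 64 and a random placeholder RR-interval sequence; none of these choices are prescribed by the embodiment.

```python
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    """Map a physiological time series (here RR intervals) to an
    emotion time series, one emotion distribution per time step."""
    def __init__(self, n_emotions=4, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, n_emotions)

    def forward(self, rr_intervals):
        # rr_intervals: (batch, seq_len, 1) inter-beat intervals.
        hidden_states, _ = self.lstm(rr_intervals)
        return torch.softmax(self.head(hidden_states), dim=-1)

# Example: a recording of 150 beats for one user (placeholder values).
rr = torch.randn(1, 150, 1)
emotion_series = EmotionLSTM()(rr)   # shape (1, 150, 4)
```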
In an embodiment, the training signal optimised for AI corresponds to emotion signals and, in most cases, the optimisation process is task specific. The emotion signals may accompany the predictions of machine learning models such as a regression model, shown as 106. Regression algorithms which may be used include, but are not limited to, Ordinary Least Squares Regression (OLSR), Linear Regression, Logistic Regression, Stepwise Regression, Multivariate Adaptive Regression Splines (MARS) and Locally Estimated Scatterplot Smoothing (LOESS). The output of emotion signals forms all or part of the input used to train such regression algorithms.
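For example, emotion signals output by the detection model (104) might train a linear regressor as sketched below; the engagement-score target and the random placeholder arrays are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder emotion signals from the detection model (104):
# 100 samples, each a 4-dimensional emotion vector.
emotion_signals = np.random.rand(100, 4)
task_targets = np.random.rand(100)   # e.g. an engagement score (assumed)

regressor = LinearRegression().fit(emotion_signals, task_targets)
print(regressor.predict(emotion_signals[:5]))
```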
In an embodiment, the emotional time series, i.e. the emotion signals, may accompany the predictions of the classifier models, i.e. the emotion detection models used to train classifier algorithms. The classifier algorithm used, e.g. Logistic Regression, may be further modified by means of a regularisation model which adds an emotion-based parameter for the optimisation of a learning model. The regularisation model may seek to learn a parameter which minimises unwanted characteristics. For example, where the happiness of a user is to be optimised, the sadness of the user may be minimised by modifying the loss function. Regularised algorithms conventionally penalise models such as the logistic regression model based on parameter complexity; adapting the loss function of any suitable model with an emotion-signal-based parameter in the same way may help to generalise the model to new datasets.
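A minimal sketch of such a loss adaptation follows, assuming a binary classifier, a 1-D sadness signal recorded during the task, and a linear penalty weighted by lam; the disclosure fixes none of these choices.

```python
import numpy as np

def emotion_regularised_loss(y_true, y_pred, sadness_signal, lam=0.5):
    """Binary log loss plus an emotion-based regularisation term that
    penalises the average sadness observed during the task."""
    eps = 1e-12
    y_pred = np.clip(y_pred, eps, 1 - eps)
    log_loss = -np.mean(y_true * np.log(y_pred)
                        + (1 - y_true) * np.log(1 - y_pred))
    emotion_penalty = np.mean(sadness_signal)
    return log_loss + lam * emotion_penalty
```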
In an embodiment, the emotion parameter which is added to the algorithm may take into account emotion signals of a user within various time frames. User emotion may be added as a sum over individual emotional state moments on a per-classification basis, by measuring the overall accumulated emotional state of the user, or by taking the user's emotional state solely at the end of training, as in the sketch below.
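The three aggregation options may be expressed as follows; representing the emotion signal as a 1-D NumPy array over the session is an assumption of the sketch.

```python
import numpy as np

def aggregate_emotion(emotion_series, mode="sum"):
    """Collapse a per-moment emotion signal into a single value."""
    if mode == "sum":      # sum over individual emotional state moments
        return np.sum(emotion_series)
    if mode == "overall":  # overall accumulated (mean) emotional state
        return np.mean(emotion_series)
    if mode == "end":      # emotional state solely at the end of training
        return emotion_series[-1]
    raise ValueError(f"unknown aggregation mode: {mode}")
```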
In an embodiment, a variety of other algorithms which focus on the addition of an emotion-based parameter may be implemented. Such algorithms include, for example, instance-based algorithms, which compare new data points against an existing database according to a similarity-based measure. Examples of instance-based algorithms include k-Nearest Neighbour (k-NN), Learning Vector Quantisation (LVQ), Self-Organising Map (SOM) and Locally Weighted Learning (LWL).
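As an illustration, a k-NN classifier over a stored database of emotion signals might look as follows; the feature dimensionality, binary outcome labels and Euclidean similarity (scikit-learn's default) are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

stored_signals = np.random.rand(200, 4)          # existing signal database
stored_outcomes = np.random.randint(0, 2, 200)   # e.g. task succeeded or not

knn = KNeighborsClassifier(n_neighbors=5).fit(stored_signals, stored_outcomes)
new_signal = np.random.rand(1, 4)                # new data point to compare
print(knn.predict(new_signal))
```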
In an embodiment, learnt signals are stored as training data. Learnt models may be used by developers on any platform in order to incorporate learnt signals into their digital products. Developers may implement a set of instructions, such as computer-based code, in an application and use signals obtained via the cloud through an Application Programming Interface (API), or via a user interface through a Software Development Kit (SDK), either directly on the hardware or through a software package which may be installed on the device. In other embodiments, signals may be obtained via a combination of both API and SDK.
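The API route might be exercised as sketched below; the endpoint URL, bearer-token authentication and the `signals` response field are entirely hypothetical, since no concrete service is specified here.

```python
import requests

API_URL = "https://api.example.com/v1/emotion-signals"  # placeholder URL

def fetch_emotion_signals(user_id: str, api_key: str):
    """Fetch learnt emotion signals from a (hypothetical) cloud API."""
    response = requests.get(
        API_URL,
        params={"user": user_id},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["signals"]  # assumed response schema
```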
In an embodiment, emotion signals processed through deep learning algorithms may be used as input to train other classifiers, wherein the training data may be used for training other machine learning models, whether in the cloud or offline. In such cases, signals may not need to be obtained via an API or an SDK.
Any system feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure.
Any feature in one aspect may be applied to other aspects, in any appropriate combination. In particular, method aspects may be applied to system aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
It should also be appreciated that particular combinations of the various features described and defined in any aspects can be implemented and/or supplied and/or used independently.

Claims (19)

CLAIMS:
1. A computer implemented method for training one or more parameters of a model, the method comprising the steps of:
inputting one or more emotion signals;
inputting one or more training data;
optimising an objective function based on the one or more emotion signals and the one or more training data; and
determining the one or more parameters based on the optimised objective function.
2. The method of Claim 1 further comprising a step of regularisation based on the one or more emotion signals.
3. The method of Claim 2 wherein the step of regularisation comprises adapting a loss function: optionally wherein the objective function comprises the loss function.
4. The method of any preceding claim further comprising inputting one or more physiological data: optionally wherein the one or more physiological data comprises the one or more emotion signals.
5. The method of any preceding claim wherein the one or more emotion signals are determined from one or more physiological data.
6. The method according to Claim 4 or 5 wherein the one or more physiological data is obtained from one or more sources and/or sensors.
7. The method of Claim 6 wherein the one or more sensors comprise any one or more of: wearable sensors; audio sensors; and/or image sensors.
8. The method of Claims 4 or 5 or 6, or Claim 7 when dependent on Claims 4 or 5 or 6, wherein the one or more physiological data comprises one or more biometric data.
9. The method of Claim 8 wherein the one or more biometric data comprise any one or more of: skin conductance; skin temperature; actigraphy; body posture; EEG; and/or heartbeat.
10. The method of any preceding claim wherein one or more data related to the one or more emotion signals over time is extracted from the one or more physiological data.
11. The method of any preceding claim wherein the model comprises one or more machine learning models.
12. The method of Claim 11 wherein the one or more data related to the one or more emotion signals over time is input into the one or more machine learning models.
13. The method of Claim 11 or 12 wherein the one or more machine learning models comprises any one or more of: regression models; regularisation models; classification models; deep learning models; and/or instance-based models.
14. The method of any preceding claim wherein the one or more emotion signals comprise one or more of: classification and/or category based emotion signals; overall emotion signals; emotional response; predicted emotional response; continuous emotion signals; and/or end emotion signals.
15. The method of any preceding claim wherein the model optimises an outcome of one or more tasks: optionally wherein the one or more tasks is unrelated to detection of emotion.
16. The method of any preceding claim wherein the one or more physiological data and/or the one or more emotion signals is stored as training data.
17. One or more learnt models output from the method of any preceding claim.
18. Use of the one or more learnt models of Claim 17.
19. A computer program product operable to perform the method and/or apparatus and/or system of any preceding claim.
GB1804537.7A 2018-03-21 2018-03-21 Emotion signals to train AI Withdrawn GB2572182A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB1804537.7A GB2572182A (en) 2018-03-21 2018-03-21 Emotion signals to train AI
EP19714754.9A EP3769306A1 (en) 2018-03-21 2019-03-21 Emotion data training method and system
PCT/GB2019/050816 WO2019180452A1 (en) 2018-03-21 2019-03-21 Emotion data training method and system
US16/982,997 US20210015417A1 (en) 2018-03-21 2019-03-21 Emotion data training method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1804537.7A GB2572182A (en) 2018-03-21 2018-03-21 Emotion signals to train AI

Publications (2)

Publication Number Publication Date
GB201804537D0 GB201804537D0 (en) 2018-05-02
GB2572182A (en) 2019-09-25

Family

ID=62017966

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1804537.7A Withdrawn GB2572182A (en) 2018-03-21 2018-03-21 Emotion signals to train AI

Country Status (1)

Country Link
GB (1) GB2572182A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11513596B2 (en) * 2019-04-16 2022-11-29 SiliconIntervention Inc. EEG with artificial intelligence as control device
CN117315745B (en) * 2023-09-19 2024-05-28 中影年年(北京)科技有限公司 Facial expression capturing method and system based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
GB201804537D0 (en) 2018-05-02

Similar Documents

Publication Publication Date Title
Nicolaou et al. Output-associative RVM regression for dimensional and continuous emotion prediction
JP2021057057A (en) Mobile and wearable video acquisition and feedback platform for therapy of mental disorder
Liu et al. Reinforcement online learning for emotion prediction by using physiological signals
JP7392492B2 (en) Method, server and program for detecting cognitive and speech disorders based on temporal and visual facial features
EP3769306A1 (en) Emotion data training method and system
Garcia-Ceja et al. User-adaptive models for activity and emotion recognition using deep transfer learning and data augmentation
KR20090055425A (en) Emotion recognition mothod and system based on decision fusion
Kumar et al. MEmoR: A multimodal emotion recognition using affective biomarkers for smart prediction of emotional health for people analytics in smart industries
CN114995657B (en) Multimode fusion natural interaction method, system and medium for intelligent robot
Liang et al. Strong and simple baselines for multimodal utterance embeddings
Fan et al. Transformer-based multimodal feature enhancement networks for multimodal depression detection integrating video, audio and remote photoplethysmograph signals
Patlar Akbulut Hybrid deep convolutional model-based emotion recognition using multiple physiological signals
Tanwar et al. Attention based hybrid deep learning model for wearable based stress recognition
Spaulding et al. Frustratingly easy personalization for real-time affect interpretation of facial expression
Shah et al. Evaluating contrastive learning on wearable timeseries for downstream clinical outcomes
GB2572182A (en) Emotion signals to train AI
Helaly et al. Deep convolution neural network implementation for emotion recognition system
Tripathi et al. TripCEAiR: A multi-loss minimization approach for surface EMG based airwriting recognition
Soni et al. CABMNet: An adaptive two-stage deep learning network for optimized spatial and temporal analysis in fall detection
Tang et al. Eye movement prediction based on adaptive BP neural network
Botzheim et al. Spiking neural network based emotional model for robot partner
Raina et al. Intelligent and Interactive Healthcare System (I 2 HS) Using Machine Learning
Vinola et al. Smile intensity recognition in real time videos: fuzzy system approach
Kang et al. Beyond superficial emotion recognition: Modality-adaptive emotion recognition system
Schwenker et al. Multimodal affect recognition in the context of human-computer interaction for companion-systems

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)