GB2597944A - A method for predicting a user state of a user with an autoencoder algorithm, as well as electronic computing device - Google Patents

A method for predicting a user state of a user with an autoencoder algorithm, as well as electronic computing device

Info

Publication number
GB2597944A
GB2597944A
Authority
GB
United Kingdom
Prior art keywords
user
signal data
computing device
electronic computing
motor vehicle
Prior art date
Legal status
Withdrawn
Application number
GB2012447.5A
Other versions
GB202012447D0 (en)
Inventor
Michael Krell Mario
Smiroldo Rigel
Reddy Samina
Li Lichi
Durr Hans-Bernd
Merz Matthias
Zhang Jiong
Voigt Martin
Current Assignee
Mercedes Benz Group AG
Original Assignee
Daimler AG
Priority date
Filing date
Publication date
Application filed by Daimler AG filed Critical Daimler AG
Priority to GB2012447.5A priority Critical patent/GB2597944A/en
Publication of GB202012447D0 publication Critical patent/GB202012447D0/en
Publication of GB2597944A publication Critical patent/GB2597944A/en


Classifications

    • G06N3/08 Learning methods
    • G06N3/045 Combinations of networks
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W50/0097 Predicting future conditions
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/005 Sampling
    • B60W2420/403 Image sensing, e.g. optical camera
    • B60W2540/22 Psychological state; stress level or workload
    • B60W2540/221 Physiology, e.g. weight, heartbeat, health or special needs
    • B60W2540/223 Posture, e.g. hand, foot, or seat position, turned or inclined
    • B60W2540/225 Direction of gaze
    • B60W2540/227 Position in the vehicle
    • B60W2540/229 Attention level, e.g. attentive to driving, reading or sleeping
    • B60W2540/24 Drug level, e.g. alcohol
    • B60W2540/26 Incapacity
    • B60W2540/30 Driving style

Abstract

The invention relates to a method for predicting a user state (12) of a user of a motor vehicle by an electronic computing device (10), wherein at least first signal data (20) are detected by at least one detection device (26) and the first signal data (20) are analysed by an autoencoder (14) algorithm of the electronic computing device (10), wherein low dimensional second signal data (22) are processed from the analysed first signal data (20), and the second signal data (22) are transferred to an actuator device (24) of the motor vehicle; characterized in that first signal data (20), which characterize the user (80) of the motor vehicle, are detected by the detection device (26) and as the second signal data (22) the at least one user state (12) of the user (80) is predicted by the autoencoder (14). Furthermore, the invention relates to an electronic computing device (10).

Description

A METHOD FOR PREDICTING A USER STATE OF A USER WITH AN
AUTOENCODER ALGORITHM, AS WELL AS ELECTRONIC COMPUTING DEVICE
FIELD OF THE INVENTION
[0001] The present disclosure relates to the field of automobiles. More particularly, but not exclusively, the present disclosure relates to an electronic computing device.
BACKGROUND INFORMATION
[0002] From the state of the art, it is known that for having an intuitive interaction with a customer, it is important to know the different states of the user. There are different methods available to measure the status of a user directly with emotion recognition via image and voice analysis, as well as by measuring body functions (e.g. heartbeat). For instance, US 2018/0293814 A1 introduces a method to determine the status of a motor vehicle. The method includes the steps of collecting first output signal data from at least one device which outputs signal data related to a first plurality of operational parameters and a first plurality of environmental parameters of the motor vehicle. The method further includes identifying patterns within the first output signal data, analyzing those patterns, and generating second output signal data defining a second plurality of operational parameters distinct from the first operational parameters. Even though these methods are present in the current state of the art, there is still a need for a more efficient method and/or electronic computing device to improve the quality of measurements for predicting the user's state.
[0003] Therefore, the current disclosure introduces a method and an electronic computing device according to the embodiments of this disclosure; there is a need in the art for a method and an electronic computing device by which the quality of a prediction of a user state may be raised.
SUMMARY OF THE INVENTION
[0004] One aspect of the invention relates to a method for predicting a user state of a user of a motor vehicle by an electronic computing device of the motor vehicle, wherein at least first signal data are detected by at least one detection device of the motor vehicle and the first signal data are analyzed by an autoencoder algorithm of the electronic computing device, wherein low dimensional second signal data are processed from the analyzed first signal data, and the second signal data are transferred to an actuator device of the motor vehicle.
[0005] It is provided that first signal data, which characterize the user of the motor vehicle, are detected by the detection device, and that as second signal data the at least one state of the user is predicted by the autoencoder.
[0006] Thereby, it is facilitated that instead of using all the detection devices of the motor vehicle directly, a compression is performed beforehand with another learning algorithm, in particular with the autoencoder algorithm. For example, the output of 100 sensors would be reduced to two values as second signal data. These could then be combined in the predictive algorithm with, for example, the signals of three directly used detection devices. This reduction of dimensionality largely improves the prediction quality.
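As a rough illustration of this compression step, the following sketch trains a linear autoencoder on 100 simulated sensor channels, compresses them to two latent values, and concatenates these with three direct signals. All dimensions, the linear model, and the training loop are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 raw sensor channels, compressed to 2 latent values.
n_sensors, n_latent, n_samples = 100, 2, 500
X = rng.normal(size=(n_samples, n_sensors))

# Linear autoencoder: encoder W_e and decoder W_d, fitted by gradient descent.
W_e = rng.normal(scale=0.01, size=(n_sensors, n_latent))
W_d = rng.normal(scale=0.01, size=(n_latent, n_sensors))
lr = 1e-3
for _ in range(200):
    Z = X @ W_e                      # encode to 2 values
    err = Z @ W_d - X                # reconstruction error for optimization
    W_d -= lr * Z.T @ err / n_samples
    W_e -= lr * X.T @ (err @ W_d.T) / n_samples

# The 2 latent values act as second signal data and can be combined with,
# e.g., 3 directly measured signals for the downstream predictive algorithm.
direct = rng.normal(size=(n_samples, 3))
features = np.hstack([X @ W_e, direct])
print(features.shape)  # (500, 5)
```

The downstream predictor thus sees 5 features instead of 103, which is the dimensionality reduction the paragraph describes.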
[0007] In other words, there is a virtual sensor that abstracts the state of the user. The abstraction can be obtained by modelling how the user interacts with the motor vehicle using a latent factor model, for example, the autoencoder. The respectively learned latent factors can then be used as input in other applications. For example, a latent factor model is built based on how the user interacts with the climate control or with a steering wheel, wherein the interaction is detected by various detection devices. The factor that comes closest to describing the aggressiveness of the person is then used from the virtual sensor for predicting the destination, for learning which comfort function is of interest, for example, a more intense massage, or even for directly activating a more aggressive controller for climate functions as the actuator device.
[0008] The virtual sensor will improve functionalities within the motor vehicle like climate control and other predictive features and thus improve the customer experience and satisfaction. It may also improve safety in the motor vehicle when the motor vehicle recognizes that the user is in a more dangerous situation, for example when the user is less attentive, and adapts to this user state.
[0009] According to an advantageous embodiment, a virtual sensor is provided by the electronic computing device. Thereby a very low dimensional representation is proposed, for example, a maximum of three features. This compressed data is the new virtual sensor in the motor vehicle. It can be calculated on trip basis as well as on a certain frequency, for example every second, since the status of the user in the motor vehicle might change over time.
[0010] Further, it is advantageous if a variety of first signal data is detected by a variety of the detection devices and the variety of first signal data are analyzed by the autoencoder and processed to the low dimensional second signal data.
[0011] According to another advantageous embodiment, the autoencoder is learned by observing the interaction from the user with the motor vehicle.
[0012] According to a further advantageous embodiment, depending on the low dimensional second signal data a control signal for a functional unit of the motor vehicle as the actuator device is produced by the electronic computing device.
[0013] A further aspect of the invention relates to an electronic computing device for predicting a user state of a user of a motor vehicle with at least an autoencoder, wherein the electronic computing device is configured to perform a method according to the first aspect of the invention. In particular, the method is performed by the electronic computing device.
[0014] Another aspect of the invention relates to a motor vehicle with the electronic computing device.
[0015] Further advantageous embodiments of the method are to be regarded as advantageous embodiments of the electronic computing device as well as to the motor vehicle. The electronic computing device as well as the motor vehicle for this purpose comprise substantive features, which facilitate a performance of the method or advantageous embodiments thereof.
[0016] Further advantages, features, and details of the invention derive from the following description of the preferred embodiments as well as from the drawings. The features and feature combinations previously mentioned in the description as well as the features and feature combinations mentioned in the following description of the figure and/or shown in the figure alone can be employed not only in the respectively indicated combination but also in any other combination or taken alone without leaving the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The novel features and characteristic of the disclosure are set forth in the appended claims. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described below, by way of example only, and with reference to the accompanying figures.
[0018] Fig. 1 is a schematic view of an embodiment of the electronic computing device.
[0019] Fig. 2 is a schematic view of another embodiment of the electronic computing device.
[0020] Fig. 3 is a schematic view of another embodiment of the electronic computing device.
[0021] Fig. 4 is another schematic view of another embodiment of the electronic computing device.
[0022] In the figures the same elements or elements having the same function are indicated by the same reference signs.
DETAILED DESCRIPTION
[0023] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0024] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0025] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0026] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0027] Fig. 1 shows a schematic view of an embodiment of an electronic computing device 10. A facial expression can be computed by this embodiment. The electronic computing device 10 is configured to predict a user state 12 of a user 80 of a motor vehicle. The electronic computing device 10 comprises at least one autoencoder 14. In this embodiment, the autoencoder 14 comprises an encoder 16 and a decoder 18. The autoencoder 14 may be, for example, a multilayer neural network.
[0028] Fig. 1 shows a method for predicting the user state 12 of the user 80 of the motor vehicle by the electronic computing device 10 of the motor vehicle, wherein at least first signal data 20 are computed by the autoencoder 14 algorithm of the electronic computing device 10, wherein low dimensional second signal data 22 are processed from the computed first signal data 20, and the second signal data 22 are transferred to an actuator device 24 of the motor vehicle.
[0029] In particular, it is shown in Fig. 1 that a generated data signal is computed from the first signal data 20 by the autoencoder 14, and the generated second signal data 22 are used to drive the activation of the actuator device 24.
[0030] It is provided that the first signal data 20, which characterize the user 80 of the motor vehicle, are detected by a detection device 26, and that as the second signal data 22 the at least one user state 12 of the user 80 is predicted by the autoencoder 14.
[0031] Further, Fig. 1 shows that a virtual sensor is provided by the electronic computing device 10. It is shown that a variety of first signal data 20 are detected by a variety of detection devices 26, and that the variety of first signal data 20 are analyzed by the autoencoder 14 and processed to the low dimensional second signal data 22. Furthermore, it is shown how the autoencoder 14 may analyze the data by observing the interaction of the user 80 with the motor vehicle. Furthermore, it is shown in Fig. 1 that, depending on the low dimensional second signal data 22, a control signal for a functional unit of the motor vehicle as the actuator device 24 is produced by the electronic computing device 10.
[0032] According to some aspects of the invention, for training the autoencoder 14, different data sources in the car/motor vehicle may be used. For example, an unsupervised emotion recognition approach may be used as a data source based on video data 28 and/or other sensors on the body of the user 80. The optimal air conditioning and/or comfort settings could be learned: an input would be different weather settings, potentially even gathered via a cloud server, and an output would be how the user 80 chooses the settings. This could be generalized to any settings in the car, including drive mode, radio station, etc. Also, other actions in the car may be used as a data source, like phone calls, properties of radio stations and how they change, voice interaction, or force and speed when opening and closing doors. Also, the driving behavior may be modelled, especially in an unsupervised fashion. Depending on the type of sensors in the car, it could be learned how to predict how the user 80 is going to interact with the car and how the user 80 interacts with a steering wheel, brake, and gas pedal.
[0034] For example, as shown in Fig. 1, the video/photo sensor 28 of the user 80 can be used as an input, and with the autoencoder 14 algorithm the image can be compressed to a small number of dimensions 30, in this embodiment for example to three dimensions 30. In this embodiment, the autoencoder 14 is an unsupervised machine learning algorithm that learns an encoder 16 and a decoder 18 with a neural network. The encoder 16 transforms the data to a lower dimension 30 and the decoder 18 tries to obtain the original image from this transformation. The different dimensions 30 could stand for a distinction between smile versus frown, flushes versus normal complexion, wide eyes versus normal versus almost closed eyes, daylight versus dark, face rotated left versus face in straight view versus rotation to the right, or more. The decoder 18 just tries to reconstruct the original image and is only relevant for the training, but not for the inventive method. A caveat of the low dimensional representation is that quite often it is not clear what it truly stands for.
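The encoder/decoder training described above can be sketched as follows. The network shape (a single tanh encoder layer on dummy 8x8 crops), learning rate, and data are illustrative assumptions; only the structure — train both halves, then keep only the encoder 16 in the car — follows the paragraph:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dummy stand-in for flattened 8x8 face crops, compressed to three dimensions 30.
d_in, d_lat = 64, 3
images = rng.random((200, d_in))

W_e = rng.normal(scale=0.1, size=(d_in, d_lat))   # encoder 16
W_d = rng.normal(scale=0.1, size=(d_lat, d_in))   # decoder 18
lr = 0.01
for _ in range(300):
    z = np.tanh(images @ W_e)        # low dimensional code
    recon = z @ W_d                  # decoder tries to rebuild the image
    err = recon - images             # error propagation 32 during training
    W_d -= lr * z.T @ err / len(images)
    g = (err @ W_d.T) * (1 - z**2)   # backprop through the tanh
    W_e -= lr * images.T @ g / len(images)

# After training, the decoder is discarded; only the encoder runs in the car.
code = np.tanh(images[:1] @ W_e)
print(code.shape)  # (1, 3)
```

Which facial property each of the three code dimensions captures is not fixed in advance — the caveat the paragraph ends on.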
[0035] Furthermore, Fig. 1 shows that an error propagation 32 for optimization during training can be realized. Further, after training, an in-car usage 34 can be realized.
[0036] However, a direct interpretation is not needed. The algorithm could even output a larger number of features, for example, in an offline version. In that case, a standard feature selection technique could be applied so that only meaningful features are used. It is also possible to adapt the algorithm that uses these features for learning. It could, for example, have an online optimization of its hyperparameters which scale each feature and thus indirectly remove useless features. In case of online learning, a photo is taken and stored, for example, every minute. If a memory limit is reached, the oldest image is deleted, or the oldest image that is most similar to a more recent image. The algorithm then processes these images. This way, the different facial expressions/emotions as well as light conditions can be encoded. This encoding can then be used as input for other predictive algorithms. Assuming the autoencoder 14 learns useful features, the prediction algorithm can now use this information. For example, the predictive algorithm could be a destination prediction. If the destination of the customer/user 80 depends on some emotional state or something else that can be read from the facial image, the image features would be useful in the predictive algorithm, and it could learn that when the person is sleepy the person is going home, whereas when the person looks energetic, the person goes to the gym. Note that the invention is just working with a latent description/latent features. Using the features in the prediction of whom the person calls, the electronic computing device can learn that there is a specific person that the person is happy to call, whereas the person calls another person when the person is angry.
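The memory-limited photo store described above — delete the oldest image, or the oldest image that is most similar to a more recent one — could be sketched like this. The buffer size, the "older half vs. newer half" split, and the Euclidean similarity are illustrative assumptions; the patent only states the eviction goal:

```python
import numpy as np

class PhotoBuffer:
    """Bounded store for periodically captured photos (hypothetical sketch).

    When the memory limit is reached, the oldest photo most similar to a
    more recent one is dropped, so diverse expressions and light
    conditions are kept for training the autoencoder."""

    def __init__(self, limit):
        self.limit = limit
        self.photos = []          # oldest first

    def add(self, photo):
        self.photos.append(np.asarray(photo, dtype=float))
        if len(self.photos) > self.limit:
            self._evict()

    def _evict(self):
        # Among the older half, drop the photo closest to any newer photo.
        half = len(self.photos) // 2
        best_idx, best_dist = 0, np.inf
        for i in range(half):
            for newer in self.photos[half:]:
                d = np.linalg.norm(self.photos[i] - newer)
                if d < best_dist:
                    best_idx, best_dist = i, d
        self.photos.pop(best_idx)

buf = PhotoBuffer(limit=4)
for t in range(6):                 # e.g. one photo per minute
    buf.add(np.full(8, t))         # dummy 8-pixel "photos"
print(len(buf.photos))             # stays at the limit: 4
```

A production version would replace the dummy arrays with encoded images and could use the autoencoder's own reconstruction error as the similarity measure.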
[0037] Fig. 2 shows another embodiment of the electronic computing device 10. It is shown that an environment parameter 34 and general features like, for example, a time of the day 36, an air quality 38, a GPS location 40, an outside temperature 42, an inside temperature 44 and a humidity 46 can be used. A single setting change 48 can be monitored. Furthermore, a continuous feature generation 50 during testing can be realized, as well as an error propagation 52 for optimization during training. Furthermore, continuously generated features 54 are used for training and testing of the prediction, even when settings are not changed. Especially in the decoder 18, a classification or regression on the layers can be realized.
[0038] Fig. 2 shows how the interaction of the user 80 with the temperature control is learned, by using a climate control or other features. Instead of using the video or a photo, the electronic computing device 10 takes the settings and external sensor data and learns a low-dimensional representation. This could encode how comfortable the user 80 feels in the car and/or what the general weather situation means for this specific customer/user 80. This can be used in a car feature like energizing comfort, which tries to make the customer feel better, in a model that predicts the perfect climate or comfort settings, for example, the settings of the massage program of the customer.
[0039] Fig. 3 shows another embodiment of the electronic computing device 10, in particular for observing a driving behavior of the user 80. For example, a potential trip segmentation 56 and a classification, as well as a feature preprocessing and a feature aggregation, can be done. For that, the potential trip segmentation 56 receives information from sensor data 58, which may include, but is not limited to: street type, speed and speed limit, time of the day/week, location, live traffic, proximity, traffic signs, acceleration, braking, steering, passengers, as well as weather. The actuator device 24 can be, for example, a drive mode change or an open speech channel.
[0040] Fig. 3 shows that the electronic computing device 10 can use the general driving behavior and encode it. For example, the input could be a steering angle and accelerations, or aggregations thereof. The compressed data could represent different driving situations as well as the emotional situation of the driver/user 80. If, for example, the user 80 is driving more aggressively today, this information may be used by the user interface to trigger certain notifications when the driving situation is appropriate. The user interface can also learn when an interaction of the user 80 is expected, where the latent representation is one of many features. A different driving situation will result in different preferred drive modes. Thereby, it is sufficient that it is a low dimensional representation of the driving behavior.
[0041] Fig. 4 shows another embodiment of the electronic computing device 10. Fig. 4 shows that a plurality of networks 60 for the user 80 may be used as training data 62. The training data 62 can be clustered 64. This may be used for a reduced number of models via a plurality of prototypes or a cluster center, which is shown in the block 66. For the reduced number of models, it may then be assigned by hand, in particular manually, which of the climate controllers fits best, which is shown in the block 68. The reduced number of models may also be used to apply prototype models, which is shown in the block 70. For the new prototype model, data from new customers may be used, which is shown in the block 78. Coming from the blocks 68 and 70, a decision is made as to which model causes the smallest prediction error, which is shown in the block 72. From the block 72, a respective controller is selected, which is shown in the block 74. The selected controller is then applied to the temperature, potentially in combination with a predictive algorithm, which is shown in the block 76. Data of the new customer can be used for the applying 70.
[0042] Fig. 4 shows that, for improving climate control, a clustering 64 with different types of car usage behavior, for example, different types of interaction with temperature settings, can be realized, or just one component of the representation of the settings can be used. This assessment of the user behavior is then forwarded to the climate control. Depending on the group, a different aggressiveness of the climate control is used. The training in this case may be done offline with data from different customers; within the car, the predictive algorithm then observes the customer's interaction and determines which kind of climate control is the right one. This way, it can detect a user 80 who uses extreme temperature settings, for example, very low and very high temperatures. For these customers, a system is used that blows the air strongly. However, the correct target temperature could still be predicted; that way, the user 80 would not switch between up and down but get the right setting directly. In another example, the user 80 might be very sensitive and only slightly change temperatures. For this customer, a softer controller would be activated. Now, the user 80 may set the target temperature but would not get disturbed by the blowing, even though the difference between the in-car temperature and the outside temperature may be large.
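The Fig. 4 pipeline — cluster existing customers' setting behavior offline, keep the cluster centers as prototype models, then pick for a new customer the prototype with the smallest prediction error — can be sketched as below. Plain k-means, the 5-dimensional behavior vectors, and the mean-distance error are stand-in assumptions for blocks 64, 66, 72, and 74:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, iters=20):
    # Plain k-means as a stand-in for the clustering 64.
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres

# Offline: behavior vectors of many existing customers (training data 62).
history = rng.normal(size=(300, 5))
prototypes = kmeans(history, k=3)          # reduced number of models 66

# In-car: observe a new customer 78 and pick the prototype with the
# smallest prediction error 72; the chosen index selects the controller 74.
new_customer = rng.normal(size=(10, 5))
errors = [np.linalg.norm(new_customer - p, axis=1).mean() for p in prototypes]
chosen = int(np.argmin(errors))
print(0 <= chosen < 3)  # True
```

Each prototype index would be mapped, manually as in block 68, to one climate controller of a given aggressiveness.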
[0043] These low dimensional representations could be trained either while the car is running or with externally recorded data. If trained within the car, the sensor input should be provided to the predictive algorithm. Alternatively, using a predictive algorithm, the system automatically trains for certain sensors/inputs and learns how strongly to react to the low dimensional representation. Thus, at the beginning the preprocessed input would have less influence and later on, with more updates, would become more relevant. The generated low dimensional representation is used as an input for other algorithms in the car. For example, for predicting a next destination, the reduced representation of the temperature setting might not be very relevant, because the weather rarely influences the destination; the driving behavior is not relevant either, because the prediction is needed before the customer starts driving. However, the reduced representation of the photo might give a prediction on where the customer is going: if the customer is happy, the customer might be going to a friend; if the customer is very tense, the customer might be going to his or her family, where there might be a lot of disagreements. For example, for predicting the temperature settings or the comfort functions, the representation shown in Fig. 2 may be used. For predicting the drive mode or whether the driver will have a speech interaction with a head unit, the representation shown in Fig. 3 may be used.
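The gradually increasing influence of the preprocessed input can be sketched as a ramped blending of a raw sensor feature with the latent representation. The class name and the linear ramp schedule are assumptions for illustration; any monotone schedule would serve the same purpose.

```python
class GatedLatentInput:
    """Blend a raw sensor feature with the autoencoder's low dimensional
    representation. The latent weight ramps from 0 to 1 as more in-car
    updates are observed, so the representation gains influence over time
    (a sketch; the linear ramp is an assumption)."""

    def __init__(self, ramp_updates=100):
        self.updates = 0
        self.ramp_updates = ramp_updates

    def combine(self, raw, latent):
        # weight grows linearly with the number of updates, capped at 1
        w = min(1.0, self.updates / self.ramp_updates)
        self.updates += 1
        return (1.0 - w) * raw + w * latent
```

Early in deployment the downstream algorithm sees mostly the raw input; after enough updates it relies entirely on the learned representation.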
[0044] Fig. 1 to Fig. 4 show a latent factor modelling of sensor data for in-car user-action prediction.
Reference list
10 electronic computing device
12 user state
14 autoencoder
16 encoder
18 decoder
20 first signal data
22 second signal data
24 actuator device
26 variety of detection devices
28 photo
30 dimensions
32 error propagation
34 in-car usage
36 time of day
38 air quality
40 GPS location
42 outside temperature
44 inside temperature
46 humidity
48 setting
50 continuous feature generating
54 continuously generated features
56 potential trip segmentation
58 sensor data
60 network
62 training data
64 clustering
66 models
68 assignment
70 prototype model
72 prediction error
74 selection
76 application
78 data
80 user

Claims (6)

  1. A method for predicting a user state (12) of a user (80) of a motor vehicle by an electronic computing device (10) of the motor vehicle, wherein at least first signal data (20) are detected by at least one detection device (26) of the motor vehicle and the first signal data (20) are analyzed by an autoencoder (14) algorithm of the electronic computing device (10), wherein low dimensional second signal data (22) are processed from the analyzed first signal data (20), and the second signal data (22) are transferred to an actuator device (24) of the motor vehicle, characterized in that first signal data (20), which characterize the user (80) of the motor vehicle, are detected by the detection device (26) and as the second signal data (22) the at least one user state (12) of the user (80) is predicted by the autoencoder (14).
  2. The method according to claim 1, characterized in that a virtual sensor is provided by the electronic computing device (10).
  3. The method according to claim 1 or 2, characterized in that a variety of first signal data (20) is detected by a variety of detection devices (26) and the variety of first signal data (20) are analyzed by the autoencoder (14) and processed to the low dimensional second signal data (22).
  4. The method according to any one of the preceding claims, characterized in that the autoencoder (14) is learned by observing the interaction from the user (80) with the motor vehicle.
  5. The method according to any one of the preceding claims, characterized in that depending on the low dimensional second signal data (22) a control signal for a functional unit of the motor vehicle as the actuator device (24) is produced by the electronic computing device (10).
  6. Electronic computing device (10) for predicting a user state (12) of a user (80) of a motor vehicle, with at least an autoencoder (14), wherein the electronic computing device (10) is configured to perform a method according to any one of the claims 1 to 5.
GB2012447.5A 2020-08-11 2020-08-11 A method for predicting a user of a user with an autoencoder algorithm, as well as electronic computing device Withdrawn GB2597944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2012447.5A GB2597944A (en) 2020-08-11 2020-08-11 A method for predicting a user of a user with an autoencoder algorithm, as well as electronic computing device


Publications (2)

Publication Number Publication Date
GB202012447D0 GB202012447D0 (en) 2020-09-23
GB2597944A true GB2597944A (en) 2022-02-16

Family

ID=72520113



Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180157918A1 (en) * 2016-12-02 2018-06-07 Bayerische Motoren Werke Aktiengesellschaft System and Method for Estimating Vehicular Motion Based on Monocular Video Data
US20180170357A1 (en) * 2016-12-16 2018-06-21 Hyundai Motor Company Hybrid vehicle and method of controlling mode transition
US20190038204A1 (en) * 2017-08-01 2019-02-07 Panasonic Intellectual Property Management Co., Ltd. Pupillometry and sensor fusion for monitoring and predicting a vehicle operator's condition
KR101951595B1 (en) * 2018-05-18 2019-02-22 한양대학교 산학협력단 Vehicle trajectory prediction system and method based on modular recurrent neural network architecture
WO2019161766A1 (en) * 2018-02-22 2019-08-29 Huawei Technologies Co., Ltd. Method for distress and road rage detection
KR20190103078A (en) * 2019-07-11 2019-09-04 엘지전자 주식회사 Method and apparatus for providing service of vehicle in autonomous driving system
US20200070657A1 (en) * 2019-07-11 2020-03-05 Lg Electronics Inc. Method and apparatus for detecting status of vehicle occupant
US20200079369A1 (en) * 2018-09-12 2020-03-12 Bendix Commercial Vehicle Systems Llc System and Method for Predicted Vehicle Incident Warning and Evasion
US20200241525A1 (en) * 2019-01-27 2020-07-30 Human Autonomous Solutions LLC Computer-based apparatus system for assessing, predicting, correcting, recovering, and reducing risk arising from an operator's deficient situation awareness




Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)