GB2618313A - A method and system for detecting a state of abnormality within a cabin - Google Patents

A method and system for detecting a state of abnormality within a cabin

Info

Publication number
GB2618313A
GB2618313A
Authority
GB
United Kingdom
Prior art keywords
motor vehicle
cabin
state
ground truth
truth data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2205911.7A
Other versions
GB202205911D0 (en)
Inventor
Kannan Srividhya
Tanksale Tejas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Continental Automotive Technologies GmbH
Original Assignee
Continental Automotive Technologies GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Automotive Technologies GmbH filed Critical Continental Automotive Technologies GmbH
Priority to GB2205911.7A priority Critical patent/GB2618313A/en
Publication of GB202205911D0 publication Critical patent/GB202205911D0/en
Publication of GB2618313A publication Critical patent/GB2618313A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W10/00Conjoint control of vehicle sub-units of different type or different function
    • B60W10/18Conjoint control of vehicle sub-units of different type or different function including control of braking systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2433Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/10Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means
    • G08B17/103Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means using a light emitting and receiving device
    • G08B17/107Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means using a light emitting and receiving device for detecting light-scattering due to smoke
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/22Psychological state; Stress level or workload
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/221Physiology, e.g. weight, heartbeat, health or special needs
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/04Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438Sensor means for detecting
    • G08B21/0476Cameras to detect unsafe condition, e.g. video cameras
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/06Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B29/00Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B29/18Prevention or correction of operating errors
    • G08B29/185Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
    • G08B29/186Fuzzy logic; neural networks

Abstract

A method is provided of detecting a state of abnormality within a cabin of a motor vehicle, comprising the steps of acquiring 302 a set of ground truth data occurring within a motor vehicle cabin; transmitting 304 the dataset to a processing unit for analysis; identifying a state of abnormality within the cabin of the motor vehicle, by way of the processing unit; and executing 310 an emergency response function in response to the abnormality. The step of identifying the state of abnormality includes fusing the set of ground truth data acquired by applying a deep learning algorithm. The method may include identification of in-cabin smoke, cardiological factors, a pulse and/or an emotion of an occupant, and may use a recurrent neural network architecture. A system, a computer program product and a computer-readable medium are also disclosed.

Description

A METHOD AND SYSTEM FOR DETECTING A STATE OF ABNORMALITY WITHIN A CABIN
TECHNICAL FIELD
This disclosure relates to monitoring systems, and in particular to monitoring systems within a confined space and to identifying events occurring to a human within the confined space by applying appropriate machine learning techniques.
BACKGROUND
There is an increasing demand for driver monitoring systems in automotive applications, to increase the safety of operating a motor vehicle and avoid traffic accidents. Typically, such monitoring systems address the status of a driver operating a motor vehicle, in order to avoid traffic accidents caused by the driver's fatigue. A commonly known approach in driver monitoring systems is to issue a warning to the driver.
However, driver monitoring systems only focus on certain aspects of the driver.
Driver monitoring systems are unable to assess the situation of an entire passenger cabin of a motor vehicle. Further, the existing approach discussed above is unable to assess whether a driver is unsuitable to operate a motor vehicle due to conditions other than fatigue.
There is therefore a need to provide a method and system for detecting a state of abnormality within a cabin of a motor vehicle that overcomes, or at least ameliorates, the problems described above. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the disclosure.
SUMMARY
A purpose of this disclosure is to ameliorate the problem of safety of a motor vehicle occupant as discussed above, by providing the subject-matter of the independent claims.
The objective of this disclosure is solved by a method of detecting a state of abnormality within a cabin of a motor vehicle, the method comprising:
    • acquiring a set of ground truth data occurring within a cabin of a motor vehicle;
    • transmitting the set of ground truth data to a processing unit for analyzing the set of ground truth data acquired;
    • identifying a state of abnormality within the cabin of the motor vehicle, by way of the processing unit; and
    • executing, by way of the processing unit, an emergency response function in response to the state of abnormality identified,
wherein identifying the state of abnormality occurring within the cabin further comprises fusing the set of ground truth data acquired by applying a deep learning algorithm.
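The four claimed steps can be sketched as a minimal pipeline. This is an illustrative stand-in, not the patented implementation: all function names, feature values and the simple averaged score (in place of a trained deep network) are assumptions.

```python
# Sketch of the claimed method: acquire -> transmit/fuse -> identify -> respond.
# The "deep learning" stage is replaced by a simple averaged score for clarity.

def acquire_ground_truth():
    # Step 1: one sample from each in-cabin source (illustrative values).
    return {
        "image_emotion_score": 0.8,   # e.g. pain/discomfort likelihood from camera
        "heart_rate_bpm": 135,        # from the health sensing devices
        "smoke_level": 0.1,           # from the smoke detector, scaled 0..1
    }

def fuse_and_identify(data, threshold=0.5):
    # Steps 2-3: fuse the sources into one feature vector and score it.
    # A real system would feed this fused vector to a trained deep network.
    features = [
        data["image_emotion_score"],
        min(data["heart_rate_bpm"] / 180.0, 1.0),  # normalised heart rate
        data["smoke_level"],
    ]
    score = sum(features) / len(features)
    return score > threshold, score

def execute_emergency_response(abnormal):
    # Step 4: select an emergency response function.
    return "issue alert + request assistance" if abnormal else "no action"

abnormal, score = fuse_and_identify(acquire_ground_truth())
print(execute_emergency_response(abnormal))
```

The point of the sketch is the data flow: the identification step consumes the *fused* vector, not any single sensor stream in isolation.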
An advantage of the above described aspect of this disclosure is a method of detecting a state of abnormality within a cabin of a motor vehicle by applying a deep learning technique. In particular, the disclosed method integrates different sources of ground truth data to improve the accuracy of identifying states of abnormality within a cabin of a motor vehicle. Pain or emotion detection from a single image is not effective; it requires a sequence of images to be trained with a deep learning network. Apart from facial landmarks, the EEG/ECG signals provide accurate heart-rate information, which is crucial for determining pain or emotion.
Preferred is a method as described above or as described above as being preferred, in which the set of ground truth data comprises: an event occurring within the cabin of the motor vehicle; an event occurring to the motor vehicle occupant within the cabin of the motor vehicle, or combination thereof.
The advantage of the above aspect of this disclosure is a method which verifies that the identification of a state of abnormality by the deep learning algorithm is accurate, through real-time observation of an event occurring within the cabin of a motor vehicle and/or an event occurring to a motor vehicle occupant within the cabin.
Preferred is a method as described above or as described above as being preferred, in which acquiring the set of ground truth data further comprises:
    • acquiring, by way of an image sensing device, one or more images of a motor vehicle occupant within the cabin;
    • acquiring, by way of a collection of health sensing devices, at least one type of vital sign of a motor vehicle occupant within the cabin;
    • acquiring, by way of a smoke detector, a smoke signal within the cabin; or a combination thereof.
The advantage of the above aspect of this disclosure is a set of ground truth data from different sensing devices or detectors, suitable for data fusion or data integration, which increases the accuracy of the deep learning algorithm.
Preferred is a method as described above or as described above as being preferred, in which the set of ground truth data includes: determining a state of emotion of a motor vehicle occupant, wherein the state of emotion of a motor vehicle occupant is determined from analyzing the one or more images acquired by the image sensing device.
The advantage of the above aspect of this disclosure is to determine a state of emotion of a motor vehicle occupant using images captured by an image sensing device. An example of an image sensing device may be a camera. Examples of states of emotion may be facial expressions correlating to discomfort, pain or shock. More advantageously, this ground truth data is fused with other ground truth data acquired, to analyze and identify a state of abnormality within the cabin of the motor vehicle. Since this method applies a deep learning algorithm to analyze and identify a state of abnormality, accuracy of the results yielded improves over time.
Preferred is a method as described above or as described above as being preferred, in which the set of ground truth data is selected from at least one type of vital sign consisting of: a rate of heart beat; an electroencephalogram (EEG) signal; and an electrocardiogram (ECG) signal.
The advantage of the above aspect of this disclosure is to acquire different types of vital signs of a motor vehicle occupant to identify the health status of the motor vehicle occupant. Integrating or fusing vital sign data with other types of ground truth data allows the deep learning algorithm to accurately analyze and identify more complex situations occurring to the motor vehicle occupant, so that an appropriate response may be executed.
Preferred is a method as described above or as described above as being preferred, in which the state of abnormality comprises: a fire hazard occurring within the cabin of a motor vehicle; an amount of smoke within the cabin of a motor vehicle; a motor vehicle occupant in a condition unfit for operating a motor vehicle; or a combination thereof.
The advantage of the above aspect of this disclosure is a method of identifying, through a deep learning algorithm, a state of abnormality occurring within a cabin of a motor vehicle unrelated to the driver per se, for example smoke or a potential fire hazard within the cabin in real time, and/or identifying whether the motor vehicle occupant is in a condition unfit for operating the motor vehicle, thereby achieving safety management for the driver and passengers onboard the motor vehicle. Examples of conditions unfit for operating a motor vehicle may include a drunken state, conditions which require urgent medical attention, and/or life-threatening situations.
Preferred is a method as described above or as described above as being preferred, in which the emergency response function comprises:
    • issuing an alert warning to the motor vehicle occupant;
    • transmitting wireless communication signals for requesting assistance;
    • executing a motor vehicle braking function; or a combination thereof.
The advantage of the above aspect of this disclosure is to yield a method of responding to the state of abnormality identified, to ensure safety of the motor vehicle occupant.
Preferred is a method as described above or as described above as being preferred, in which the deep learning algorithm further comprises: processing, by way of a recurrent neural network (RNN) architecture, the set of ground truth data fused.
The advantage of the above aspect of this disclosure is to apply the integrated or fused set of ground truth data acquired for deep learning. Preferably, the deep learning architecture may be a recurrent neural network (RNN) architecture.
Preferred is a method as described above or as described above as being preferred, in which the deep learning algorithm further comprises: storing, by way of the recurrent neural network (RNN) architecture, information fused from the set of ground truth data acquired from a previous sequence; and storing, by way of the recurrent neural network (RNN) architecture, information fused from the set of ground truth data acquired in real-time.
The advantage of the above aspect of this disclosure is a method which analyzes and determines a state of abnormality through comparison of past events observed within the cabin and real-time events occurring within the cabin.
Preferred is a method as described above or as described above as being preferred, in which the deep learning algorithm further comprises: analyzing, by way of an attention layer of the RNN architecture, at least one facial characteristic of the motor vehicle occupant for determining the state of emotion of the motor vehicle occupant.
The advantage of the above aspect of this disclosure is to accurately identify facial features of the motor vehicle occupant associated with an emotion of the motor vehicle occupant, by applying deep learning to correlate facial features with emotions. Examples of how facial features or facial characteristics may be used to determine emotions include analyzing captured images to determine changes to facial features which may be correlated to feelings or mood. Facial landmarks can be estimated and tracked to detect changes in the face and emotion. Other cues include body movement, gestures, and speech.
Preferred is a method as described above or as described above as being preferred, in which the deep learning algorithm further comprises: in response to the state of emotion of the motor vehicle occupant determined, classifying, by way of a classifier of the RNN architecture, the state of abnormality within the cabin.
The advantage of the above aspect of this disclosure is to classify the state of the cabin as normal or abnormal in response to observation of the set of ground truth data acquired.
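The attention-then-classify stage described above can be sketched as follows. The feature names, relevance scores and decision threshold are illustrative assumptions; a real attention layer would learn the relevance scores, and the classifier would be a trained network rather than a threshold.

```python
# Sketch of attention over facial-characteristic features followed by a
# binary normal/abnormal decision (stand-in for the RNN classifier).
import math

def softmax(scores):
    # Attention weights: positive, summing to one.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

features = {"eyebrow_distance": 0.9, "mouth_corner": 0.2, "eye_openness": 0.1}
relevance = [2.0, 0.5, 0.1]          # illustrative "learned" relevance scores

weights = softmax(relevance)
attended = sum(w * f for w, f in zip(weights, features.values()))
label = "abnormal" if attended > 0.5 else "normal"
print(label)
```

The attention weighting lets the strongly relevant cue (here, eyebrow distance) dominate the fused score even though the other features are near zero.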
Preferred is a method as described above or as described above as being preferred, in which the deep learning algorithm further comprises: predicting, by way of a regressor of the RNN architecture, a potential state of abnormality within the cabin in response to: information fused from the set of ground truth data acquired from a previous sequence; and information fused from the set of ground truth data acquired in real-time.
The advantage of the above aspect of this disclosure is to apply a regression model to determine a correlation between events occurring in the past and events occurring in real time according to the set of ground truth data acquired, to yield an accurate prediction of a potential state of abnormality within the cabin and/or to the motor vehicle occupant within the cabin.
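A minimal way to picture the regressor's role is a least-squares fit over past fused scores, extrapolated one step ahead. This is an illustrative stand-in for the RNN regressor, with hypothetical score values.

```python
# Toy regression: fit a line to past fused abnormality scores and
# extrapolate one step ahead to predict a potential state of abnormality.

def predict_next(history):
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept   # value at the next time step

past_scores = [0.1, 0.2, 0.3, 0.4]   # rising trend in fused scores
print(round(predict_next(past_scores), 2))
```

A rising trend predicts a higher future score, so the system can act before the abnormality fully develops, which is the point of predicting a *potential* state of abnormality.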
The objective of this disclosure is solved by a system for detecting a state of abnormality within a cabin of a motor vehicle comprising: a processing unit and means adapted to execute the steps as defined above.
An advantage of the above described aspect of this disclosure is a system having a processing unit and means to execute the method as disclosed herein.
The objective of this disclosure is solved by a computer program product comprising instructions to cause a system as disclosed herein to execute the steps of the method as disclosed herein.
An advantage of the above described aspect of this disclosure is a computer program product for detecting a state of abnormality within a cabin of a motor vehicle.
The objective of this disclosure is solved by a computer-readable medium having stored thereon the computer program product as defined above.
An advantage of the above described aspect of this disclosure is a computer-readable medium having stored thereon a computer program product suitable for detecting a state of abnormality within a cabin of a motor vehicle.
BRIEF DESCRIPTION OF DRAWINGS
Other objects and aspects of this disclosure will become apparent from the following description of embodiments with reference to the accompanying drawings in which:
FIG. 1 shows a system block diagram of a system in accordance with an exemplary embodiment.
FIG. 2 shows a neural network architecture in accordance with an exemplary embodiment.
FIG. 3 shows a flowchart of a method of detecting and responding to a state of abnormality in accordance with an exemplary embodiment.
FIG. 4 shows a flowchart of a deep learning algorithm in accordance with an exemplary embodiment.
In various embodiments described by reference to the above figures, like reference signs refer to like components in several perspective views and/or configurations.
DETAILED DESCRIPTION
The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses of the disclosure. Furthermore, there is no intention to be bound by any theory presented in the preceding background of the disclosure or the following detailed description. It is the intent of this disclosure to present a system and method of detecting a state of abnormality within a cabin of a motor vehicle by applying a deep learning algorithm.
Hereinafter, the term "processing unit" used herein may broadly encompass a general-purpose processor, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a controller, a microcontroller, a state machine, and so forth. Under some circumstances, a "processing unit" may refer to an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), etc. The term "processing unit" may refer to a combination of processing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The "processing unit" may include a memory, for loading a sequence of instructions causing the "processing unit" to perform steps of actions. The term "memory" should be interpreted broadly to encompass any electronic component capable of storing electronic information. The term "memory" may refer to various types of processor-readable media such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, etc. "Memory" is said to be in electronic communication with a processor if the processor can read information from and/or write information to the memory. Memory that is integral to a processor is in electronic communication with the processor. Henceforth, the term "processing unit" may also be taken to encompass "system on chip" (SoC), which uses a single integrated circuit (IC) chip that contains multiple resources, computational units, processors and/or cores integrated on a single substrate. A single SoC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions, as well as any number of general purpose and/or specialized processors (digital signal processors, modem processors, video processors, etc.) and memory blocks (e.g., ROM, RAM, flash, etc.). Unless otherwise specifically stated, the "processing unit" is not limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the processing unit.
The term "ground truth" may refer to factual data ascertainable through direct observation in real time. In the context used herein, the term "a set of ground truth data" refers to a group of factual data acquired through direct observation in real time, used as a means for checking the accuracy of machine learning results against real-world situations. In some embodiments, the exemplary examples described herein use "set of ground truth data" to refer to multiple sources of ground truth data fused and analyzed centrally. In some embodiments, the exemplary embodiments described herein use "ground truth data" to refer to one type of data acquired or observed.
The term "data fusion" or "data integration", and grammatical variations thereof, used in the context herein collectively refers to a process of acquiring multiple sources of data information and combining the multiple sources of data information acquired, to build a more sophisticated learning model, in particular a deep learning model, for the purpose of performing a centralized analysis process.
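At the feature level, the data fusion defined above amounts to combining per-source feature vectors into a single vector for centralized analysis. The source names and feature values below are illustrative assumptions.

```python
# Feature-level fusion: per-source feature vectors are concatenated into
# one vector that a centralized deep learning model would consume.

camera_features = [0.7, 0.1]   # e.g. emotion score and gaze deviation
vitals_features = [0.75]       # e.g. normalised heart rate
smoke_features = [0.05]        # e.g. light-scattering intensity

fused = camera_features + vitals_features + smoke_features
print(fused)
```

Concatenation is the simplest fusion scheme; fusion can equally happen at the sensor, preprocessing, or classification level, as the classification codes above (G06V10/80x) indicate.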
The term "a state of emotion" used in the context herein may refer to a particular condition of a person derived from a feeling or mood in a specific circumstance.
Examples of "a state of emotion" used in the context herein may refer to feelings or mood associated with discomfort caused by physical or health conditions, such as pain, anxiety, fear, frustration, stress, panic, terror, confusion, nausea, etc.
The term "a condition unfit for operating a motor vehicle" may refer to a condition unsuitable for operating a motor vehicle, which may endanger the operator and/or other road users.
System 100
Turning now to the accompanying drawings, FIG. 1 shows system 100 in accordance with a preferred embodiment disclosed herein. System 100 includes multiple sensing devices to acquire ground truth data in real time, including a sensing device 102 to capture multiple images, a smoke detector 104 to detect smoke signals, and a collection of health sensing devices 106 to acquire data relating to the health of a user, in which a user may refer to an operator of a motor vehicle. In some embodiments, the collection of health sensing devices 106 may include in-vehicle sensing devices 108, electroencephalogram (EEG) sensing devices 108' and electrocardiogram (ECG) sensing devices 108". In some embodiments, the collection of health sensing devices 106 may refer to a single health sensing device operable to sense multiple vital signs.
The system 100 further comprises a processing unit 110 for performing a sequence of instructions in response to detection of a state of abnormality by a deep learning algorithm. In response to the state of abnormality classified by the processing unit 110, the processing unit 110 executes an instruction to perform at least one type of emergency response function. Suitable types of emergency response functions include a warning alert 116 to inform the motor vehicle occupant, transmission of wireless signals 118 for assistance, for example triggering a call for an emergency response team, and/or executing a car braking function 118 to ensure the motor vehicle comes to a halt, such that an emergency response team, i.e., paramedics, an ambulance or traffic police, may reach the motor vehicle occupant for medical attention.
A main advantage of the inventive concept disclosed herein is to utilize data fusion, integrating multiple sources of ground truth data, to improve the accuracy of the deep learning model in identifying whether an abnormal event is occurring within a cabin of a motor vehicle and/or to a motor vehicle occupant within a cabin of a motor vehicle in real time.
Facial Images from Image Sensing Device 102
In some embodiments, the image sensing device 102 may be an independent image sensing device 102 operable to capture images of the motor vehicle operator.
In some embodiments, the image sensing device 102 is an in-vehicle camera for monitoring the motor vehicle occupant, for example an in-vehicle camera which forms part of a driver monitoring system. In some embodiments, the image sensing device 102 is part of system 100. Images captured by the image sensing device 102 form part of the set of ground truth data, acquired to determine a state of emotion of the motor vehicle occupant. As explained above, the state of emotion of the motor vehicle occupant may refer to feelings or mood associated with discomfort caused by physical or health conditions, such as pain, anxiety, fear, frustration, stress, panic, terror, confusion, nausea, etc. An exemplary technique for detection of pain involves an analysis of permanent and transient facial features. Conventional approaches focus on the analysis of still images, but such analysis is not applicable in the context of motor vehicle occupants, and in particular the driver of a motor vehicle, who is expected to be constantly turning or tilting his head, watching the traffic and/or checking displays for vehicular information updates. In other words, the analysis of drivers requires multiple images captured while the driver is in motion. As such, analysis has to be made on multiple images captured in sequence to extract meaningful information and determine a state of emotion, rather than making an observation from a standalone still image. In some embodiments, the analysis may involve using a benchmark image containing facial features or landmark features of a motor vehicle occupant to enable comparison of images during such analysis. Apart from facial features, EEG/ECG signals and audio signals can be combined to improve the accuracy of determining the pain emotion. An example of a suitable device for detecting audio signals may be a microphone. These can be trained by a deep neural network, and the generated detection can then be tracked by a tracker.
Another approach to extracting the facial features is to pass the input image directly to a convolutional neural network (CNN) with a Gaussian-weighted filter or an entropy filter to detect the facial features and then classify the emotion. The classifier could be a support vector machine or a simple CNN that detects the pain emotion in the facial images. The detected features can be tracked by a traditional tracker such as a Kalman filter, or an optical flow approach could also be used. Objectively, the state of emotion identifies whether the motor vehicle occupant is in a condition suitable for driving or, on the contrary, a condition unfit for operating a motor vehicle.
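The passage above names a Kalman filter as one option for tracking detected facial features across frames. A minimal constant-position Kalman filter for a single landmark coordinate might look as follows; this is an illustrative sketch, not the patent's implementation, and the noise variances are assumed values:

```python
import numpy as np

def kalman_track_1d(measurements, process_var=1e-3, meas_var=1e-2):
    """Track one facial-landmark coordinate across frames with a
    constant-position Kalman filter (noise variances are illustrative)."""
    x, p = measurements[0], 1.0          # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + process_var              # predict: state unchanged, variance grows
        k = p / (p + meas_var)           # Kalman gain
        x = x + k * (z - x)              # correct with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Each filtered estimate lies between the previous estimate and the new measurement, smoothing detector jitter while the driver's head moves.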
An exemplary embodiment of how the captured images may be used to identify a state of emotion includes identifying facial characteristics in at least one image captured by the image sensing device 102, for example the position of the motor vehicle occupant's eyebrows and the distance between the two eyebrows. If there is a change in the distance between the two eyebrows, it may be an indication that the motor vehicle occupant is in pain or frowning. Whether the motor vehicle occupant is indeed in pain may be concluded by using data fusion with multiple sources of ground truth data, to increase the accuracy of the prediction made by a deep learning algorithm.
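The eyebrow-distance cue described above can be sketched as a simple comparison against the benchmark image's distance. The landmark indexing and the relative-change threshold below are assumptions for illustration only; real landmark models use their own numbering:

```python
import numpy as np

def inter_eyebrow_distance(landmarks, left_idx=0, right_idx=1):
    """Euclidean distance between two eyebrow landmarks (x, y) in one frame.
    Indices are hypothetical placeholders."""
    left = np.asarray(landmarks[left_idx], dtype=float)
    right = np.asarray(landmarks[right_idx], dtype=float)
    return float(np.linalg.norm(left - right))

def eyebrow_change_flags(baseline, frames, rel_threshold=0.15):
    """Flag frames whose inter-eyebrow distance deviates from the benchmark
    distance by more than rel_threshold (assumed value)."""
    return [abs(inter_eyebrow_distance(f) - baseline) / baseline > rel_threshold
            for f in frames]
```

A flagged frame would then be one input among several to the fusion stage, not a pain verdict on its own.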
Smoke Detector 104

In some embodiments, a smoke detector 104 is used to acquire ground truth data related to smoke detection, for example a potential fire hazard. The smoke detector 104 may be any type of smoke detecting device operable to detect an amount of smoke within a confined space, such as a cabin of a motor vehicle.
Collection of health sensing devices 106

Examples of in-vehicle sensing devices may include biosensors installed in a vehicle seat or seat belt, strategically positioned to detect vital signs of a driver or operator of a motor vehicle, such as a heart rate. A suitable type of biosensor may be a pulse rate sensor. Further, in some embodiments, a wearable device 108', 108" is a single device suitable for detecting both EEG and ECG signals from the operator. In some embodiments, the EEG sensing device 108' may be a wearable EEG sensing device operable to detect EEG signals, for example an EEG headset dedicated to detecting signals relating to brain activities. In some embodiments, the ECG sensing device 108" is a portable, wearable sensing device operable to detect heart beats or pulses. Suitable examples of ECG sensing devices include portable cardiac monitoring devices, smartwatches and pulse oximeters. As explained above, a combination of pulse sensing and audio signals improves the accuracy of determining the pain emotion.
In most embodiments, the ground truth data acquired through the collection of health sensing devices, in particular vital signs of a motor vehicle occupant, may be integrated with other sources of ground truth data, such as the facial features used to determine the state of emotion from images captured by the image sensing device 102, by applying a data fusion technique, to predict whether the motor vehicle occupant is indeed in a condition unfit for operating a motor vehicle. Examples of conditions unfit for operating a motor vehicle include being under the influence of alcohol, a medical condition such as a heart attack which requires urgent medical attention, and a life-threatening situation such as a traffic accident.
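One common way to realize the data fusion described here is early (feature-level) fusion: the per-source feature vectors are concatenated into a single input for the downstream network. This is a generic sketch under that assumption; the patent does not specify the fusion operator, and the feature sizes are illustrative:

```python
import numpy as np

def fuse_ground_truth(facial_feats, heart_rate, eeg_feats, ecg_feats, smoke_level):
    """Feature-level fusion: concatenate the per-sensor feature vectors
    (image 102, health devices 106, smoke detector 104) into one input
    vector for the deep learning model. Sizes are illustrative."""
    parts = [np.atleast_1d(np.asarray(p, dtype=float))
             for p in (facial_feats, [heart_rate], eeg_feats, ecg_feats, [smoke_level])]
    return np.concatenate(parts)
```

The fused vector would then be fed to the LSTM architecture 204 described in the next section.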
Advantageously, the integration of multiple sources of ground truth data in real time enriches a deep learning model as disclosed herein, to yield a system 100 operable to accurately identify and classify whether there is a state of abnormality within a cabin of a motor vehicle, such that an appropriate response may be executed to ensure the safety of the motor vehicle occupant.
RNN Architecture 200

FIG. 2 of the accompanying drawings shows a neural network architecture, in particular a recurrent neural network (RNN) architecture 200, and more particularly a long short-term memory (LSTM) architecture, in accordance with an exemplary embodiment. It shall be understood by a skilled practitioner that other types of deep learning architectures may be applicable, but an RNN architecture is preferred because RNN architectures carry information from past sequences as input for identifying issues in real time.
As can be seen from FIG. 2, a set of ground truth data is acquired from multiple sources of the system 100, namely the image sensing device 102, the smoke detector 104 and the collection of health sensing devices 106 for acquiring signals in relation to heart rate, EEG signals and ECG signals. These sources define the sets of ground truth data 202, 202' and 202", acquired as factual data through direct observation in real time, such that the set of ground truth data may be checked against results determined by the machine learning model, and more particularly the deep learning model, disclosed herein.
The set of ground truth data acquired from the multiple sources of sensing devices, i.e. the image sensing device 102, the smoke detector 104 and the collection of health sensing devices 106 including the in-vehicle sensing device 108, the EEG sensing device 108' and the ECG sensing device 108", is integrated or fused; specifically, the set of ground truth data is fused for an analysis. The fused data is then used to identify a state of abnormality within a cabin of a motor vehicle, by using a long short-term memory (LSTM) architecture 204. One of the advantages of including an LSTM architecture in an RNN architecture is that it retains values of the ground truth data analyzed over an arbitrary time interval, i.e. a long time or a short time, such that the values retained may be used as a piece of information for future input. A typical LSTM architecture comprises a cell, an input gate, an output gate, and a forget gate, where the cell is the component that retains values over arbitrary time intervals. The input gate controls when new information can flow into the memory of the LSTM architecture, and the output gate controls when the information contained in the cell is used in the output. The forget gate controls when a piece of information can be forgotten, thus allowing the cell to process new data. Whether a piece of information requires attention or is to be ignored is decided in response to the output of the respective gate function. The application of the LSTM architecture 204 allows the set of ground truth data analyzed to be compared with what is determined by the processing unit 110, to solve the problem of accurately identifying a state of abnormality in real time.
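The gate mechanics described above can be written out as a single LSTM step. This is the standard textbook formulation in plain numpy, shown only to make the input/forget/output gate roles concrete; it is not code from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the parameters for the input (i),
    forget (f), cell-candidate (g) and output (o) gates, in that order."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b          # (4n,) pre-activations
    i = sigmoid(z[0:n])                 # input gate: admit new information
    f = sigmoid(z[n:2 * n])             # forget gate: discard old information
    g = np.tanh(z[2 * n:3 * n])         # candidate cell values
    o = sigmoid(z[3 * n:4 * n])         # output gate: expose the cell state
    c = f * c_prev + i * g              # cell retains values over time intervals
    h = o * np.tanh(c)                  # hidden state passed to the next step
    return h, c
```

With all-zero parameters the gates sit at 0.5, so half of the previous cell state is carried forward, which illustrates how the cell retains earlier fused observations.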
In an exemplary embodiment, the images of the motor vehicle occupant captured by the image sensing device 102 and analyzed from a past sequence, which were classified as a state of normality, i.e. normal events occurring within a cabin and/or normal events occurring to a motor vehicle occupant, are retained in the cell of the LSTM architecture 204. Subsequent to the retention of the analyzed ground truth data, when a state of abnormality within the cabin is identified, the analyzed ground truth data retained by the LSTM architecture 204 is retrieved from the output gate of the LSTM architecture 204 and compared with the set of ground truth data acquired in real time. In this manner, the accuracy of the deep learning model improves over time.
The RNN architecture further comprises an attention layer 206 operable to memorize a sequence of analyzed ground truth data. In machine learning, an attention layer is one which mimics the cognitive attention of the human brain. In the context herein, the attention layer 206 is operable to identify the integrated ground truth data that is important for classifying events considered normal and events considered abnormal. This improves the accuracy of identifying a state of abnormality.
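An attention layer of the kind described is often implemented as scaled dot-product attention over the sequence of LSTM hidden states: the softmax weights pick out the time steps most relevant to a query. The patent does not specify the attention variant, so this is a generic sketch under that assumption:

```python
import numpy as np

def attention_pool(hidden_states, query):
    """Scaled dot-product attention over LSTM hidden states.
    hidden_states: (timesteps, d); query: (d,).
    Returns the attention-weighted context vector and the weights."""
    d = hidden_states.shape[1]
    scores = hidden_states @ query / np.sqrt(d)   # relevance of each time step
    scores -= scores.max()                        # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ hidden_states, weights
```

The weights always sum to one, so the context vector is a convex combination of the hidden states, emphasizing the frames that matter for the normal/abnormal decision.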
The RNN architecture further comprises a classifier and regressor layer 208, operable to perform a classification task that maps the retained values against the output value from the LSTM architecture 204, to classify an event occurring within a cabin of a motor vehicle, or an event occurring to a motor vehicle occupant within a cabin of a motor vehicle, and to perform a regression task, to determine variance, bias and/or error in the output value. Once the output value is determined, the classifier and regressor layer 208 labels or classifies the output value as a state of normality 210 or a state of abnormality 212.
Flowchart to Execute Emergency Response 300

FIG. 3 shows a flowchart of a method of detecting and responding to a state of abnormality in accordance with an exemplary system disclosed herein.
At step 302, a set of ground truth data is acquired. The set of ground truth data comprises an integrated or fused data acquired by an image module, a smoke detector and a collection of health sensing devices. The collection of health sensing devices may include in-vehicle sensing device, EEG sensing device and ECG sensing device for observing or sensing vital signs of a motor vehicle occupant.
At step 304, the acquired set of ground truth data is transmitted to the LSTM architecture 204 of an RNN architecture as disclosed herein.
In a next step, i.e. step A, a decision is made to identify a state of abnormality occurring within a cabin of a motor vehicle and/or a state of abnormality occurring to a motor vehicle occupant within a cabin of a motor vehicle, using a deep learning algorithm. At step A, if the output value of the deep learning algorithm classifies or identifies the event as normal, the fused and analyzed set of ground truth data is retained in the memory or cell of the LSTM architecture 204. In the event the output value of the deep learning algorithm indicates that the event occurring within a cabin of a motor vehicle and/or the event occurring to a motor vehicle occupant within a cabin of a motor vehicle is a state of abnormality, a response may be performed at step 310, to ensure the safety of the motor vehicle occupant.
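The branch at step A can be sketched as a single function: a normal classification retains the sample for the LSTM memory, while an abnormal one triggers the emergency response functions of FIG. 1. The threshold and the response identifiers are illustrative assumptions, not values from the patent:

```python
def respond_to_classification(p_abnormal, threshold=0.5):
    """Step A as a function. p_abnormal is the deep learning algorithm's
    abnormality score; the 0.5 threshold is an assumed value."""
    if p_abnormal < threshold:
        # State of normality (306): keep the fused sample in the LSTM cell.
        return ["retain_in_lstm_cell"]
    # State of abnormality (308): execute emergency responses (step 310),
    # e.g. warning alert 116, wireless assistance request 118, braking.
    return ["warning_alert",
            "transmit_wireless_assistance_request",
            "execute_car_braking"]
```

In a deployed system the returned identifiers would dispatch to the processing unit's emergency response functions 114.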
For clarity, the emergency response functions may be executed by a processing unit of a system as disclosed above.
Deep Learning Algorithm 400

FIG. 4 shows a flowchart of a deep learning algorithm in accordance with an exemplary embodiment.
At step 402, a set of acquired ground truth data is integrated or fused and processed for an analysis. An output value analyzed from a past event may be stored at step 404, and a subsequent output value analyzed in real time may also be stored at step 404', for mapping the input of the acquired set of ground truth data against an analyzed output value.
At step 406, the fused set of ground truth data may be analyzed for details. By way of example, the deep learning algorithm analyzes facial features or facial characteristics of the captured images to identify changes. Step 406 helps to classify the changes in facial characteristics into specific states of emotion of the motor vehicle occupant. The same concept of analyzing acquired ground truth data may be applied to the detection of smoke and the detection of health vital signs. As explained above, retaining output values in the cell of the LSTM architecture 204 enriches the accuracy of classification and prediction of a state of abnormality by the deep learning algorithm.
At step 408, the deep learning algorithm 400 classifies whether a state of abnormality is observed, using the classifier and regressor layer 208, and in the next step 410 predicts a state of abnormality, in response to the information fused from the set of ground truth data.
The aforesaid sequence of receiving the acquired ground truth data, and performing and calculating an output value by the deep learning algorithm, may be embodied in a computer software program product executed by a processing unit.
Thus, a system and method of detecting a state of abnormality within a cabin of a motor vehicle, having a deep learning algorithm to perform and process data fusion of multiple sources of ground truth data, have been provided. In particular, the system and method disclosed increase the accuracy of predicting a state of abnormality by using data fusion and a deep learning algorithm, such that an appropriate emergency response can be executed, to ensure the safety of the driver and other traffic users.
While exemplary embodiments have been presented in the foregoing detailed description of the disclosure, it should be appreciated that a vast number of variations exist.
It should further be appreciated that the exemplary embodiments are only examples, and are not intended to limit the scope, applicability, operation or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the disclosure, it being understood that various changes may be made in the function and arrangement of elements and method of operation described in the exemplary embodiment without departing from the scope of the disclosure as set forth in the appended claims.
List of Reference Signs

100 System block diagram
102 Image sensing device
104 Smoke detector
106 Collection of health sensing devices
108 In-vehicle sensing device
108' EEG sensing device
108" ECG sensing device
110 Processing unit
112 State of abnormality classifier
114 Emergency response functions
116 Warning alert
118 Transmission of wireless signals for assistance
Execute car braking function
200 RNN architecture
202, 202', 202" Acquiring of ground truth data
204 Long short-term memory (LSTM) architecture
206 Attention layer
208 Classifier and regressor layer
210 State of normality determined
212 State of abnormality determined
300 Flowchart for response to state of abnormality
302 Acquiring set of ground truth data
304 Transmitting set of ground truth data
A Identifying state of normality using deep learning algorithm
306 State of normality identified
308 State of abnormality identified
310 Execute emergency response functions
400 Flowchart of deep learning algorithm
402 Processing integrated ground truth data fusion
404 Storing information acquired (past sequence)
404' Storing information acquired (real time)
406 Analyzing details of set of ground truth data
408 Classifying state of abnormality
410 Predicting state of abnormality

Claims (15)

  1. A method of detecting a state of abnormality within a cabin of a motor vehicle, the method comprising: * acquiring a set of ground truth data occurring within a cabin of a motor vehicle; * transmitting the set of ground truth data to a processing unit for analyzing the set of ground truth data acquired; * identifying a state of abnormality within the cabin of the motor vehicle, by way of the processing unit; and * executing, by way of the processing unit, an emergency response function in response to the state of abnormality identified, wherein identifying the state of abnormality occurring within the cabin further comprises fusing the set of ground truth data acquired by applying a deep learning algorithm.
  2. The method of claim 1, wherein the set of ground truth data comprises: an event occurring within the cabin of the motor vehicle; an event occurring to the motor vehicle occupant within the cabin of the motor vehicle; or a combination thereof.
  3. The method of claim 1 or 2, wherein acquiring the set of ground truth data further comprises: * acquiring, by way of an image sensing device, one or more images of a motor vehicle occupant within the cabin; * acquiring, by way of a collection of health sensing devices, at least one type of vital sign of a motor vehicle occupant within the cabin; * acquiring, by way of a smoke detector, a smoke signal within the cabin; or a combination thereof.
  4. The method of claims 1 to 3, wherein the set of ground truth data includes determining a state of emotion of a motor vehicle occupant, wherein the state of emotion of a motor vehicle occupant is determined from analyzing the one or more images acquired by the image sensing device.
  5. The method of claims 1 to 3, wherein the set of ground truth data is selected from the at least one type of vital sign consisting of: a rate of heart beat; electroencephalogram (EEG); and electrocardiogram (ECG).
  6. The method of claim 1, wherein the state of abnormality comprises: a fire hazard occurring within the cabin of a motor vehicle; an amount of smoke within the cabin of a motor vehicle; a motor vehicle occupant in a condition unfit for operating a motor vehicle, or combination thereof.
  7. The method of claim 1, wherein the emergency response function comprises: * issuing an alert warning to the motor vehicle occupant; * transmitting wireless communication signals for requesting assistance; * executing motor vehicle braking function, or combination thereof.
  8. The method of claim 1, wherein the deep learning algorithm further comprises: processing, by way of a recurrent neural network (RNN) architecture, the set of ground truth data fused.
  9. The method of claim 8, wherein the deep learning algorithm further comprises: storing, by way of the recurrent neural network (RNN) architecture, information fused from the set of ground truth data acquired from a previous sequence; and storing, by way of the recurrent neural network (RNN) architecture, information fused from the set of ground truth data acquired in real-time.
  10. The method of claim 1 or claims 8 to 9, wherein the deep learning algorithm further comprises: analyzing, by way of an attention layer of the RNN architecture, at least one facial characteristic of the motor vehicle occupant for determining the state of emotion of the motor vehicle occupant.
  11. The method of claim 1 or claims 8 to 10, wherein the deep learning algorithm further comprises: in response to the state of emotion of the motor vehicle occupant determined, classifying, by way of a classifier of the RNN architecture, the state of abnormality within the cabin.
  12. The method of claim 11, wherein the deep learning algorithm further comprises: predicting, by way of a regressor of the RNN architecture, a potential state of abnormality within the cabin in response to: information fused from the set of ground truth data acquired from a previous sequence; and information fused from the set of ground truth data acquired in real-time.
  13. A system for detecting a state of abnormality within a cabin of a motor vehicle, comprising a processing unit and means adapted to execute the steps of claims 1 to 12.
  14. A computer program product comprising instructions to cause the system of claim 13 to execute the steps of claims 1 to 12.
  15. A computer-readable medium having stored thereon the computer program product of claim 14.
GB2205911.7A 2022-04-22 2022-04-22 A method and system for detecting a state of abnormality within a cabin Pending GB2618313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2205911.7A GB2618313A (en) 2022-04-22 2022-04-22 A method and system for detecting a state of abnormality within a cabin

Publications (2)

Publication Number Publication Date
GB202205911D0 GB202205911D0 (en) 2022-06-08
GB2618313A true GB2618313A (en) 2023-11-08

Family

ID=81851934


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300276A (en) * 2018-07-27 2019-02-01 昆明理工大学 A kind of car inside abnormity early warning method based on Fusion
US20190049957A1 (en) * 2018-03-30 2019-02-14 Intel Corporation Emotional adaptive driving policies for automated driving vehicles
US20190092337A1 (en) * 2017-09-22 2019-03-28 Aurora Flight Sciences Corporation System for Monitoring an Operator
WO2019161766A1 (en) * 2018-02-22 2019-08-29 Huawei Technologies Co., Ltd. Method for distress and road rage detection
CN113370786A (en) * 2021-06-10 2021-09-10 桂林电子科技大学 Vehicle-mounted drunk driving comprehensive detection system for unit vehicle based on multi-source information fusion
US20210331681A1 (en) * 2019-05-31 2021-10-28 Lg Electronics Inc. Vehicle control method and intelligent computing device for controlling vehicle

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: CONTINENTAL AUTOMOTIVE TECHNOLOGIES GMBH

Free format text: FORMER OWNER: CONTINENTAL AUTOMOTIVE GMBH