WO2020136658A1 - Systems, devices and methods for vehicle post-crash support - Google Patents
Systems, devices and methods for vehicle post-crash support
- Publication number
- WO2020136658A1 (PCT/IL2019/051422)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- crash
- data
- vehicle
- images
- occupant
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6887—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
- A61B5/6893—Cars
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/254—Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2505/00—Evaluating, monitoring or diagnosing in the context of a particular type of medical care
- A61B2505/01—Emergency care
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02416—Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Detecting, measuring or recording devices for evaluating the respiratory organs
- A61B5/0816—Measuring devices for examining respiratory frequency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/15—Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/008—Registering or indicating the working of vehicles communicating information to a remotely located station
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
Definitions
- the present invention, in some embodiments thereof, relates to a system and method for analyzing in-cabin data to provide medical-related information following a vehicle accident.
- the present disclosure provides a system, device and method to evaluate the medical condition of one or more vehicle occupants, for example in a post-crash situation.
- a system for detecting breathing or heartbeat of a vehicle occupant comprises at least one illumination source configured to project light in a predefined light pattern on a scene; an imaging device configured to capture a plurality of images, said plurality of images comprising reflections of said light pattern from one or more occupants in the scene; and at least one processor configured to extract breathing or heartbeat data of said one or more occupants by analyzing the reflections of said light pattern.
- the present disclosure also provides a method for analyzing heartbeat or breathing signals from said plurality of images of the said reflected light pattern.
- the analysis comprises the following steps: detecting one or more changes in one or more speckle patterns of at least one of the reflections of said light pattern in at least some consecutive images of the plurality of images; identifying micro-vibrations of the at least one object based on said speckle pattern analysis; and analyzing said micro-vibrations to extract a breathing or heartbeat signal.
- the breathing or heartbeat signal is used to assess the medical condition of the occupant.
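- As a hedged illustration of the kind of analysis described above (not the claimed method itself), the sketch below assumes the per-frame speckle displacement has already been reduced to a one-dimensional micro-vibration time series; the sampling rate, band limits and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_vital_rates(vibration, fs=60.0):
    """Estimate breathing and heartbeat rates (per minute) from a
    micro-vibration time series sampled at fs Hz. Band limits are
    illustrative: ~0.1-0.5 Hz for respiration, ~0.8-3 Hz for heartbeat."""
    def bandpass(signal, lo, hi):
        b, a = butter(3, [lo, hi], btype="band", fs=fs)
        return filtfilt(b, a, signal)

    def dominant_freq(signal):
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]

    breathing_hz = dominant_freq(bandpass(vibration, 0.1, 0.5))
    heartbeat_hz = dominant_freq(bandpass(vibration, 0.8, 3.0))
    return breathing_hz * 60.0, heartbeat_hz * 60.0
```

- In practice the signal would need to span at least several seconds so that the respiration band contains a few full cycles; a shorter or noisier window would make the spectral peaks unreliable.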
- the system comprises the use of an illumination source in the near infra-red (NIR) spectral range.
- the system comprises a communication module configured to send the medical-related data to a first responders’ rescue team.
- the data is uploaded to a cloud service to be extracted by a first responders’ rescue team.
- the system comprises a memory.
- the system is configured to record post-crash images captured by the imaging device.
- said post-crash images are sent to a first responders’ team by communication module or uploaded to a cloud service.
- the system is configured to store pre-crash images.
- the system is further configured to send the pre-crash images to a first responders’ team or to upload said images to a cloud service to be extracted by a first responders’ team.
- the processor is configured to analyze one or more post-crash images and extract a post-crash body pose thereby providing an assessment of the nature or severity of an injury.
- the processor is configured to analyze one or more stored pre-crash images in order to extract a pre-crash body pose.
- the processor is configured to analyze said pre-crash body pose, optionally in combination with any available information regarding the physical parameters of the impact, thereby providing an assessment of the nature or severity of an injury.
- the processor is configured to analyze one or more pre-crash images in order to extract at least one body attribute including, but not limited to, mass, height, width, volume, age, and gender.
- the processor is further configured to use said at least one body attribute, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
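- Purely as an illustration of combining body attributes with impact parameters (the disclosure does not specify a particular model), the toy sketch below maps a few plausible features to an injury-probability score with a logistic function; the feature set and placeholder weights are assumptions and would in practice be learned from crash data.

```python
import numpy as np

def injury_risk(age, mass_kg, delta_v_kmh, belted, weights=None):
    """Toy logistic model mapping occupant attributes and impact severity
    to an injury-probability score in [0, 1]. The weights below are
    placeholders, not values from the disclosure."""
    w = weights or {"bias": -4.0, "age": 0.03, "mass": 0.005,
                    "delta_v": 0.08, "unbelted": 1.2}
    z = (w["bias"] + w["age"] * age + w["mass"] * mass_kg
         + w["delta_v"] * delta_v_kmh + (0.0 if belted else w["unbelted"]))
    return 1.0 / (1.0 + np.exp(-z))
```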
- the assessment is sent to a first responders’ team by communication module or uploaded to a cloud service.
- a vehicle impact sensor comprises a microswitch located inside the vehicle’s bumper, wherein said microswitch is configured to detect strong impacts.
- the system uses the same sensors as the vehicle’s airbag deployment system.
- the system is triggered to initiate the analysis by visually detecting rapid motion of an occupant inside the vehicle.
- An example of such a detection mechanism is based on an optical flow algorithm configured to analyze the relative displacement of objects between two or more consecutive frames.
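- A minimal sketch of such an optical-flow trigger, using OpenCV's dense Farneback flow; the displacement threshold and the use of a high percentile are illustrative assumptions.

```python
import cv2
import numpy as np

def rapid_motion_detected(prev_gray, curr_gray, px_threshold=15.0):
    """Return True when the upper range of per-pixel displacement between
    two consecutive grayscale frames exceeds px_threshold pixels, which may
    indicate abrupt, crash-like occupant motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Use a high percentile so a fast-moving occupant region can trigger
    # even if most of the cabin is static.
    return float(np.percentile(magnitude, 95)) > px_threshold
```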
- the system comprises at least one high speed imaging device.
- the high speed imaging device is configured to capture a plurality of images at high rate during accident occurrence.
- the system processor is configured to analyze said high rate images thereby assessing the nature or severity of injuries.
- the assessment is based on visually tracking the impacts occurring inside the cabin during the accident.
- the system comprises a depth sensor configured to produce an in-cabin depth map.
- the system is configured to store the depth data.
- the processor is configured to use the depth map to analyze post-crash body pose thereby assessing the nature and severity of an injury.
- the processor is configured to analyze the stored pre-crash depth map thereby providing information regarding a pre-crash body pose of a vehicle occupant.
- the processor is further configured to use said body pose, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
- the processor is configured to analyze one or more pre-crash depth maps to extract a body attribute including, but not limited to, mass, height, width, volume, age, and gender.
- the processor is further configured to use the at least one body attribute, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
- the analysis comprises the use of a combination of at least one image and at least one depth map.
- the system uses a deep learning method such as, but not limited to, convolutional neural network, in order to provide said analysis.
- a system for providing medical information of at least one occupant in a vehicle cabin comprising: a sensing module comprising at least one sensor configured to capture sensory data of the vehicle cabin; and a control module comprising at least one processor said processor is configured to: receive said sensory data from said sensor; and analyze said sensory data using one or more analysis methods to provide the medical information of said at least one occupant.
- the analysis methods are one or more of: computer vision methods; machine learning methods; deep neural network methods; signal processing methods.
- the medical information comprises the medical status of said at least one vehicle occupant following an accident of said vehicle.
- In an embodiment, the medical information comprises medical condition evaluation or injury assessment of the at least one occupant.
- the information comprises the medical status of said at least one vehicle occupant following an accident of the vehicle.
- the system comprises a communication module configured to send the information to a first responder team.
- the at least one sensor is an image sensor.
- the sensing module further comprises at least one illuminator, said at least one illuminator comprises a light source configured to project light pattern onto the vehicle cabin.
- the light source comprises a laser or a Light Emitting Diode (LED).
- the light source comprises one or more optical elements for splitting a single light beam generated by the light source, said one or more optical elements are selected from the group consisting of: DOE; split mirrors; and diffuser.
- the sensing module comprises a depth sensor.
- the depth sensor is configured to capture depth data by projecting a light pattern onto the vehicle cabin and wherein the at least one processor is configured to analyze the location of known light pattern elements in said depth data.
- the pose of said at least one occupant is estimated by analyzing the depth data using said computer vision methods.
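- A minimal sketch of the triangulation that underlies such a structured-light depth estimate, assuming each known pattern element has been detected and matched to its position on a calibrated reference plane; the sign convention, focal length and baseline here are assumptions of this sketch.

```python
def depth_from_pattern_shift(observed_x, reference_x, focal_px, baseline_m,
                             reference_depth_m):
    """Estimate the depth of a projected pattern element from its horizontal
    shift relative to where it appears at a known reference depth, using the
    standard triangulation relation z = f * b / disparity."""
    # Disparity of the element when it lies on the reference plane.
    d_ref = focal_px * baseline_m / reference_depth_m
    # Observed disparity = reference disparity + measured pixel shift
    # (sign convention is an assumption of this sketch).
    d_obs = d_ref + (reference_x - observed_x)
    return focal_px * baseline_m / d_obs
```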
- the sensing module comprises a micro-vibration sensor.
- the micro-vibration sensor is configured to: project light onto the vehicle cabin scene; and capture a plurality of images, wherein each image of said plurality of images comprises reflected diffused light elements.
- the processor is configured to: receive the captured images; and analyze one or more temporal changes in the speckle pattern of at least one of the plurality of reflected diffused light elements in at least some consecutive images of the plurality of images to yield micro-vibration data.
- the sensing module comprises a combination of at least two sensors selected from a group comprising: an image sensor, a depth sensor, and a micro-vibration sensor.
- the processor is further configured to classify or identify an attribute of one or more objects based on at least one micro-vibration data.
- the sensory data comprises a plurality of images and wherein the at least one processor is further configured to classify the at least one vehicle occupant using said computer vision methods by visually analyzing at least one image of the plurality of images captured by the at least one sensor.
- the sensing module is configured to detect an imminent crash of the vehicle.
- the detection of an imminent crash is performed by one or more of:
- the at least one processor is configured to: analyze said sensory data, wherein said sensory data is captured prior to said detection of the imminent crash, during said crash and following said crash; and assess the medical status of said at least one vehicle occupant following the crash based on said analyzed sensory data.
- the sensory data captured prior to the detection of the imminent crash is analyzed using said machine vision methods to extract pre-crash categorization.
- the pre-crash categorization comprises one or more of the at least one occupant body pose, body mass, age, body dimensions, and gender.
- the system is configured to provide a measure of the likelihood and severity of an injury from a car accident.
- the system is configured to provide high-rate data of the vehicle cabin following the detection of the imminent crash.
- the high-rate data is used to extract trajectories of the at least one occupant or one or more objects in the vehicle.
- the extracted trajectories are used to assess the likelihood and severity of the injury of said at least one occupant.
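- As a hedged illustration of how high-rate tracked positions might be reduced to a severity proxy (the disclosure leaves the exact criterion open), the sketch below computes the peak acceleration of a tracked body point by finite differences; the choice of proxy and the absence of smoothing are simplifications.

```python
import numpy as np

def peak_deceleration_g(positions_m, fs_hz):
    """Given a high-rate trajectory of a tracked body point (N x 3 array of
    positions in metres) sampled at fs_hz, return the peak acceleration
    magnitude in g, using simple finite differences."""
    velocity = np.gradient(positions_m, 1.0 / fs_hz, axis=0)
    acceleration = np.gradient(velocity, 1.0 / fs_hz, axis=0)
    return float(np.linalg.norm(acceleration, axis=1).max()) / 9.81
```

- A severity flag could then compare this value against an injury-criterion threshold; which criterion to use is left open by the disclosure.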
- the system is configured to record in-cabin post-crash information; and assess the medical status of at least one car occupant.
- the system is configured to provide at least one of pre-crash, during-crash and post-crash medical status assessment of at least one car occupant.
- the control module is configured to be in wireless communication with an external unit and transmit the information to a first responder team.
- the information is uploaded to a cloud service.
- the sensing module is mounted on said vehicle roof or ceiling.
- the analysis is conducted using at least one deep learning algorithm.
- a system for estimating the medical state of one or more occupants in a vehicle cabin following an accident of the vehicle, comprising: a sensing module comprising: an illuminator comprising one or more illumination sources configured to project light in a structured light pattern on the vehicle cabin; and at least one image sensor configured to capture sensory data prior to, during and following a crash of said vehicle, said sensory data comprising a sequence of 2D (two dimensional) images and 3D (three dimensional) images of the vehicle cabin, wherein at least one of the 2D images comprises reflections of said structured light pattern from the one or more occupants in the cabin; and a control module comprising: a memory module; and at least one processor, said at least one processor configured to: receive said captured sensory data from the sensing module; receive an impact detection signal of an imminent crash of the vehicle; store the received sensory data, captured prior to receipt of said impact detection signal, in said memory module; and analyze said received sensory data using a computer vision or machine learning algorithm to yield pre-crash assessment data comprising identification of the state of the one or more occupants prior to the crash.
- the at least one processor is further configured to: receive sensory data captured following the crash; and analyze the received sensory data captured following the crash using a computer vision or machine learning algorithm to yield post-crash assessment data of said one or more occupants.
- the post-crash assessment data comprises one or more of: heartbeat; respiration rate; body pose; body motion; visible wounds.
- the processor is further configured to combine said post-crash assessment data with said during-crash assessment data and said pre-crash assessment data to yield said medical information of said one or more occupants following the crash.
- the heartbeat or respiration rate are identified by analyzing one or more changes in one or more speckle patterns of at least one of the reflections of said structured light pattern in at least some consecutive images of the plurality of images, and identifying the vibrations of the one or more occupants based on said speckle pattern analysis.
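- One plausible way to obtain the per-spot change signal referred to above is to correlate a small window around each pattern reflection across consecutive frames; the sketch below uses OpenCV's phase correlation, and the window size is an assumption.

```python
import cv2
import numpy as np

def speckle_shift(prev_frame, curr_frame, spot_xy, win=32):
    """Estimate the sub-pixel (dx, dy) shift of the speckle pattern inside a
    win x win window centred on one projected spot, between two consecutive
    frames. The shift magnitude over time forms the micro-vibration signal."""
    x, y = spot_xy
    half = win // 2
    prev_win = prev_frame[y - half:y + half, x - half:x + half].astype(np.float32)
    curr_win = curr_frame[y - half:y + half, x - half:x + half].astype(np.float32)
    (dx, dy), _ = cv2.phaseCorrelate(prev_win, curr_win)
    return dx, dy
```

- Feeding the resulting shift magnitudes over time into a band-pass and spectral step (as sketched earlier) would then yield the respiration and heartbeat estimates.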
- one or more of the body pose, body motion, and visible wounds are detected by analyzing the sequence of 2D images or 3D images of the vehicle cabin.
- the body pose or body motion are identified using one or more of a skeleton model, optical flow, or visual tracking methods.
- a method for providing medical information of at least one occupant in a vehicle cabin comprising: receiving captured sensory data of said vehicle cabin from a sensing module; receiving an impact detection signal of an imminent crash of the vehicle; storing the received sensory data, captured prior to receipt of said impact detection signal, in a memory module; analyzing said received sensory data using a computer vision or machine learning algorithm to yield pre-crash assessment data comprising identification of the state of the at least one occupant prior to the crash; receiving sensory data captured during said crash from the sensing module; analyzing the received sensory data captured during said crash to yield during-crash assessment data comprising body trajectories of the one or more occupants; and providing medical information of said one or more occupants following the crash based on said during-crash assessment data and pre-crash assessment data.
- a method for providing a first responder with information regarding the medical status or injury of a vehicle occupant after an accident comprises the steps of: i. utilizing an in-cabin sensor to monitor the vehicle’s cabin;
- Figures 1A-1C illustrate, respectively, a side view of a vehicle prior to the vehicle’s accident, during the accident and following the accident, in accordance with some embodiments of the present disclosure
- Figure 2A shows an example of system raw data, in accordance with some embodiments of the present disclosure
- Figure 2B is a flow diagram illustrating steps of identifying objects, such as hidden objects in a vehicle cabin and providing information on the identified objects, in accordance with some embodiments of the present disclosure
- FIG. 2C illustrates a more detailed block diagram of the vehicle comprising the monitoring system, in accordance with some embodiments of the present disclosure
- Figure 3A shows a high-level block design of the system comprising a sensing module and a control module, in accordance with some embodiments of the present disclosure
- Figure 3B shows a high-level block design of the system comprising different types of sensors, in accordance with some embodiments of the present disclosure
- Figure 4 shows a flowchart for capturing and processing in-cabin data, in accordance with some embodiments of the present disclosure
- Figure 5A shows a high-level block diagram of a vehicle post-crash data, during-crash data, and pre-crash data collection procedure, in accordance with some embodiments of the present disclosure
- Figure 5B shows a flowchart of a method for detecting the heartbeats and/or breathing of one or more occupants, in accordance with some embodiments of the present disclosure
- Figure 6 shows a schematic of possible ways to handle the communication between the system and a first responders’ team, in accordance with some embodiments of the present disclosure
- Figure 7A illustrates an exemplary configuration for capturing a light pattern image(s) and translating an identified speckle pattern for measuring the heartbeat and/or breathing of one or more of the occupants, in accordance with some embodiments of the present disclosure
- Figure 7B shows exemplary captured images, in accordance with some embodiments of the present disclosure
- Figures 7C-7E show exemplary graph results, suitable for incorporation in accordance with embodiments
- Figure 8A shows an exemplary magnification of a captured image of an occupant's chest, suitable for incorporation in accordance with embodiments.
- Figures 8B-8D show exemplary graph results, suitable for incorporation in accordance with embodiments.
- the configurations disclosed herein can be combined in one or more of many ways to provide first responders or a rescue team with information, such as medical information including the status of the injured occupant(s), for example in a post-crash situation.
- One or more components and methods of the configurations disclosed herein can be combined with each other in many ways.
- Systems and methods as described herein use an in-cabin sensing module to monitor the medical status of a vehicle occupant as well as to analyze possible injuries and their severity.
- the system may be installed, and/or mounted, and/or integrated and/or embedded in a vehicle, specifically in a cabin of the vehicle.
- the system comprises one or more imaging devices and optionally or additionally one or more illumination sources configured and enabled to project light in a predefined pattern on the in-cabin scene.
- the one or more imaging devices are configured to capture a plurality of images of the scene.
- the plurality of images can contain reflections of the light pattern from one or more objects in the scene (e.g. vehicle occupants).
- the system may also comprise one or more processors configured and enabled to analyze the plurality of captured images to conduct post crash medical status analysis, and/or to monitor the medical condition of the vehicle occupants and/or analyze possible injuries of the occupants and their severity as detailed hereinbelow.
- the one or more processors are configured to analyze a speckle content of the light pattern induced on the scene by the one or more illumination sources.
- the speckle pattern may be used to obtain the micro-vibrations pattern of the one or more objects in the scene.
- the micro-vibration signal may be correlated with breathing motion or with heartbeat induced skin motion or a combination thereof.
- the disclosed system can, therefore, monitor vital signs of the vehicle’s occupants, for example in a post-crash situation.
- the system provides both the vital signs signal of breathing or heartbeats and a plurality of image data of the post-crash cabin, all using a single device or system.
- the system provides both post-crash data and pre-crash data that can be used to further assess the details and severity of an injury.
- the term "light” encompasses electromagnetic radiation having wavelengths in one or more of the ultraviolet, visible, or infrared (IR) portions, including short-wave IR, near IR, and long IR, of the electromagnetic spectrum.
- the term "light pattern” as used herein is defined as the process of projecting a known pattern of pixels onto a scene.
- the term "pattern" is used to denote the forms and shapes produced by any non-uniform illumination, in particular patterned illumination employing a plurality of pattern features, such as lines, stripes, dots, geometric shapes, etc., having uniform or different characteristics such as shape, size, intensity, etc.
- a patterned light illumination may comprise multiple parallel lines as pattern features.
- depth map is defined as an image that contains information relating to the distance of the surfaces of a scene object from a viewpoint.
- a depth map may be in the form of a mesh connecting all dots with z-axis data.
- the term "occupant" as used herein is defined as any individual present in a vehicle, including the driver and any of the passengers. The term also includes non-human occupants.
- vehicle refers to a private car, a commercial car, a truck, a bus, a transporter, drivable mechanical equipment or any compartment used to transport humans on roads or tracks.
- normal vehicle operation mode refers to any operation of a vehicle, for example while driving or while the vehicle is stopped, prior to an accident of the vehicle.
- pre-crash data refers to any type of data, such as visual images or 3D images of objects within a vehicle, obtained prior to the vehicle’s crash.
- post-crash data refers to any type of data, such as visual images or 3D images of objects within a vehicle, obtained following the vehicle’s crash.
- Figure 1A, Figure 1B and Figure 1C illustrate, respectively, a side view of a vehicle 100 and a passenger cabin 120 prior to the vehicle’s accident (Figure 1A), during the accident (Figure 1B) and following the accident (Figure 1C), in accordance with embodiments.
- the vehicle 100 includes a sensing system 110, configured and enabled to capture and obtain data before the accident (e.g. pre-crash data), during the accident (e.g. during - crash data) and after the accident occurs (post-crash data).
- Each data may include visual (e.g. video images) and stereoscopic data (e.g. depth maps), for example, 2D (two dimensional) images and/or 3D (three dimensional) images and vibration data (e.g. micro-vibrations) of areas and objects within the vehicle 100.
- Each data may be analyzed, for example in real-time or close to real-time, to assess the medical status of one or more occupants following the vehicle accident, in accordance with embodiments.
- the sensing system 110 is configured to monitor areas and objects within the vehicle before an accident occurs, during the accident and following the accident to obtain various types of sensory data of the areas and objects (e.g. occupants), and to analyze the obtained data, using one or more processors, to extract visual data and depth data and to detect speckle pattern dynamics for identifying vibrations (e.g. micro-vibrations), and use the extracted data to estimate the medical status of the occupants.
- Nonlimiting examples of such objects may be one or more of the vehicle's occupants such as driver 111 or passenger(s) 112, in accordance with embodiments.
- methods and systems are configured to analyze the captured sensory data to detect the occupants' pose and location before an accident occurs, as well as their trajectory 118 resulting from the vehicle crash, and finally the accident results (e.g. post-crash data) including medical information such as information related to the occupants' wounds and heartbeat.
- the medical information may be transmitted to external units and forces such as a rescue team, for example in real-time, in accordance with embodiments.
- the sensing system 110 may be connected to or may be in communication with the vehicle’s units such as the vehicle’s sensors, for example an impact detection sensor 105, and/or an airbag control unit (ACU) 108 and/or a Vehicle Computing System (VCS) 109 and/or a collision avoidance system (CAS) 105.
- the sensing system may receive one or more signals including additional sensory data from CAS 105.
- the sensing system 110 may be in communication with other types of sensors of the vehicle such as an accelerometer and may fuse and use the additional data with the extracted data to estimate the medical status of the occupants.
- the sensing system 110 may be connected to or may be in communication, such as wireless communication, with the vehicle’s CAS which is configured to prevent or reduce the severity of a collision.
- An example of such CAS may include radar (all-weather) and/or laser (LIDAR) and camera (employing image recognition) to detect an imminent crash.
- the sensing system 110 may combine the data received from the CAS with the extracted data to estimate the medical status of the occupants, for example following the vehicle’s collision.
- the sensing system 110 may be mounted at the cabin’s ceiling or roof, for example in the front section 115 of the vehicle’s roof, in a way that allows a full cabin view.
- the sensing system 110 may be installed, mounted, integrated and/or embedded in a vehicle, specifically in a cabin of the vehicle such that the scene is the cabin interior 120 and the object(s) present in the cabin may include, for example, one or more of: vehicle occupant(s) (e.g. a driver(s), passenger(s), pet(s), etc.); one or more objects associated with the cabin (e.g. seat, door, window, headrest, armrest, etc.); items associated with one or more of the vehicle occupant(s) (e.g. an infant seat, a pet cage, a briefcase, a toy, etc.) and/or the like.
- the systems and methods are configured to generate an output, such as one or more output signals including medical information, such as medical status in real-time, of the one or more occupants following the vehicle’s accident.
- the sensing system 110 may include one or more sensors, for example of different types, such as a 2D imaging sensor and/or a 3D imaging sensor (e.g. stereoscopic camera) and/or an RF imaging sensor and/or a vibration sensor (micro-vibration), structured light sensor, ultrasonic sensor and the like to capture sensory data of the vehicle cabin, as will be further illustrated in Figure 3B.
- the 2D imaging sensor may capture images of the vehicle cabin, for example from different angles, and generate original visual images of the cabin.
- the sensing system 110 may include an imaging sensor configured to capture 2D and 3D images of the vehicle cabin and at least one processor to analyze the images to generate a depth map of the cabin.
- the system 110 may detect vibrations (e.g. micro-vibrations) of one or more objects in the cabin using one or more vibration sensors and/or analyzing the captured 2D or 3D images to identify vibrations (e.g. micro-vibrations) of the objects.
- the system 110 may further include a face detector sensor and/or face detection and/or face recognition software module for analyzing the captured 2D and/or 3D images.
- the system 110 may include or may be in communication with a computing module comprising one or more processors configured to receive the sensory data captured by the system's 110 sensors and analyze the data according to one or more computer vision and/or machine learning algorithms to yield medical information including an estimation of the medical condition of the one or more occupants in the vehicle cabin, as will be illustrated hereinbelow.
- the one or more processors are configured to combine various types of sensory data (e.g. 2D data such as captured 2D images, and 3D data such as depth maps) of the vehicle cabin over a period of time, such as a few seconds or minutes (e.g. 1, 2, 3, 4-100 seconds or more), before an accident, during the accident and following the accident to yield the medical information relating to the one or more occupants in the vehicle cabin.
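- A small sketch of how a fixed window of pre-crash sensory data could be retained in memory and frozen when an impact signal arrives; the buffer length, frame rate and frame-tuple layout are assumptions.

```python
from collections import deque
import time

class PreCrashBuffer:
    """Keeps the most recent `seconds` of frames in memory during normal
    operation; snapshot() freezes a copy when an impact signal is received."""
    def __init__(self, seconds=10, fps=30):
        self._frames = deque(maxlen=seconds * fps)

    def push(self, frame_2d, depth_map):
        # Called once per captured frame during normal vehicle operation.
        self._frames.append((time.time(), frame_2d, depth_map))

    def snapshot(self):
        # Called on the impact-detection signal: the returned list holds
        # the pre-crash window for later analysis or upload.
        return list(self._frames)
```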
- system 110 provides merely the minimal hardware, such as one or more sensors and imagers, for capturing visual and depth images of the vehicle 100 interior.
- an interface connecting to system 110 may supply the necessary power and transfer the acquired data to the vehicle’s computing and/or processing units such as VCS 109 and/or ACU 108, where all the processing is carried out, taking advantage of their computing power.
- in this way, installing system 110 becomes very easy and can be done using off-the-shelf components.
- the sensing system 110 comprises one or more illuminators such as an illuminator 103 and a sensor 101.
- the illuminator 103 creates light such as a pattern of light, schematically indicated here as rays of spots 102.
- the created light pattern may cover all or selected portions of the occupants of vehicle 100 such as passenger 203 and driver 207 as shown in Figure 2A.
- sensor 101 may be selected from a group consisting of: a Time of Flight (ToF) image sensor; a camera; RF (radio frequency) device; stereoscopic camera, structured light sensor, ultrasonic sensor.
- the sensor 101 may be an image sensor equipped with a fish-eye lens allowing coverage of the full cabin. In some cases, to cover all possible positions of the occupants in a vehicle, a typical coverage of more than 150 degrees may be used.
- the sensor may be a CMOS sensor (Complementary Metal Oxide Semiconductor) or CCD sensor (charge-coupled device) with resolution such as VGA, 2MP, 5MP, 8MP.
- the sensor and lens can work in the visible range or the IR range.
- the illuminator 103 creates a light pattern that may cover one or more portions or the whole cabin 120.
- An example of such a pattern is a spot pattern.
- Figure 2A shows such an example of a spot pattern 206 covering the front and back seat in a standard passenger’s car.
- the pattern can be a spot pattern, as shown in Figure 2A, lines, a grid, or any other pre-configured shape.
- having the light concentrated in small regions, such as the spots shown in Figure 2A, may improve the signal-to-background ratio and hence provide a clearer speckle signal.
- FIG. 2B is a flow diagram 296 illustrating steps of detecting occupancy state, including identifying objects, such as hidden objects in a vehicle cabin and providing information on the identified objects, according to one embodiment.
- a sensing system 280 includes one or more illumination units such as an illuminator 274, which provides structured light with a specific illumination pattern (e.g., spots, stripes or other patterns) to objects 271 and 272 located, for example, at the rear section of the vehicle cabin 100, and one or more sensors such as sensor 276, which captures an image of the objects 271 and 272 in the vehicle cabin 100.
- the sensing system 280 may include one or more processors such as processor 252.
- the processor 252 may be in wired or wireless communication with devices and other processors.
- the output from processor 252 may trigger a process within the processor 252 or may be transmitted to another processor or device to activate a process at the other processor or device.
- the processor 252 may be external to the sensing system 280 and may be embedded in the vehicle or may be part of the vehicle's processing unit.
- the processor 252 may instruct the illuminator 265 to illuminate specific areas in the vehicle cabin.
- the sensing system 280 may further include an RF transmit-receive unit 275, such as an RF transceiver, configured to generate and direct RF beams towards the objects 271 and 272 using RF antennas and receive the reflected RF beams to provide an RF image of the vehicle cabin 100 and objects 271 and 272.
- the captured images including for example the RF signals and reflected pattern images are provided to the processor 252 to generate a depth map representation 291 and 2D/3D segmentation of the vehicle cabin 100 and/or the objects 271 and 272.
- the sensing system may include a sound sensor 269, such as one or more ultrasound sensors and/or a directional microphone, configured and enabled to detect the presence of a person and/or vital signs, to locate, for example, the mouth of a person who is speaking, and to generate data inputs for detecting the location of one or more objects in the vehicle.
- the processor 252 may further receive and/or generate additional data 279 including, for example, information on the vehicle state 278 and speed and acceleration 277 as captured by vehicle sensors 273. The sensory data 282 and the additional data 279 are analyzed by the processor 252.
- the sensory data 282 and the additional data 279 are analyzed using a multitude of computer vision and machine learning algorithms. These may include, but are not limited to, a Convolutional Neural Network detecting people, networks that specifically detect the face, hands, torso and other body parts, networks that can segment the image and specifically the passengers in the image based on the 2D and 3D images, algorithms that can calculate the volume of objects and people, and algorithms that can determine if there is motion in a certain region of the car.
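- As an illustration of the last item in that list (detecting motion in a certain region of the cabin), a simple frame-differencing check suffices; the region coordinates and thresholds are assumptions.

```python
import cv2
import numpy as np

def motion_in_region(prev_gray, curr_gray, region, diff_threshold=25,
                     min_changed_fraction=0.02):
    """Return True if more than min_changed_fraction of the pixels inside
    region = (x, y, w, h) changed by more than diff_threshold grey levels
    between two consecutive grayscale frames."""
    x, y, w, h = region
    diff = cv2.absdiff(prev_gray[y:y + h, x:x + w], curr_gray[y:y + h, x:x + w])
    changed = np.count_nonzero(diff > diff_threshold)
    return changed / float(w * h) > min_changed_fraction
```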
- the analysis outputs multiple types of data on objects in the vehicle cabin, such as information on occupants or on inanimate objects such as a box, a bag of groceries, or an empty child seat.
- the data may include: detected body parts of objects 289 and/or motion of objects 288 and/or volume of objects 287 and/or occupancy state based on deep learning 286 in the vehicle cabin.
- the multiple types of data may include depth data 294 including one or more depth images, obtained for example by the sensor 276.
- the depth data may be specifically used to detect body parts of the passengers and segment the body or body parts in the image.
- the depth data may also be used to determine the pose of a person, such as sitting up-right, leaning to the side or leaning sideways.
- the multiple types of data may include prior knowledge
- Nonlimiting examples of prior knowledge may include: information on the vehicle units such as doors/window/ seatbelt state and/or prior information on objects and/or passengers in the vehicle; and/or rules or assumptions such as physical assumptions or seat transition rules relating to likelihood of movements inside the vehicle, for example, to rule out unlikely changes in the occupancy prediction or alternatively confirm expected changes in the vehicle occupancy state.
- Specific examples of the physical assumptions may include for example a high probability that the driver seat is occupied in a driving vehicle (nonautonomous vehicle) and/or low probability that a passenger may move to one of the rear seat in a predetermined short time, or from one seat to another seat in a single frame.
- the multiple types of data are fused at step 294 to determine the number of occupied seats and/or to detect and/or identify the position and attributes of objects (e.g. passengers) such as objects 271 and 272 (e.g. whether a passenger is sitting straight, leaning to the side or leaning forward).
- the detection may include identifying one or more passengers, such as a complete body or body portion such as a face or hand at the rear section of the vehicle.
- the multiple types of data are fused by a fusion algorithm which outputs the best decision considering the reliability of each data input (e.g. the motion of objects 288 and/or volume of objects 287 and/or occupancy state based on deep learning 286, face detection 290, and prior knowledge 292).
- the fusing algorithm includes analyzing the fused data to yield a stochastic prediction model (e.g. Markov chain, for example in the form of a Markov matrix) of one or more predicted occupancy states probabilities.
- the prediction model is used to continuously update over time the probability of one or more current occupancy states (e.g. probability vector) to yield an updated occupancy state, e.g. determine in real time the location of objects 271 and 272 and/or the number of objects in the vehicle cabin.
- the predicted state and the current state are combined by weighting their uncertainties using, for example, Linear time-invariant (LTI) methods such as Infinite impulse response (IIR) filters.
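- A hedged sketch of the fusion step described above: the occupancy-probability vector is propagated through a transition (Markov) matrix and then blended with per-frame detector evidence using a first-order, IIR-style weighting; the two-state model, transition values and weights are illustrative assumptions.

```python
import numpy as np

# Illustrative per-seat transition matrix: rows/cols = [empty, occupied].
# The small off-diagonal terms encode how unlikely a seat change is
# between consecutive frames.
TRANSITION = np.array([[0.99, 0.01],
                       [0.01, 0.99]])

def update_occupancy(prob, detector_likelihood, alpha=0.8):
    """prob: current probability vector over states; detector_likelihood:
    per-frame evidence vector from the detectors; alpha: weight given to the
    Markov prediction versus the new evidence (IIR-style smoothing)."""
    predicted = TRANSITION.T @ prob                 # Markov prediction
    fused = alpha * predicted + (1 - alpha) * detector_likelihood
    return fused / fused.sum()                      # renormalise
```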
- the processor 252 may output, at step 295, data or signals which may be used to provide information and/or to control devices, which may be remote from or integral to the vehicle; for example, an electronic device such as an alarm, an alert or a light may signal an out-of-position occupant and accordingly activate an occupant protection apparatus (e.g. airbag) or other devices.
- the device may be controlled, such as activated or modulated, by the signal output according to embodiments.
- the output signals may include seat belt reminder (SBR), out of position indication (OOP) for example for airbags suppression and driver monitoring system (DMS) for driver’s alert.
- the neural network is trained to output a number of points such as a predefined number of points corresponding to certain body parts or a skeleton of lines.
- these points and lines can be tracked in time using conventional tracking algorithms such as Kalman filters. It is stressed that the location of these points may be estimated by the tracker even if the tracking is lost, for example when the full body or parts of the body are obstructed, e.g. by the front seats.
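- A minimal constant-velocity Kalman tracker for one such keypoint, illustrating how its position can keep being estimated via predict() while the body part is momentarily obstructed; the noise covariances are illustrative.

```python
import cv2
import numpy as np

def make_keypoint_tracker():
    """2-D constant-velocity Kalman filter: state = [x, y, vx, vy]."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track(kf, measurement_xy=None):
    """Predict the keypoint position; correct only when a detection exists
    (e.g. skip correction while the body part is obstructed by a seat)."""
    predicted = kf.predict()
    if measurement_xy is not None:
        kf.correct(np.array(measurement_xy, np.float32).reshape(2, 1))
    return float(predicted[0]), float(predicted[1])
```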
- the body parts of the passengers may be detected using computer vision algorithms and/or neural networks that are trained to detect persons such as human contour, for example, a complete human body and/or specific human body parts.
- the face of a person may be detected by well-known detection or recognition methods.
- Non-limiting examples of such methods include the Viola-Jones algorithm or SSD neural networks.
- Body parts may be detected by a neural network that is specifically trained to detect the body pose.
- Non-limiting examples of such methods include OpenPose or OpenPose Plus methods.
- FIG. 2C illustrates a more detailed block diagram of the vehicle 100.
- the vehicle 100 can include more or fewer components than those shown in Figure 2C.
- the vehicle 100 can include wired or wireless system interfaces, such as a USB interface or Wi-Fi, to connect the sensing system 110 and the vehicle's computing unit 205 with vehicle systems 200 such as vehicle systems 202-208, vehicle sensors 201 such as vehicle sensors 212-218, seat locations 232-238, and respective seat sensors 232'-238'.
- the hardware architecture of Figure 2C represents one embodiment of a representative vehicle comprising a sensing system 110 configured to automatically monitor and identify objects and areas in the vehicle, such as hidden objects.
- the vehicle of Figure 2C also implements a method for monitoring objects and/or passengers to identify the attributes of the objects and/or passengers, also in cases where, due to momentary poor visibility or false detection, they might otherwise not be identified.
- the identification may be based on various types of information received, for example in real time, from various types of sensors in the vehicle such as dedicated sensors embedded in the sensing system 110 and existing vehicle sensors embedded in various locations in the vehicle 100.
- the vehicle 100 includes a vehicle computing unit 205 which controls the vehicle's systems and sensors and a sensing system 110 including a control board 250, which controls the sensing system 110 and which may be in communication with the vehicles units and systems such as the vehicle computing unit 205 and the vehicle systems 200.
- the control board may be included in the computing unit 205.
- Vehicle 100 also comprises passenger seat position 232-238 including a driver's seat position 232, a front passenger seat position 234, and left and right rear passenger seat positions 236 and 238. Although four seat positions are shown in Figure 2B for illustrative purposes, the present invention is not so limited and may accommodate any number of seats in any arrangement within the vehicle.
- each passenger seat position has automatically adjustable settings for seat comfort, including but not limited to, seat height adjustment, fore and aft adjustment position, seatback angle adjustment.
- Each passenger seat position may also include one or more respectively and separately configurable sensors 232'-238' for controlling the passenger seats and windows, and environmental controls for heating, cooling, vent direction, and audio/video consoles as appropriate.
- the passengers may have communication devices 242-248, one for each passenger position, indicating that each passenger in vehicle 100 is carrying a communication device.
- although the exemplary embodiment illustrated in Figure 2C shows each passenger carrying a communication device, various implementations envision that not all passengers need to carry a device.
- the sensing system 110 may include a plurality of sensors of different modalities.
- the sensing system 110 may include a vibrations sensor 241, and/or an acceleration sensor 242, and/or 3D sensor 243, and/or an RF sensor 244, and/or a video camera 245, such as a 2D camera.
- the sensing system may include an image sensor for mapping a speckle field generated by each spot formed on the vehicle surface and a light source, such as a coherent light source adapted to project a structured light pattern, for example, a multi-beam pattern on the vehicle cabin.
- a processor such as processor 252 is configured to process speckle field information received by the image sensor and derive surface vibration information to identify one or more objects in the cabin, including for example motion and micro-vibrations of one or more of the detected objects.
- the projected structured light pattern may be constructed of a plurality of diffused light elements, for example, a dot, a line, a shape and/or a combination thereof may be reflected by one or more objects present in the scene and captured by an imaging sensor integrated in the unified imaging device.
- control unit 250 may be connected to the vehicle systems 202-208, the vehicle's sensors 110, the sensing system sensors, the seats 232-238 and/or the seat sensors 232'-238' via one or more wired connections.
- control unit 250 may be connected to the vehicle systems 202-208, the vehicle's sensors 110, the sensing system sensors, the seats 232-238 and/or the seat sensors 232'-238' through a wireless interface via wireless connection unit 252.
- Wireless connection 252 may be any wireless connection, including but not limited to, Wi-Fi (IEEE 802.11x), Bluetooth or other known wireless protocols.
- the computing unit 205 is also preferably controllably connected to the vehicle systems 202-208, the vehicle's sensors 110, the sensing system sensors, the seats 232-238 and/or the seat sensors 232'-238'.
- Vehicle systems 222-228 may be connected through a wired connection, as shown in Figure 2, or by other means.
- the vehicle systems may include, but are not limited to, engine tuning systems, engine limiting systems, vehicle lights, air-condition, multimedia, GPS/navigation systems, and the like.
- the control board 250 may comprise one or more of a processor 252, memory 254 and communication circuitry 256. Components of the control board 250 can be configured to transmit, store, and/or analyze the captured sensory data, as described in further detail herein.
- the control unit may also be connected to a user interface 260.
- the user interface 260 may include input devices 261, output devices 263, and software routines configured to allow a user to interact with the control board.
- Such input and output devices respectively include, but are not limited to, a display 268, a speaker 266, a keypad 264, a directional pad, a directional knob, a microphone 265, a touch screen 262, and the like.
- the microphone 265 facilitates the capturing of sound (e.g. voice commands) and converting the captured sound into electrical signals.
- the electrical signals may be used by the onboard computer 104 to interface with various applications 352.
- the processor 252 may comprise a tangible medium comprising instructions of a computer program; for example, the processor may comprise a digital signal processing unit, which can be configured to analyze and fuse data such as sensory data received from the various sensors using multiple types of detection methods.
- the processed data output can then be transmitted to the communication circuitry 256, which may comprise a data encryption/transmission component such as BluetoothTM. Once encrypted, the data output can be transmitted via Bluetooth to the vehicle computing unit and/or the vehicle user interface and may be further presented to the driver on the vehicle. Alternatively or in combination, the output data may be transmitted to the monitoring unit interface.
- Figure 3A shows a schematic diagram of a sensing system 310, configured and enabled to capture sensory data of a scene such as a vehicle cabin 320, including one or more occupants (e.g. driver 311 and/or passenger 312) and analyze the sensory data to estimate the medical condition of the one or more occupants, for example following an accident of the vehicle, in accordance with embodiments.
- the sensing system 310 may be the system 110 of Figures 1A and IB.
- System 310 includes a sensing module 300 and a control module 315.
- the two modules can reside in the same package or can be separated into two different physical modules.
- the sensing module 300 can reside in the ceiling of the car, while the control module 315 can reside behind the dashboard.
- the two modules are connected by communication lines and/or may be in communication electrically and/or wirelessly for example through a dedicated connection such a USB connection, wireless connection or any connection known in the art.
- the sensing system 310 is connected to the vehicle’s power.
- system 310 comprises a battery, optionally chargeable battery which allows operation even when the vehicle’s power is down. Such a design would allow the system to keep operating even if the vehicle power fails during an accident.
- the battery is chargeable from the car’s battery or from the car’s alternator.
- the sensing system 310 may also be equipped or has an interface to an impact sensor 325 configured to detect an imminent crash.
- a non-limiting example of such impact sensor is the impact sensor that exists in cars and is responsible for airbag deployment.
- the notification of an impact is transferred to the system 310 by an electronic signal from the impact sensor.
- the electronic signal may be transferred directly or by a communication system such as a vehicle’s CAN bus interface.
- the impact sensor may be or may be included in CAS as mentioned herein with respect to Figure 1A.
- the sensing system 310 is equipped with a built-in impact sensor such as an accelerometer 312. When an acceleration (or rather a deceleration) above a certain threshold is detected, the system 310 considers the impact signal as being provided, and that a collision will soon occur.
- an impact may be determined by analyzing data captured from the in-cabin vehicle, for example using the sensing module 300.
- the system 310 can monitor the video motion of the driver, passenger and objects in the cabin. When rapid or hectic movement is detected, beyond a predefined threshold, the system concludes that an impact is occurring.
- Such analysis can be based on computer vision algorithms such as, but not limited to, optical flow or tracking.
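- A minimal sketch of such a motion-based impact check (assuming Python with OpenCV; the threshold value is an illustrative assumption, not a calibrated parameter):

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 20.0   # assumed mean flow magnitude (pixels/frame) indicating an impact

def impact_detected(prev_gray, curr_gray):
    """Flag an impact when in-cabin motion between two frames is abnormally large."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
    return magnitude.mean() > MOTION_THRESHOLD
```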
- the sensing module 300 comprises an image sensor 301 and at least one illuminator 303 which can be configured to capture the sensory data, including one or more images of the scene (e.g. car cabin), and further transmit the captured data so as to extract visual data, depth map(s) (e.g. density depth map(s)) and vibrations (e.g. micro-vibration data) of the scene using the control module, as described in further detail herein.
- the image sensor 301 is equipped with a lens 302.
- the lens can be a fish-eye lens covering the entire cabin.
- the illuminator 303 creates a light pattern illuminating the scene 320.
- the light pattern can be optionally designed to cover approximately the same field of view as the image sensor.
- the image sensor 301 can be a CMOS or CCD sensor of various resolution formats. For example, it can be a VGA format (640x480 pixels), or a 2 MPixel format (1920x1080).
- the illuminator 303 may include a light source 324 that may be, as an example, a coherent laser light source.
- the illuminator 303 may also be equipped with a pattern creation optics, which may be as an example, a diffractive optical element (DOE) 326.
- Other examples for pattern creation may be a mask, or splitting mirrors and diffuser.
- the light source 324 may include electromagnetic energy of wavelengths in an optical range or portion of the electromagnetic spectrum including wavelengths in a human-visible range or portion thereof (e.g., approximately 390 nm-750 nm) and/or wavelengths in the near-infrared (NIR) (e.g., approximately 750 nm-1400 nm) or infrared (e.g., approximately 750 nm-1 mm) portions and/or the near-ultraviolet (NUV) (e.g., approximately 400 nm-300 nm) or ultraviolet (e.g., approximately 400 nm-122 nm) portions of the electromagnetic spectrum.
- the particular wavelengths are exemplary and not meant to be limiting. Other wavelengths of the electromagnetic range may be employed.
- the illuminator 303 wavelength may be any one of about 830nm, about 840nm, about 850nm, or about 940nm.
- the illumination is performed in the near infra-red (NIR, about 750-1400nm) spectral range, in order to prevent the pattern from being visible to the naked eye of an occupant.
- the image sensor 301 can be equipped with a band-pass spectral filter preventing it from capturing light in wavelengths not matching those of the illuminator.
- when a band-pass filter is added to the image sensor 301, the signal-to-background ratio improves.
- both the image sensor 301 and the illuminator 303 are connected to or may be in communication with one or more processors such as processor 304 located in the control module 315.
- the processor 304 may operate the illuminator 303 and the image sensor 301 according to a monitoring policy.
- the processor 304 can operate the image sensor 301 at a frame rate of 30 frames per second (FPS) and the illuminator 303 as a constant light source.
- the processor 304 can operate the image sensor 301 at 30 FPS and pulse the illuminator 303 so that, alternately, only every second frame is captured with a pattern, while the remaining frames are captured without a pattern.
- one option may include capturing a clean frame (e.g. a frame which does not include the reflected pattern) and then one patterned frame; alternatively, capturing five clean frames and then one patterned frame, and so on.
- the latter mode may be desirable as it may be advantageously used to obtain both clean image frames for deep learning algorithms and pattern frames that can be used for vibration monitoring and vital signs extraction. Other options including a different ratio between the number of pattern frames and clean frames may be used.
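- A minimal sketch of such an interleaved capture policy (Python; the camera and illuminator APIs and the 5:1 ratio are assumptions for illustration only):

```python
# Interleave clean frames (for deep learning on visual images) with patterned
# frames (for depth and vibration analysis). The 5:1 ratio is an assumption.
CLEAN_FRAMES_PER_PATTERN_FRAME = 5

def run_capture_loop(camera, illuminator, process_clean, process_pattern):
    frame_index = 0
    while True:
        patterned = (frame_index % (CLEAN_FRAMES_PER_PATTERN_FRAME + 1)
                     == CLEAN_FRAMES_PER_PATTERN_FRAME)
        illuminator.set_pattern(enabled=patterned)   # hypothetical illuminator API
        frame = camera.grab()                        # hypothetical camera API
        if patterned:
            process_pattern(frame)                   # depth map + micro-vibrations
        else:
            process_clean(frame)                     # clean image for neural networks
        frame_index += 1
```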
- the processor 304 is configured to extract any one or more or a combination of images such as video images, depth data and vibration data from the video stream.
- extracting video images is performed by selecting video frames in which the illumination source 324 was off and no light pattern was projected onto the scene.
- alternatively, the video images can include the light pattern created by the illuminator.
- extracting depth data is performed by using a structured light technique.
- the light pattern induced by the illuminator 303 is captured by the image sensor 301.
- the processor 304 is configured to receive the captured images and analyze the captured images to identify the displacement of each of the pre-known light pattern elements thereby calculating the depth of the object at this location.
- the depth z may be calculated according to Equation 1: z = B · f / D, where:
- z denotes the depth estimation
- B denotes the baseline distance between the image sensor and the pattern illuminator
- f denotes the camera module lens’s focal length
- D denotes the disparity, i.e. the distance in which the pattern element has shifted across the image plane.
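- A minimal sketch of the triangulation in Equation 1 (Python; the numeric values are illustrative only):

```python
def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Equation 1 style triangulation: z = B * f / D.

    baseline_m:      distance B between the image sensor and the pattern illuminator (metres)
    focal_length_px: lens focal length f expressed in pixels
    disparity_px:    observed shift D of the pattern element across the image plane
    """
    if disparity_px == 0:
        return float("inf")          # zero disparity corresponds to a point at infinity
    return baseline_m * focal_length_px / disparity_px

# Example (illustrative numbers): 5 cm baseline, 800 px focal length, 20 px disparity.
print(depth_from_disparity(0.05, 800.0, 20.0))   # -> 2.0 metres
```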
- the depth can also be obtained by employing a look-up table mechanism.
- the system needs to undergo a calibration process in which the light pattern is projected onto a screen positioned at several predefined distances. For each distance, the pattern is recorded and stored in memory such as memory module 305. Then, for each small region that includes some light pattern element observed by the camera during operation, a correlation-based algorithm is employed to assess at which screen distance the closest matching pattern element occurs. The distance at which the correlation is the highest is chosen as the estimated distance for this region.
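- A minimal sketch of the correlation-based look-up described above (Python with NumPy; the calibration data structure is an assumption):

```python
import numpy as np

def estimate_depth_by_lut(region, calibration):
    """Pick the calibration distance whose stored pattern best matches the observed region.

    region:      small image patch containing a light-pattern element (2D array)
    calibration: dict mapping known screen distance -> reference patch recorded
                 during calibration (same shape as `region`); contents assumed.
    """
    best_distance, best_score = None, -np.inf
    r = (region - region.mean()) / (region.std() + 1e-9)
    for distance, reference in calibration.items():
        t = (reference - reference.mean()) / (reference.std() + 1e-9)
        score = float((r * t).mean())          # normalised cross-correlation score
        if score > best_score:
            best_distance, best_score = distance, score
    return best_distance
```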
- the number of extracted depth points need not match the number of pixels in the image. Rather, it is related to the light pattern induced by the illuminator.
- the number of depth points is defined by the number of distinct light pattern elements in the projected pattern. As an example, this can be the number of spots, in a pseudo-random spot pattern, or a small factor of this number resulting from using a cluster of 2-3 points as the distinct pattern.
- the processor 304 is also configured to extract vibration such as micro-vibration information from the scene.
- the extraction of the micro-vibration is performed by analyzing the speckle content of at least some of the light (e.g. spots) pattern elements in the captured images.
- the changes over time of the speckle pattern can be indicative of micro-vibrations, i.e. very small and subtle movements that may be too minor to be detected by analyzing variations in the depth data or in the video images which do not include the light pattern elements.
- the speckles pattern analysis may include detecting the changes to the speckle pattern by measuring a temporal standard deviation in the intensity of the respective reflected diffused light element over multiple consecutive captured images to identify a temporal distortion pattern. For example, assuming I_i is the gray level intensity of a certain pixel depicting a light pattern element and/or a part thereof in image number i, one or more processors such as processor 304 may calculate the temporal standard deviation according to Equation 2 below.
- Equation 2: STD_n = sqrt( (1/k) · Σ_{i=n-k}^{n-1} (I_i − Ī_n)² ), where n denotes the current image, k denotes the number of previous images, and Ī_n denotes the mean of I_i over those k images.
- the analysis may further include comparing the result of the temporal standard deviation to a predefined threshold value to determine whether a micro-vibration occurred.
- in case the temporal standard deviation exceeds the predefined threshold value, it is determined, for example by the processor 304, that the magnitude of the micro-vibrations in the specific light pattern element increased.
- otherwise, the processor 304 may determine that there is no change in the magnitude of the micro-vibrations.
- the predefined threshold value may be fixed and set in advance.
- the predefined threshold value can be dynamically adjusted according to the value of the temporal standard deviation measured over time.
- the temporal standard deviation may be averaged over multiple pixels (e.g. 5x5 pixels) of each spot.
- the temporal standard deviation may be averaged over multiple speckle patterns of diffused light elements reflected from the same surface and portrayed in the same region in the captured images.
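- A minimal sketch of the temporal-standard-deviation test described above (Python with NumPy/SciPy; the window length, threshold and 5x5 averaging are illustrative assumptions):

```python
import numpy as np
from scipy.signal import convolve2d

def micro_vibration_map(frames, k=10, threshold=4.0):
    """Per-pixel temporal standard deviation of the speckle intensity over the
    last k frames, spatially averaged and compared to a threshold.

    frames:    array-like of shape (n, H, W) holding gray-level images; n >= k
    k:         number of previous images used in the temporal window
    threshold: assumed gray-level threshold indicating increased micro-vibration
    """
    window = np.asarray(frames[-k:], dtype=float)
    temporal_std = window.std(axis=0)            # std over time for every pixel
    # Average over the pixels of each spot, here approximated with a 5x5 box blur.
    kernel = np.ones((5, 5)) / 25.0
    smoothed = convolve2d(temporal_std, kernel, mode="same", boundary="symm")
    return smoothed > threshold                  # True where micro-vibrations increased
```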
- the changes to the speckle pattern may be detected, for example by the processor 304, by analyzing the speckle pattern for lateral translation which is indicative of a tilt of the reflecting object with respect to the image sensor.
- the tilt which may be very minor, for example, on a scale of micro-radians, may be derived from the translational velocity of one or more speckle pattern point(s) over time (consecutive frames).
- the lateral speckle pattern translation may be derived from analysis of the diffused light pattern element(s) depicted in a plurality of consecutive captured images according to equation 3 below.
- I denotes the intensity of the pixel in the captured image in gray level, differentiated with respect to time t or position x.
- the angular velocity of a change in a certain pixel (i,j) with respect to its neighboring pixels in the i direction in captured image n may be expressed by equation 4 below.
- the angular velocity in a change of a certain pixel (i,j) may be expressed similarly in the j direction.
- the result of the angular velocity is expressed in pixel per frame units.
- the intensity I(i,j) of the pixel (i,j) may be normalized, for example by the processor 304, over time to compensate for non-uniformity in intensity I(i,j) due to spot intensity envelope effects.
- the intensity I(i,j) may be normalized by applying a sliding temporal window for averaging the intensity I(i,j) of one or more pixels (i,j) in the consecutive captured images.
- the processor 304 may smooth the intensity I(i,j) in the time domain by applying an infinite impulse response filter to I(i,j) to produce a smoothed intensity Ī(i,j) as expressed in equation 5 below.
- the intensity I(i,j) of one or more of the pixels (i,j) may be normalized by dividing it by the average intensity measured over time in a plurality of consecutive captured images to produce a normalized intensity Î(i,j) as expressed in equation 6 below.
- the angular velocity may be expressed by equation 7 below.
- the processor 304 may further spatially average the intensity over multiple adjacent reflected diffused light elements (e.g. dots, spots, etc.) in the captured images.
- the processor may further apply temporal filtering over the spatially averaged intensity value to improve the resulting intensity signal.
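- A minimal sketch of the IIR smoothing, normalization and spatial averaging described above (Python with NumPy; the smoothing factor and 3x3 neighborhood are assumptions):

```python
import numpy as np

class SpeckleIntensityNormalizer:
    """Smooth the per-pixel intensity in time with an infinite impulse response
    (IIR) filter, divide by it to compensate for the spot intensity envelope,
    then spatially average over adjacent pattern elements."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha          # assumed IIR smoothing factor
        self.smoothed = None        # running average intensity per pixel

    def normalize(self, frame):
        frame = frame.astype(float)
        if self.smoothed is None:
            self.smoothed = frame.copy()
        # IIR update: new average = (1 - alpha) * old average + alpha * current frame
        self.smoothed = (1 - self.alpha) * self.smoothed + self.alpha * frame
        normalized = frame / (self.smoothed + 1e-9)
        # Spatial averaging over a 3x3 neighborhood of reflected pattern elements.
        k = np.ones((3, 3)) / 9.0
        pad = np.pad(normalized, 1, mode="edge")
        out = sum(pad[i:i + normalized.shape[0], j:j + normalized.shape[1]] * k[i, j]
                  for i in range(3) for j in range(3))
        return out
```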
- depth data and vibrations can be extracted from the captured images separately or together (e.g. simultaneously), according to the configuration of the processor 304.
- the processor 304 may also analyze the extracted depth data and vibrations (e.g. micro-vibrations) and captured visual data to provide medical information including an analysis of the medical status, such as injury-related status, of the one or more occupants in the vehicle. In some cases, the medical information is provided in real time or close to real time. In certain embodiments, the processor analyzes post-crash vital signs, pose, and injury severity as described in further detail herein.
- according to various aspects and embodiments, the processor 304 is connected to or may be in communication with a memory module 305.
- the memory module is configured to store data such as 2D data (e.g. visual images) and/or 3D data and/or vibrations during analysis. It may also store pre-crash data to be retrieved in case an accident occurs. In some cases, the memory module 305 may receive data from internal units in the vehicle such as the vehicle's computer device and/or memory units and/or from external devices such as mobile phone devices.
- the processor 304 may be connected to or may be in communication (e.g. wirelessly) with a communication module 306 in order to communicate the results of the analysis (e.g. the medical data) to the external world.
- the results can be transmitted to a cloud based storage system 307 or to a rescue team 309.
- the sensing system 310 may be in wireless communication 116 with the cloud-based storage system 307.
- the system can transmit the data to a mobile device 350 using communication module 306 with a communication link, such as a wireless serial communication link, for example, BluetoothTM.
- the hand-held device can receive the data from the system and transmit the data to a back-end server of the cloud-based storage system 307.
- the transmitted data can be structured in the form of a message including information such as the position or status of one or more occupants inside the post-crash vehicle.
- the transmitted data can also include images or video sequences from the sensor.
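- A minimal sketch of how such a message could be structured (Python; all field names and values are illustrative, not a defined protocol):

```python
import json
import time

# Illustrative structure for the transmitted post-crash message.
message = {
    "timestamp": time.time(),
    "vehicle_id": "VIN-UNKNOWN",                 # hypothetical identifier
    "occupants": [
        {
            "seat": "driver",
            "pulse_detected": True,
            "breathing_detected": True,
            "body_pose": "upright, head against headrest",
            "suspected_injuries": ["possible whiplash"],
        },
        {
            "seat": "front_passenger",
            "pulse_detected": True,
            "breathing_detected": False,
            "suspected_injuries": ["suspected head injury"],
        },
    ],
    "attachments": ["pre_crash_clip.mp4", "post_crash_frame.jpg"],
}
payload = json.dumps(message)   # serialised before transmission to the cloud/responders
```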
- the estimation of the post-crash medical condition of a vehicle occupant can be performed based on any combination of the pre-crash data, during-crash data and post-crash data as described in further details herein.
- Figure 3B shows a schematic diagram of a sensing system 370, configured and enabled to capture sensory data of a scene such as a vehicle cabin 320, including one or more occupants (e.g. driver 311 and/or passenger 312) and analyze the sensory data to estimate the medical condition of the one or more occupants, for example following an accident of the vehicle, in accordance with embodiments.
- System 370 includes all elements of the aforementioned system 310 but further includes a number of sensors such as an image sensor 372, a depth sensor 374 (for example, a stereoscopic camera) and a micro-vibration sensor 376.
- FIG. 4 shows a flowchart of a method 400 for capturing and processing in-cabin data including pre-crash data, during-crash data and post-crash data to provide information such as medical information following or during a vehicle accident, in accordance with embodiments.
- a system such as system 110 of Figures 1A-1C or system 310 of Figure 3A receives and records data such as captured sensory data (e.g. raw data -defined as primary data collected from one or more sensors such as the sensors in the sensing module) of the in-cabin vehicle.
- the sensory data may be captured by the sensing module 300 and may include any one or more of the video images, depth data, and vibration data.
- the captured sensory data may be stored using, for example, a buffer such as buffer 301.
- the buffer 301 may be a cyclic buffer comprising, for example, a storage capability of 10 seconds length, 20 seconds length, or more.
- the term "cyclic" as used herein refers to a fixed amount of memory that is used to store the data captured in an amount of time, such that when the memory has been filled, the old data is overwritten by new data in a cyclic manner.
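- A minimal sketch of such a cyclic pre-crash buffer (Python; the 10-second length and 30 FPS are illustrative):

```python
from collections import deque

class PreCrashBuffer:
    """Keep only the last N seconds of sensory frames, overwriting the oldest
    data, and freeze the buffer when an impact signal is received."""

    def __init__(self, seconds=10, fps=30):
        self.frames = deque(maxlen=seconds * fps)   # fixed memory, cyclic overwrite
        self.frozen = False

    def record(self, frame):
        if not self.frozen:
            self.frames.append(frame)               # old frames are dropped automatically

    def freeze(self):
        """Called on impact detection; recording stops and the buffered
        pre-crash data is kept for analysis or transmission."""
        self.frozen = True
        return list(self.frames)
```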
- the sensory data may include vehicle internal data obtained from the vehicle units such as the vehicle embedded sensors.
- an impact detection signal may be received for example at the control module.
- the impact detection signal may be generated by the impact sensor 325 and/or the processor and/or sensing module following an imminent crash detection.
- the impact detection is obtained from the vehicle embedded sensors such as the CAS or other sensing modules.
- a non-limiting example of such impact sensor is the impact sensor that exists in cars and is responsible for airbag deployment.
- the notification of an impact is transferred to the system by an electronic signal from the impact sensor.
- the electronic signal may be transferred directly or by a communication system such as vehicle’s CAN bus interface.
- the system is equipped with a built-in impact sensor such as an accelerometer as mentioned above.
- the impact case is analyzed from the system data.
- the system can monitor the video motion of the driver, passenger and objects in the cabin. When rapid or hectic movement is detected, beyond a predefined threshold, the system concludes that an impact is occurring.
- Such analysis can be based on computer vision algorithm such as, but not limited to, optical flow or tracking.
- the system mode changes from "normal vehicle operation mode" to "accident mode".
- the system operates one or more actions as follows: at step 432 the recording on the cyclic buffer 301 is stopped and the raw data (e.g. sensory data) contained in the buffer is "frozen" in memory 303 for further analysis.
- the sensory data can also be transmitted to external sources such as the cloud based memory units 307 located at emergency agencies such as the police, hospital, etc., for advanced crash investigations.
- the transmission can happen during the crash or at a later stage when an investigation is initiated.
- the details of the impact as identified, for example, by the processor may also be recorded by the processor. Such details can include the magnitude and/or the direction of the impact. These details can then be used to assess the results of the accident.
- the sensory data stored at the buffer is analyzed, for example by the processor, to yield pre-crash data including for example identification of the vehicle’s occupants state prior to the crash (e.g. pre-crash state), in accordance with embodiments.
- Such analysis can provide any combination of the position of the occupant, the body pose, the body attributes such as mass, height, age, gender and body type.
- the analysis can also include objects that exist inside the cabin and can pose injury risks during an accident.
- the analysis described herein can be performed using computer vision and/or machine learning algorithms.
- the position of an occupant is obtained by training a computer vision module to identify the existence of individuals inside the car.
- the body pose can be obtained by training a neural network system to estimate body pose from video images.
- Body attributes can be obtained by training a neural network to estimate the attributes from video images.
- the analysis can also be performed or be enhanced by the use of the depth data layer, providing information regarding the volume of an individual and its physical dimensions.
- the depth information can also provide information relevant to the body pose.
- the processor is configured to analyze one or more pre-crash depth maps to extract a body attribute such as mass, height, width, volume, age, and gender, as illustrated in Figures 2B and 2C.
- the analysis comprises the use of a combination of at least one image and at least one depth map.
- at step 450, following the pre-crash car occupant state identification at step 440, the system assesses the possible accident outcome in terms of injuries and medical issues based on the analyzed pre-crash data, which includes identification of the occupants' state prior to the accident, in accordance with embodiments.
- the system employs a human body mechanical model which can use the recorded pre-crash data, to assess the outcome of the vehicle crash in terms of injuries and medical issues.
- the available pre-crash information may include one or more or all of the vehicle occupants' body’s attributes, body’s initial pose, body mass, age, body dimensions, and gender and impact parameters such as magnitude and direction.
- the human body mechanical model might be a finite-element model of the body.
- the human body mechanical model can contain heuristic equations based on existing injuries databases.
- the system is triggered to initiate the analysis of the nature or severity of injury by utilizing vehicle impact sensors.
- a vehicle impact sensor such as sensor 325 comprises a microswitch located inside the vehicle's bumper; the microswitch is configured to detect strong impacts.
- the system uses the same sensors as the vehicle’s airbag deployment system.
- the pre-crash head pose of the occupant 102 relative to the body is addressed.
- the whiplash effect during a car accident, as illustrated in Figure 1B, can cause various types of injuries to the occupant's neck. Knowing the direction and magnitude of the impact, as obtained, for example, from the impact sensor, and the head position and orientation relative to the body and to the car, as obtained by analyzing the captured images, the system can provide a prediction as to the nature and severity of the injury. Specifically, the analysis may include identifying which body part was impacted and, based on the nature of this body part and the estimated strength of the impact, predicting tissue damage, vascular damage, bone damage, etc.
- the system is configured to store pre-crash images.
- the system is further configured to send the pre-crash images and the pre-crash data to a first responders’ team or to upload said images to a cloud service to be extracted by a first responders’ team.
- the processor is configured to analyze one or more stored pre-crash images in order to extract a pre-crash body pose.
- the processor is configured to analyze the pre-crash depth maps (e.g. body pose), optionally in combination with any available information regarding the physical parameters of the impact, thereby providing an assessment of the nature or severity of an injury.
- the processor is configured to analyze one or more pre-crash images in order to extract at least one body attribute including, but not limited to, mass, height, width, volume, age, and gender.
- the processor is further configured to use said at least one body attribute, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
- at step 460, once a detection of an impact (e.g. of step 420) is received, for example at the system (e.g. the processor), the system may instruct its sensors, such as the sensing module, to shift the sensors' and/or camera's operating mode to a high-frame-rate capturing mode, such that high-frame-rate data will be collected during the occurrence of the accident.
- the system is configured to capture data such as sensory data obtained during the accident (e.g. during-crash data), for example during the high-frame-rate period, in order to follow the trajectories of the occupant(s) and object(s) inside the car.
- the system may use any of the captured video images and/or the depth data in order to analyze the trajectories using computer vision methods as known in the art.
- the trajectories are especially important in order to detect body impact with the vehicle’s structure and various surfaces as well as to detect impact of occupants by objects in the vehicle.
- the trajectories and impact data are used to provide medical information output which includes an assessment and prediction of the nature and severity of injuries of the occupants.
- the medical information output may be provided based on additional data including, for example, pre-crash data.
- detecting an impact between the occupant's head and the car's structure might indicate a high probability for head trauma. More specifically, as shown in Figure 1A, the data including the distance 'd' between the passenger 112 head and the front seat headrest 113 and the vehicle 100 speed during "normal vehicle operation mode" may be analyzed and combined with the passenger trajectory 118 and broken window 114 trajectory extracted from data captured during the vehicle 100 crash.
- the medical information output may be transmitted as one or more output signals to external units such as first responders units or to the vehicle’s internal units to activate one or more devices or system, such as a vehicle's devices.
- FIG. 5A shows a high-level block diagram 500 of a vehicle post-crash data, during-crash data and pre-crash data collection procedure, in accordance with embodiments.
- the collection procedure may be carried out by a sensing system such as the sensing system 110 of Figure 1 or the sensing system 310 of Figure 3A.
- the system may be in communication or may include additional sensors.
- the system 110 collects some or all of the information available in the vehicle cabin to produce medical information including an assessment of the medical condition status and injuries nature and severity of the occupants.
- each type of captured information may be transformed into one or more signals including the related information.
- the system may collect the occupants’ heartbeat 501 by capturing and analyzing micro-vibrations of the body, such as one or more micro-vibrations signals obtained by the system 110 and analyzed by one or more processors.
- the micro-vibrations are captured using speckle temporal analysis as detailed hereinabove.
- the heartbeat causes skin mechanical micro vibration due to the pulse, such micro-vibrations being detectable by the system 110 configured to measure micro-vibration.
- the heart-rate or lack of heartbeat provides crucial information regarding the medical condition status of the one or more occupants in the vehicle and the nature and severity of their injury.
- system 110 is configured to collect the occupants’ respiration rate 502 by analyzing micro-vibration of the body, and specifically, of the chest area.
- the breathing motion causes micro-vibrations or vibrations that can be detected by system 110 which is configured to measure micro-vibrations.
- Breathing rate or lack of breathing provides crucial information regarding the medical status of the one or more occupants and the nature and severity of their injury.
- both signals of heart rate and respiratory rate can be obtained by analyzing post-crash bodily micro-vibration signals.
- a spectral analysis may be employed.
- the micro-vibration signal may contain a superposition of a stronger respiratory signal and a weaker heart-rate signal.
- the heart signal may be of a typical range of 50 beats per minute (BPM) or more (~1-1.2 Hz), while the breathing signal may be of a typical range of 15-20 breaths per minute (roughly one breath every 3-4 seconds, i.e. ~0.25-0.33 Hz).
- the same spectral analysis can be used in another embodiment to discriminate between vital-sign signals and mechanical vibrations resulting from the vehicle or environment, for example engine vibrations at the idle rotations per minute (RPM).
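- A minimal sketch of such a spectral separation (Python with NumPy; the band limits are assumptions consistent with the ranges above):

```python
import numpy as np

def vital_signs_from_vibration(signal, fps=30.0):
    """Separate the micro-vibration signal into a respiration component and a
    heart-rate component by looking at dominant frequencies in assumed bands
    (respiration roughly 0.1-0.5 Hz, heartbeat roughly 0.8-2.0 Hz)."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    def dominant(lo, hi):
        band = (freqs >= lo) & (freqs <= hi)
        if not band.any() or spectrum[band].max() == 0:
            return None
        return float(freqs[band][np.argmax(spectrum[band])])

    respiration_hz = dominant(0.1, 0.5)
    heart_hz = dominant(0.8, 2.0)
    return {
        "breaths_per_minute": None if respiration_hz is None else respiration_hz * 60.0,
        "beats_per_minute": None if heart_hz is None else heart_hz * 60.0,
    }
```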
- the system is configured to also detect the post-crash body-pose 503.
- the system 110 can use some or all of the captured images and the depth data.
- the system then employs a skeleton model or other heuristics in order to estimate the body-pose.
- the body pose, and specifically the relative position and orientation of the head and limbs, can provide valuable information regarding the medical condition status and the nature and severity of an injury.
- the occupant's body pose may be identified using systems and methods as described in USP application number 62/806,840 entitled“SYSTEM, DEVICE, AND METHODS FOR DETECTING AND OBTAINING INFORMATION ON OBJECTS IN A VEHICLE” which is incorporated herein by reference.
- the system is configured to also use images such as video images and/or depth data in order to detect bodily motion 504 of the occupant(s).
- the detection can be performed by known computer vision algorithms such as, but not limited to, optical flow and visual tracking methods, for example the Lucas-Kanade optical flow algorithm in OpenCV.
- the system is triggered to initiate the analysis by visually detecting rapid or hectic motion of an occupant inside the vehicle.
- An example of such a detection mechanism is based on an optical flow algorithm configured to analyze the relative displacement of objects between two or more consecutive frames.
- the system is further configured to detect visual signs of body wounds 505 of the one or more occupants.
- An example of visually detectable wounds can be skin rapture or a fracture.
- One way to configure the system to detect wounds is by training a deep learning classifier to detect broken skin or bleeding, for example, by showing wounds images and training the classifier to recognize the wounds.
- captured images of the injured one or more occupants are obtained at the processor, which operates a deep neural network to obtain a prediction and to detect and identify the type of injuries.
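- A minimal training sketch for such a classifier (assuming Python with PyTorch and a recent torchvision; the injury classes and training data are hypothetical, and this is only one possible way to set up the training step):

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical injury classes for a visible-wound classifier.
CLASSES = ["no_visible_injury", "broken_skin", "bleeding", "suspected_fracture"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))   # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised step on a batch of (image, wound-label) pairs."""
    optimizer.zero_grad()
    logits = model(images)            # images: float tensor of shape (B, 3, 224, 224)
    loss = criterion(logits, labels)  # labels: long tensor of shape (B,)
    loss.backward()
    optimizer.step()
    return loss.item()
```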
- each type of captured information, such as heartbeat 501 and respiration rate 502, may be transformed into one or more signals including the related information.
- the conversion of the information to one or more signals may be performed either locally (with a processor and software supplied with the system) or remotely. Heavier calculations for more complicated analyses, for example, can be performed remotely.
- all or some of the above-mentioned signals and assessments 510 are transmitted to a data assessment module such as single post-crash data assessment module 506.
- the collected data signals 501, 502, 503, 504, and 505 may be used together or separately. It is stressed that in some cases only some of the signals may be used, for example because some of the signals were not received or due to some other desired configuration. For example, in some cases, only signals 501 and 502 may be received.
- the post-crash data assessment module 506 is configured to fuse the different received signals, including the captured information, and process the received signals to yield post-crash assessment data 506'' including the medical condition of the vehicle's one or more occupants.
- the assessment can include using a check list, such as a binary check list including a question list comprising medical questions utilized to assess the medical condition of the occupants.
- the check list may include one or more of the following questions: whether there is a pulse and breathing; and/or whether there are signs of consciousness; and/or whether there is a suspicion of fractures and on which body part; and/or whether there is a suspicion of internal injury; and/or whether there is a suspicion of head injury; and/or whether there is an open wound or a suspicion of blood loss, etc. It is understood that embodiments of the present invention may use any other kind of diagnostic check lists or other methods to identify the medical condition of the occupants based on the captured data.
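- A minimal data-structure sketch for such a check list (Python; the field names mirror the questions above and are illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PostCrashChecklist:
    """Binary check list summarising one occupant's post-crash condition."""
    pulse_detected: Optional[bool] = None
    breathing_detected: Optional[bool] = None
    signs_of_consciousness: Optional[bool] = None
    suspected_fractures: List[str] = field(default_factory=list)   # affected body parts
    suspected_internal_injury: Optional[bool] = None
    suspected_head_injury: Optional[bool] = None
    open_wound_or_blood_loss: Optional[bool] = None

    def severity_flags(self) -> List[str]:
        """Translate the answers into coarse flags for first responders."""
        flags = []
        if self.pulse_detected is False or self.breathing_detected is False:
            flags.append("life_threatening")
        if self.suspected_head_injury:
            flags.append("possible_head_trauma")
        if self.open_wound_or_blood_loss:
            flags.append("bleeding")
        return flags
```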
- the system further comprises a pre-crash assessment module 508 configured to receive raw data (e.g. sensory data), such as pre-crash data 508’ stored, for example at the buffer and analyze the raw data to yield pre-crash assessment data 508”.
- the pre-crash assessment data 508'' includes identifications of the pre-crash state of the vehicle's one or more occupants.
- Such analysis can include any combination of the position of the occupant, the body pose, the body attributes such as mass, height, age, gender and body type.
- the analysis can also include objects that exist inside the cabin and can pose injury risks during an accident.
- the system comprises a during-crash data assessment module 507 configured to receive, store and analyze sensory data, such as during-crash data 507' captured during a vehicle accident, to yield during-crash assessment data 507'' including information which allows the system to follow the trajectories of the vehicle's occupant(s) and/or object(s).
- the during-crash assessment data 507'', post-crash assessment data 506'' and pre-crash assessment data 508'' extracted accordingly by modules 506, 507 and 508 are transmitted to a fusion module 509 configured to combine and process the received data according to one or more fusing methods and logics to yield one or more final post-crash assessment results 520 relating to the medical status of the occupant(s).
- An example of fusion logics used by the fusion module 509 includes a scenario where the system is configured to assess the likelihood of head trauma of one or more occupants in the vehicle cabin following an accident, in accordance with embodiments.
- the assessment process includes acquiring, at the post-crash data assessment module 506, one or more types of post-crash data 510 such as body pose 503, body motion 504 and the like, and analyzing the acquired post-crash data 510 in order to check if there are signs of injury to the occupant's head.
- the system can then further use the during-crash data assessment module 507 to identify whether there was a head impact with car elements or other objects during collision based on received during-crash data. Such an impact would strengthen the likelihood of head trauma.
- the system may further obtain pre-crash data 508' at the pre-crash assessment data module 508 in order to analyze the position of the occupant's head relative to various car elements and yield pre-crash assessment data. Then, the pre-crash assessment data 508'', during-crash assessment data and post-crash assessment data 506'' are fused at the fusion and final assessment module 509 to estimate the risk of head trauma. Specifically, based on the direction and force of the external impact to the vehicle and using body modeling, module 509 can infer an increased or decreased risk of head trauma.
- the fusion module 509 may provide a final assessment result 520 based on any partial set and/or combination of the data available from the pre-crash data 508', during-crash data 507' and post-crash data 510.
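- A minimal sketch of combining such partial assessments (Python; the weights and combination rule are assumptions for illustration, not the fusion logic of this disclosure):

```python
def head_trauma_likelihood(pre=None, during=None, post=None):
    """Combine partial evidence from the pre-crash, during-crash and post-crash
    assessments. Each argument is a probability-like score in [0, 1], or None
    when the corresponding data was not available (e.g. the system failed mid-crash)."""
    weights = {"pre": 0.2, "during": 0.4, "post": 0.4}   # assumed weights
    scores = {"pre": pre, "during": during, "post": post}
    available = {k: v for k, v in scores.items() if v is not None}
    if not available:
        return None
    total_weight = sum(weights[k] for k in available)
    return sum(weights[k] * v for k, v in available.items()) / total_weight

# Example: head close to the window pre-crash, confirmed head impact during the crash,
# limited post-crash motion detected.
print(head_trauma_likelihood(pre=0.6, during=0.9, post=0.7))
```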
- the fusion module 509 may include stochastic prediction models (e.g. a Markov chain, for example in the form of a Markov matrix) of one or more predicted state probabilities.
- Figure 1C shows an example of a final assessment result 122, structured in the form of a message, in accordance with embodiments.
- the message may include one or more of the following details:
- Figure 5B shows a flowchart of a method 540 for detecting the heartbeats and/or breathing of one or more occupants, for example in a vehicle, by analyzing a plurality of captured images comprising reflected light pattern, in accordance with embodiments.
- the method comprises the following steps: at step 550 the plurality of images are received, for example from the sensing module at a processor. At step 560 one or more changes in one or more speckle patterns of at least one of the reflections of the light pattern are detected in at least some consecutive images of the plurality of images. At step 570 micro-vibrations of the at least one occupant based on the speckle pattern analysis are identified.
- the micro-vibrations are analyzed using computer vision algorithms such as, but not limited to, optical flow or tracking to extract breathing rate or heartbeat signal of the occupant.
- Specific examples of the analysis method for extracting breathing rate or heartbeat signals of the occupant are illustrated in Figures 7A-7E and Figures 8A-8D.
- the breathing or heartbeat signal is used to assess the medical condition of the occupant as shown in Figure 5A.
- Figure 6 illustrates two options in which the final post-crash assessment results 520, including for example the medical condition and injuries assessment provided in Figure 5A, may be transmitted to an outsource responder unit such as first rescue responders, in accordance with embodiments.
- the assessment results 520 are transmitted directly from the system 610 in the vehicle 600 to the first responders 603 via a direct communication link 605.
- the system 610 may broadcast one or more transmission signals including the assessment results 520 via a local wireless network to the first responders 603.
- the first responders 603 can download the assessment results 520 which includes, for example, medical information regarding the in-cabin status of one or more of the vehicle’s occupants.
- the system 610 may send data including for example the assessment results 520 to a cloud service 604 by a communication channel 606.
- the cloud service may be a virtual service that stores all the information from the system. Then, the first responders 603 can log into the cloud service and download the relevant information.
- the main advantage of the second method over the first is that the medical condition assessment data can be accessed even before arriving at the crash scene, allowing the right first-responder tools and vehicles to be sent to the scene.
- the cloud service can be also connected to treatment facilities such as hospitals to alert the medical team of the upcoming patients.
- Another advantage of the cloud service is that it can receive partial data even if the system 610 does not survive the crash.
- the pre-crash data 508’ and during-crash data 507’ may be sent to the cloud service as soon as they are generated. In this case, even if the system fails or may not be operated due to the crash, an assessment could be made based on the data available on the cloud.
- the system can use the impact detection (e.g. of step 420) of Fig. 4 in order to send an automatic alert to the local first responders' organizations such as police, fire department and medical teams.
- FIG. 7A illustrates an exemplary configuration used in accordance with embodiments for capturing light pattern image(s) and translating an identified speckle pattern for measuring, for example, the heartbeat and/or breathing rate of one or more of the occupants while the vehicle is moving.
- the defocused spot size (circle of confusion) is given by the lens aperture (z2 ≫ z3) as follows:
- Figure 7B shows an exemplary captured image 750, in accordance with embodiments.
- the displacement of each speckle pattern relative to the initial pattern is identified.
- the speckle translation amplitude in pixels is extracted using, for example, a sine fit such as the sine-fit graph 760 of Figure 7C.
- the rate is extracted directly from the translation measurement.
- the translation rate is linear with spot size as shown in graphs 770 and 780 of Figure 7D.
- the magnification M of each measurement is calculated from dT/dα and compared with M obtained from the spot size, as shown for example in graph 790 of Figure 7E.
- the expected measurement limits may be extracted, such as the maximal linear and angular velocities that allow speckle tracking. To avoid speckle boiling, the surface linear velocity perpendicular to the beam should be limited.
- in addition, the translation of the speckle field should be smaller than the size of the spot in the image.
- the maximal exposure time may be also estimated.
- T·t < d_sp, where T is the speckle translation speed in pixels/sec, t is the exposure time, and d_sp denotes the spot size in the image (in pixels).
- Figure 8A shows an exemplary magnification of a captured image 810 of an occupant chest suitable for incorporation in accordance with embodiments.
- the image comprises reflected light from an illuminator including a laser source at 830 nm illuminating the occupant's chest area.
- the captured image 810 shows characteristic features including detected changes to the speckle pattern.
- the tilt which may be very minor, for example, on a scale of micro-radians, may be derived from the translational velocity of one or more speckle pattern point(s) over time (consecutive frames).
- the lateral speckle pattern translation may be derived from the analysis of the diffused light pattern element(s) depicted in a plurality of consecutively captured images.
- Figure 8B shows graphs 820, 830 and 840 of processed captured images of an occupant, where the occupant is not breathing and only his heartbeats are detected, in accordance with embodiments.
- Figure 8C shows graphs 850, 860 and 870 of processed captured images of an occupant in a vehicle, including the occupant's heartbeat change while the occupant is breathing normally, in accordance with embodiments.
- Graph 870 shows an exemplary spectrum (FFT) of the processed images. Specifically, in this scenario the occupant's breathing is strong compared to his heartbeat and therefore masks it. In this example, the same parameters as in Figure 8B were used.
- Figure 8D shows graphs 880 and 890 of processed captured images of an empty vehicle (without any occupants), hence neither heartbeat nor breathing is detected, in accordance with embodiments.
- the measurements were performed in a
- the sensing system does not analyze the data collected, and the sensing module relays data to a remote processing and control unit, such as a back end server.
- the sensing system may partially analyze the data prior to transmission to the remote processing and control unit.
- the remote processing and control unit can be a cloud based system which can transmit analyzed data or results to a user.
- a handheld device is configured to receive analyzed data and can be associated with the sensing system. The association can be through a physical connection or wireless communication, for example.
- the sensing system comes equipped with memory with a database of data stored therein and a microprocessor with analysis software programmed with instructions.
- the sensing system is in communication with a computer memory having a database stored therein and a microprocessor with analysis software programmed in.
- the memory can be volatile or non-volatile in order to store the measurements in the memory.
- the database and/or all or part of the analysis software can be stored remotely, and the sensing system can communicate with the remote memory via a network (e.g. a wireless network) by any appropriate method.
- the conversion of the raw data to medical information may be performed either locally (with a processor and software supplied with the sensing system) or remotely. Heavier calculations for more complicated analyses, for example, can be performed remotely.
- the system disclosed here includes a processing unit which may be a digital processing device including one or more hardware central processing units (CPU) that carry out the device’s functions.
- the digital processing device further comprises an operating system configured to perform executable instructions.
- the digital processing device is optionally connected to a computer network.
- the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web.
- the digital processing device is optionally connected to a cloud computing infrastructure.
- the digital processing device is optionally connected to an intranet.
- the digital processing device is optionally connected to a data storage device.
- suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, notepad computers, set-top computers, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles.
- smartphones are suitable for use in the system described herein.
- Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art.
- the digital processing device includes an operating system configured to perform executable instructions.
- the operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications.
- suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®.
- suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®.
- the operating system is provided by cloud computing.
- suitable mobile smart phone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®.
- the device includes a storage and/or memory device.
- the storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis.
- the device is volatile memory and requires power to maintain stored information.
- the device is non-volatile memory and retains stored information when the digital processing device is not powered.
- the non-volatile memory comprises flash memory.
- the volatile memory comprises dynamic random-access memory (DRAM).
- the non-volatile memory comprises ferroelectric random-access memory (FRAM).
- the non-volatile memory comprises phase-change random-access memory (PRAM).
- the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tapes drives, optical disk drives, and cloud computing-based storage.
- the storage and/or memory device is a combination of devices such as those disclosed herein.
- the digital processing device includes a display to send visual information to a user.
- the display is a cathode ray tube (CRT).
- the display is a liquid crystal display (LCD).
- the display is a thin film transistor liquid crystal display (TFT-LCD).
- the display is an organic light emitting diode (OLED) display.
- an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display.
- the display is a plasma display.
- the display is a video projector.
- the display is a combination of devices such as those disclosed herein.
- the digital processing device includes an input device to receive information from a user.
- the input device is a keyboard.
- the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, or stylus.
- the input device is a touch screen or a multi-touch screen.
- the input device is a microphone to capture voice or other sound input.
- the input device is a video camera to capture motion or visual input.
- the input device is a combination of devices such as those disclosed herein.
- the system disclosed herein includes one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device.
- a computer readable storage medium is a tangible component of a digital processing device.
- a computer readable storage medium is optionally removable from a digital processing device.
- a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like.
- the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
- the system disclosed herein includes at least one computer program, or use of the same.
- a computer program includes a sequence of instructions, executable in the digital processing device’s CPU, written to perform a specified task.
- Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
- a computer program may be written in various versions of various languages.
- a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
- a computer program includes a mobile application provided to a mobile digital processing device.
- the mobile application is provided to a mobile digital processing device at the time it is manufactured.
- the mobile application is provided to a mobile digital processing device via the computer network described herein.
- a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, Javascript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
- Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.
- the system disclosed herein includes software, server, and/or database modules, or use of the same.
- software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art.
- the software modules disclosed herein are implemented in a multitude of ways.
- a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof.
- a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof.
- the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application.
- software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
- the system disclosed herein includes one or more databases, or use of the same.
- suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object-oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases.
- a database is internet-based.
- a database is web-based.
- a database is cloud computing-based.
- a database is based on one or more local computer storage devices.
Abstract
Systems and methods are provided for supplying first responders, after a vehicle accident, with useful information regarding the medical status and injuries of the vehicle's occupants. The system includes an in-cabin sensor comprising at least one of an image sensor, a depth sensor, and a micro-vibration sensor for capturing sensory data of the vehicle cabin, including pre-crash, during-crash and post-crash data. The system also includes at least one processor configured to analyze the sensory data.
Description
SYSTEMS, DEVICES AND METHODS FOR VEHICLE POST-CRASH SUPPORT
CROSS-REFERENCE
[0001] The present application claims priority to U.S. Provisional Application Ser. No. 62/785,724, filed on December 28, 2018, entitled “SYSTEM, DEVICE AND METHOD FOR VEHICLE POST CRASH SUPPORT” (attorney docket no. GR003/USP), which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present invention, in some embodiments thereof, relates to a system and method for analyzing in-cabin data to provide medical related information following a vehicle accident.
INCORPORATION BY REFERENCE
[0003] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
BACKGROUND OF THE INVENTION
[0004] When a vehicle accident, such as a car accident occurs, the correct action of first responders and medical teams can have long term significance. Understanding the nature and the severity of the injury of the car occupants can improve the response in terms of prioritizing and immediate treatment. It can also help to direct the right first responders’ teams thereby shortening the time required for the right equipment to arrive at the scene.
[0005] While there is a large effort to reduce the number and severity of vehicle accidents, vehicle accidents do keep occurring and unfortunately will continue to do so for the foreseeable future.
[0006] Currently, there is no way for the rescue team to know in advance what the situation inside the vehicle is, and typically updates regarding the medical condition of the occupants are received only when arriving at the scene. Moreover, even when the rescue team arrives at the scene, the only available information is obtained by visual inspection and by communicating with the involved occupants, if such communication is possible. Such communication, while providing important information, is limited by the level of consciousness and knowledge of the occupants.
[0007] In light of the above, there still remains an unmet need for an in-cabin system that can provide, for example in real-time, medically relevant information as to the post-crash status of the occupant. Such a system would improve the prospects of surviving a vehicle accident as well as reduce the long-term medical implications. It would also lead to economic benefit which might have an impact on insurance cost.
SUMMARY OF THE INVENTION
[0008] The present disclosure provides a system, device and method to evaluate the medical condition of one or more vehicle occupants, for example in a post-crash situation. According to one embodiment, there is provided a system for detecting breathing or heartbeat of a vehicle occupant. The system comprises at least one illumination source configured to project light in a predefined light pattern on a scene; an imaging device configured to capture a plurality of images, said plurality of images comprising reflections of said light pattern from one or more occupants in the scene; and at least one processor configured to extract breathing or heartbeat data of said one or more occupants by analyzing the reflections of said light pattern.
[0009] The present disclosure also provides a method for analyzing heartbeat or breathing signals from said plurality of images of said reflected light pattern. According to an embodiment, the analysis comprises the following steps: detecting one or more changes in one or more speckle patterns of at least one of the reflections of said light pattern in at least some consecutive images of the plurality of images; identifying micro-vibrations of the at least one object based on said speckle pattern analysis; and analyzing said micro-vibrations to extract a breathing or heartbeat signal. In some embodiments, the breathing or heartbeat signal is used to assess the medical condition of the occupant. In some embodiments, the system comprises the use of an illumination source in the near infra-red (NIR) spectral range.
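By way of a non-limiting illustration only, the speckle-analysis steps recited above may be sketched roughly as follows. This is a minimal Python sketch that assumes grayscale frames captured at a known frame rate, uses phase correlation as one possible way to measure the frame-to-frame shift of a speckle region, and assumes illustrative physiological frequency bands; the function names, region coordinates and bands are hypothetical and do not represent the claimed implementation.

```python
import numpy as np
import cv2
from scipy.signal import butter, filtfilt

def micro_vibration_signal(frames, roi):
    """Track the frame-to-frame shift of one reflected speckle region;
    the shift over time serves as a micro-vibration trace."""
    y0, y1, x0, x1 = roi
    prev = frames[0][y0:y1, x0:x1].astype(np.float32)
    shifts = []
    for frame in frames[1:]:
        cur = frame[y0:y1, x0:x1].astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(prev, cur)   # sub-pixel speckle shift
        shifts.append(np.hypot(dx, dy))
        prev = cur
    return np.asarray(shifts)

def bandpass(signal, fps, low_hz, high_hz, order=3):
    """Isolate one physiological frequency band from the vibration trace."""
    nyq = 0.5 * fps
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal)

# Illustrative usage with assumed bands: breathing ~0.1-0.7 Hz, heartbeat ~0.8-3.0 Hz
# vib = micro_vibration_signal(frames, roi=(100, 164, 200, 264))
# breathing = bandpass(vib, fps=30.0, low_hz=0.1, high_hz=0.7)
# heartbeat = bandpass(vib, fps=30.0, low_hz=0.8, high_hz=3.0)
```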
[0010] In an embodiment, the system comprises a communication module configured to send the medical-related data to a first responder’s rescue team.
[0011] In an embodiment, the data is uploaded to a cloud service to be extracted by a first responders’ rescue team.
[0012] In an embodiment, the system comprises a memory. In further embodiments, the system is configured to record post-crash images captured by the imaging device.
[0013] In an embodiment, said post-crash images are sent to a first responders’ team by communication module or uploaded to a cloud service.
[0014] In an embodiment, the system is configured to store pre-crash images. In additional embodiments, the system is further configured to send the pre-crash images to a first responders’ team or to upload said images to a cloud service to be extracted by a first responders’ team.
[0015] In an embodiment, the processor is configured to analyze one or more post-crash images and extract a post-crash body pose thereby providing an assessment of the nature or severity of an injury.
[0016] In an embodiment, the processor is configured to analyze one or more stored pre-crash images in order to extract a pre-crash body pose. In further embodiments, the processor is configured to analyze said pre-crash body pose, optionally in combination with any available information regarding the physical parameters of the impact, thereby providing an assessment of the nature or severity of an injury.
[0017] In an embodiment, the processor is configured to analyze one or more pre-crash images in order to extract at least one body attribute including, but not limited to, mass, height, width, volume, age, and gender. In some embodiments, the processor is further configured to use said at least one body attribute, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
[0018] In an embodiment, the assessment is sent to a first responders’ team by communication module or uploaded to a cloud service.
[0019] In an embodiment, the system is triggered to initiate the analysis of the nature or severity of injury by utilizing vehicle impact sensors. In various embodiments, a vehicle impact sensor comprises a microswitch located inside the vehicle’s bumper, said microswitch is configured to detect strong impacts. In an embodiment, the system uses the same sensors as the vehicle’s airbag deployment system.
[0020] In an embodiment, the system is triggered to initiate the analysis by visually detecting rapid hectic motion of an occupant inside the vehicle. An example of such a detection mechanism is an optical flow algorithm configured to analyze the relative displacement of objects between two or more consecutive frames.
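By way of a non-limiting illustration only, such an optical-flow trigger may be sketched as follows. The sketch assumes two consecutive grayscale frames and uses OpenCV's Farneback dense optical flow; the threshold values are hypothetical and would need tuning, and this is not the claimed detection mechanism.

```python
import cv2
import numpy as np

def hectic_motion_detected(prev_gray, curr_gray, magnitude_threshold=15.0):
    """Flag unusually large frame-to-frame displacements inside the cabin,
    which may indicate a crash event, using dense optical flow."""
    # Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel displacement in pixels
    return float(np.percentile(magnitude, 95)) > magnitude_threshold
```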
[0021] According to an embodiment, the system comprises at least one high-speed imaging device. In certain embodiments, the high-speed imaging device is configured to capture a plurality of images at a high rate during accident occurrence. In further embodiments, the system processor is configured to analyze said high-rate images thereby assessing the nature or severity of injuries. In additional embodiments, the assessment is based on visually tracking the impacts occurring inside the cabin during the accident.
[0022] According to an embodiment, the system comprises a depth sensor configured to produce an in-cabin depth map.
[0023] In an embodiment, the system is configured to store the depth data.
[0024] In an embodiment, the processor is configured to use the depth map to analyze post-crash body pose thereby assessing the nature and severity of an injury.
[0025] In an embodiment, the processor is configured to analyze the stored pre-crash depth map thereby providing information regarding a pre-crash body pose of a vehicle occupant. In several embodiments, the processor is further configured to use said body pose, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
[0026] In an embodiment, the processor is configured to analyze one or more pre-crash depth maps to extract a body attribute including, but not limited to, mass, height, width, volume, age, and gender. In further embodiments, the processor is further configured to use the at least one body attribute, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
[0027] In an embodiment, the analysis comprises the use of a combination of at least one image and at least one depth map.
[0028] In an embodiment, the system uses a deep learning method such as, but not limited to, a convolutional neural network, in order to provide said analysis.
[0029] According to another aspect there is provided a system for providing medical information of at least one occupant in a vehicle cabin, the system comprising: a sensing module comprising at least one sensor configured to capture sensory data of the vehicle cabin; and a control module comprising at least one processor, said processor being configured to: receive said sensory data from said sensor; and analyze said sensory data using one or more analysis methods to provide the medical information of said at least one occupant.
[0030] In an embodiment, the analysis methods are one or more of: computer vision methods; machine learning methods; deep neural network methods; signal processing methods.
[0031] In an embodiment, the medical information comprises the medical status of said at least one vehicle occupant following an accident of said vehicle.
[0032] In an embodiment, the medical information comprises medical condition evaluation or injury assessment of the at least one occupant.
[0033] In an embodiment, the information comprises the medical status of said at least one vehicle occupant following an accident of the vehicle.
[0034] In an embodiment, the system comprises a communication module configured to send the information to a first responder team.
[0035] In an embodiment, the at least one sensor is an image sensor.
[0036] In an embodiment, the sensing module further comprises at least one illuminator, said at least one illuminator comprises a light source configured to project light pattern onto the vehicle cabin.
[0037] In an embodiment, the light source comprises a laser or a Light Emitting Diode (LED).
[0038] In an embodiment, the light source comprises one or more optical elements for splitting a single light beam generated by the light source, said one or more optical elements are selected from the group consisting of: DOE; split mirrors; and diffuser.
[0039] In an embodiment, the sensing module comprises a depth sensor.
[0040] In an embodiment, the depth sensor is configured to capture depth data by projecting a light pattern onto the vehicle cabin and wherein the at least one processor is configured to analyze the location of known light pattern elements in said depth data.
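By way of a non-limiting illustration only, analyzing the location of known light pattern elements to recover depth may be sketched as a simple triangulation. The sketch assumes a calibrated projector-camera baseline, a known focal length in pixels, and reference dot positions recorded against a distant reference plane so that depth ≈ focal × baseline / disparity; the names and the model are illustrative assumptions, not the claimed method.

```python
import numpy as np

def depth_from_pattern_shift(observed_x, reference_x, focal_px, baseline_m):
    """Triangulate depth for structured-light dots from the horizontal shift
    (disparity, in pixels) of each detected dot relative to its reference
    position; returns depth per dot in meters (NaN where disparity is ~0)."""
    disparity = (np.asarray(observed_x, dtype=np.float64)
                 - np.asarray(reference_x, dtype=np.float64))
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    return focal_px * baseline_m / disparity
```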
[0041] In an embodiment, the pose of said at least one occupant is estimated by analyzing the depth data using said computer vision methods.
[0042] In an embodiment, the sensing module comprises a micro-vibration sensor.
[0043] In an embodiment, the micro-vibration sensor is configured to: project light onto the vehicle cabin scene; capture a plurality of images, wherein each image of said plurality of images comprises reflected diffused light elements; and wherein the processor is configured to: receive the captured images; and analyze one or more temporal changes in the speckle pattern of at least one of the plurality of reflected diffused light elements in at least some consecutive images of the plurality of images to yield micro-vibration data.
[0044] In an embodiment, the sensing module comprises a combination of at least two sensors selected from a group comprising: an image sensor, a depth sensor, and a micro-vibration sensor.
[0045] In an embodiment, the processor is further configured to classify or identify an attribute of one or more objects based on at least one micro-vibration data.
[0046] In an embodiment, the sensory data comprises a plurality of images and wherein the at least one processor is further configured to classify the at least one vehicle occupant using said computer vision methods by visually analyzing at least one image of the plurality of images captured by the at least one sensor.
[0047] In an embodiment, the sensing module is configured to detect an imminent crash of the vehicle.
[0048] In an embodiment, the detection of an imminent crash is performed by one or more of: a vehicle impact sensor; a hectic movement detection sensor; and an external sensory system.
[0049] In an embodiment, the at least one processor is configured to: analyze said sensory data, wherein said sensory data is captured prior to said detection of imminent crash, during said crash and following said crash; and assess the medical status of said at least one vehicle occupant following the crash based on said analyzed sensory data.
[0050] In an embodiment, the sensory data captured prior to the detection of the imminent crash is analyzed using said machine vision methods to extract pre-crash categorization.
[0051] In an embodiment, the pre-crash categorization comprises one or more of the at least one occupant body pose, body mass, age, body dimensions, and gender.
[0052] In an embodiment, the system is configured to provide a measure of the likelihood and severity of an injury from a car accident.
[0053] In an embodiment, the system is configured to provide high-rate data of the vehicle cabin following the detection of the imminent crash.
[0054] In an embodiment, the high-rate data is used to extract trajectories of the at least one occupant or one or more objects in the vehicle.
[0055] In an embodiment, the extracted trajectories are used to assess the likelihood and severity of the injury of said at least one occupant.
[0056] In an embodiment, the system is configured to record in-cabin post-crash information; and assess the medical status of at least one car occupant.
[0057] In an embodiment, the system is configured to provide at least one of pre-crash, during-crash and post-crash medical status assessment of at least one car occupant.
[0058] In an embodiment, the control module is configured to be in wireless communication with an external unit and transmit the information to a first responder team.
[0059] In an embodiment, the information is uploaded to a cloud service.
[0060] In an embodiment, the sensing module is mounted on said vehicle roof or ceiling.
[0061] In an embodiment, the analysis is conducted using at least one deep learning algorithm.
[0062] According to another aspect there is provided a system for estimating the medical state of one or more occupants in a vehicle cabin following an accident of the vehicle, the system comprising: a sensing module comprising: an illuminator comprising one or more illumination sources configured to project light in a structured light pattern on the vehicle cabin; at least one image sensor configured to capture sensory data prior to, during and following a crash of said vehicle, said sensory data comprising a sequence of 2D (two dimensional) images and 3D (three dimensional) images of the vehicle cabin, wherein at least one of the 2D images comprises reflections of said structured light pattern from the one or more occupants in the cabin; a control module comprising: a memory module; and at least one processor, said at least one processor configured to: receive said captured sensory data from the sensing module; receive an impact detection signal of an imminent crash of the vehicle; store the received sensory data, captured prior to receipt of said impact detection signal, in said memory module; analyze said received sensory data using a computer vision or machine learning algorithm to yield pre-crash assessment data comprising an identification of the state of the one or more occupants prior to the crash; receive sensory data captured during said crash from the sensing module; analyze the received sensory data captured during said crash to yield during-crash assessment data comprising body trajectories of the one or more occupants; and provide medical information of said one or more occupants following the crash based on said during-crash assessment data and pre-crash assessment data.
[0063] In an embodiment, the at least one processor is further configured to: receive sensory data captured following the crash; and analyze the received sensory data captured following the crash using a computer vision or machine learning algorithm to yield post-crash assessment data of said one or more occupants.
[0064] In an embodiment, the post-crash assessment data comprises one or more of: heartbeat; respiration rate; body pose; body motion; visible wounds.
[0065] In an embodiment, the processor is further configured to combine said post-crash assessment data with said during-crash assessment data and said pre-crash assessment data to yield said medical information of said one or more occupants following the crash.
[0066] In an embodiment, the heartbeat or respiration rate are identified by analyzing one or more changes in one or more speckle patterns of at least one of the reflections of said structured light pattern in at least some consecutive images of the plurality of images; and identifying the vibrations of the one or more occupants based on said speckle pattern analysis.
[0067] In an embodiment, one or more of the body pose, body motion, and visible wounds are detected by analyzing the sequence of 2D images or 3D images of the vehicle cabin.
[0068] In an embodiment, the body pose or body motion are identified using one or more of a skeleton model, optical flow, or visual tracking methods.
[0069] According to another aspect, there is provided a method for providing medical information of at least one occupant in a vehicle cabin, the method comprising: receiving captured sensory data of said vehicle cabin from a sensing module; receiving an impact detection signal of an imminent crash of the vehicle; storing the received sensory data, captured prior to receipt of said impact detection signal, in a memory module; analyzing said received sensory data using a computer vision or machine learning algorithm to yield pre-crash assessment data comprising identification of the state of the at least one occupant prior to the crash; receiving sensory data captured during said crash from the sensing module; analyzing the received sensory data captured during said crash to yield during-crash assessment data comprising body trajectories of the one or more occupants; and providing medical information of said one or more occupants following the crash based on said during-crash assessment data and pre-crash assessment data.
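By way of a non-limiting illustration only, the ordering of the steps of this method may be sketched as follows. The three analyze_* callables are hypothetical stand-ins for the computer vision or machine learning models referred to above; the sketch only shows how the pre-crash, during-crash and post-crash stages could be orchestrated and is not the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CrashAssessment:
    pre_crash: dict = field(default_factory=dict)     # occupant state before impact
    during_crash: dict = field(default_factory=dict)  # body trajectories
    post_crash: dict = field(default_factory=dict)    # vital signs, pose, wounds

def assess_occupants(pre_frames, crash_frames, post_frames,
                     analyze_state, analyze_trajectories, analyze_vitals):
    """Run the three analysis stages in order and collect one report that
    can be forwarded to first responders."""
    report = CrashAssessment()
    report.pre_crash = analyze_state(pre_frames)            # e.g. pose, mass, age
    report.during_crash = analyze_trajectories(crash_frames)
    report.post_crash = analyze_vitals(post_frames)         # e.g. heartbeat, breathing
    return report
```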
[0070] According to another aspect, there is provided a method for providing a first responder with information regarding the medical status or injury of a vehicle occupant after an accident, the method comprising the steps of: i. utilizing an in-cabin sensor to monitor the vehicle’s cabin; ii. combining analysis of at least one of pre-crash, during-crash and post-crash data; and iii. providing first responders with the analysis.
BRIEF DESCRIPTION OF THE DRAWINGS
[0071] A better understanding of the features and advantages of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of embodiments of the present disclosure are utilized, and the accompanying drawings.
[0072] Figures 1A-1C illustrate, respectively, a side view of a vehicle prior to the vehicle’s accident, during the accident, and following the accident, in accordance with some embodiments of the present disclosure;
[0073] Figure 2A shows an example of system raw data, in accordance with some embodiments of the present disclosure;
[0074] Figure 2B is a flow diagram illustrating steps of identifying objects, such as hidden objects in a vehicle cabin and providing information on the identified objects, in accordance with some embodiments of the present disclosure;
[0075] Figure 2C illustrates a more detailed block diagram of the vehicle comprising the monitoring system, in accordance with some embodiments of the present disclosure;
[0076] Figure 3A shows a high-level block design of the system comprising a sensing module and a control module, in accordance with some embodiments of the present disclosure;
[0077] Figure 3B shows a high-level block design of the system comprising different types of sensors, in accordance with some embodiments of the present disclosure;
[0078] Figure 4 shows a flowchart for capturing and processing in-cabin data, in accordance with some embodiments of the present disclosure;
[0079] Figure 5A shows a high-level block diagram of a vehicle post-crash data, during-crash data, and pre-crash data collection procedure, in accordance with some embodiments of the present disclosure;
[0080] Figure 5B shows a flowchart of a method for detecting the heartbeats and/or breathing of one or more occupants, in accordance with some embodiments of the present disclosure;
[0081] Figure 6 shows a schematic of possible ways to handle the communication between the system and a first responders’ team, in accordance with some embodiments of the present disclosure;
[0082] Figure 7A illustrates an exemplary configuration for capturing a light pattern image(s) and translating an identified speckle pattern for measuring the heartbeat and/or breathing of one or more of the occupants, in accordance with some embodiments of the present disclosure;
[0083] Figure 7B shows exemplary captured images, in accordance with some embodiments of the present disclosure;
[0084] Figures 7C-7E show exemplary graph results, suitable for incorporation in accordance with embodiments;
[0085] Figure 8A shows an exemplary magnification of a captured image of an occupant's chest, suitable for incorporation in accordance with embodiments; and
[0086] Figures 8B-8D show exemplary graph results, suitable for incorporation in accordance with embodiments.
DETAILED DESCRIPTION OF THE INVENTION
[0087] In the following description, various aspects of the invention will be described. For the purposes of explanation, specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent to one skilled in the art that there are other embodiments of the invention that differ in details without affecting the essential nature thereof. Therefore, the invention is not limited by that which is illustrated in the figure and described in the specification, but only as indicated in the accompanying claims, with the proper scope determined only by the broadest interpretation of said claims.
[0088] It is stressed that the particulars shown hereinabove are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention.
[0089] The configurations disclosed herein can be combined in one or more of many ways to provide first responders or a rescue team with information, such as medical information including the status of the injured occupant(s), for example in a post-crash accident situation. One or more components and methods of the configurations disclosed herein can be combined with each other in many ways. Systems and methods as described herein use an in-cabin sensing module to monitor the medical status of a vehicle occupant as well as to analyze possible injuries and their severity.
[0090] According to some embodiments, the system may be installed, and/or mounted, and/or integrated and/or embedded in a vehicle, specifically in a cabin of the vehicle.
[0091] According to some embodiments, the system comprises one or more imaging devices and optionally or additionally one or more illumination sources configured and enabled to project light in a predefined pattern on the in-cabin scene. The one or more imaging devices are configured to capture a plurality of images of the scene. The plurality of images can contain reflections of the light pattern from one or more objects in the scene (e.g. vehicle occupants). The system may also comprise one or more processors configured and enabled to analyze the plurality of captured images to conduct post-crash medical status analysis, and/or to monitor the medical condition of the vehicle occupants and/or analyze possible injuries of the occupants and their severity as detailed hereinbelow.
[0092] Alternatively or in combination, the one or more processors are configured to analyze a speckle content of the light pattern induced on the scene by the one or more illumination sources. The speckle pattern may be used to obtain the micro-vibrations pattern of the one or more objects in the scene.
[0093] According to one embodiment, when the one or more occupants are humans present at the scene, the micro-vibration signal may be correlated with breathing motion or with heartbeat induced skin motion or a combination thereof. The disclosed system can, therefore, monitor vital signs of the vehicle’s occupants, for example in a post-crash situation.
[0094] Advantageously, the system provides both the vital signs signal of breathing or heartbeats and a plurality of image data of the post-crash cabin, all using a single device or system.
[0095] Moreover, the system provides both post-crash data and pre-crash data that can be used to further assess the details of an injury.
[0096] As used herein, like characters refer to like elements.
[0097] As used herein, the term "light" encompasses electromagnetic radiation having wavelengths in one or more of the ultraviolet, visible, or infrared (IR) portions, including short-wave IR, near IR, and long IR, of the electromagnetic spectrum.
[0098] The term "light pattern" as used herein is defined as the process of projecting a known pattern of pixels onto a scene. The term “pattern” is used to denote the forms and shapes produced by any non-uniform illumination, in particular patterned illumination employing a plurality of pattern features, such as lines, stripes, dots, geometric shapes, etc., having uniform or different characteristics such as shape, size, intensity, etc. As a non-limiting example, a patterned light illumination may comprise multiple parallel lines as pattern features.
[0099] The term “depth map” as used herein is defined as an image that contains information relating to the distance of the surfaces of a scene object from a viewpoint. A depth map may be in the form of a mesh connecting all dots with z-axis data.
[00100] The term “occupant” as used herein is defined as any individual present in a vehicle, including, the driver and any of the passengers. The term also includes non-human occupants.
[00101] The term “vehicle” as used herein refers to a private car, a commercial car, a truck, a bus, a transporter, drivable mechanical equipment or any compartment used to transport humans on roads or tracks.
[00102] The term “normal vehicle operation mode” as used herein refers to any operation of a vehicle, for example while driving or while the vehicle is stopped, prior to an accident of the vehicle.
[00103] The term “pre-crash data” as used herein refers to any type of data, such as visual images or 3D images of objects within a vehicle, obtained prior to the vehicle’s crash.
[00104] The term “post-crash data” as used herein refers to any type of data, such as visual images or 3D images of objects within a vehicle, obtained following the vehicle’s crash.
[00105] Referring now to the drawings, Figure 1A, Figure 1B and Figure 1C illustrate, respectively, a side view of a vehicle 100 and a passenger cabin 120 prior to the vehicle’s accident (Figure 1A), during the accident (Figure 1B) and following the accident (Figure 1C), in accordance with embodiments. The vehicle 100 includes a sensing system 110, configured and enabled to capture and obtain data before the accident (e.g. pre-crash data), during the accident (e.g. during-crash data) and after the accident occurs (post-crash data). Each data set may include visual data (e.g. video images) and stereoscopic data (e.g. depth maps), for example 2D (two dimensional) images and/or 3D (three dimensional) images, and vibration data (e.g. micro-vibrations) of areas and objects within the vehicle 100. Each data set may be analyzed, for example in real-time or close to real-time, to assess the medical status of one or more occupants following the vehicle accident, in accordance with embodiments.
[00106] Specifically, the sensing system 110 is configured to monitor areas and objects within the vehicle before an accident occurs, during the accident and following the accident to obtain various types of sensory data of the areas and objects (e.g. occupants), and analyze the obtained data, using one or more processors, to extract visual data and depth data and to detect speckle pattern dynamics for identifying vibrations (e.g. micro-vibrations), and use the extracted data to estimate the medical status of the occupants. Non-limiting examples of such objects may be one or more of the vehicle's occupants such as driver 111 or passenger(s) 112, in accordance with embodiments.
[00107] More specifically, there are provided methods and systems configured to analyze the captured sensory data to detect the occupants' pose and location before an accident occurs, as well as their trajectory 118 resulting from the vehicle crash and finally the accident results (e.g. post-crash data) including medical information such as information related to the occupants' wounds and heartbeat. In some cases, the medical information may be transmitted to external units and forces, such as a rescue team, for example in real-time, in accordance with embodiments.
[00108] According to some embodiments, the sensing system 110 may be connected to or may be in communication with the vehicle’s units, such as the vehicle’s sensors, for example an impact detection sensor 105 and/or airbag control unit (ACU) 108, and/or the Vehicle Computing System (VCS) 109 and/or a collision avoidance system (CAS) 105. For example, the sensing system may receive one or more signals including additional sensory data from the CAS 105. The sensing system 110 may be in communication with other types of sensors of the vehicle, such as an accelerometer, and may fuse and use the additional data with the extracted data to estimate the medical status of the occupants.
[00109] Specifically, according to some embodiments, the sensing system 110 may be connected to or may be in communication, such as wireless communication, with the vehicle’s CAS, which is configured to prevent or reduce the severity of a collision. An example of such a CAS may include radar (all-weather) and/or laser (LIDAR) and camera (employing image recognition) to detect an imminent crash. The sensing system 110 may combine the data received from the CAS with the extracted data to estimate the medical status of the occupants, for example following the vehicle’s collision.
[00110] According to some embodiments, the sensing system 110 may be mounted at the cabin’s ceiling or roof, for example in the front section 115 of the vehicle’s roof, in a way that allows a full cabin view. According to some embodiments, the sensing system 110 may be installed, mounted, integrated and/or embedded in a vehicle, specifically in a cabin of the vehicle such that the scene is the cabin interior 120 and the object(s) present in the cabin may include, for example, one or more of: vehicle occupant(s) (e.g. a driver(s), passenger(s), pet(s), etc.); one or more objects associated with the cabin (e.g. seat, door, window, headrest, armrest, etc.); items associated with one or more of the vehicle occupant(s) (e.g. an infant seat, a pet cage, a briefcase, a toy, etc.) and/or the like.
[00111] It is stressed that one of the main advantages of mounting the system 110 in the vehicle’s roof or ceiling is that in this location in the vehicle the system 110 is less exposed to damages and to strong impacts during accidents.
[00112] According to some embodiments, the systems and methods are configured to generate an output, such as one or more output signals including medical information, such as medical status in real-time, of the one or more occupants following the vehicle’s accident.
[00113] According to some embodiments, the sensing system 110 may include one or more sensors, for example of different types, such as a 2D imaging sensor and/or a 3D imaging sensor (e.g. stereoscopic camera) and/or an RF imaging sensor and/or a vibration sensor (micro-vibration), structured light sensor, ultrasonic sensor and the like to capture sensory data of the vehicle cabin, as will be further illustrated in Figure 3B. Specifically, the 2D imaging sensor may capture images of the vehicle cabin, for example from different angles, and generate original visual images of the cabin. In an embodiment, the sensing system 110 may include an imaging sensor configured to capture 2D and 3D images of the vehicle cabin and at least one processor to analyze the images to generate a depth map of the cabin.
[00114] In another embodiment, the system 110 may detect vibrations (e.g. micro-vibrations) of one or more objects in the cabin using one or more vibration sensors and/or analyzing the captured 2D or 3D images to identify vibrations (e.g. micro-vibrations) of the objects.
[00115] According to another embodiment, the system 110 may further include a face detector sensor and/or face detection and/or face recognition software module for analyzing the captured 2D and/or 3D images.
[00116] In an embodiment, the system 110 may include or may be in communication with a computing module comprising one or more processors configured to receive the sensory data captured by the system's 110 sensors and analyze the data according to one or more computer vision and/or machine learning algorithms to yield medical information including an estimation of the medical condition of the one or more occupants in the vehicle cabin as will be illustrated hereinbelow.
[00117] Specifically, in accordance with embodiments, the one or more processors are configured to combine various types of sensory data (e.g. 2D data such as captured 2D images and 3D data such as depth maps) of the vehicle cabin along a period of time, such as a few seconds or minutes (e.g. 1, 2, 3, 4-100 seconds and more), before an accident, during the accident and following the accident to yield the medical information relating to the one or more occupants in the vehicle cabin.
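By way of a non-limiting illustration only, keeping such a time window available after an impact may be sketched as a rolling buffer that is frozen when an impact signal arrives. The class name, buffer length and interfaces are hypothetical and are not the claimed implementation.

```python
from collections import deque

class PreCrashBuffer:
    """Keep the last few seconds of cabin frames; freeze the buffer when an
    impact signal arrives so the pre-crash data remains available for analysis."""
    def __init__(self, fps, seconds=5.0):
        self._frames = deque(maxlen=int(fps * seconds))
        self._frozen = None

    def push(self, frame):
        if self._frozen is None:           # keep filling until the crash
            self._frames.append(frame)

    def on_impact_signal(self):
        self._frozen = list(self._frames)  # snapshot of the pre-crash window

    @property
    def pre_crash_frames(self):
        return self._frozen
```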
[00118] Advantageously, system 110 provides merely the minimal hardware, such as one or more sensors and imagers, for capturing visual and depth images of the vehicle 100 interior. In some cases, an interface connecting to system 110 may supply the necessary power and transfer the acquired data to the vehicle's computing and/or processing units, such as VCS 109 and/or ACU 108, where all the processing is carried out, taking advantage of their computing power. Thus, in accordance with some embodiments, installing system 110 becomes very easy and can use off-the-shelf components.
[00119] According to some embodiments, the sensing system 110 comprises one or more illuminators, such as an illuminator 103, and a sensor 101. The illuminator 103 creates light such as a pattern of light, schematically indicated here as rays of spots 102. In some cases, the created light pattern may cover all or selected portions of the occupants of vehicle 100, such as passenger 203 and driver 207 as shown in Figure 2A.
[00120] According to some embodiments, sensor 101 may be selected from a group consisting of: a Time of Flight (ToF) image sensor; a camera; RF (radio frequency) device; stereoscopic camera, structured light sensor, ultrasonic sensor.
[00121] According to some embodiments, the sensor 101 may be an image sensor equipped with a fish-eye lens allowing coverage of the full cabin. In some cases, to cover all possible positions of the occupants in a vehicle, a typical coverage of more than 150 degrees may be used. The sensor may be a CMOS sensor (Complementary Metal Oxide Semiconductor) or CCD sensor (charge-coupled device) with a resolution such as VGA, 2MP, 5MP, or 8MP. The sensor and lens can work in the visible range or IR range.
[00122] According to some embodiments, the illuminator 103 creates a light pattern that may cover one or more portions or the whole cabin 120. An example of such a pattern is a spot pattern. Figure 2A shows such an example of a spot pattern 206 covering the front and back seat in a standard passenger’s car. The pattern can be a spot pattern, as shown in Figure 2A, lines, a grid, or any other preconfigured shape. Advantageously, having the light concentrated in small regions, such as the spots shown in Figure 2A, may improve the signal-to-background ratio and hence provide a clearer speckle signal.
[00123] Figure 2B is a flow diagram 296 illustrating steps of detecting an occupancy state, including identifying objects, such as hidden objects, in a vehicle cabin and providing information on the identified objects, according to one embodiment. As shown in Figure 2B, a sensing system 280 includes one or more illumination units, such as an illuminator 274 which provides structured light with a specific illumination pattern (e.g., spots or strips or other patterns) to objects 271 and 272 located, for example, at the rear section of the vehicle cabin 100, and one or more sensors, such as sensor 276, which capture an image of the objects 271 and 272 in the vehicle cabin 100.
[00124] According to one embodiment, the sensing system 280 may include one or more processors such as processor 252. The processor 252 may be in wired or wireless communication with devices and other processors. For example, the output from processor 252 may trigger a process within the processor 252 or may be transmitted to another processor or device to activate a process at the other processor or device.
[00125] According to another embodiment, the processor 252 may be external to the sensing system 280 and may be embedded in the vehicle or may be part of the vehicle's processing unit.
[00126] In one embodiment, the processor 252 may instruct the illuminator 265 to illuminate specific areas in the vehicle cabin.
[00127] According to some embodiments, the sensing system 280 may further include an RF transmit-receive unit 275, such as an RF transceiver, configured to generate and direct RF beams towards the objects 271 and 272 using RF antennas 275, and receive the reflected RF beams to provide an RF image of the vehicle cabin 100 and objects 271 and 272. The captured images, including for example the RF signals and reflected pattern images, are provided to the processor 252 to generate a depth map representation 291 and 2D/3D segmentation of the vehicle cabin 100 and/or the objects 271 and 272.
[00128] According to one embodiment, the sensing system may include a sound sensor 269, such as one or more ultrasound sensors and/or a directional microphone, configured and enabled to detect the presence of a person and/or vital signs, to locate, for example, the mouth of a person who is speaking, and to generate data inputs to detect the location of one or more objects in the vehicle. According to some embodiments, the processor 252 may further receive and/or generate additional data 279 including, for example, information on the vehicle state 278, and speed and acceleration 277 as captured by vehicle sensors 273. The sensory data 282 and the additional data 279 are analyzed by the processor 252.
[00129] According to some embodiments, the sensory data 282 and the additional data 279 are analyzed using a multitude of computer vision and machine learning algorithms. These may include, but are not limited to, a Convolutional Neural Network detecting people, networks that specifically detect the face, hands, torso and other body parts, networks that can segment the image and specifically the passengers in the image based on the 2D and 3D images, algorithms that can calculate the volume of objects and people, and algorithms that can determine if there is motion in a certain region of the car.
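By way of a non-limiting illustration only, the last capability listed above, determining whether there is motion in a certain region of the car, may be sketched with simple frame differencing. The thresholds and names are hypothetical; the convolutional and segmentation networks mentioned above are not shown here.

```python
import numpy as np

def region_has_motion(prev_gray, curr_gray, region,
                      diff_threshold=12, changed_fraction=0.02):
    """Decide whether a cabin region (e.g. one seat) shows motion by comparing
    two consecutive grayscale frames."""
    y0, y1, x0, x1 = region
    diff = np.abs(curr_gray[y0:y1, x0:x1].astype(np.int16)
                  - prev_gray[y0:y1, x0:x1].astype(np.int16))
    return float(np.mean(diff > diff_threshold)) > changed_fraction
```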
[00130] The analysis outputs multiple types of data on objects in the vehicle cabin, such as information on occupants or inanimate objects, for example a box, a bag of groceries, or an empty child seat. Specifically, in some cases the data may include: detected body parts of objects 289, and/or motion of objects 288, and/or volume of objects 287, and/or occupancy state based on deep learning 286 in the vehicle cabin.
[00131] According to some embodiments, the multiple types of data may include depth data 294 including one or more depth images, obtained for example by the sensor 276. The depth data may be specifically used to detect body parts of the passengers and segment the body or body parts in the image. In some cases, the depth data may also be used to determine the pose of a person, such as sitting upright, leaning to the side or leaning sideways.
[00132] According to some embodiments, the multiple types of data may include prior knowledge 292.
[00133] Non-limiting examples of prior knowledge may include: information on the vehicle units, such as door/window/seatbelt state, and/or prior information on objects and/or passengers in the vehicle; and/or rules or assumptions, such as physical assumptions or seat transition rules relating to the likelihood of movements inside the vehicle, for example to rule out unlikely changes in the occupancy prediction or alternatively confirm expected changes in the vehicle occupancy state. Specific examples of the physical assumptions may include, for example, a high probability that the driver seat is occupied in a driving vehicle (non-autonomous vehicle) and/or a low probability that a passenger may move to one of the rear seats in a predetermined short time, or from one seat to another seat in a single frame.
[00134] At the following step 293 the multiple types of data are fused to determine, at step 294, the number of occupied seats and/or to detect and/or identify the position and attributes of objects (e.g. passengers) such as objects 271 and 272 (e.g. whether a passenger is sitting straight, leaning to the side or leaning forward). For example, the detection may include identifying one or more passengers, such as a complete body or a body portion such as a face or hand, at the rear section of the vehicle.
[00135] According to one embodiment, the multiple types of data are fused by a fusion algorithm which outputs the best decision considering the reliability of each data input (e.g. the motion of objects 288, and/or volume of objects 287, and/or occupancy state based on deep learning 286, face detection 290, and prior knowledge 292).
[00136] Specifically, the fusing algorithm includes analyzing the fused data to yield a stochastic prediction model (e.g. a Markov chain, for example in the form of a Markov matrix) of one or more predicted occupancy state probabilities. The prediction model is used to continuously update over time the probability of one or more current occupancy states (e.g. a probability vector) to yield an updated occupancy state, e.g. to determine in real time the location of objects 271 and 272 and/or the number of objects in the vehicle cabin. In some cases, the predicted state and the current state are combined by weighting their uncertainties using, for example, Linear Time-Invariant (LTI) methods such as Infinite Impulse Response (IIR) filters. Once the body or body parts are detected, the objects are tracked at step 295. For example, the objects may be tracked in the 2D image or the depth image using conventional trackers such as correlation trackers and edge trackers.
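By way of a non-limiting illustration only, one fusion step of the kind described above may be sketched as follows: the previous occupancy belief is propagated through a seat-transition (Markov) model, weighted by the current-frame observation likelihoods, and smoothed with a first-order IIR filter. The two-state per-seat model and all probability values are illustrative assumptions, not the claimed fusion algorithm.

```python
import numpy as np

def update_occupancy_belief(belief, transition_matrix, observation_probs, smoothing=0.3):
    """One fusion step: Markov prediction, observation weighting, IIR smoothing."""
    predicted = transition_matrix.T @ belief        # stochastic prediction model
    fused = predicted * observation_probs           # weight by current observations
    fused /= fused.sum() + 1e-9                     # renormalize to a probability vector
    return (1.0 - smoothing) * fused + smoothing * belief

# Illustrative two-state per-seat model: [empty, occupied]
# belief = np.array([0.5, 0.5])
# T = np.array([[0.99, 0.01],    # an empty seat rarely becomes occupied between frames
#               [0.02, 0.98]])   # an occupied seat rarely empties between frames
# obs = np.array([0.2, 0.8])     # per-frame detector likelihoods
# belief = update_occupancy_belief(belief, T, obs)
```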
[00137] According to one embodiment, based on determined occupancy state the processor 252 may output at step 295 data or signals which may be used to provide information and/or for controlling devices, which may be remote or integral to the vehicle, for example, an electronic device such as an alarm, alert or a lighting may alert on out-of-position and accordingly activate an occupant protection apparatus (e.g. airbag) or other devices. The device may be controlled, such as activated or modulated, by the signal output according to embodiments.
[00138] In some cases, the output signals may include seat belt reminder (SBR), out of position indication (OOP) for example for airbags suppression and driver monitoring system (DMS) for driver’s alert.
[00139] Alternatively or in combination, once the objects are identified using a neural network, then the neural network is trained to output a number of points, such as a predefined number of points corresponding to certain body parts or a skeleton of lines. At the following step, these points and lines can be tracked in time using conventional tracking algorithms such as Kalman filters. It is stressed that the location of these points may be estimated by the tracker even if the tracking is lost, for example when the full body or parts of the body are obstructed, e.g. by the front seats.
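By way of a non-limiting illustration only, tracking one such body point through occlusion may be sketched with a constant-velocity Kalman filter, so that the point's location continues to be estimated when no measurement is available. The noise covariances are assumed values and this is not the claimed tracker.

```python
import numpy as np
import cv2

def make_keypoint_tracker(x0, y0):
    """Constant-velocity Kalman filter for one body keypoint (x, y)."""
    kf = cv2.KalmanFilter(4, 2)                      # state: x, y, vx, vy
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[x0], [y0], [0], [0]], dtype=np.float32)
    return kf

def track_step(kf, measurement=None):
    """One frame: predict, then correct only if the keypoint was detected."""
    predicted = kf.predict()
    if measurement is not None:                      # keypoint visible this frame
        kf.correct(np.array(measurement, dtype=np.float32).reshape(2, 1))
    return float(predicted[0, 0]), float(predicted[1, 0])  # estimated location
```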
[00140] According to some embodiments, the body parts of the passengers may be detected using computer vision algorithms and/or neural networks that are trained to detect persons, such as a human contour, for example a complete human body and/or specific human body parts. Specifically, the face of a person may be detected by well-known detection or recognition methods. Non-limiting examples of such methods include the Viola-Jones algorithm or SSD neural networks. Body parts may be detected by a neural network that is specifically trained to detect the body pose. Non-limiting examples of such methods include OpenPose or OpenPose Plus methods.
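By way of a non-limiting illustration only, Viola-Jones style face detection may be sketched with OpenCV's bundled Haar cascade as follows. This is a minimal sketch; the SSD- and OpenPose-style neural networks named above are not shown, and the detector parameters are assumed values.

```python
import cv2

# Load OpenCV's pre-trained frontal-face Haar cascade (a Viola-Jones detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_frame):
    """Return bounding boxes (x, y, w, h) of faces found in a grayscale frame."""
    return face_cascade.detectMultiScale(gray_frame, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(30, 30))
```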
[00141] Figure 2C illustrates a more detailed block diagram of the vehicle 100. Notably, the vehicle 100 can include more or fewer components than those shown in Figure 2B. For example, the vehicle 100 can include a wired or a wireless system interface, such as a USB interface or Wi-Fi interface, to connect the sensing system 110 and the vehicle's computing unit 205 with vehicle systems 200 such as vehicle systems 202-208, vehicle sensors 201 such as vehicle sensors 212-218, seat locations 232-238 and respective seat sensors 232'-238'. The hardware architecture of Figure 2B represents one embodiment of a representative vehicle comprising a sensing system 110 configured to monitor and identify automatically objects and areas in the vehicle, such as hidden objects. In this regard, the vehicle of Figure 2C also implements a method for monitoring objects and/or passengers to identify the attributes of the objects and/or passengers also in cases where, due to momentary poor visibility or false detection, they may not otherwise be identified. The identification may be based on various types of information received, for example in real time, from various types of sensors in the vehicle, such as dedicated sensors embedded in the sensing system 110 and existing vehicle sensors embedded in various locations in the vehicle 100.
[00142] As shown in Figure 2C, the vehicle 100 includes a vehicle computing unit 205 which controls the vehicle's systems and sensors, and a sensing system 110 including a control board 250, which controls the sensing system 110 and which may be in communication with the vehicle's units and systems such as the vehicle computing unit 205 and the vehicle systems 200. In some cases, the control board may be included in the computing unit 205. Vehicle 100 also comprises passenger seat positions 232-238, including a driver's seat position 232, a front passenger seat position 234, and left and right rear passenger seat positions 236 and 238. Although four seat positions are shown in Figure 2B for illustrative purposes, the present invention is not so limited and may accommodate any number of seats in any arrangement within the vehicle. In one implementation, each passenger seat position has automatically adjustable settings for seat comfort, including but not limited to, seat height adjustment, fore and aft adjustment position, and seatback angle adjustment. Each passenger seat position may also include one or more respectively and separately configurable sensors 232'-238' for controlling the passenger's seats and windows and environmental controls for heating, cooling, vent direction, and audio/video consoles as appropriate. In some cases, the passengers may have communication devices 242-248, one for each passenger position, indicating that each passenger in vehicle 100 is carrying a communication device. Although the exemplary embodiment illustrated in Figure 2C shows each passenger carrying a communication device, various implementations envision that not all passengers need to carry a device.
[00143] According to one embodiment, the sensing system 110 may include a plurality of sensors of different modalities. For example, the sensing system 110 may include a vibration sensor 241, and/or an acceleration sensor 242, and/or a 3D sensor 243, and/or an RF sensor 244, and/or a video camera 245, such as a 2D camera.
[00144] In some cases, the sensing system may include an image sensor for mapping a speckle field generated by each spot formed on the vehicle surface and a light source, such as a coherent light source, adapted to project a structured light pattern, for example a multi-beam pattern, on the vehicle cabin. In accordance with embodiments, a processor, such as processor 252, is configured to process speckle field information received by the image sensor and derive surface vibration information to identify one or more objects in the cabin, including for example motion and micro-vibrations of one or more of the detected objects.
[00145] The projected structured light pattern may be constructed of a plurality of diffused light elements, for example a dot, a line, a shape and/or a combination thereof, which may be reflected by one or more objects present in the scene and captured by an imaging sensor integrated in the unified imaging device.
[00146] According to some embodiments, the control unit 250 may be connected to the vehicle systems 202-208, the vehicle's sensors 110, the sensing system sensors, the seats 232-238 and/or the seat sensors 232'-238' via one or more wired connections. Alternatively, control unit 250 may be connected to the vehicle systems 202-208, the vehicle's sensors 110, the sensing system sensors, the seats 232-238 and/or the seat sensors 232'-238' through a wireless interface via wireless connection unit 252. Wireless connection 252 may be any wireless connection, including but not limited to Wi-Fi (IEEE 802.11x), Bluetooth, or other known wireless protocols.
[00147] In accordance with embodiments, the computing unit 205 is also preferably controllably connected to the vehicle systems 202-208, the vehicle's sensors 110, the sensing system sensors, the seats 232-238 and/or the seat sensors 232'-238'. Vehicle systems 202-208 may be connected through a wired connection, as shown in Figure 2C, or by other means.
[00148] The vehicle systems may include, but are not limited to, engine tuning systems, engine limiting systems, vehicle lights, air-conditioning, multimedia, GPS/navigation systems, and the like.
[00149] The control board 250 may comprise one or more of a processor 252, memory 254 and communication circuitry 256. Components of the control board 250 can be configured to transmit, store, and/or analyze the captured sensory data, as described in further detail herein.
[00150] The control unit may also be connected to a user interface 260. The user interface 260 may include input devices 261, output devices 263, and software routines configured to allow a user to interact with the control board. Such input and output devices respectively include, but are not limited
to, a display 268, a speaker 266, a keypad 264, a directional pad, a directional knob, a microphone 265, a touch screen 262, and the like.
[00151] The microphone 265 facilitates the capturing of sound (e.g. voice commands) and converting the captured sound into electrical signals. In some cases, the electrical signals may be used by the onboard computer 104 to interface with various applications 352.
[00152] The processor 252 may comprise a tangible medium comprising instructions of a computer program; for example, the processor may comprise a digital signal processing unit, which can be configured to analyze and fuse data such as sensory data received from the various sensors using multiple types of detection methods. In some cases, the processed data output can then be transmitted to the communication circuitry 256, which may comprise a data encryption/transmission component such as Bluetooth™. Once encrypted, the data output can be transmitted via Bluetooth to the vehicle computing unit and/or the vehicle user interface and may be further presented to the driver in the vehicle. Alternatively or in combination, the output data may be transmitted to the monitoring unit interface.
[00153] Figure 3A shows a schematic diagram of a sensing system 310, configured and enabled to capture sensory data of a scene such as a vehicle cabin 320, including one or more occupants (e.g. driver 311 and/or passenger 312) and analyze the sensory data to estimate the medical condition of the one or more occupants, for example following an accident of the vehicle, in accordance with embodiments. In some cases, the sensing system 310 may be the system 110 of Figures 1A and 1B.
[00154] System 310 includes a sensing module 300 and a control module 315. The two modules can reside in the same package or can be separated into two different physical modules. As an example, for a split configuration, the sensing module 300 can reside in the ceiling of the car, while the control module 315 can reside behind the dashboard. The two modules are connected by communication lines and/or may be in communication electrically and/or wirelessly, for example through a dedicated connection such as a USB connection, a wireless connection, or any connection known in the art.
[00155] In one embodiment, the sensing system 310 is connected to the vehicle’s power. In an alternative embodiment, system 310 comprises a battery, optionally a rechargeable battery, which allows operation even when the vehicle’s power is down. Such a design allows the system to keep operating even if the vehicle power fails during an accident. In a further embodiment, the battery is chargeable from the car’s battery or from the car’s alternator.
[00156] The sensing system 310 may also be equipped with, or have an interface to, an impact sensor 325 configured to detect an imminent crash. A non-limiting example of such an impact sensor is the impact sensor that exists in cars and is responsible for airbag deployment. In such a configuration, the notification of an impact is transferred to the system 310 by an electronic signal from the impact sensor. The electronic signal may be transferred directly or by a communication system such as a vehicle’s CAN bus interface. In some cases, the impact sensor may be or may be included in a CAS as mentioned herein with respect to Figure 1A.
[00157] Alternatively or in combination, the sensing system 310 is equipped with a built-in impact sensor such as an accelerometer 312. When an acceleration (or rather a deceleration) above a certain threshold is detected, the system 310 considers the impact signal as being provided, and that a collision will soon occur.
[00158] In another embodiment, an impact may be determined by analyzing data captured from the vehicle cabin, for example using the sensing module 300. As an example, the system 310 can monitor the video motion of the driver, passenger and objects in the cabin. When rapid or hectic movement is detected, beyond a predefined threshold, the system concludes that an impact is occurring. Such analysis can be based on computer vision algorithms such as, but not limited to, optical flow or tracking, as sketched below.
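By way of non-limiting illustration, the following Python sketch shows one way such an optical-flow-based check could be implemented. OpenCV's dense (Farneback) optical flow is used here as an example implementation choice, and the threshold value is an illustrative assumption rather than a value specified by the embodiments above.

```python
import cv2
import numpy as np

# Illustrative threshold on the mean per-pixel displacement (pixels/frame);
# a real system would tune this value empirically.
HECTIC_MOTION_THRESHOLD = 25.0

def impact_suspected(prev_gray: np.ndarray, curr_gray: np.ndarray) -> bool:
    """Flag rapid in-cabin motion between two consecutive grayscale frames."""
    # Dense Farneback optical flow: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
    return float(magnitude.mean()) > HECTIC_MOTION_THRESHOLD
```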
[00159] In accordance with embodiments, the sensing module 300 comprises an image sensor 301 and at least one illuminator 303, which can be configured to capture the sensory data, including one or more images of the scene (e.g. car cabin), and further transmit the captured data to the control module, which extracts visual data, depth map(s) (e.g. dense depth map(s)) and vibrations (e.g. micro-vibration data) of the scene, as described in further detail herein.
[00160] The image sensor 301 is equipped with a lens 302. The lens can be a fish-eye lens covering the entire cabin. The illuminator 303 creates a light pattern illuminating the scene 320. The light pattern can be optionally designed to cover approximately the same field of view as the image sensor.
[00161] The image sensor 301 can be a CMOS or CCD sensor of various resolution formats. For example, it can be a VGA format (640x480 pixels) or a 2 MPixel format (1920x1080).
[00162] In order to create the illumination pattern, the illuminator 303 may include a light source 324 that may be, as an example, a coherent laser light source. The illuminator 303 may also be equipped with a pattern creation optics, which may be as an example, a diffractive optical element (DOE) 326. Other examples for pattern creation may be a mask, or splitting mirrors and diffuser.
[00163] The light source 324, for instance, may include electromagnetic energy of wavelengths in an optical range or portion of the electromagnetic spectrum including wavelengths in a human-visible range or portion thereof (e.g., approximately 390 nm-750 nm) and/or wavelengths in the near-infrared (NIR) (e.g., approximately 750 nm-1400 nm) or infrared (e.g., approximately 750 nm-1 mm) portions and/or the near-ultraviolet (NUV) (e.g., approximately 300 nm-400 nm) or ultraviolet (e.g., approximately 122 nm-400 nm) portions of the electromagnetic spectrum. The particular wavelengths are exemplary and not meant to be limiting. Other wavelengths of the electromagnetic range may be employed. In some embodiments, the illuminator 303 wavelength may be any one of about 830 nm, about 840 nm, about 850 nm, or about 940 nm.
[00164] In a particular embodiment, the illumination is performed in the near infra-red (NIR, about 750-1400nm) spectral range, in order to prevent the pattern from being visible to the naked eye of an occupant.
[00165] In some cases, the image sensor 301 can be equipped with a band-pass spectral filter preventing it from capturing light in wavelengths not matching those of the illuminator. When such a band-pass filter is added to the image sensor 301, the signal to background ratio improves.
[00166] In some aspects of the embodiments, both the image sensor 301 and the illuminator 303 (e.g. the sensing module 300) are connected to, or may be in communication with, one or more processors such as processor 304 located in the control module 315.
[00167] In several embodiments, the processor 304 may operate the illuminator 303 and the image sensor 301 according to a monitoring policy. As an example, the processor 304 can operate the image sensor 301 at a frame rate of 30 frames per second (FPS) and the illuminator 303 as a constant light source. As another example, the processor 304 can operate the image sensor 301 at 30 FPS while the illuminator 303 is pulsed so that only every second frame is captured with a pattern, while the remaining frames are captured without a pattern. For example, one option may include capturing one clean frame (i.e. a frame which does not include the reflected pattern) and then one patterned frame; alternatively, five clean frames may be captured and then one pattern frame, and so on, as sketched below. The latter mode may be desirable as it provides clean image frames for deep learning algorithms alongside pattern frames that can be used for vibration monitoring and vital signs extraction. Other options, including a different ratio between the number of pattern frames and clean frames, may be used.
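The following Python sketch illustrates one possible way to schedule such an interleaved illumination policy; the 5:1 clean-to-pattern ratio matches one of the options mentioned above, while the camera and illuminator interfaces shown in the usage comment are assumptions for illustration only.

```python
# Decide, frame by frame, whether the illuminator pattern should be on.
def frame_schedule(clean_per_pattern: int = 5):
    """Yield True when the next frame should be captured with the pattern on."""
    index = 0
    while True:
        yield (index % (clean_per_pattern + 1)) == clean_per_pattern
        index += 1

# Example usage at 30 FPS (hardware API names are assumed):
# for pattern_on in frame_schedule(5):
#     illuminator.set_pattern(pattern_on)
#     frame = camera.grab()
```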
[00168] In an embodiment, the processor 304 is configured to extract any one or more or a combination of images such as video images, depth data and vibration data from the video stream.
[00169] In an embodiment, extracting video images is performed by selecting video frames in which the illumination source 324 was off and no light pattern was projected onto the scene. In another embodiment, the video images can include the light pattern created by the illuminator.
[00170] In an embodiment, extracting depth data is performed by using a structured light technique. The light pattern induced by the illuminator 303 is captured by the image sensor 301. The processor 304 is configured to receive the captured images and analyze the captured images to identify the displacement of each of the pre-known light pattern elements thereby calculating the depth of the object at this location.
[00171] The depth can be estimated for each pattern element by using the formula in equation 1 below:

Equation 1:

$$ z = \frac{B \cdot f}{D} $$

where z denotes the depth estimation, B denotes the baseline distance between the image sensor and the pattern illuminator, f denotes the camera module lens’s focal length, and D denotes the disparity, i.e. the distance by which the pattern element has shifted across the image plane.
[00172] Alternatively or additionally, the depth can also be obtained by employing a look-up table mechanism. To employ such a mechanism, the system needs to undergo a calibration process in which the light pattern is projected onto a screen positioned at several predefined distances. For each distance, the pattern is recorded and stored in memory such as memory module 305. Then, for each small region that includes some light pattern element observed by the camera during operation, a correlation-based algorithm is employed to assess at which screen distance the closest matching pattern element occurs. The distance at which the correlation is the highest is chosen as the estimated distance for this region.
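The two depth-recovery routes described above may be sketched as follows; this is a minimal illustration assuming the disparity is measured in pixels, the focal length is expressed in pixels, and the calibration data is a simple mapping from screen distance to the pattern recorded at that distance.

```python
import numpy as np

def depth_from_disparity(disparity_px: float, baseline_m: float,
                         focal_px: float) -> float:
    """Equation 1: z = B * f / D."""
    return baseline_m * focal_px / disparity_px

def depth_from_lookup(region: np.ndarray, calibration) -> float:
    """Choose the calibration distance whose stored pattern best matches the
    observed region, using normalized cross-correlation as the score.

    `calibration` maps a screen distance (in meters) to the pattern image
    recorded at that distance during the calibration process."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())
    return max(calibration, key=lambda dist: ncc(region, calibration[dist]))
```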
[00173] It is noted that the number of extracted depth points need not match the number of pixels in the image. Rather, it is related to the light pattern induced by the illuminator. In accordance with embodiments, the number of depth points is defined by the number of distinct light pattern elements in the projected pattern. As an example, this can be the number of spots, in a pseudo-random spot pattern, or a small factor of this number resulting from using a cluster of 2-3 points as the distinct pattern.
[00174] In another exemplary embodiment, the processor 304 is also configured to extract vibration such as micro-vibration information from the scene. The extraction of the micro-vibration is performed
by analyzing the speckle content of at least some of the light (e.g. spots) pattern elements in the captured images. The changes over time of the speckle pattern can be indicative of micro-vibrations, i.e. very small and subtle movements that may be too minor to be detected by analyzing variations in the depth data or in the video images which do not include the light pattern elements.
[00175] In some embodiments, the speckle pattern analysis may include detecting the changes to the speckle pattern by measuring a temporal standard deviation in the intensity of the respective reflected diffused light element over multiple consecutive captured images to identify a temporal distortion pattern. For example, assuming I_i is the gray level intensity of a certain pixel depicting a light pattern element and/or a part thereof in image number i, one or more processors such as processor 304 may calculate the temporal standard deviation according to equation 2 below.

Equation 2:

$$ \sigma_n = \sqrt{\frac{1}{k}\sum_{i=n-k}^{n}\left(I_i - \bar{I}\right)^2} $$

where n denotes the current image, k denotes the number of previous images, and \bar{I} denotes the mean intensity over those images.
[00176] The analysis may further include comparing the result of the temporal standard deviation to a predefined threshold value to determine whether a micro-vibration occurred. In case the temporal standard deviation value exceeds the predefined threshold, it is determined, for example by the processor 304, that the magnitude of the micro-vibrations in the specific light pattern element increased. On the other hand, in a case where the temporal standard deviation value does not exceed the predefined threshold, the processor 304 may determine that there is no change in the magnitude of the micro-vibrations. In certain embodiments, the predefined threshold value may be fixed and set in advance. Optionally, the predefined threshold value can be dynamically adjusted according to the value of the temporal standard deviation measured over time.
[00177] Optionally, in order to improve immunity to noise which may affect the intensity level of the speckle pattern and increase the Signal to Noise Ratio (SNR) of the intensity of the speckle pattern, the temporal standard deviation may be averaged over multiple pixels (e.g. 5x5 pixels) of each spot.
[00178] Optionally, in order to improve immunity to noise which may affect the intensity level of the speckle pattern and increase the Signal to Noise Ratio (SNR) of the intensity of the speckle pattern, the temporal standard deviation may be averaged over multiple speckle patterns of diffused light elements reflected from the same surface and portrayed in the same region in the captured images.
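A minimal Python sketch of this micro-vibration test, assuming the frames are available as a NumPy array of gray-level images, is shown below; the window size, history length and threshold are illustrative assumptions.

```python
import numpy as np

def micro_vibration_detected(frames: np.ndarray, row: int, col: int,
                             k: int = 30, window: int = 5,
                             threshold: float = 2.0) -> bool:
    """frames: array of shape (n_frames, height, width) of gray-level intensity.

    Computes the temporal standard deviation (Equation 2) of the intensity
    around a spot over the last k frames, spatially averaged over a small
    window to improve the SNR, and compares it to a threshold."""
    half = window // 2
    patch = frames[-k:, row - half:row + half + 1, col - half:col + half + 1]
    sigma = patch.std(axis=0).mean()  # temporal std per pixel, then averaged
    return float(sigma) > threshold
```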
[00179] According to another embodiment, the changes to the speckle pattern may be detected, for example by the processor 304, by analyzing the speckle pattern for lateral translation which is indicative of a tilt of the reflecting object with respect to the image sensor. The tilt which may be very minor, for example, on a scale of micro-radians, may be derived from the translational velocity of one or more speckle pattern point(s) over time (consecutive frames). The lateral speckle pattern translation may be derived from analysis of the diffused light pattern element(s) depicted in a plurality of consecutive captured images according to equation 3 below.
Equation 3:

$$ v = \frac{\partial I / \partial t}{\partial I / \partial x} $$

where I denotes the intensity of the pixel in the captured image in gray levels, differentiated with respect to time t or position x.
The angular velocity of a change of a certain pixel (i,j) with respect to its neighboring pixels in the i direction in captured image n may be expressed by equation 4 below.

Equation 4:

$$ \omega_{i,j}^{(n)} = \frac{I_{i,j}^{(n)} - I_{i,j}^{(n-1)}}{I_{i+1,j}^{(n)} - I_{i-1,j}^{(n)}} $$

The angular velocity of a change of a certain pixel (i,j) may be expressed similarly in the j direction. The result of the angular velocity is expressed in pixel per frame units.
[00180] Optionally, the intensity I_{i,j} of the pixel (i,j) may be normalized, for example by the processor 304, over time to compensate for non-uniformity in the intensity I_{i,j} due to spot intensity envelope effects. For example, the intensity I_{i,j} may be normalized by applying a sliding temporal window for averaging the intensity I_{i,j} of one or more pixels (i,j) in the consecutive captured images. In another example, the processor 304 may smooth the intensity I_{i,j} in the time domain by applying an infinite impulse response to I_{i,j} to produce a smoothed intensity \bar{I}_{i,j} as expressed in equation 5 below.

Equation 5:

$$ \bar{I}_{i,j}^{(n)} = \alpha\, I_{i,j}^{(n)} + (1-\alpha)\, \bar{I}_{i,j}^{(n-1)} $$

where \alpha denotes a small factor, for example, 0.05.

The intensity I_{i,j} of one or more of the pixels (i,j) may be normalized by dividing it by the average intensity measured over time in a plurality of consecutive captured images to produce a normalized intensity \hat{I}_{i,j} as expressed in equation 6 below.

Equation 6:

$$ \hat{I}_{i,j} = \frac{I_{i,j}}{\bar{I}_{i,j}} $$

Replacing the intensity I_{i,j} in equation 4 with the normalized intensity \hat{I}_{i,j}, the angular velocity may be expressed by equation 7 below.

Equation 7:

$$ \omega_{i,j}^{(n)} = \frac{\hat{I}_{i,j}^{(n)} - \hat{I}_{i,j}^{(n-1)}}{\hat{I}_{i+1,j}^{(n)} - \hat{I}_{i-1,j}^{(n)}} $$
[00181] In some embodiments, in order to further improve the robustness of the measured intensity against noise effects, the processor 304 may further spatially average the intensity over multiple adjacent reflected diffused light elements (e.g. dots, spots, etc.) in the captured images. The processor may further apply temporal filtering over the spatially averaged intensity value to improve the resulting intensity signal.
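The normalization and velocity estimate outlined in equations 5-7 may be sketched in Python as follows; the smoothing factor and the per-pixel indexing are illustrative assumptions, and a practical implementation would further average over adjacent light pattern elements as described above.

```python
import numpy as np

ALPHA = 0.05  # IIR smoothing factor (example value from the text)

def update_smoothed(smoothed: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Equation 5: running (IIR) temporal average of the intensity."""
    return ALPHA * frame + (1.0 - ALPHA) * smoothed

def speckle_velocity(prev_frame: np.ndarray, curr_frame: np.ndarray,
                     smoothed: np.ndarray, i: int, j: int) -> float:
    """Equations 6-7: normalized temporal difference divided by the spatial
    difference in the i direction, giving a velocity in pixels per frame."""
    norm_prev = prev_frame / (smoothed + 1e-9)
    norm_curr = curr_frame / (smoothed + 1e-9)
    dt = norm_curr[i, j] - norm_prev[i, j]
    dx = norm_curr[i + 1, j] - norm_curr[i - 1, j]
    return float(dt / (dx + 1e-9))
```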
[00182] Further details on the speckle pattern analysis for detecting the micro-vibrations may be found in US Patent Number 10,345,137 entitled “System and Method for Detecting Surface Vibrations”, and PCT International Application Publication Number WO2019012535 entitled “Systems and methods for acquiring information from an environment”, each of which is incorporated herein by reference in its entirety.
[00183] It is noted that depth data and vibrations (e.g. micro-vibrations) can be extracted from the captured images separately or together (e.g. simultaneously), according to the configuration of the processor 304.
[00184] In some cases, the processor 304 may also analyze the extracted depth data and vibrations (e.g. micro-vibrations) and captured visual data to provide medical information including an analysis of the medical status, such as the injury-related status, of the one or more occupants in the vehicle. In some cases, the medical information is provided in real-time or close to real-time. In certain embodiments, the processor analyzes post-crash vital signs, pose, and injury severity as described in further detail herein.
[00185] According to various aspects and embodiments, the processor 304 is connected to or may be in communication with a memory module 305. The memory module is configured to store data such as 2D data (e.g. visual images) and/or 3D data and/or vibrations during analysis. It may also store pre-crash data to be retrieved in case an accident occurs. In some cases, the memory module 305 may receive data from internal units in the vehicle such as the vehicle’s computer device and/or memory units and/or from external devices such as mobile phone devices.
[00186] The processor 304 may be connected to or may be in communication (e.g. wirelessly) with a communication module 306 in order to communicate the results of the analysis (e.g. the medical data) to the external world. In some cases, the results can be transmitted to a cloud based storage system 307 or to a rescue team 309.
[00187] In many embodiments, the sensing system 310 may be in wireless communication 116 with the cloud-based storage system 307. In some cases, the system can transmit the data to a mobile device 350 using communication module 306 with a communication link, such as a wireless serial communication link, for example Bluetooth™. The handheld device can receive the data from the system and transmit the data to a back-end server of the cloud-based storage system 307.
[00188] The transmitted data can be structured in the form of a message including information such as the position or status of one or more occupants inside the post-crash vehicle. In an embodiment, the transmitted data can also include images or video sequences from the sensor.
[00189] In accordance with embodiments, the estimation of the post-crash medical condition of a vehicle occupant can be performed based on any combination of the pre-crash data, during-crash data and post-crash data as described in further detail herein.
[00190] Figure 3B shows a schematic diagram of a sensing system 370, configured and enabled to capture sensory data of a scene such as a vehicle cabin 320, including one or more occupants (e.g. driver 311 and/or passenger 312) and analyze the sensory data to estimate the medical condition of the one or more occupants, for example following an accident of the vehicle, in accordance with embodiments.
[00191] System 370 includes all elements of the aforementioned system 310 and further includes a number of additional sensors, such as an image sensor 372, a depth sensor 374, for example a stereoscopic camera, and a micro-vibration sensor 376.
[00192] Figure 4 shows a flowchart of a method 400 for capturing and processing in-cabin data including pre-crash data, during-crash data and post-crash data to provide information such as medical
information following or during a vehicle accident, in accordance with embodiments. At step 410, during normal vehicle operation, a system such as system 110 of Figures 1A-1C or system 310 of Figure 3A receives and records data such as captured sensory data (e.g. raw data, defined as primary data collected from one or more sensors such as the sensors in the sensing module) of the vehicle cabin. In accordance with embodiments, the sensory data may be captured by the sensing module 300 and may include any one or more of the video images, depth data, and vibration data. In some cases, the captured sensory data may be stored using, for example, a buffer such as buffer 301. In some cases, the buffer 301 may be a cyclic buffer comprising, for example, a storage capacity of 10 seconds, 20 seconds, or more of data. The term “cyclic” as used herein refers to a fixed amount of memory that is used to store the data captured in an amount of time, such that when the memory has been filled, the old data is overwritten by new data in a cyclic manner.
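A minimal sketch of such a cyclic buffer is shown below; the frame rate, duration, and freeze interface are illustrative assumptions.

```python
from collections import deque

class CyclicFrameBuffer:
    """Keep only the last `seconds` of frames; older frames are overwritten."""

    def __init__(self, seconds: float = 20.0, fps: float = 30.0):
        self._frames = deque(maxlen=int(seconds * fps))

    def push(self, frame) -> None:
        self._frames.append(frame)

    def freeze(self) -> list:
        """Return a snapshot of the buffered pre-crash data for later analysis,
        e.g. when an impact detection signal is received."""
        return list(self._frames)
```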
[00193] In some cases, the sensory data may include vehicle internal data obtained from the vehicle units such as the vehicle embedded sensors.
[00194] At step 420 an impact detection signal may be received for example at the control module. In some cases, the impact detection signal may be generated by the impact sensor 325 and/or the processor and/or sensing module following an imminent crash detection. In some cases, the impact detection is obtained from the vehicle embedded sensors such as the CAS or other sensing modules.
[00195] A non-limiting example of such an impact sensor is the impact sensor that exists in cars and is responsible for airbag deployment. In such a configuration, the notification of an impact is transferred to the system by an electronic signal from the impact sensor. The electronic signal may be transferred directly or by a communication system such as the vehicle’s CAN bus interface. In another embodiment, the system is equipped with a built-in impact sensor such as an accelerometer as mentioned above. When an acceleration (or rather a deceleration) above a certain threshold is detected, the system considers the impact signal as being provided. In another embodiment, the impact case is analyzed from the system data. As an example, the system can monitor the video motion of the driver, passenger and objects in the cabin. When rapid or hectic movement is detected, beyond a predefined threshold, the system concludes that an impact is occurring. Such analysis can be based on computer vision algorithms such as, but not limited to, optical flow or tracking.
[00196] In some cases, at step 430, once an impact detection signal is received, for example from the impact detection sensor 302, the system mode changes from “normal vehicle operation mode” to “accident mode”. In accordance with embodiments, as part of the “accident mode” the system performs one or more actions as follows: at step 432 the recording on the cyclic buffer 301 is stopped and the raw data (e.g. sensory data) contained in the buffer is “frozen” in memory 303 for further analysis.
[00197] Optionally or in combination, the sensory data can also be transmitted to external sources such as the cloud based memory units 307 located at emergency agencies such as the police, hospital, etc., for advanced crash investigations. The transmission can happen during the crash or at a later stage when an investigation is initiated.
[00198] In an embodiment, the details of the impact as identified, for example, by the processor may also be recorded by the processor. Such details can include the magnitude and/or the direction of the impact. These details can then be used to assess the results of the accident.
[00199] At step 440 the sensory data stored at the buffer is analyzed, for example by the processor, to yield pre-crash data including, for example, identification of the vehicle occupants' state prior to the crash (e.g. pre-crash state), in accordance with embodiments. Such analysis can provide any combination of the position of the occupant, the body pose, and the body attributes such as mass, height, age, gender and body type. The analysis can also include objects that exist inside the cabin and can pose injury risks during an accident.
[00200] The analysis described herein can be performed using computer vision and/or machine learning algorithms. The position of an occupant is obtained by training a computer vision module to identify the existence of individuals inside the car. The body pose can be obtained by training a neural network system to estimate body pose from video images. Body attributes can be obtained by training a neural network to estimate the attributes from video images.
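By way of illustration only, a pretrained off-the-shelf pose network can stand in for the trained neural network described above; the sketch below assumes the MediaPipe Pose library and a clean (pattern-free) camera frame, and is not the specific model of the embodiments.

```python
import cv2
import mediapipe as mp

_pose = mp.solutions.pose.Pose(static_image_mode=True)

def estimate_body_pose(bgr_frame):
    """Return (x, y, visibility) landmarks normalized to the image, or []."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    result = _pose.process(rgb)
    if result.pose_landmarks is None:
        return []
    return [(lm.x, lm.y, lm.visibility) for lm in result.pose_landmarks.landmark]
```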
[00201] In an embodiment, the analysis can also be performed or be enhanced by the use of the depth data layer, providing information regarding the volume of an individual and its physical dimensions. The depth information can also provide information relevant to the body pose.
[00202] In an embodiment, the processor is configured to analyze one or more pre-crash depth maps to extract a body attribute such as mass, height, width, volume, age, and gender, as illustrated in Figures 2B and 2C.
[00203] In an embodiment, the analysis comprises the use of a combination of at least one image and at least one depth map.
[00204] At step 450, following the pre-crash car occupant state identification at step 440, the system assesses the possible accident outcome in terms of injuries and medical issues based on the analyzed
pre-crash data which included identification of the occupants' state prior to the accident, in accordance with embodiments.
[00205] In an embodiment, the system employs a human body mechanical model which can use the recorded pre-crash data to assess the outcome of the vehicle crash in terms of injuries and medical issues. The available pre-crash information may include one or more or all of the vehicle occupants' body attributes, initial body pose, body mass, age, body dimensions, and gender, as well as impact parameters such as magnitude and direction.
[00206] In some cases, the human body mechanical model might be a finite-element model of the body. Alternatively or in combination, the human body mechanical model can contain heuristic equations based on existing injury databases.
[00207] In an embodiment, the system is triggered to initiate the analysis of the nature or severity of injury by utilizing vehicle impact sensors. In various embodiments, a vehicle impact sensor such as sensor 325 comprises a microswitch located inside the vehicle’s bumper, the microswitch is configured to detect strong impacts. In an embodiment, the system uses the same sensors as the vehicle’s airbag deployment system.
[00208] To illustrate possible considerations in such an assessment, the pre-crash head pose of the occupant 102 relative to the body is addressed. The whiplash effect during a car accident as illustrated in Figure 1B can cause various types of injuries to the occupant’s neck. Knowing the direction and magnitude of the impact as obtained, for example, from the impact sensor, and the head position and orientation relative to the body and to the car, as obtained and extracted by analyzing the captured images, the system can provide a prediction as to the nature and severity of the injury. Specifically, the analysis may include identifying which body part was impacted and, based on the nature of this body part and the estimated strength of the impact, predicting tissue damage, vascular damage, bone damage, etc.
[00209] In an embodiment, the system is configured to store pre-crash images. In additional embodiments, the system is further configured to send the pre-crash images and the pre-crash data to a first responders’ team or to upload said images to a cloud service to be extracted by a first responders’ team.
[00210] In an embodiment, the processor is configured to analyze one or more stored pre-crash images in order to extract a pre-crash body pose. In further embodiments, the processor is configured to analyze the pre-crash depth maps (e.g. body pose), optionally in combination with any available
information regarding the physical parameters of the impact, thereby providing an assessment of the nature or severity of an injury.
[00211] In an embodiment, the processor is configured to analyze one or more pre-crash images in order to extract at least one body attribute including, but not limited to, mass, height, width, volume, age, and gender. In some embodiments, the processor is further configured to use said at least one body attribute, optionally in combination with any available information regarding the physical parameters of the impact, in order to analyze and assess the nature or severity of an injury.
[00212] In accordance with some embodiments, at step 460, once a detection of an impact (e.g. of step 420) is received, for example at the system (e.g. the processor), the system may instruct its sensors, such as the sensing module, to shift the sensors' and/or camera's operating mode to a high frame rate capturing mode, such that high frame rate data will be collected during the occurrence of the accident. A non-limiting example of a high frame rate is about 500 to 1000 frames per second or more. A non-limiting example of the duration of the accident occurrence is about 1-5 seconds or more.
[00213] At step 470 the system is configured to capture data such as sensory data obtained during the accident (e.g. during-crash data), for example during the high frame rate period, in order to follow the trajectories of the occupant(s) and object(s) inside the car. The system may use any of the captured video images and/or the depth data in order to analyze the trajectories using computer vision methods as known in the art. The trajectories are especially important in order to detect body impact with the vehicle’s structure and various surfaces as well as to detect impact of occupants by objects in the vehicle.
[00214] At step 480 the trajectories and impact data are used to provide medical information output which includes an assessment and prediction of the nature and severity of injuries of the occupants. In some cases, the medical information output may be provided based on additional data including, for example, pre-crash data. As an example, detecting an impact between the occupant’s head and the car’s structure might indicate a high probability for head trauma. More specifically, as shown in Figure 1A the data including the distance ‘d’ between the passenger 112 head and the front seat headrest 113 and the vehicle 100 speed during “normal vehicle operation mode” may be analyzed and combined with the passenger trajectory 118 and broken window 114 trajectory extracted from data captured during the vehicle 100 crash.
[00215] In some cases, the medical information output may be transmitted as one or more output signals to external units such as first responders units or to the vehicle’s internal units to activate one or more devices or system, such as a vehicle's devices.
[00216] Reference is now made to Figure 5A which shows a high-level block diagram 500 of a vehicle post-crash data, during-crash data, and pre-crash data collection procedure, in accordance with embodiments. The collection procedure may be carried out by a sensing system such as the sensing system 110 of Figure 1 or sensing system 310 of Figure 3A. In some cases, the system may be in communication or may include additional sensors. As schematically shown, the system 110 collects some or all of the information available in the vehicle cabin to produce medical information including an assessment of the medical condition status and the nature and severity of injuries of the occupants.
[00217] In accordance with embodiments, each type of captured information may be transformed into one or more signals including the related information.
[00218] In an embodiment, the system, such as system 110, may collect the occupants’ heartbeat 501 by capturing and analyzing micro-vibrations of the body, such as one or more micro-vibration signals obtained by the system 110 and analyzed by one or more processors. The micro-vibrations are captured using speckle temporal analysis as detailed hereinabove. The heartbeat causes mechanical micro-vibrations of the skin due to the pulse, such micro-vibrations being detectable by the system 110, which is configured to measure micro-vibrations. The heart rate or lack of heartbeat provides crucial information regarding the medical condition status of the one or more occupants in the vehicle and the nature and severity of their injury.
[00219] In another embodiment, the system such as system 110 is configured to collect the occupants’ respiration rate 502 by analyzing micro-vibration of the body, and specifically, of the chest area. The breathing motion causes micro-vibrations or vibrations that can be detected by system 110 which is configured to measure micro-vibrations. Breathing rate or lack of breathing provides crucial information regarding the medical status of the one or more occupants and the nature and severity of their injury.
[00220] In accordance with embodiments, both heart-rate and respiratory rate signals can be obtained by analyzing post-crash bodily micro-vibration signals. In order to discriminate between the two signals, a spectral analysis may be employed. As an example, the micro-vibration signal may contain a superposition of a stronger respiratory signal and a weaker heart-rate signal. The heart signal may be of a typical range of 50 beats per minute (BPM) or more (~0.8-1.2 Hz), while the breathing signal may be of a typical range of 15-20 breaths per minute (~0.25-0.33 Hz). Thus, performing spectral analysis and separating the signal, in this example at about ~0.5 Hz, into high-pass and low-pass components would allow obtaining the two separate signals.
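One possible implementation of this spectral separation is sketched below, assuming the micro-vibration signal has been sampled at a known rate; the split frequency and filter order are illustrative assumptions consistent with the example above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def separate_vitals(signal: np.ndarray, fs_hz: float, split_hz: float = 0.5):
    """Split a bodily micro-vibration signal into a low-frequency respiration
    component and a higher-frequency heartbeat component."""
    nyquist = fs_hz / 2.0
    b_lo, a_lo = butter(4, split_hz / nyquist, btype="low")
    b_hi, a_hi = butter(4, split_hz / nyquist, btype="high")
    respiration = filtfilt(b_lo, a_lo, signal)  # breathing component
    heartbeat = filtfilt(b_hi, a_hi, signal)    # heartbeat component
    return respiration, heartbeat
```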
[00221] In addition, the same spectral analysis can be used in another embodiment to discriminate between vital-sign signals and mechanical vibrations resulting from the vehicle or environment. As an example, in a post-crash situation in which the engine of the car keeps running, it can be expected that there will be mechanical vibration associated with the working of said engine. Assuming the idle rotations per minute (RPM) of the engine is 600 RPM, this means a typical base frequency for the engine-induced mechanical vibration of 10 Hz. By using a low-pass filter with a cutoff below this frequency, the engine-induced vibration can be cleaned from the captured signals.
[00222] In an embodiment, the system is configured to also detect the post-crash body-pose 503. The system 110 can use some or all of the captured images and the depth data. The system then employs a skeleton model or other heuristics in order to estimate the body-pose. The body pose, and specifically the relative position and orientation of the head and limbs, can provide valuable information regarding the medical condition status and the nature and severity of an injury. In some cases, the occupant's body pose may be identified using systems and methods as described in USP application number 62/806,840 entitled“SYSTEM, DEVICE, AND METHODS FOR DETECTING AND OBTAINING INFORMATION ON OBJECTS IN A VEHICLE” which is incorporated herein by reference.
[00223] In another embodiment, the system is configured to also use images such as video images and/or depth data in order to detect bodily motion 504 of the occupant(s). The detection can be performed by known computer vision algorithms such as, but not limited to, optical flow and visual tracking methods, for example the Lucas-Kanade optical flow algorithm in OpenCV. The detection of motion or lack of motion of the one or more occupants in the vehicle provides valuable information regarding the medical condition status and the nature and severity of an injury of the one or more occupants.
[00224] In some cases, the system is triggered to initiate the analysis by visually detecting rapid hectic motion of an occupant inside the vehicle. An example for such a detection mechanism is based on optical flow algorithm configured to analyze the relative displacement of objects between two or more consecutive frames.
[00225] In another embodiment, the system is further configured to detect visual signs of body wounds 505 of the one or more occupants. An example of visually detectable wounds can be skin rupture or a fracture. One way to configure the system to detect wounds is by training a deep learning classifier to detect broken skin or bleeding, for example, by showing wound images and training the classifier to recognize the wounds. In operation, captured images of the injured one or more occupants are obtained at the processor, which operates a deep neural network to get a prediction and to detect and identify the type of injuries.
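As a non-limiting illustration, such a classifier could be built on a standard image classification backbone; the sketch below assumes PyTorch/torchvision, and the class list and (unloaded) weights are placeholders rather than the trained model described above.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

WOUND_CLASSES = ["no_visible_wound", "broken_skin", "bleeding", "fracture_suspected"]

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(WOUND_CLASSES))
model.eval()  # in practice, weights from a trained checkpoint would be loaded

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_wound(pil_image) -> str:
    """Predict a wound class for a cropped occupant image (PIL format)."""
    with torch.no_grad():
        logits = model(preprocess(pil_image).unsqueeze(0))
    return WOUND_CLASSES[int(logits.argmax(dim=1))]
```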
[00226] In accordance with embodiments, each type of captured information, such as heartbeat 501 and respiration rate 502, may be transformed into one or more signals including the related information.
[00227] In various embodiments, the conversion of the information to one or more signals may be performed either locally (with a processor and software supplied with the system) or remotely. Heavier calculations for more complicated analyses, for example, can be performed remotely.
[00228] In accordance with embodiments, all or some of the above-mentioned signals and assessments 510 are transmitted to a data assessment module such as single post-crash data assessment module 506. The collected data signals 501, 502, 503, 504, and 505 may be used together or separately. It is stressed that in some cases only some of the signals may be used, for example because some of the signals were not received or due to some other desired configuration. For example, in some cases, only signals 501 and 502 may be received.
[00229] In accordance with embodiments, the post-crash data assessment module 506 is configured to fuse the different received signals, including the captured information, and process the received signals to yield a post-crash assessment data 506’ including the medical condition of the vehicle’s one or more occupants.
[00230] As an example, the assessment can include using a check list, such as a binary check list including a question list comprising medical questions utilized to assess the medical condition of the occupants. The check list may include one or more of the following questions: whether there are a pulse and breathing; and/or whether there are signs of consciousness; and/or whether there is a suspicion of fractures and on which body part; and/or whether there is a suspicion of internal injury; and/or whether there is a suspicion of head injury; and/or whether there is an open wound or a suspicion of blood loss, etc. It is understood that embodiments of the present invention may use any other kind of diagnostic check lists or other methods to identify the medical condition of the occupants based on the captured data.
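A minimal sketch of how such a binary check list could be populated from the fused signals is shown below; the field names and inputs are illustrative assumptions.

```python
def post_crash_checklist(heart_rate_bpm, breathing_rate_bpm, body_motion_detected,
                         head_impact_detected, open_wound_detected) -> dict:
    """Map fused post-crash signals onto simple yes/no medical questions."""
    return {
        "pulse_present": heart_rate_bpm is not None and heart_rate_bpm > 0,
        "breathing_present": breathing_rate_bpm is not None and breathing_rate_bpm > 0,
        "signs_of_consciousness": bool(body_motion_detected),
        "head_injury_suspected": bool(head_impact_detected),
        "open_wound_or_blood_loss_suspected": bool(open_wound_detected),
    }
```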
[00231] In accordance with embodiments, the system further comprises a pre-crash assessment module 508 configured to receive raw data (e.g. sensory data), such as pre-crash data 508’ stored, for
example at the buffer and analyze the raw data to yield pre-crash assessment data 508”. The pre-crash assessment data 508” includes identifications of the pre-crash state of the vehicle’s one or more occupants. Such analysis can include any combination of the position of the occupant, the body pose, the body attributes such as mass, height, age, gender and body type. The analysis can also include objects that exist inside the cabin and can pose injury risks during an accident.
[00232] In accordance with further embodiments, the system comprises a during-crash data assessment module 507 configured to receive, store and analyze sensory data, such as during-crash data 507' captured during a vehicle accident, to yield during-crash assessment data 507” including information which allows the system to follow the trajectories of the vehicle’s occupant(s) and/or object(s).
[00233] The during-crash assessment data 507”, post-crash assessment data 506” and pre-crash assessment data, extracted accordingly by modules 507, 506 and 508, are transmitted to a fusion module 509 configured to combine and process the received data according to one or more fusing methods and logics to yield one or more final post-crash assessment results 520 relating to the medical status of the occupant(s).
[00234] An example of fusion logic used by the fusion module 509 includes a scenario where the system is configured to assess the likelihood of head trauma of one or more occupants in the vehicle cabin following an accident, in accordance with embodiments. The assessment process includes acquiring, at the post-crash data assessment module 506, one or more types of post-crash data 510 such as body pose 503, body motion 504 and the like, and analyzing the acquired post-crash data 510 in order to check if there are signs of injury to the occupant’s head. In some cases, the system can then further use the during-crash data assessment module 507 to identify whether there was a head impact with car elements or other objects during the collision based on received during-crash data. Such an impact would strengthen the likelihood of head trauma. If no head impact was detected during the crash, and no head obstruction occurred, the likelihood of head trauma is reduced. Finally, the system may further obtain pre-crash data 508’ at the pre-crash assessment data module 508 in order to analyze the position of the occupant’s head relative to various car elements and yield pre-crash assessment data. Then, the pre-crash assessment data 508”, during-crash assessment data and post-crash assessment data 506” are fused at the fusion and final assessment module 509 to estimate the risk of head trauma. Specifically, the fusion estimation may be based on the direction and force of the external impact to the vehicle; using body modeling, module 509 can infer an increase or decrease in the risk of head trauma.
[00235] It should be stressed that for head trauma type of injuries the analysis is mostly based on visual detection and classification of injuries.
[00236] It is also stressed that the fusion module 509 may provide a final assessment result 520 based on any partial and/or combination of the data available from the pre-crash data 508’, during crash data 507’ and post-crash data 510.
[00237] The fusion module 509 may include stochastic prediction models (e.g. a Markov chain, for example in the form of a Markov matrix) of one or more predicted state probabilities.
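A toy example of such a Markov-matrix prediction step is sketched below; the states and transition probabilities are illustrative assumptions, not values from the embodiments.

```python
import numpy as np

STATES = ["stable", "injured_non_critical", "critical"]

# TRANSITION[i, j] = P(next state j | current state i); each row sums to 1.
TRANSITION = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.80, 0.15],
    [0.00, 0.10, 0.90],
])

def predict_state_distribution(current: np.ndarray, steps: int = 1) -> np.ndarray:
    """Propagate the current state probabilities `steps` transitions ahead."""
    return current @ np.linalg.matrix_power(TRANSITION, steps)

# Example: an occupant assessed as most likely injured but non-critical.
# predict_state_distribution(np.array([0.2, 0.7, 0.1]), steps=3)
```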
[00238] Figure 1C shows an example of a final assessment result 122, structured in the form of a message, in accordance with embodiments. The message may include one or more of the following details:
Occupant initial position: Rear left
Occupants gender: Female
Occupants estimated age: 50-65
Occupants estimated mass: 60-75Kg
Suspected injuries: Neck trauma, ribs, lower back
Post-crash vital signs: HR: 110 BPM, Respirations per minute: 70
[00239] Figure 5B shows a flowchart of a method 540 for detecting the heartbeats and/or breathing of one or more occupants, for example in a vehicle, by analyzing a plurality of captured images comprising reflected light pattern, in accordance with embodiments. According to an embodiment, the method comprises the following steps: at step 550 the plurality of images are received, for example from the sensing module at a processor. At step 560 one or more changes in one or more speckle patterns of at least one of the reflections of the light pattern are detected in at least some consecutive images of the plurality of images. At step 570 micro-vibrations of the at least one occupant based on the speckle pattern analysis are identified. At step 580 the micro-vibrations are analyzed using computer vision algorithms such as, but not limited to, optical flow or tracking to extract breathing rate or heartbeat signal of the occupant. Specific examples of the analysis method for extracting breathing rate or heartbeat signals of the occupant are illustrated in Figures 7A-7E and Figures 8A- 8D. In some embodiments, the breathing or heartbeat signal is used to assess the medical condition of
the occupant as shown in Figure 5A. At step 590 the detected breathing or heartbeat signal is used to assess the medical condition of the occupant.
[00240] Figure 6 illustrates two options in which the final post-crash assessment results 520, including for example the medical condition and injuries assessment provided in Figure 5A, may be transmitted to an outsource responder unit such as first rescue responders, in accordance with embodiments.
[00241] In the first option, the assessment results 520 are transmitted directly from the system 610 in the vehicle 600 to the first responders 603 via a direct communication link 605. For example, the system 610 may broadcast one or more transmission signals including the assessment results 520 via a local wireless network to the first responders 603. Following the receipt of the signal, by logging into the local network, the first responders 603 can download the assessment results 520 which includes, for example, medical information regarding the in-cabin status of one or more of the vehicle’s occupants.
[00242] In the second option, the system 610 may send data including for example the assessment results 520 to a cloud service 604 by a communication channel 606. The cloud service may be a virtual service that stores all the information from the system. Then, the first responders 603 can log into the cloud service and download the relevant information.
[00243] The main advantage of the second method over the first method is that the medical condition assessment data can be accessed even before arriving at the crash scene. This allows sending the right first responder tools and vehicles to the scene.
[00244] In some cases, the cloud service can be also connected to treatment facilities such as hospitals to alert the medical team of the upcoming patients.
[00245] Another advantage of the cloud service is that it can receive partial data even if the system 610 does not survive the crash. In an embodiment, the pre-crash data 508’ and during-crash data 507’ may be sent to the cloud service as soon as they are generated. In this case, even if the system fails or cannot be operated due to the crash, an assessment could be made based on the data available on the cloud.
[00246] Once the in-car medical assessment is available for the first responders it can be used to prepare the correct extraction procedure and the immediate treatment needed. The assessment can also help first responders to determine the priority of extraction and treatment in case of multiple casualties.
[00247] In another embodiment, the system can use the impact detection 302 of Fig 4 in order to send an automatic alert to the local first responders’ organization such as police, fire department and medical teams.
[00248] Reference is now made to Figure 7A, illustrating an exemplary configuration used in accordance with embodiments for capturing light pattern image(s) and translating an identified speckle pattern for measuring, for example, the heartbeat and/or breathing of one or more of the occupants while the vehicle is moving.
[00249] Assuming the illuminator axis is almost parallel to the image sensor optical axis, the translation of the speckle pattern on the CMOS sensor is according to the following equation:

$$ T(t) = 2\,\frac{z_3 - F}{F}\,\alpha(t)\,M, \qquad M = \frac{F}{z_2} $$

where T(t) denotes the translation of the speckle pattern on the sensor, \alpha(t) denotes the tilt angle of the reflecting surface, F denotes the lens focal length, M denotes the imaging magnification, z_2 denotes the distance from the lens to the reflecting surface, and z_3 denotes the defocus distance.
[00250] To measure the speckle size on the captured image, the primary speckle size is reduced by the system
[00251] The defocused spot size (circle of confusion) S is determined by the lens aperture W (for z2 » z3):

$$ S \approx \frac{F\,W}{z_2} $$
[00252] We find that the ratio d_sp/S of the speckle size to the defocused spot size does not depend on M.
[00253] Typical values may be calculated accordingly.
[00254] Figure 7B shows an exemplary captured image 750, in accordance with embodiments. In these images, the displacement of each speckle pattern relative to the initial pattern is identified. Thereafter, the speckle translation amplitude in pixels is extracted using, for example, a sine fit such as the sine fit graph 760 of Figure 7C.
[00255] Accordingly, the rate is extracted directly from the translation measurement. The translation rate is linear with spot size as shown in graphs 770 and 780 of Figure 7D. Additionally, the M of each measurement is calculated from dT/dα and compared with the M obtained from the spot size, as shown for example in graph 790 of Figure 7E.
[00256] At the next step, the expected measurement limits may be extracted, such as the maximal linear and angular velocities that allow speckle tracking. To avoid speckle boiling, the surface linear velocity v perpendicular to the beam should satisfy:

$$ \frac{v}{f_s} \ll D \;\;\Rightarrow\;\; v \ll D\,f_s $$
[00257] In some cases, the translation of the speckle field between consecutive frames should be smaller than the size of the spot in the image; this bounds the maximal angular velocity \dot{\alpha} of the reflecting surface that still allows speckle tracking, as a function of the frame rate f_s, the magnification M, the focal length F, and the distances z_2 and z_3.
[00258] Accordingly, the corresponding inequalities may be calculated using the typical values.
[00259] Based on these inequalities, it is expected that at f_s ≈ 200 Hz the occupant will have to be almost still in order to measure his vital signs.
[00260] In accordance with embodiments, the maximal exposure time may also be estimated. The speckles will blur insignificantly during the exposure time if:

$$ T\,t < d_{sp} $$

where T is the speckle translation speed in pixels per second, t is the exposure time, and d_sp is the speckle size in pixels.

[00261] Using the expression for the speckle translation T given above, the corresponding typical values may be obtained.

[00262] The exposure time t may thus be chosen to allow the maximal angular velocity \dot{\alpha} for the given f_s.
Experimental data
[00263] Figure 8A shows an exemplary magnification of a captured image 810 of an occupant's chest suitable for incorporation in accordance with embodiments. The image comprises light reflected from the occupant's chest area, illuminated by an illuminator including an 830 nm laser. The captured image 810 shows characteristic features including detected changes to the speckle pattern. As mentioned hereinabove, by analyzing the speckle pattern for lateral translation, which is indicative of a tilt of the reflecting object with respect to the image sensor, the occupant's heartbeat may be detected. The tilt, which may be very minor, for example on a scale of micro-radians, may be derived from the translational velocity of one or more speckle pattern point(s) over time (consecutive frames). The lateral speckle pattern translation may be derived from the analysis of the diffused light pattern element(s) depicted in a plurality of consecutively captured images.
[00264] Figure 8B shows graphs 820, 830 and 840, respectively, of processed captured images of an occupant, where the occupant is not breathing and only his heartbeats are detected, in accordance with embodiments. Graph 840 shows an exemplary spectrum (FFT) of the processed images. In this example the following parameters were used: exposure = 5000 µs, D = 1 mm, fs = 200 Hz, S = 20 pixels (M = 29).
[00265] Figure 8C shows graphs 850, 860 and 870, respectively, of processed captured images of an occupant in a vehicle, including the occupant's heartbeat change while the occupant is breathing normally, in accordance with embodiments. Graph 870 shows an exemplary spectrum (FFT) of the processed images. Specifically, in this scenario the occupant's breathing is strong compared to his heartbeat and therefore masks it. In this example, the same parameters as in Figure 8B were used.
[00266] Figure 8D shows graphs 880 and 890, respectively, of processed captured images of an empty vehicle (without any occupants), hence neither heartbeat nor breathing is detected, in accordance with embodiments. In this example the measurements were performed in a FIAT 500 car, the sensing system was positioned on the dashboard, and the following parameters were used: exposure = 5000 µs, fs = 200 Hz, z2 = 77 cm to the driver's chest, z2 = 96 cm to the driver's seat, D = 1 mm, S = 20 pixels (M = 29).
[00267] In some embodiments, the sensing system does not analyze the data collected, and the sensing module relays data to a remote processing and control unit, such as a back end server. Alternatively or in combination, the sensing system may partially analyze the data prior to transmission to the remote processing and control unit. The remote processing and control unit can be a cloud based system which can transmit analyzed data or results to a user. In some embodiments, a handheld device is configured to receive analyzed data and can be associated with the sensing system. The association can be through a physical connection or wireless communication, for example.
[00268] In some embodiments, the sensing system comes equipped with memory with a database of data stored therein and a microprocessor with analysis software programmed with instructions. In some embodiments, the sensing system is in communication with a computer memory having a database stored therein and a microprocessor with analysis software programmed in. The memory can be volatile or non-volatile in order to store the measurements in the memory. The database and/or all or part of the analysis software can be stored remotely, and the sensing system can communicate with the remote memory via a network (e.g. a wireless network) by any appropriate method.
[00269] In various embodiments of the invention, the conversion of the raw data to medical information may be performed either locally (with a processor and software supplied with the sensing system) or remotely. Heavier calculations for more complicated analyses, for example, can be performed remotely.
[00270] In further embodiments, the system disclosed here includes a processing unit which may be a digital processing device including one or more hardware central processing units (CPU) that carry
out the device’s functions. In still further embodiments, the digital processing device further comprises an operating system configured to perform executable instructions. In some embodiments, the digital processing device is optionally connected to a computer network. In further embodiments, the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web. In still further embodiments, the digital processing device is optionally connected to a cloud computing infrastructure. In other embodiments, the digital processing device is optionally connected to an intranet. In other embodiments, the digital processing device is optionally connected to a data storage device.
[00271] In accordance with the description herein, suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, notepad computers, set-top computers, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will recognize that many smartphones are suitable for use in the system described herein. Those of skill in the art will also recognize that select televisions with optional computer network connectivity are suitable for use in the system described herein. Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art.
[00272] In some embodiments, the digital processing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications. Those of skill in the art will recognize that suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Those of skill in the art will recognize that suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Those of skill in the art will also recognize that suitable mobile smart phone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®.
[00273] In some embodiments, the device includes a storage and/or memory device. The storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis. In some embodiments, the device is volatile memory and requires power to maintain stored information. In some embodiments, the device is non-volatile memory and retains stored information when the digital processing device is not powered. In further embodiments, the non-volatile memory comprises flash memory. In some embodiments, the volatile memory comprises dynamic random-access memory (DRAM). In some embodiments, the non-volatile memory comprises ferroelectric random-access memory (FRAM). In some embodiments, the non-volatile memory comprises phase-change random access memory (PRAM). In other embodiments, the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tape drives, optical disk drives, and cloud computing-based storage. In further embodiments, the storage and/or memory device is a combination of devices such as those disclosed herein.
[00274] In some embodiments, the digital processing device includes a display to send visual information to a user. In some embodiments, the display is a cathode ray tube (CRT). In some embodiments, the display is a liquid crystal display (LCD). In further embodiments, the display is a thin film transistor liquid crystal display (TFT-LCD). In some embodiments, the display is an organic light emitting diode (OLED) display. In various further embodiments, an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display. In some embodiments, the display is a plasma display. In other embodiments, the display is a video projector. In still further embodiments, the display is a combination of devices such as those disclosed herein.
[00275] In some embodiments, the digital processing device includes an input device to receive information from a user. In some embodiments, the input device is a keyboard. In some embodiments, the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, or stylus. In some embodiments, the input device is a touch screen or a multi-touch screen. In other embodiments, the input device is a microphone to capture voice or other sound input. In other embodiments, the input device is a video camera to capture motion or visual input. In still further embodiments, the input device is a combination of devices such as those disclosed herein.
[00276] In some embodiments, the system disclosed herein includes one or more non-transitory computer readable storage media encoded with a program including instructions executable by the
operating system of an optionally networked digital processing device. In further embodiments, a computer readable storage medium is a tangible component of a digital processing device. In still further embodiments, a computer readable storage medium is optionally removable from a digital processing device.
[00277] In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media. In some embodiments, the system disclosed herein includes at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable in the digital processing device’s CPU, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages.
[00278] The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof. In some embodiments, a computer program includes a mobile application provided to a mobile digital processing device. In some embodiments, the mobile application is provided to a mobile digital processing device at the time it is manufactured. In other embodiments, the mobile application is provided to a mobile digital processing device via the computer network described herein.
[00279] In view of the disclosure provided herein, a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages.
Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, JavaScript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
[00280] Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.
[00281] Those of skill in the art will recognize that several commercial forums are available for distribution of mobile applications including, by way of non-limiting examples, Apple® App Store, Android™ Market, BlackBerry® App World, App Store for Palm devices, App Catalog for webOS, Windows® Marketplace for Mobile, Ovi Store for Nokia® devices, Samsung® Apps, and Nintendo® DSi Shop.
[00282] In some embodiments, the system disclosed herein includes software, server, and/or database modules, or use of the same. In view of the disclosure provided herein, software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments,
software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
[00283] In some embodiments, the system disclosed herein includes one or more databases, or use of the same. In view of the disclosure provided herein, those of skill in the art will recognize that many databases are suitable for storage and retrieval of information as described herein. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object-oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. In some embodiments, a database is internet-based. In further embodiments, a database is web-based. In still further embodiments, a database is cloud computing-based. In other embodiments, a database is based on one or more local computer storage devices.
[00284] In the above description, an embodiment is an example or implementation of the inventions. The various appearances of "one embodiment,” "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments.
[00285] Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
[00286] Reference in the specification to "some embodiments", "an embodiment", "one embodiment" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
[00287] It is to be understood that the phraseology and terminology employed herein are not to be construed as limiting and are for descriptive purposes only.
[00288] The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples.
[00289] It is to be understood that the details set forth herein do not constitute a limitation on the application of the invention.
[00290] Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
[00291] It is to be understood that the terms “including”, “comprising”, “consisting” and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers.
[00292] If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional elements.
[00293] It is to be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as meaning that there is only one of that element. It is to be understood that where the specification states that a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described. Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
[00294] The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only. Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
[00295] While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not
be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Claims
1. A system for providing medical information of at least one occupant in a vehicle cabin, the system comprising:
a sensing module comprising at least one sensor configured to capture sensory data of the vehicle cabin; and
a control module comprising at least one processor, wherein said processor is configured to:
receive said sensory data from said sensor; and
analyze said sensory data using one or more analysis methods to provide the medical information of said at least one occupant.
2. The system of claim 1 wherein said analysis methods are one or more of:
computer vision methods; machine learning methods; deep neural network methods; signal processing methods.
3. The system of claim 1, wherein said medical information comprises the medical status of said at least one vehicle occupant following an accident of said vehicle.
4. The system of claim 1, wherein the medical information comprises medical condition evaluation or injury assessment of the at least one occupant.
5. The system of claim 1, wherein said information comprises the medical status of said at least one vehicle occupant following an accident of said vehicle.
6. The system of claim 1, wherein the system comprises a communication module configured to send the information to a first responder team.
7. The system of claim 2, wherein the at least one sensor is an image sensor.
8. The system of claim 7, wherein the sensing module further comprises at least one illuminator, said at least one illuminator comprises a light source configured to project a light pattern onto the vehicle cabin.
9. The system of claim 8, wherein said light source comprises a laser or a Light Emitting Diode (LED).
10. The system of claim 8, wherein said light source comprises one or more optical elements for splitting a single light beam generated by said light source, said one or more optical elements are selected from the group consisting of:
DOE; split mirrors; and diffuser.
11. The system of claim 2, wherein said sensing module comprises a depth sensor.
12. The system of claim 11, wherein said depth sensor is configured to capture depth data by projecting a light pattern onto the vehicle cabin and wherein the at least one processor is configured to analyze the location of known light pattern elements in said depth data.
13. The system of claim 12 wherein the pose of said at least one occupant is estimated by analyzing the depth data using said computer vision methods.
14. The system of claim 2, wherein the sensing module comprises a micro-vibration sensor.
15. The system of claim 14, wherein the micro-vibration sensor is configured to:
project light onto the vehicle cabin scene;
capture a plurality of images, wherein each image of said plurality of images comprises reflected diffused light elements; and
wherein the processor is configured to:
receive the captured images; and
analyze one or more temporal changes in the speckle pattern of at least one of the plurality of reflected diffused light elements in at least some consecutive images of the plurality of images to yield micro-vibration data.
16. The system of claim 2, wherein the sensing module comprises a combination of at least two sensors selected from a group comprising:
image sensor, depth sensor, and micro-vibration sensor.
17. The system of claim 15, wherein the processor is further configured to classify or identify an attribute of one or more objects based on at least one micro-vibration data.
18. The system of claim 2, wherein said sensory data comprises a plurality of images and wherein the at least one processor is further configured to classify the at least one vehicle occupant using said computer vision methods by visually analyzing at least one image of the plurality of images captured by the at least one sensor.
19. The system of claim 2 wherein the sensing module is configured to detect an imminent crash of said vehicle.
20. The system of claim 19, wherein the detection of an imminent crash is performed by one or more of:
vehicle impact sensor; hectic movement detection sensor; and external sensory system.
21. The system of claim 19, wherein said at least one processor is configured to:
analyze said sensory data, wherein said sensory data is captured prior to said detection of imminent crash, during said crash and following said crash; and
assess the medical status of said at least one vehicle occupant following the crash based on said analyzed sensory data.
22. The system of claim 21, wherein the sensory data captured prior to said detection of imminent crash is analyzed using said computer vision methods to extract pre-crash categorization.
23. The system of claim 19, wherein pre-crash categorization comprises one or more of the at least one occupant body pose, body mass, age, body dimensions, and gender.
24. The system of claim 19, wherein the system is configured to provide a measure of the likelihood and severity of an injury from a car accident.
25. The system of claim 17, wherein the system is configured to provide high-rate data of the vehicle cabin following the detection of the imminent crash.
26. The system of claim 22, wherein the high-rate data is used to extract trajectories of the at least one occupant or one or more objects in the vehicle.
27. The system of claim 23, wherein said extracted trajectories are used to assess the likelihood and severity of the injury of said at least one occupant.
28. The system of claim 1, wherein the system is configured to record in-cabin post-crash information; and
assess medical status of at least one car occupant.
29. The system of claim 1 providing at least one of pre-crash, during crash and post-crash medical status assessment of at least one car occupant.
30. The system of claim 1, wherein said control module is configured to be in wireless communication with an external device and transmit the information to a first responder team.
31. The system of claim 1, wherein said information is uploaded to a cloud service.
32. The system of claim 1, wherein the sensing module is mounted on said vehicle roof or ceiling.
33. The system of claim 1, wherein the analysis is conducted using at least one deep learning algorithm.
34. A system for estimating the medical state of one or more occupants in a vehicle cabin following an accident of the vehicle, the system comprising;
a sensing module comprising:
an illuminator comprising one or more illumination sources configured to project light in a structured light pattern on the vehicle cabin;
at least one image sensor configured to capture sensory data prior to, during and following a crash of said vehicle, said sensory data comprising a sequence of 2D (two dimensional) images and 3D (three dimensional) images of the vehicle cabin, wherein at least one of the 2D images comprises reflections of said structured light pattern from the one or more occupants in the cabin;
a control module comprising:
a memory module;
at least one processor, said at least one processor is configured to:
receive said captured sensory data from the sensing module;
receive an impact detection signal of an imminent crash of the vehicle; store the received sensory data, captured prior to receipt of said impact detection signal, in said memory module;
analyze said received sensory data using a computer vision or machine learning algorithm to yield pre-crash assessment data comprising an identification of the state of the one or more occupants prior to the crash;
receive sensory data captured during said crash from the sensing module;
analyze the received sensory data captured during said crash to yield during-crash assessment data comprising body trajectories of the one or more occupants;
provide medical information of said one or more occupants following the crash based on said during crash assessment data and pre-crash assessment data.
35. The system of claim 34 wherein said at least one processor is further configured to: receive sensory data captured following the crash;
analyze the received sensory data captured following the crash data using computer vision or machine learning algorithm to yield one or more of post-crash assessment data of said one or more occupants.
36. The system of claim 35 wherein said post-crash assessment data comprises one or more of: heartbeat; respiration rate; body pose; body motion; visible wounds.
37. The system of claim 35 wherein said processor is further configured to combine said post-crash assessment data with said during crash assessment data and said pre-crash assessment data to yield said medical information of said one or more occupants following the crash.
38. The system of claim 36, wherein the heartbeat or respiration rate are identified by analyzing one or more changes in one or more speckle patterns of at least one of the reflections of said structured light pattern in at least some consecutive images of the plurality of images, and identifying the vibrations of the one or more occupants based on said speckle pattern analysis.
39. The system of claim 34, wherein said one or more of the body pose, body motion and visible wounds are detected by analyzing the sequence of 2D images or 3D images of the vehicle cabin.
40. The system of claim 34, wherein said body pose or body motion are identified using one or more of a skeleton model, optical flow or visual tracking methods.
41. A method for providing medical information of at least one occupant in a vehicle cabin, the method comprising:
receiving captured sensory data of said vehicle cabin from a sensing module;
receiving an impact detection signal of an imminent crash of the vehicle;
storing the received sensory data, captured prior to receipt of said impact detection signal, in a memory module;
analyzing said received sensory data using a computer vision or machine learning algorithm to yield pre-crash assessment data comprising an identification of the state of the at least one occupant prior to the crash;
receiving sensory data captured during said crash from the sensing module;
analyzing the received sensory data captured during said crash to yield during-crash assessment data comprising body trajectories of the one or more occupants;
providing medical information of said one or more occupants following the crash based on said during crash assessment data and pre-crash assessment data.
42. A method for providing a first responder with information regarding the medical status or injury of a vehicle occupant after an accident, the method comprising the steps of:
i. utilizing an in-cabin sensor to monitor the vehicle’s cabin;
ii. combining analysis of at least one of pre-crash, during-crash and post-crash data; and
iii. providing first-responders with the analysis.
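Purely for illustration, and without forming part of the claims, the flow of method claims 41 and 42 can be sketched as follows; every helper named below (detect_occupants, classify_state, track_trajectory, vital_signs_and_wounds, send_over_wireless) is a hypothetical placeholder rather than a disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OccupantAssessment:
    occupant_id: int
    pre_crash: dict = field(default_factory=dict)      # e.g. pose, body dimensions, age group
    trajectories: list = field(default_factory=list)   # body trajectories during the crash
    post_crash: dict = field(default_factory=dict)     # e.g. respiration, heartbeat, visible wounds

def assess_occupants(frames_pre, frames_during, frames_post) -> List[OccupantAssessment]:
    """Analyze frames captured before, during and after the impact and fuse the results."""
    results = []
    for occ in detect_occupants(frames_pre):                      # hypothetical detector
        a = OccupantAssessment(occupant_id=occ.id)
        a.pre_crash = classify_state(frames_pre, occ)             # pre-crash categorization
        a.trajectories = track_trajectory(frames_during, occ)     # high-rate during-crash tracking
        a.post_crash = vital_signs_and_wounds(frames_post, occ)   # speckle + 2D/3D analysis
        results.append(a)
    return results

def notify_first_responders(assessments: List[OccupantAssessment]) -> None:
    """Package the fused assessment and hand it to the communication module."""
    send_over_wireless([a.__dict__ for a in assessments])         # hypothetical transport
```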
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/419,213 US20220067410A1 (en) | 2018-12-28 | 2019-12-28 | System, device, and method for vehicle post-crash support |
CN201980087004.9A CN113302076A (en) | 2018-12-28 | 2019-12-28 | System, apparatus and method for vehicle post-crash support |
EP19902414.2A EP3902697A4 (en) | 2018-12-28 | 2019-12-28 | Systems, devices and methods for vehicle post-crash support |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862785724P | 2018-12-28 | 2018-12-28 | |
US62/785,724 | 2018-12-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020136658A1 true WO2020136658A1 (en) | 2020-07-02 |
Family
ID=71129268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2019/051422 WO2020136658A1 (en) | 2018-12-28 | 2019-12-28 | Systems, devices and methods for vehicle post-crash support |
Country Status (4)
Country | Link |
---|---|
US (1) | US20220067410A1 (en) |
EP (1) | EP3902697A4 (en) |
CN (1) | CN113302076A (en) |
WO (1) | WO2020136658A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022042868A1 (en) * | 2020-08-31 | 2022-03-03 | Continental Automotive Gmbh | Method for a vehicle for determining an injury information and vehicle electronic control device for determining an injury information |
CN115113247A (en) * | 2021-03-23 | 2022-09-27 | 昆达电脑科技(昆山)有限公司 | Inspection damage reporting method and inspection damage reporting system |
EP4142317A1 (en) | 2021-08-30 | 2023-03-01 | Emsense AB | Vehicle emergency system and method for providing emergency information |
EP4198922A1 (en) * | 2021-12-16 | 2023-06-21 | Aptiv Technologies Limited | Computer implemented method, computer system and non-transitory computer readable medium for detecting a person in the passenger compartment of a vehicle |
DE102022104129A1 (en) | 2022-02-22 | 2023-08-24 | Audi Aktiengesellschaft | Method for determining a health effect, method for training a prognostic function, monitoring device, vehicle and training device |
EP4408043A1 (en) * | 2023-01-30 | 2024-07-31 | Valeo Telematik Und Akustik GmbH | Method for transmitting data during an emergency call and vehicle suitable for implementing such a method |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10586122B1 (en) * | 2016-10-31 | 2020-03-10 | United Services Automobile Association | Systems and methods for determining likelihood of traffic incident information |
US10825196B2 (en) * | 2019-02-15 | 2020-11-03 | Universal City Studios Llc | Object orientation detection system |
EP3966740A4 (en) * | 2019-07-09 | 2022-07-06 | Gentex Corporation | Systems, devices and methods for measuring the mass of objects in a vehicle |
JP2022040539A (en) * | 2020-08-31 | 2022-03-11 | 株式会社Subaru | Vehicle with automatic notification function |
US11825182B2 (en) * | 2020-10-12 | 2023-11-21 | Waymo Llc | Camera module with IR LEDs for uniform illumination |
US11735017B2 (en) * | 2021-06-23 | 2023-08-22 | Bank Of America Corporation | Artificial intelligence (AI)-based security systems for monitoring and securing physical locations |
US11954990B2 (en) | 2021-06-23 | 2024-04-09 | Bank Of America Corporation | Artificial intelligence (AI)-based security systems for monitoring and securing physical locations |
US20230048359A1 (en) * | 2021-08-12 | 2023-02-16 | Toyota Connected North America, Inc. | Message construction based on potential for collision |
US12030489B2 (en) | 2021-08-12 | 2024-07-09 | Toyota Connected North America, Inc. | Transport related emergency service notification |
US12097815B2 (en) | 2021-08-12 | 2024-09-24 | Toyota Connected North America, Inc. | Protecting living objects in transports |
US11887460B2 (en) | 2021-08-12 | 2024-01-30 | Toyota Motor North America, Inc. | Transport-related contact notification |
US11894136B2 (en) * | 2021-08-12 | 2024-02-06 | Toyota Motor North America, Inc. | Occupant injury determination |
CN113743290A (en) * | 2021-08-31 | 2021-12-03 | 上海商汤临港智能科技有限公司 | Method and device for sending information to emergency call center for vehicle |
IT202200007022A1 (en) * | 2022-04-08 | 2023-10-08 | Fiat Ricerche | "Passenger monitoring system for motor vehicles and related procedure" |
DE102022108519A1 (en) | 2022-04-08 | 2023-10-12 | Bayerische Motoren Werke Aktiengesellschaft | Method for transmitting information about a health status of an occupant following a vehicle accident, computer-readable medium, system, and vehicle |
WO2023208751A1 (en) * | 2022-04-26 | 2023-11-02 | Trinamix Gmbh | Monitoring a condition of a living organism |
DE102022126736B4 (en) | 2022-10-13 | 2024-09-26 | Audi Aktiengesellschaft | Procedure for operating an emergency information system, motor vehicle and emergency information system |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030015898A1 (en) * | 1997-12-17 | 2003-01-23 | Breed David S. | System and method for moving a headrest based on anticipatory sensing |
US20030122554A1 (en) * | 2000-09-29 | 2003-07-03 | Fakhreddine Karray | Vehicle occupant proximity sensor |
US20040220705A1 (en) * | 2003-03-13 | 2004-11-04 | Otman Basir | Visual classification and posture estimation of multiple vehicle occupants |
US20050201103A1 (en) * | 2004-03-12 | 2005-09-15 | Honeywell International Inc. | Luminaires with batwing light distribution |
US20080212836A1 (en) * | 2003-05-29 | 2008-09-04 | Kikuo Fujimura | Visual Tracking Using Depth Data |
US20080243332A1 (en) * | 2002-01-25 | 2008-10-02 | Basir Otman A | Vehicle visual and non-visual data recording system |
US20090066065A1 (en) | 1995-06-07 | 2009-03-12 | Automotive Technologies International, Inc. | Optical Occupant Sensing Techniques |
US20090096783A1 (en) * | 2005-10-11 | 2009-04-16 | Alexander Shpunt | Three-dimensional sensing using speckle patterns |
US20120078472A1 (en) | 2010-09-27 | 2012-03-29 | Gm Global Technology Operations, Inc. | Individualizable Post-Crash Assist System |
US20140300739A1 (en) * | 2009-09-20 | 2014-10-09 | Tibet MIMAR | Vehicle security with accident notification and embedded driver analytics |
US20150313475A1 (en) * | 2012-11-27 | 2015-11-05 | Faurecia Automotive Seating, Llc | Vehicle seat with integrated sensors |
US20150379362A1 (en) | 2013-02-21 | 2015-12-31 | Iee International Electronics & Engineering S.A. | Imaging device based occupant monitoring system supporting multiple functions |
WO2017158155A1 (en) * | 2016-03-18 | 2017-09-21 | Jaguar Land Rover Limited | Vehicle analysis method and system |
US9886841B1 (en) | 2016-04-27 | 2018-02-06 | State Farm Mutual Automobile Insurance Company | Systems and methods for reconstruction of a vehicular crash |
WO2018146266A1 (en) | 2017-02-10 | 2018-08-16 | Koninklijke Philips N.V. | Driver and passenger health and sleep interaction |
KR20180120901A (en) | 2017-04-28 | 2018-11-07 | 쌍용자동차 주식회사 | Health care apparatus with a passenger physical condition measurement in a vehicle and method there of |
WO2019012535A1 (en) | 2017-07-12 | 2019-01-17 | Guardian Optical Technologies Ltd. | Systems and methods for acquiring information from an environment |
US10345137B2 (en) | 2014-12-27 | 2019-07-09 | Guardian Optical Technologies Ltd. | System and method for detecting surface vibrations |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6735506B2 (en) * | 1992-05-05 | 2004-05-11 | Automotive Technologies International, Inc. | Telematics system |
US20130267194A1 (en) * | 2002-06-11 | 2013-10-10 | American Vehicular Sciences Llc | Method and System for Notifying a Remote Facility of an Accident Involving a Vehicle |
US7925056B2 (en) * | 2005-02-08 | 2011-04-12 | Koninklijke Philips Electronics N.V. | Optical speckle pattern investigation |
JP6401269B2 (en) * | 2013-11-15 | 2018-10-10 | ジェンテックス コーポレイション | Imaging system including dynamic correction of color attenuation for vehicle windshields |
US20170015263A1 (en) * | 2015-07-14 | 2017-01-19 | Ford Global Technologies, Llc | Vehicle Emergency Broadcast |
US10916139B2 (en) * | 2016-03-18 | 2021-02-09 | Beyond Lucid Technologies, Inc. | System and method for post-vehicle crash intelligence |
US9919648B1 (en) * | 2016-09-27 | 2018-03-20 | Robert D. Pedersen | Motor vehicle artificial intelligence expert system dangerous driving warning and control system and method |
US11416942B1 (en) * | 2017-09-06 | 2022-08-16 | State Farm Mutual Automobile Insurance Company | Using a distributed ledger to determine fault in subrogation |
2019
- 2019-12-28 EP EP19902414.2A patent/EP3902697A4/en active Pending
- 2019-12-28 WO PCT/IL2019/051422 patent/WO2020136658A1/en unknown
- 2019-12-28 CN CN201980087004.9A patent/CN113302076A/en active Pending
- 2019-12-28 US US17/419,213 patent/US20220067410A1/en active Pending
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090066065A1 (en) | 1995-06-07 | 2009-03-12 | Automotive Technologies International, Inc. | Optical Occupant Sensing Techniques |
US20030015898A1 (en) * | 1997-12-17 | 2003-01-23 | Breed David S. | System and method for moving a headrest based on anticipatory sensing |
US20030122554A1 (en) * | 2000-09-29 | 2003-07-03 | Fakhreddine Karray | Vehicle occupant proximity sensor |
US20080243332A1 (en) * | 2002-01-25 | 2008-10-02 | Basir Otman A | Vehicle visual and non-visual data recording system |
US20040220705A1 (en) * | 2003-03-13 | 2004-11-04 | Otman Basir | Visual classification and posture estimation of multiple vehicle occupants |
US20080212836A1 (en) * | 2003-05-29 | 2008-09-04 | Kikuo Fujimura | Visual Tracking Using Depth Data |
US20050201103A1 (en) * | 2004-03-12 | 2005-09-15 | Honeywell International Inc. | Luminaires with batwing light distribution |
US20090096783A1 (en) * | 2005-10-11 | 2009-04-16 | Alexander Shpunt | Three-dimensional sensing using speckle patterns |
US20140300739A1 (en) * | 2009-09-20 | 2014-10-09 | Tibet MIMAR | Vehicle security with accident notification and embedded driver analytics |
US20120078472A1 (en) | 2010-09-27 | 2012-03-29 | Gm Global Technology Operations, Inc. | Individualizable Post-Crash Assist System |
US20150313475A1 (en) * | 2012-11-27 | 2015-11-05 | Faurecia Automotive Seating, Llc | Vehicle seat with integrated sensors |
US20150379362A1 (en) | 2013-02-21 | 2015-12-31 | Iee International Electronics & Engineering S.A. | Imaging device based occupant monitoring system supporting multiple functions |
US10345137B2 (en) | 2014-12-27 | 2019-07-09 | Guardian Optical Technologies Ltd. | System and method for detecting surface vibrations |
WO2017158155A1 (en) * | 2016-03-18 | 2017-09-21 | Jaguar Land Rover Limited | Vehicle analysis method and system |
US9886841B1 (en) | 2016-04-27 | 2018-02-06 | State Farm Mutual Automobile Insurance Company | Systems and methods for reconstruction of a vehicular crash |
WO2018146266A1 (en) | 2017-02-10 | 2018-08-16 | Koninklijke Philips N.V. | Driver and passenger health and sleep interaction |
KR20180120901A (en) | 2017-04-28 | 2018-11-07 | 쌍용자동차 주식회사 | Health care apparatus with a passenger physical condition measurement in a vehicle and method there of |
WO2019012535A1 (en) | 2017-07-12 | 2019-01-17 | Guardian Optical Technologies Ltd. | Systems and methods for acquiring information from an environment |
Non-Patent Citations (2)
Title |
---|
See also references of EP3902697A4 |
YUAN ET AL.: "Hetero-convlstm: A deep learning approach to traffic accident prediction on heterogeneous spatio-temporal data", PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 23 August 2018 (2018-08-23), XP055710518, Retrieved from the Internet <URL:https://dl.acm.org/doi/pdf/10.1145/3219619.3219922> [retrieved on 20200330] * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022042868A1 (en) * | 2020-08-31 | 2022-03-03 | Continental Automotive Gmbh | Method for a vehicle for determining an injury information and vehicle electronic control device for determining an injury information |
CN115113247A (en) * | 2021-03-23 | 2022-09-27 | 昆达电脑科技(昆山)有限公司 | Inspection damage reporting method and inspection damage reporting system |
EP4142317A1 (en) | 2021-08-30 | 2023-03-01 | Emsense AB | Vehicle emergency system and method for providing emergency information |
EP4198922A1 (en) * | 2021-12-16 | 2023-06-21 | Aptiv Technologies Limited | Computer implemented method, computer system and non-transitory computer readable medium for detecting a person in the passenger compartment of a vehicle |
EP4198923A1 (en) * | 2021-12-16 | 2023-06-21 | Aptiv Technologies Limited | Computer implemented method, computer system and non-transitory computer readable medium for detecting a moving object in the passenger compartment of a vehicle |
DE102022104129A1 (en) | 2022-02-22 | 2023-08-24 | Audi Aktiengesellschaft | Method for determining a health effect, method for training a prognostic function, monitoring device, vehicle and training device |
EP4408043A1 (en) * | 2023-01-30 | 2024-07-31 | Valeo Telematik Und Akustik GmbH | Method for transmitting data during an emergency call and vehicle suitable for implementing such a method |
Also Published As
Publication number | Publication date |
---|---|
CN113302076A (en) | 2021-08-24 |
US20220067410A1 (en) | 2022-03-03 |
EP3902697A4 (en) | 2022-03-09 |
EP3902697A1 (en) | 2021-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220067410A1 (en) | System, device, and method for vehicle post-crash support | |
US20240265554A1 (en) | System, device, and methods for detecting and obtaining information on objects in a vehicle | |
US11706377B2 (en) | Visual, depth and micro-vibration data extraction using a unified imaging device | |
US11375338B2 (en) | Method for smartphone-based accident detection | |
US11861867B2 (en) | Systems, devices and methods for measuring the mass of objects in a vehicle | |
US10730465B2 (en) | 3D time of flight active reflecting sensing systems and methods | |
JP5923180B2 (en) | Biological information measuring device and input device using the same | |
CN113056390A (en) | Situational driver monitoring system | |
CN111252073A (en) | System and method for detecting problematic health conditions | |
CN113286979B (en) | System, apparatus and method for microvibration data extraction using time-of-flight (ToF) imaging apparatus | |
US20180204078A1 (en) | System for monitoring the state of vigilance of an operator | |
JP2022502757A (en) | Driver attention state estimation | |
CN105193402A (en) | Method For Ascertaining The Heart Rate Of The Driver Of A Vehicle | |
US20190051414A1 (en) | System and method for vehicle-based health monitoring | |
EP1800964B1 (en) | Method of depth estimation from a single camera | |
JP6597524B2 (en) | Biological information recording device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19902414 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019902414 Country of ref document: EP Effective date: 20210728 |