GB2558361A - Autonomous vehicle having an external movable shock-absorbing energy dissipation padding - Google Patents


Info

Publication number
GB2558361A
Authority
GB
United Kingdom
Prior art keywords
saedp
vehicle
occupant
autonomous
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1717339.4A
Other versions
GB2558361B (en)
GB201717339D0 (en)
Inventor
Gil Thieberger
Ari M. Frank
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Active Knowledge Ltd
Original Assignee
Active Knowledge Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Active Knowledge Ltd filed Critical Active Knowledge Ltd
Publication of GB201717339D0
Publication of GB2558361A
Application granted
Publication of GB2558361B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G02B27/01 Head-up displays
    • G02B27/017 Head-up displays: head mounted
    • G02B27/0093 Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B5/08 Optical elements other than lenses: mirrors
    • G02B2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G02B2027/0141 Head-up displays characterised by the informative content of the display
    • G02B2027/0178 Head mounted displays: eyeglass type
    • G02B2027/0183 Display position adjusting means: adaptation to parameters characterising the motion of the vehicle
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W50/0097 Predicting future conditions
    • B60W2050/143 Alarm means
    • B60R21/04 Padded linings for the vehicle interior; energy absorbing structures associated with padded or non-padded linings
    • B60R21/34 Protecting non-occupants of a vehicle, e.g. pedestrians
    • B60R21/36 Protecting non-occupants of a vehicle, e.g. pedestrians, using airbags
    • B60R2021/346 Protecting non-occupants of a vehicle: means outside vehicle body
    • B60R2001/1238 Mirror assemblies combined with other articles, e.g. clocks: with vanity mirrors
    • B60R2300/202 Vehicle viewing arrangements: displaying a blind spot scene on the vehicle part responsible for the blind spot
    • B60R2300/207 Vehicle viewing arrangements: using multi-purpose displays, e.g. camera image and navigation or video on same display
    • B60R2300/305 Vehicle viewing arrangements: merging camera image with lines or icons
    • B60R2300/8006 Vehicle viewing arrangements: for monitoring and displaying scenes of vehicle interior, e.g. for monitoring passengers or cargo
    • B60R2300/802 Vehicle viewing arrangements: for monitoring and displaying vehicle exterior blind spot views
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An autonomous on-road vehicle includes a window (120, figure 17a) located at eye level of an occupant who sits in a front seat of the vehicle, a reusable nontransparent Shock-Absorbing Energy Dissipation Padding (SAEDP) 121, and a motor (122) that moves the SAEDP over a sliding mechanism 123 between first and second states multiple times without having to be repaired. The vehicle also includes a processor (124) that receives, from an autonomous-driving control system, an indication that a probability of an imminent pedestrian-vehicle collision reaches a threshold, and commands the motor to move the SAEDP from the first state to the second state. In the first state the SAEDP does not block the occupant's eye level frontal view to the outside environment, and in the second state the SAEDP blocks the occupant's eye level frontal view to the outside environment and also absorbs some of the crashing energy transmitted to a pedestrian during a pedestrian-vehicle collision. The SAEDP may be a pneumatic pad or a passive material. A camera and display may be provided so that the occupant may see the outside environment when the SAEDP is in the second state.

Description

(56) Documents Cited: WO 2006/016052 A2; US 2007/0102126 A1
(58) Field of Search: INT CL B60R; Other: WPI, EPODOC
(71) Applicant(s): Active Knowledge Ltd, Hana Senesh, Kiryat Tivon 36036, Israel
(72) Inventor(s): Gil Thieberger; Ari M. Frank
(74) Agent and/or Address for Service: Active Knowledge Ltd, Great Portland Street, First Floor, London, W1W 7LT, United Kingdom
(54) Title of the Invention: Autonomous vehicle having an external movable shock-absorbing energy dissipation padding
(57) Abstract Title: AUTONOMOUS VEHICLE HAVING AN EXTERNAL SHOCK ABSORBING ENERGY DISSIPATION PADDING
This print incorporates corrections made under Section 117(1) of the Patents Act 1977.

[Drawings, 14 sheets: FIG. 1 to FIG. 19b; see the Brief Description of the Drawings below.]
AUTONOMOUS VEHICLE HAVING AN EXTERNAL MOVABLE SHOCK-ABSORBING ENERGY DISSIPATION PADDING
BACKGROUND [0001] Vehicle-pedestrian collisions claim many casualties, and this is likely to persist even in the age of autonomous vehicles. In some cases, collisions with pedestrians may be impossible to avoid, or too dangerous (to the vehicle occupants) to avoid. Because many traditional vehicles have a front windshield, vehicle-pedestrian collisions often involve the pedestrian hitting the stiff windshield, which can lead to severe bodily harm to the pedestrian and possibly also damage to the windshield. Thus, there is a need for devices that can reduce the danger in vehicle-pedestrian collisions.
SUMMARY [0002] An aspect of this disclosure involves an autonomous on-road vehicle that includes a window located at eye level of an occupant who sits in a front seat of the vehicle (e.g., a windshield), a reusable nontransparent Shock-Absorbing Energy Dissipation Padding (SAEDP), a motor, and a processor. The window enables the occupant to see the outside environment. The motor is configured to move the SAEDP over a sliding mechanism between first and second states multiple times without having to be repaired. In the first state the SAEDP does not block the occupant’s eye level frontal view to the outside environment, and in the second state the SAEDP blocks the occupant’s eye level frontal view to the outside environment. Additionally, in the second state the SAEDP is configured to absorb some of the crashing energy transmitted to a pedestrian during a pedestrian-vehicle collision. The processor is configured to receive, from an autonomous-driving control system, an indication indicative of whether a probability of an imminent pedestrian-vehicle collision reaches a threshold. Responsive to receiving an indication of an imminent collision (e.g., within less than 2 seconds), the processor is configured to command the motor to move the SAEDP from the first state to the second state.
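The deployment logic described in this paragraph can be sketched in code. The following is a minimal illustrative sketch in Python, not part of the disclosed embodiments; the class name, the motor interface, and the 0.5 probability threshold are all assumptions made for illustration.

    # Sketch of the SAEDP deployment control described above. All identifiers
    # (SaedpController, motor.move_to, the 0.5 threshold) are hypothetical.
    FIRST_STATE = "retracted"   # SAEDP does not block the occupant's frontal view
    SECOND_STATE = "deployed"   # SAEDP covers the window and can absorb impact energy

    class SaedpController:
        def __init__(self, motor, probability_threshold=0.5):
            self.motor = motor
            self.threshold = probability_threshold
            self.state = FIRST_STATE

        def on_indication(self, collision_probability):
            # Indication received from the autonomous-driving control system.
            if collision_probability >= self.threshold and self.state == FIRST_STATE:
                self.motor.move_to(SECOND_STATE)  # slide the SAEDP over the window
                self.state = SECOND_STATE

        def on_danger_passed(self):
            # The SAEDP is reusable: it returns to the first state without repair.
            if self.state == SECOND_STATE:
                self.motor.move_to(FIRST_STATE)
                self.state = FIRST_STATE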
[0003] A non-limiting advantage of the vehicle described above is that it increases the safety of a pedestrian in case of a vehicle-pedestrian collision, without prohibiting the occupant of the vehicle from receiving a frontal view of the outside environment during normal driving.
BRIEF DESCRIPTION OF THE DRAWINGS [0004] The embodiments are herein described by way of example only, with reference to the accompanying drawings. No attempt is made to show structural details of the embodiments in more detail than is necessary for a fundamental understanding of the embodiments. In the drawings:
FIG. 1 is a schematic illustration of components of a system configured to combine video see-through (VST) with video-unrelated-to-the-VST (VUR);
[0005] FIG. 2 illustrates an HMD tracking module that measures the position of the HMD relative to the compartment;
[0006] FIG. 3 illustrates a vehicle in which an occupant wears an HMD;
[0007] FIG. 4 illustrates an occupant wearing an HMD and viewing large VUR and smaller VST;
[0008] FIG. 5a illustrates how the VST moves to the upper left when the occupant looks to the bottom right;
[0009] FIG. 5b illustrates how the VST moves to the bottom right when the occupant looks to the upper left;
[0010] FIG. 6 illustrates HMD-video that includes both a non-transparent VST and video that shows the hands of the occupant and the interior of the compartment;
[0011] FIG. 7 illustrates HMD-video that includes both a partially transparent VST and video that shows the hands of the occupant and the interior of the compartment;
[0012] FIG. 8 illustrates HMD-video that includes a VST and partially transparent video that shows the hands of the occupant and the interior of the compartment;
[0013] FIG. 9a illustrates HMD-video that includes a VUR in full FOV, a first window comprising compartment-video (CV) and a second smaller window comprising the VST;
[0014] FIG. 9b illustrates HMD-video that includes VUR in full FOV, a first window comprising the CV and a second partially transparent smaller window comprising the VST;
[0015] FIG. 10a illustrates HMD-video that includes VUR in full FOV, a first window comprising VST and a second smaller window comprising zoom out of the CV;
[0016] FIG. 10b illustrates HMD-video that includes VUR and a partially transparent CV;
[0017] FIG. 11a illustrates a FOV of a vehicle occupant when the occupant wears an HMD that presents HMD-video;
[0018] FIG. 11b illustrates a FOV of a vehicle occupant when the vehicle occupant does not wear an HMD that presents the video, such as when watching an autostereoscopic display;
[0019] FIG. 11c illustrates FOV of a 3D camera that is able to capture sharp images from different focal lengths;
[0020] FIG. 12a and FIG. 12b illustrate vehicles with an SAEDP in their compartment where an occupant uses an HMD to receive a representation of the outside environment;
[0021] FIG. 13 illustrates a vehicle with an SAEDP in the vehicle’s compartment with displays;
[0022] FIG. 14 illustrates how an SAEDP protects the occupant in a side collision;
[0023] FIG. 15a and FIG. 15b illustrate a vehicle with a motor configured to move a nontransparent SAEDP to cover a side window;
[0024] FIG. 16a illustrates an SAEDP mounted to the front of a vehicle at eye level of an occupant of the vehicle;
[0025] FIG. 16b illustrates an outer SAEDP that includes two air bags;
[0026] FIG. 17a and FIG. 17b illustrate a motorized external SAEDP that can move between first and second states multiple times;
[0027] FIG. 18 illustrates a vehicle compartment in which an occupant may lie down; and
[0028] FIG. 19a and FIG. 19b are schematic illustrations of computers able to realize one or more of the embodiments discussed herein.
DETAILED DESCRIPTION [0029] The following are definitions of various terms that may be used to describe one or more of the embodiments in this disclosure.
[0030] The terms “autonomous on-road vehicle” and “autonomous on-road manned vehicle” refer to cars and motorcycles designed to drive on public roadways utilizing automated driving of level 3 and above according to SAE International® standard J3016 “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems”. For example, the autonomous on-road vehicle may be a level 3 vehicle, in which, within known, limited environments, drivers can safely turn their attention away from driving tasks; the autonomous on-road vehicle may be a level 4 vehicle, in which the automated system can control the vehicle in all but a few environments; and/or the autonomous on-road vehicle may be a level 5 vehicle, in which no human intervention is required and the automatic system can drive to any location where it is legal to drive. Herein, the terms “autonomous on-road vehicle” and “self-driving on-road vehicle” are equivalent terms that refer to the same thing. The term “autonomous on-road vehicle” does not include trains, airplanes, boats, and armored fighting vehicles.
[0031] An autonomous on-road vehicle utilizes an autonomous-driving control system to drive the vehicle. The disclosed embodiments may use any suitable autonomous-driving control system, whether known or yet to be invented. The following three publications describe various autonomous-driving control systems that may be utilized with the disclosed embodiments: (i) Paden, Brian, et al. A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles. arXiv preprint arXiv:1604.07446 (2016); (ii) Surden, Harry, and Mary-Anne Williams. Technological Opacity, Predictability, and Self-Driving Cars (March 14, 2016); and (iii) Gonzalez, David, et al. A Review of Motion Planning Techniques for Automated Vehicles. IEEE Transactions on Intelligent Transportation Systems 17.4 (2016): 1135-1145.
[0032] Autonomous-driving control systems usually utilize algorithms such as machine learning, pattern recognition, neural networks, machine vision, artificial intelligence, and/or probabilistic logic to calculate on the fly the probability of an imminent collision, or to calculate on the fly values that are indicative of the probability of an imminent collision (from which it is possible to estimate the probability of an imminent collision). The algorithms usually receive as inputs the trajectory of the vehicle, measured locations of at least one nearby vehicle, information about the road, and/or information about environmental conditions. Calculating the probability of an imminent collision is well known in the art, also for human-driven vehicles, such as the anticipatory collision system disclosed in US patent num. 8,041,483 to Breed.
[0033] In order to calculate whether a Sudden Decrease in Ride Smoothness (SDRS) event is imminent, the autonomous-driving control system may compare parameters describing the state of the vehicle at time t1 with parameters describing the state of the vehicle at time t2 that is shortly after t1. If the change in one or more of the parameters reaches a threshold (such as deceleration above a certain value, change of height in the road above a certain value, and/or an angular acceleration above a certain value), then it may be determined that an SDRS event is imminent.
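As an illustration of this comparison, the following Python sketch checks whether any state change between t1 and t2 reaches its threshold; the parameter names and threshold values are hypothetical, since the disclosure only requires that some change reaches a threshold.

    # Hypothetical thresholds; the text only requires that a change in one or
    # more parameters reaches a threshold.
    DECEL_THRESHOLD = 6.0      # m/s^2
    HEIGHT_THRESHOLD = 0.05    # m (change of height in the road)
    ANG_ACCEL_THRESHOLD = 2.0  # rad/s^2

    def sdrs_imminent(state_t1: dict, state_t2: dict, dt: float) -> bool:
        """Compare vehicle-state parameters sampled a short interval dt apart."""
        decel = (state_t1["speed"] - state_t2["speed"]) / dt
        height_change = abs(state_t2["road_height"] - state_t1["road_height"])
        ang_accel = abs(state_t2["yaw_rate"] - state_t1["yaw_rate"]) / dt
        return (decel > DECEL_THRESHOLD
                or height_change > HEIGHT_THRESHOLD
                or ang_accel > ANG_ACCEL_THRESHOLD)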
[0034] An “occupant” of a vehicle, as the term is used herein, refers to a person who is in the vehicle while it drives. The term “occupant” refers to a typical person having a typical shape, such as a 170 cm tall human (herein “cm” refers to centimeters). An occupant may be a driver, having some responsibilities and/or control regarding the driving of the vehicle (e.g., in a vehicle that is not completely autonomous), or may be a passenger. When an embodiment refers to “the occupant of the vehicle”, it may refer to one of the occupants of the vehicle. Stating that a vehicle has an “occupant” should not be interpreted to mean that the vehicle necessarily accommodates only one occupant at a time, unless that is explicitly stated, such as stating that the vehicle is “designed for a single occupant”.
[0035] Herein, a “seat” may be any structure designed to hold an occupant travelling in the vehicle (e.g., in a sitting and/or reclining position). A “front seat” is a seat that positions an occupant it holds no farther from the front of the vehicle than any other occupants of the vehicle are positioned. Herein, sitting in a seat also refers to sitting on a seat. Sitting in a seat is to be interpreted in this disclosure as occupying the space corresponding to the seat, even if the occupant does so by assuming a posture that does not necessarily correspond to sitting. For example, in some vehicles the occupant may be reclined or lying down, and in other vehicles the occupant may be more upright, such as when leaning into the seat in a half-standing, half-sitting position similar to leaning into a Locus Seat by Focal® Upright LLC.
[0036] The interchangeable terms “environment outside the vehicle” and “outside environment” refer to the environment outside the vehicle, which includes objects that are not inside the vehicle compartment, such as other vehicles, roads, pedestrians, trees, buildings, mountains, the sky, and outer space.
[0037] A sensor “mounted to the vehicle” may be connected to any relevant part of the vehicle, whether inside the vehicle, outside the vehicle, to the front, back, top, bottom, and/or to the side of the vehicle. A sensor, as used herein, may also refer to a camera.
[0038] The term “camera” refers herein to an image-capturing device that takes images of an environment. For example, the camera may be based on at least one of the following sensors: a CCD sensor, a CMOS sensor, a near infrared (NIR) sensor, an infrared sensor (IR), and a camera based on active illumination such as a LiDAR. The term “video” refers to a series of images that may be provided in a fixed rate, variable rates, a fixed resolution, and/or dynamic resolutions. The use of a singular “camera” should be interpreted herein as “one or more cameras”. Thus, when embodiments herein are described as including a camera that captures video and/or images of the outside in order to generate a representation of the outside, the representation may in fact be generated based on images and/or video taken using multiple cameras.
[0039] Various embodiments described herein involve providing an occupant of the vehicle with a representation of the outside environment, generated by a computer and/or processor based on video taken by a camera. In some embodiments, video from a single camera (e.g., which may be positioned on the exterior of the vehicle at eye level) may be sent by the processor and/or computer for presentation to the occupant following little, if any, processing. In other embodiments, video from a single camera or multiple cameras is processed in various ways, by the computer and/or processor, in order to generate the representation of the outside environment that is presented to the occupant.
[0040] Methods and systems for stitching live video streams from multiple cameras, stitching live video streams with database objects and/or other video sources, transforming a video stream or a stitched video stream from one point of view to another point of view (such as for generating a representation of the outside environment for an occupant at eye level, or for generating a compartment view for a person standing outside the compartment), tracking the position of an HMD relative to a compartment, and presenting rendered images that are perfectly aligned with the outside world - are all known in the art of computer graphics, video stitching, image registration, and real-time 360° imaging systems. The following publications are just a few examples of reviews and references that describe various ways to perform the video stitching, registration, tracking, and transformations, which may be utilized by the embodiments disclosed herein: (i) Wang, Xiaogang. Intelligent multi-camera video surveillance: A review. Pattern Recognition Letters 34.1 (2013): 3-19. (ii) Szeliski, Richard. Image alignment and stitching: A tutorial. Foundations and Trends® in Computer Graphics and Vision 2.1 (2006): 1-104. (iii) Tanimoto, Masayuki. FTV: Free-viewpoint television. Signal Processing: Image Communication 27.6 (2012): 555-570. (iv) Ernst, Johannes M., Hans-Ullrich Doehler, and Sven Schmerwitz. A concept for a virtual flight deck shown on an HMD. SPIE Defense + Security. International Society for Optics and Photonics, 2016. (v) Doehler, H-U., Sven Schmerwitz, and Thomas Lueken. Visual-conformal display format for helicopter guidance. SPIE Defense + Security. International Society for Optics and Photonics, 2014. (vi) Sanders-Reed, John N., Ken Bernier, and Jeff Guell. Enhanced and synthetic vision system (ESVS) flight demonstration. SPIE Defense and Security Symposium. International Society for Optics and Photonics, 2008. And (vii) Bailey, Randall E., Kevin J. Shelton, and J. J. Arthur III. Head-worn displays for NextGen. SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2011.
[0041] A video that provides a “representation of the outside environment” refers to a video that enables the average occupant, who is familiar with the outside environment, to recognize the location of the vehicle in the outside environment from watching the video. In one example, the average occupant is a healthy 30-year-old human who is familiar with the outside environment, and the threshold for recognizing a video as a “representation of the outside environment” is at least 20 correct recognitions of the outside environment out of 30 tests.
[0042] Herein, sentences such as “VST that represents a view of the outside environment from the point of view of the occupant”, or “VST representation of the outside environment, which could have been seen from the point of view of the occupant” refer to a video representing at least a portion of the outside environment, with a deviation of less than ±20 degrees from the occupant’s point of view of the outside environment, and zoom in the range of 30% to 300% (assuming the occupant’s unaided view is at 100% zoom level).
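The numeric bounds of this definition can be expressed as a simple check. The following Python sketch is illustrative only; the function name and its arguments are assumptions, not part of the disclosure.

    def qualifies_as_vst_viewpoint(angular_deviation_deg: float,
                                   zoom_percent: float) -> bool:
        """A view counts as being 'from the point of view of the occupant' if
        its direction deviates less than +/-20 degrees from the occupant's
        point of view and its zoom is between 30% and 300% (100% = unaided)."""
        return abs(angular_deviation_deg) < 20.0 and 30.0 <= zoom_percent <= 300.0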
[0043] The VST may be generated based on at least one of the following resources: a video of the outside environment that is taken in real-time, a video of the outside environment that was taken in the past and is played/processed according to the trajectory of the vehicle, a database of the outside environment that is utilized for rendering the VST according to the trajectory of the vehicle, and/or a video that is rendered as a function of locations of physical objects identified in the outside environment using detection and ranging systems such as RADAR and/or LIDAR.
[0044] Moreover, the term “video see-through (VST)” covers both direct representations of the outside environment, such as a video of the outside environment, and enriched video of the outside environment, such as captured video and/or rendered video of the outside environment presented together with one or more layers of virtual objects, as long as more than 20% of the average vehicle occupants, who are familiar with the outside environment, would be able to determine their location in the outside environment, while the vehicle travels, without using a map, and with a margin of error below 200 meters. However, it is noted that showing a map that indicates the location of the vehicle on the driving path (such as from the start to the destination) is not considered herein as equivalent to the VST, unless the map includes all of the following properties: the map shows images of the path, the images of the path capture at least 5 degrees of the occupant's FOV at eye level, and the images of the path reflect the dynamics of the vehicle and change in a similar manner to a video taken by a camera mounted to the vehicle and directed to the outside environment.
[0045] Herein, “field of view (FOV) of the occupant to the outside environment” refers to the part of the outside environment that is visible to the occupant of a vehicle at a particular position and orientation in space. In one example, in order for an occupant-tracking module to calculate the FOV to the outside environment of an occupant sitting in a vehicle compartment, the occupant-tracking module determines the position and orientation of the occupant’s head. In another example, in order for an occupant-tracking module to calculate the FOV of an occupant sitting in a vehicle compartment, the occupant-tracking module utilizes an eye tracker.
[0046] It is noted that sentences such as “a three-dimensional (3D) video see-through (VST) that represents a view of the outside environment, which could have been seen from the point of view of the occupant had the FOV not been obstructed by at least a portion of the nontransparent element” cover also just one or more portions of the FOV, and are to be interpreted as “a three-dimensional (3D) video see-through (VST) that represents a view of at least a portion of the outside environment, which could have been seen from the point of view of the occupant had at least some of the FOV not been obstructed by at least a portion of the nontransparent element”.
[0047] The term “display” refers herein to any device that provides a human user with visual images (e.g., text, pictures, and/or video). The images provided by the display may be two-dimensional or three-dimensional images. Some non-limiting examples of displays that may be used in embodiments described in this disclosure include: (i) screens and/or video displays of various devices (e.g., televisions, computer monitors, tablets, smartphones, or smartwatches), (ii) headset- or helmet-mounted displays such as augmented-reality systems (e.g., HoloLens®), virtual-reality systems (e.g., Oculus Rift®, HTC® Vive®, or Samsung GearVR®), and mixed-reality systems (e.g., Magic Leap®), and (iii) image projection systems that project images on an occupant’s retina, such as: Virtual Retinal Displays (VRD), which create images by scanning low-power laser light directly onto the retina, or light-field technologies that transmit light rays directly into the eye.
[0048] Various embodiments may include a reference to elements located at eye level. The “eye level” height is determined according to an average adult occupant for whom the vehicle was designed, who sits straight and looks to the horizon. Sentences in the form of “an element located at eye level of an occupant who sits in a vehicle” refer to the element and not to the occupant. The occupant is used in such sentences in the context of “eye level”, and thus claims containing such sentences do not require the existence of the occupant in order to construe the claim.
[0049] Sentences such as “SAEDP located at eye level”, “stiff element located at eye level”, and “crumple zone located at eye level” refer to elements that are located at eye level, but may also extend to other levels, such as from sternum level to roof level, from floor level to eye level, and/or from floor level to roof level. For example, an SAEDP located at eye level can extend from sternum level to above the occupant’s head, such that at least a portion of the SAEDP is located at the eye level.
[0050] Herein, “normal driving” refers to typical driving conditions, which persist most of the time the vehicle is in motion. During normal driving the probability of a collision is below a threshold that when reached typically involves one or more of the following: deployment of safety devices that are not usually in place (e.g., inflating airbags), taking evasive action to avoid a collision, and warning occupants of the vehicle about an imminent event that may cause a Sudden Decrease in Ride Smoothness (SDRS).
[0051] A Shock-Absorbing Energy Dissipation Padding (SAEDP) is an element that may be used to cushion impact of a body during a collision or during SDRS events. Various types of SAEDPs may be used in embodiments described herein, such as passive materials, airbags, and pneumatic pads.
[0052] Some examples of passive materials that may be used for the SAEDP in one or more of the disclosed embodiments include one or more of the following materials: CONFOR® foam by Trelleborg Applied Technology, Styrofoam® by The Dow Chemical Company®, Micro-Lattice Materials and/or Metallic Microlattices (such as by HRL Laboratories in collaboration with researchers at University of California and Caltech), non-Newtonian energy-absorbing materials (such as D3O® by D3O lab, and DEFLEXION® by Dow Corning®), Sorbothane® by Sorbothane Incorporated, padding that includes compression cells and/or shock absorbers of the Xenith® LLC type (such as described in US patent num. 8,950,735 and US patent application num. 20100186150), and materials that include rubber, such as a sponge rubber.
[0053] The term “stiff element”, together with any material mounted between an SAEDP and the outside environment, refers to a material having stiffness and impact resistance equal to or greater than that of glazing materials for use in motor vehicles as defined in the following two standards: (i) “American National Standard for Safety Glazing Materials for Glazing Motor Vehicles and Motor Vehicle Equipment Operating on Land Highways-Safety Standard” ANSI/SAE Z26.1-1996, and (ii) The Society of Automotive Engineers (SAE) Recommended Practice J673, revised April 1993, “Automotive Safety Glasses” (SAE J673, rev. April 93). The term “stiff element” in the context of low speed vehicles, together with any material mounted between the SAEDP and the outside environment, refers to a material having stiffness and impact resistance equal to or greater than that of glazing materials for use in low speed motor vehicles as defined in Federal Motor Vehicle Safety Standard 205 - Glazing Materials (FMVSS 205), from 49 CFR Ch. V (10-1-04 Edition). The stiff element may be transparent (such as automotive laminated glass, or automotive tempered glass) or nontransparent (such as fiber-reinforced polymer, carbon fiber reinforced polymer, steel, or aluminum).
[0054] Herein, a nontransparent element is defined as an element having Visible Light Transmittance (VLT) between 0% and 20%, which does not enable the occupant to recognize what lies on the other side of it. For example, a thick ground glass usually allows light to pass through it but does not let the occupant recognize the objects on the other side of it, unlike plain tint glass that usually lets the occupant recognize the objects on the other side of it, even when it features VLT below 10%. The term nontransparent includes an opaque element having VLT of essentially 0%, and a translucent element having VLT below 20%. VLT is defined as the amount of incident visible light that passes through the nontransparent element, where incident light is defined as the light that strikes the nontransparent element. VLT is also known as Luminous Transmittance of a lens, a light diffuser, or the like, and is defined herein as the ratio of the total transmitted light to the total incident light. The common clear vehicle windshield has a VLT of approximately 85%, although US Federal Motor Vehicle Safety Standard No. 205 allows the VLT to be as low as 70%.
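For illustration, the definition above can be captured in a short Python sketch; the function names are assumptions. Note that a low VLT alone is not sufficient, since the occupant must also be unable to recognize objects on the other side.

    def vlt_percent(transmitted_light: float, incident_light: float) -> float:
        """VLT: the ratio of total transmitted light to total incident light."""
        return 100.0 * transmitted_light / incident_light

    def is_nontransparent(vlt: float, occupant_recognizes_objects: bool) -> bool:
        # VLT between 0% and 20% is required, but plain tint glass below 10% VLT
        # that still lets the occupant recognize objects does not qualify.
        return vlt < 20.0 and not occupant_recognizes_objects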
[0055] Sentences such as “video unrelated to the VST (VUR)” mean that an average occupant would not recognize the video as a representation of the outside environment. In some embodiments, the content of the VUR does not change as a function of the position of the occupant’s head, which means that the point of view from which the occupant watches the VUR does not change essentially when the occupant’s head moves. Herein, stabilization effects, image focusing, dynamic resolution, color corrections, and insignificant changes to less than 10% of the frame as a function of the position of the occupant’s head - are still considered as content that does not change as a function of the position of the occupant’s head. Examples of such content (common in the year 2016) include cinema movies, broadcast TV shows, standard web browsers, and Microsoft Office® applications (such as Word, Excel and PowerPoint®).
[0056] Herein, a “crumple zone” refers to a structure designed to absorb energy from impact during a traffic collision by controlled deformation, slowing the deceleration transferred to the occupants. The controlled deformation absorbs some of the impact within the outer parts of the vehicle, rather than letting it be directly transferred to the occupants, while also preventing intrusion into and/or deformation of the compartment. A crumple zone may be achieved by various configurations, such as one or more of the following exemplary configurations: (i) by controlled weakening of sacrificial outer parts of the vehicle, while strengthening and increasing the stiffness of the inner parts of the vehicle, such as by using more reinforcing beams and/or higher strength steels for the compartment; (ii) by mounting composite fiber honeycomb or carbon fiber honeycomb outside the compartment; (iii) by mounting an energy absorbing foam outside the compartment; and/or (iv) by mounting an impact attenuator that dissipates impact.
[0057] In one example, a system configured to combine video see-through (VST) with video-unrelated-to-the-VST (VUR) includes at least the following components: a head-mounted display (HMD), such as HMD 15, a camera (e.g., camera 12), an HMD tracking module 27, and a computer 13. FIG. 1 provides a schematic illustration of at least some of the relationships between the components mentioned above.
[0058] The HMD 15 is configured to be worn by an occupant of a compartment of a moving vehicle and to present an HMD-video 16 to the occupant. In one example, the HMD 15 is an augmented-reality (AR) HMD. In another example, the HMD 15 is a virtual-reality (VR) HMD. Optionally, the system further comprises a video camera mounted to the VR HMD, and the VST video comprises video of the compartment received from the video camera mounted to the VR HMD. In yet another example, the HMD 15 is a mixed-reality HMD. The term “Mixed Reality” (MR), as used herein, involves a system that is able to combine real-world data with virtual data. Mixed Reality encompasses Augmented Reality and encompasses Virtual Reality that does not immerse its user 100% of the time in the virtual world. Examples of mixed-reality HMDs include, but are not limited to, the Microsoft HoloLens® HMD and the MagicLeap® HMD.
[0059] The camera 12, which is mounted to the vehicle, is configured to take video of the outside environment (Vout). Optionally, the data captured by the camera comprises 3D data. For example, the camera may be based on at least one of the following sensors: a CCD sensor, a CMOS sensor, a near infrared (NIR) sensor, an infrared sensor (IR), and a camera based on active illumination such as a LiDAR.
[0060] The HMD tracking module 27 is configured to calculate position of the HMD 15 relative to the compartment, based on measurements of a sensor. The HMD tracking module 27 may have different configurations.
[0061] In one example, the sensor comprises first and second Inertial Measurement Units (IMUs). The first IMU is physically coupled to the HMD 15 and is configured to measure a position of the HMD 15, and the second IMU is physically coupled to the compartment and is configured to measure a position of the compartment. The HMD tracking module 27 is configured to calculate the position of the HMD 15 in relation to the compartment based on the measurements of the first and second IMUs.
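One way to combine the two IMU readings is to express both orientations in a shared world frame and compose them, which cancels the common vehicle motion. The following Python sketch (using SciPy's rotation utilities) is an assumption about one possible implementation, not the disclosed method; in practice, position from IMUs alone drifts over time, which motivates adding a location measurement system as in the next example.

    from scipy.spatial.transform import Rotation as R

    def hmd_orientation_in_compartment(q_hmd_world, q_compartment_world):
        """Each IMU reports its orientation as a quaternion [x, y, z, w] in a
        shared world frame. Composing the inverse compartment rotation with
        the HMD rotation yields the HMD orientation relative to the
        compartment, independent of the vehicle's own motion."""
        r_hmd = R.from_quat(q_hmd_world)
        r_comp = R.from_quat(q_compartment_world)
        return r_comp.inv() * r_hmd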
[0062] In another example, the sensor comprises an Inertial Measurement Unit (IMU) and a location measurement system. The IMU is physically coupled to the HMD 15 and is configured to measure an orientation of the HMD 15. The location measurement system is physically coupled to the compartment and is configured to measure a location of the HMD in relation to the compartment. The HMD tracking module 27 is configured to calculate the position of the HMD 15 in relation to the compartment based on the measurements of the IMU and the location measurement system. Optionally, the location measurement system measures the location of the HMD 15 in relation to the compartment based on at least one of the following inputs: a video received from a camera that captures the HMD 15, a video received from a stereo vision system, measurements of magnetic fields inside the compartment, wireless triangulation measurements, acoustic positioning measurements, and measurements of an indoor positioning system.
[0063] FIG. 2 illustrates a scenario in which the HMD tracking module 27 is physically coupled to the compartment and is configured to measure the position of the HMD relative to the compartment. The HMD tracking module 27 may utilize a passive camera system, an active camera system that captures reflections of a transmitted grid, and/or a real-time locating system based on microwaves and/or radio waves.
[0064] The computer 13 is configured to receive a location of a video see-through window (VSTW) in relation to the compartment, and to calculate, based on the position of the HMD relative to the compartment, a window-location for the VSTW on the HMD-video. The computer 13 is also configured to generate, based on the window-location and the Vout, the VST that represents a view of the outside environment from the point of view of the occupant. Optionally, the VST is rendered as 3D video content. Additionally, the computer 13 is further configured to generate the HMD-video 16 based on combining the VUR with the VST in the window-location. The computer 13 may use various computer graphics functions and/or libraries known in the art to generate the VST, transform the VST to the occupant’s point of view, render the 3D video content, and/or combine the VUR with the VST.
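The combination of the VUR with the VST in the window-location may be as simple as compositing one video frame into another. The following Python sketch (NumPy arrays, hypothetical names, simple alpha blending) illustrates one possible way to do it under those assumptions; the partially transparent case corresponds to FIG. 7.

    import numpy as np

    def compose_hmd_frame(vur_frame, vst_frame, window_rect, alpha=1.0):
        """Overlay the VST into the VUR at the window-location.
        vur_frame: HxWx3 uint8 array (the full-FOV VUR frame).
        vst_frame: hxwx3 uint8 array, already warped to the occupant's point
        of view and resized to the window dimensions.
        window_rect: (x, y, w, h) window-location in HMD-display pixels.
        alpha < 1.0 yields a partially transparent VST."""
        frame = vur_frame.copy()
        x, y, w, h = window_rect
        region = frame[y:y+h, x:x+w].astype(np.float32)
        blended = alpha * vst_frame.astype(np.float32) + (1.0 - alpha) * region
        frame[y:y+h, x:x+w] = blended.astype(np.uint8)
        return frame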
[0065] In one example, the content of the VUR does not change when the occupant moves the head, and the content of the VUR is unrelated to the video taken by the camera. Additionally, the content of the VUR is generated based on data obtained more than 2 seconds before the HMD-video 16 is displayed to the occupant. Some examples of the VUR include a video stream of at least one of the following types of content: a recorded television show, a computer game, an email, and a virtual computer desktop.
[0066] FIG. 3 illustrates a scenario in which the occupant 14 wears an HMD 15. The HMD 15 provides video to the occupant 14 through the display of the HMD 15. The vehicle includes a camera 12 that takes video of the outside environment 11a, which is processed in a manner suitable for the location of the occupant. The HMD 15 presents the video to the occupant as a VSTW, and the position of the VSTW is calculated in relation to the compartment of the vehicle and moves with the compartment. While the vehicle is in motion, the VSTW changes its content to represent the outside environment 11a of the vehicle, whereas the video-unrelated-to-the-VST does not change when the occupant moves his head. The computer is configured to receive a location of a VSTW in relation to the compartment, and to calculate, based on the position of the occupant’s head, a window-location for the VSTW on the video.
[0067] FIG. 4 illustrates a scenario in which the occupant 44 wears HMD 45 and views a large VUR 40 and a smaller VST 41a. The VUR 40 does not change when the head of the occupant 44 moves. The VSTW presents video of the street based on video taken by the camera that is mounted to the vehicle. The location of the video-see-through window in relation to the compartment does not change when the occupant 44 moves his/her head, in order to imitate a physical window that does not change its position relative to the compartment when the occupant’s head moves.
[0068] FIG. 5a illustrates how the VST moves to the upper left when the occupant 44 looks to the bottom right. FIG. 5b illustrates how the VST moves to the bottom right when the occupant 44 looks to the upper left. In both cases, the VUR moves with the head while the location of the VST changes according to the movement of the head relative to the compartment, as measured by the HMD tracking module 27.
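The behaviour of FIG. 5a and FIG. 5b follows from projecting a compartment-fixed anchor point into the HMD display: when the head turns one way, the projected window moves the other way. The following Python sketch assumes a simple pinhole projection and reuses the SciPy rotation from the earlier sketch; all names and parameters are illustrative assumptions.

    import numpy as np

    def vstw_screen_position(p_window_comp, p_hmd_comp, r_hmd_in_comp,
                             focal_px, cx, cy):
        """Project the compartment-fixed VSTW anchor into HMD-display pixels.
        p_window_comp, p_hmd_comp: 3D points in compartment coordinates.
        r_hmd_in_comp: HMD orientation relative to the compartment (a SciPy
        Rotation, e.g., from the two-IMU sketch above)."""
        # Express the anchor in the HMD frame (+z assumed to be the viewing axis).
        p_in_hmd = r_hmd_in_comp.inv().apply(np.asarray(p_window_comp)
                                             - np.asarray(p_hmd_comp))
        x, y, z = p_in_hmd
        if z <= 0:
            return None  # the window-location is behind the occupant's view
        u = cx + focal_px * x / z  # head turns right -> anchor moves left
        v = cy - focal_px * y / z
        return (u, v)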
[0069] The content of the VUR may be augmented-reality content, mixed-reality content, and/or virtual-reality content rendered to correspond to the occupant’s viewing direction. Optionally, the VUR is unrelated to the video taken by the camera. In one example, the VUR may include a video description of a virtual world in which the occupant may be playing in a game (e.g., represented by an avatar). Optionally, in this example, most of the features of the virtual world are different from the view of the outside of the vehicle (as seen from the occupant’s viewing direction). For example, the occupant may be driving in a city, while the virtual world displays woods, a meadow, or outer space. In another example, the VUR may include augmented reality content overlaid above a view of the inside of the compartment.
[0070] In addition to the components described above, the system may include a second camera that is mounted to the HMD and is configured to take video of the compartment (Vcomp). In this case, the computer is further configured to generate a compartment-video (CV), based on Vcomp and a location of a compartment-video window (CVW) in relation to the HMD-video (e.g., HMD-video 16), and to generate the HMD-video also based on the CV in the CVW, such that the HMD-video combines the VUR with the VST in the window-location with the CV in the CVW. There are various ways in which the CVW may be incorporated into the HMD-video. Some examples of these approaches are illustrated in the following figures.
[0071] FIG. 6 illustrates HMD-video that includes both a non-transparent VST 55 in the window-location and a CV 56 that shows the hands of the occupant and the interior of the compartment in the CVW. FIG. 7 illustrates HMD-video that includes both a partially transparent VST 57 in the window-location and the CV 56 that shows the hands of the occupant and the interior of the compartment in the CVW. FIG. 8 illustrates HMD-video that includes a VST 58 and partially transparent CV 59. The figure illustrates that the occupant sees the outside environment in full field-of-view (FOV), while on top of it there is a partially transparent image (illustrated as dotted image) of the compartment and the hands of the occupant, in order to help the occupant not to hit things in the compartment.
[0072] FIG. 9a illustrates HMD-video that includes a VUR 70 in full FOV, a first window comprising the CV 71 in the CVW and a second smaller window comprising the VST 72 in the window-location.
[0073] FIG. 9b illustrates HMD-video that includes VUR 70 in full FOV, a first window comprising the CV 71 in the CVW and a second partially transparent smaller window comprising the VST 73 in the window-location.
[0074] FIG. 10a illustrates HMD-video that includes VUR 70 in full FOV, a first window comprising VST 75 in the window-location and a second smaller window comprising a zoomed-out view of the CV 76 in the CVW. Optionally, the cabin view in the zoomed-out window is smaller than reality, and may enable the occupant to orient himself/herself in the cabin. Optionally, the occupant may move the CVW, as illustrated in FIG. 10a, where the zoomed-out CV in the CVW is somewhat above its location in reality.
[0075] FIG. 10b illustrates HMD-video that includes VUR 70 and a partially transparent CV 72. Here a first occupant sees the VUR in full field-of-view (FOV), and on top of it there is a partially transparent image of the compartment and a second occupant that sits to the left of the first occupant, which may help the first occupant not to hit the second occupant.
[0076] There may be various ways in which the system determines the location and/or size of the VSTW. In one example, the VSTW is pinned to at least one of the following locations: a specific physical location and a location of an object in the compartment, such that the location of the VSTW in relation to the compartment does not change when the occupant moves his/her head with the HMD 15 as part of watching the HMD-video 16 and without commanding the VSTW to move in relation to the compartment.
[0077] In another example, the system includes a user interface configured to receive a command from the occupant to move and/or resize the VSTW in relation to the compartment. In one example, the command is issued through a voice command (e.g., saying “move VST to the bottom”). In another example, the command may be issued by making a gesture, which is detected by a gesture control module in the compartment and/or on a device of the occupant (e.g., as part of the HMD). Optionally, the computer is further configured to: update the window-location based on the command from the occupant, and generate an updated VST based on the updated window-location and the video taken by the camera. Optionally, the VST and the updated VST present different VSTW locations and/or dimensions in relation to the compartment. Optionally, the HMD is configured not to present any part of the VST to the occupant when the window-location is not in the field of view presented to the occupant through the HMD.
[0078] In yet another example, the system may further include a video analyzer configured to identify an Object Of Interest (OOI) in the outside environment. For example, the OOI may be a certain landmark (e.g., a building), a certain object (e.g., a store or a certain model of automobile), or a person. Optionally, the computer is further configured to receive, from the video analyzer, an indication of the position of the OOI, and to track the OOI by adjusting the window-location according to the movements of the vehicle, such that the OOI is visible via the VST. Optionally, the HMD is configured not to present any part of the VST to the occupant when the window-location is not in the field of view presented to the occupant through the HMD.
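Tracking the OOI reduces, in the simplest case, to re-centering the window-location on the position reported by the video analyzer. A minimal Python sketch, with hypothetical names and clamping to the display bounds:

    def recenter_window_on_ooi(window_rect, ooi_center_px, display_w, display_h):
        """Adjust the window-location so the OOI stays visible via the VST."""
        x, y, w, h = window_rect
        ox, oy = ooi_center_px  # OOI position reported by the video analyzer
        new_x = min(max(int(ox - w / 2), 0), display_w - w)
        new_y = min(max(int(oy - h / 2), 0), display_h - h)
        return (new_x, new_y, w, h)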
[0079] The VST that represents the view of the outside environment from the point of view of the occupant may not necessarily match the video taken by the cameras. In one example, the VST may utilize image enhancement techniques to compensate for outside lighting conditions, to give an occupant an experience similar to looking out through a conventional vehicle window but without the view being distorted by raindrops or dirt on the window, or to improve the visual impression of the outside environment, e.g., by showing background images that are different from those retrievable from the outside environment. Additionally or alternatively, the VST may mimic the outside environment, alter the outside environment, and/or be completely different from what can be seen in the outside environment. The VST may focus on providing visual information that makes the travelling more fun. The vehicle may provide different styles of the outside environment to different occupants in the vehicle, such that a first VST provided to a first occupant may mimic the outside environment, while a second VST provided to a second occupant may alter the outside environment and/or be completely different from the outside environment, optionally for comfort enhancement and/or entertainment.
[0080] In some cases, the VST may be informative, and aid at least some of the occupants in determining the location of the vehicle in the environment. In one example, at least some of those occupants could not determine their location without the VST. In one example, less than 20% of average vehicle occupants, who are familiar with the outside environment, are able to determine their real location in the outside environment by watching the VUR, without using a map, with a margin of error that is less than 100 meters, and while the vehicle travels; while more than 20% of the average vehicle occupants, who are familiar with the outside environment, are able to determine their real location in the outside environment by watching the VST, without using a map, with a margin of error that is less than 100 meters, and while the vehicle travels.
[0081] FIG. 11a illustrates a FOV in the context of presented video and terminology used herein. The vehicle occupant 200 wears an HMD 201 that presents HMD-video (such as HMD-video 16). The HMD-video may be presented at a single focal plane, or at multiple focal planes, depending on the characteristics of the HMD 201 (when the occupant focuses on a certain focal plane, his/her point of gaze is said to be on that focal plane). In addition, the presented objects may be two-dimensional (2D) virtual objects and/or three-dimensional (3D) virtual objects, which may also be referred to as holographic objects. Element 204 represents the location of a nontransparent element fixed to the vehicle compartment. In one example, the HMD 201 is a holographic HMD, such as Microsoft HoloLens®, which can present content displayed on a series of focal planes that are separated by some distance. The virtual objects may be presented before the nontransparent element (e.g., polygons 202, 203), essentially on the nontransparent element 204, and/or beyond the nontransparent element (e.g., polygons 205, 206). As a result, the occupant’s gaze distance may be shorter than the distance to the nontransparent element (e.g., the distance to polygons 202, 203), essentially equal to the distance to the nontransparent element 204, and/or longer than the distance to the nontransparent element (e.g., the distance to polygons 205, 206). Polygon 207 represents a portion of the presented video at eye level of the vehicle occupant, which in one example is within ±7 degrees from the horizontal line of sight. Although the figure illustrates overlapping FOVs of polygons 202, 203, 204, and 205, the HMD may show different objects, capturing different FOVs, at different focal planes. It is noted that using a multi-focal-plane HMD is not limited to displaying content on a plane. For example, the HMD may project an image throughout a portion of, or all of, a display volume. Further, a single object such as a vehicle could occupy multiple volumes of space.
[0082] According to the terminology used herein, the nontransparent element 204 is said to be located on a FOV overlapping the FOVs of polygons 205 and 203, because polygons 203, 204, and 205 share the same FOV. The FOV of polygon 206 is contained in the FOV of polygon 204, and the FOV of polygon 207 intersects the FOV of polygon 204. The FOV of polygon 203 is before the nontransparent element 204, and therefore it may hide the nontransparent element 204 partially or entirely, especially when utilizing a multi-focal-plane HMD.
[0083] FIG. 11b illustrates a FOV in the context of the presented video, where the vehicle occupant 210 does not wear an HMD that presents the video, such as when watching an autostereoscopic display. The autostereoscopic display is physically located on plane 214, and the presented video may be presented at a single focal plane, or at multiple focal planes, depending on the characteristics of the autostereoscopic display. In one example, the autostereoscopic display is a holographic display, such as a SeeReal Technologies holographic display, where the presented video may present virtual objects before the focal plane of the autostereoscopic display (e.g., planes 212, 213), essentially on the focal plane of the autostereoscopic display 214, and/or beyond the focal plane of the autostereoscopic display (e.g., planes 215, 216). As a result, the occupant’s gaze distance may be shorter than the distance to the autostereoscopic display (e.g., planes 212, 213), essentially equal to the distance to the autostereoscopic display 214, and/or longer than the distance to the autostereoscopic display (e.g., planes 215, 216). The term “autostereoscopic” includes technologies such as automultiscopic, glasses-free 3D, glasses-less 3D, parallax barrier, integral photography, lenticular arrays, Compressive Light Field Displays, holographic displays based on eye tracking, color filter pattern autostereoscopic displays, volumetric displays that reconstruct the light field, integral imaging that uses a fly's-eye lens array, and/or High-Rank 3D (HR3D).
[0084] FIG. 11c illustrates the FOV of a 3D camera that is able to capture sharp images at different focal lengths.
[0085] The vehicle and/or the HMD may utilize at least one Inertial Measurement Unit (IMU), and the system utilizes an Inertial Navigation System (INS) to compensate for imperfections in the IMU measurements. An INS typically has one or more secondary navigation sensors that provide direct measurements of the linear velocity, position, and/or orientation of the vehicle. These secondary navigation sensors could be anything from stereo vision systems, to GPS receivers, to digital magnetic compasses (DMCs), or any other type of sensor that can be used to measure linear velocity, position, and/or orientation. In one example, the information from these secondary navigation sensors is incorporated into the INS using an Extended Kalman Filter (EKF). The EKF produces corrections that are used to adjust the initial estimations of linear velocity, position, and orientation that are calculated from the imperfect IMU measurements. Adding secondary navigation sensors to an INS can increase its ability to produce accurate estimations of the linear velocity, position, and orientation of the vehicle over long periods of time.
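While a full EKF over position, velocity, and orientation is beyond the scope of this description, the simplified one-dimensional Kalman filter below illustrates the predict/correct cycle just described: integrating imperfect IMU accelerations predicts the state, and a drift-free secondary position fix corrects it. This is a minimal sketch; the noise values, sample period, and class name are illustrative assumptions rather than parameters of any embodiment.

    # Simplified 1-D position/velocity Kalman filter: imperfect IMU
    # accelerations drive the prediction; a drift-free secondary position
    # sensor (e.g., a triangulation fix) supplies the correction.
    import numpy as np

    class InsFilter:
        def __init__(self, dt=0.01):
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # [pos, vel] transition
            self.B = np.array([[0.5 * dt**2], [dt]])    # acceleration input
            self.H = np.array([[1.0, 0.0]])             # sensor measures position
            self.Q = np.eye(2) * 1e-4                   # process noise (IMU drift)
            self.R = np.array([[0.5]])                  # secondary sensor noise
            self.x = np.zeros((2, 1))                   # state estimate
            self.P = np.eye(2)                          # estimate covariance

        def predict(self, accel):
            """Dead-reckoning step driven by the IMU measurement."""
            self.x = self.F @ self.x + self.B * accel
            self.P = self.F @ self.P @ self.F.T + self.Q

        def correct(self, pos_fix):
            """Correction step using the secondary position measurement."""
            y = np.array([[pos_fix]]) - self.H @ self.x   # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P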
[0086] In one example, the system utilizes domain-specific assumptions in order to reduce drift of an INS used to calculate the HMD spatial position in relation to the compartment. More specifically, the following methods may be used to reduce or correct drift. Such methods generally fall into the categories of sensor fusion and/or domain-specific assumptions.
[0087] (i) Sensor fusion refers to processes in which signals from two or more types of sensors are used to update and/or maintain the state of a system. In the case of an INS, the state generally includes the orientation, velocity, and displacement of the device measured in a global frame of reference. A sensor fusion algorithm may maintain this state using IMU accelerometer and gyroscope signals together with signals from additional sensors or sensor systems. There are many techniques for performing sensor fusion, such as the Kalman filter and the particle filter.
[0088] One example of periodically correcting drift is to use position data from a triangulation positioning system relative to the compartment. Such systems try to combine the drift-free nature of positions obtained from the triangulation positioning system with the high sampling frequency of the accelerometers and gyroscopes of the IMU. Roughly speaking, the accelerometer and gyroscope signals are used to ‘fill in the gaps’ between successive updates from the triangulation positioning system.
[0089] Another example of reducing drift is to use a vector magnetometer that measures magnetic field strength in a given direction. The IMU may contain three orthogonal magnetometers in addition to the orthogonal gyroscopes and accelerometers. The magnetometers measure the strength and direction of the local magnetic field, allowing the north direction to be found.
[0090] (ii) In some cases, it is possible to make domain-specific assumptions about the movements of the occupant and/or the vehicle. Such assumptions can be used to minimize drift. One example in which domain-specific assumptions may be exploited is the assumption that when the vehicle accelerates or decelerates significantly, the HMD accelerates or decelerates essentially the same as the vehicle, allowing HMD drift in velocity to be periodically corrected based on a more accurate velocity received from the autonomous-driving control system of the vehicle. Another example in which domain-specific assumptions may be exploited is the assumption that when the vehicle accelerates or decelerates significantly, the HMDs of two occupants travelling in the same vehicle accelerate or decelerate essentially the same, allowing HMD drifts to be periodically corrected based on comparing the readings of the two HMDs. Still another example in which domain-specific assumptions are exploited is the assumption that the possible movement of an HMD of a belted occupant is, most of the time, limited to a portion of the compartment, allowing HMD drifts to be periodically corrected based on identifying when the HMD appears to move beyond that portion of the compartment.
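As a minimal sketch of the first of these assumptions: when the vehicle reports significant acceleration, the drifting HMD velocity estimate may be pulled toward the vehicle velocity reported by the autonomous-driving control system. The threshold and blend factor below are illustrative assumptions.

    # Illustrative drift correction under the assumption that, during
    # significant acceleration, the HMD moves essentially with the vehicle.
    SIGNIFICANT_ACCEL = 2.0  # m/s^2; assumed threshold for "significant"

    def correct_hmd_velocity(hmd_vel, vehicle_vel, vehicle_accel, blend=0.9):
        """Pull the drifting HMD velocity estimate toward the more accurate
        vehicle velocity whenever the assumption is known to hold."""
        if abs(vehicle_accel) >= SIGNIFICANT_ACCEL:
            return blend * vehicle_vel + (1.0 - blend) * hmd_vel
        return hmd_vel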
[0091] In one example, it may be desirable to adjust the position at which a virtual object is displayed in response to relative motion between the vehicle and the HMD, so that the virtual object appears stationary. However, the HMD IMU may indicate that the HMD is moving even when the detected motion is a motion of the vehicle carrying the HMD. In order to distinguish between motion of the HMD caused by the vehicle and motion of the HMD relative to the vehicle, non-HMD sensor data may be obtained by the HMD from sensors such as an IMU located in the vehicle and/or the GPS system of the vehicle, and the motion of the vehicle may be subtracted from the motion of the HMD in order to obtain a representation of the motion of the HMD relative to the vehicle. By differentiating movements of the HMD caused by occupant motion from movements caused by vehicle motion, the rendering of the virtual object may be adjusted for the relative motion between the HMD and the vehicle.
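A minimal sketch of this subtraction follows; it assumes both IMUs report in (or have been rotated into) a common reference frame, and the function name is illustrative.

    # Subtract vehicle motion from HMD motion to isolate head motion inside
    # the compartment; frame alignment is assumed to have been done already.
    import numpy as np

    def hmd_motion_in_compartment(hmd_accel, hmd_gyro, veh_accel, veh_gyro):
        """Return the HMD's linear acceleration and angular rate relative
        to the vehicle, so rendering compensates only for head motion."""
        rel_accel = np.asarray(hmd_accel) - np.asarray(veh_accel)
        rel_gyro = np.asarray(hmd_gyro) - np.asarray(veh_gyro)
        return rel_accel, rel_gyro

The rendering of the virtual object is then adjusted using only this relative component.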
[0092] In one example, an autonomous on-road vehicle includes a compartment, which one or more occupants may occupy while traveling in the vehicle (e.g., by sitting in seats). Coupled to the front of the compartment is a Shock-Absorbing Energy Dissipation Padding (SAEDP) and a stiff element that supports the SAEDP. Optionally, the SAEDP is nontransparent. The stiff element is located, during normal driving, at eye level between the SAEDP and the outside environment. Additionally, the vehicle includes a camera (e.g., camera 142 or structure 147 that comprises multiple cameras), which is configured to take video of the outside environment in front of the occupant, and a computer (e.g., computer 143) that is configured to generate, based on the video, a representation of the outside environment in front of the occupant at eye level. Optionally, the camera, and/or each of the cameras in the structure 147, may be based on at least one of the following sensors: a CCD sensor, a CMOS sensor, a near infrared (NIR) sensor, an infrared (IR) sensor, and a camera based on active illumination, such as a LiDAR. Optionally, when the camera comprises multiple cameras, the multiple cameras are directed to multiple directions around the vehicle, and the multiple cameras support generating multiple representations of the outside environment from different points of view.
[0093] It is to be noted that in some cases the SAEDP may be fixed at its location both in normal driving and in times that are not considered to correspond to normal driving, while in other cases, the SAEDP may change its location during at least some of the times that do not correspond to normal driving.
[0094] The SAEDP is coupled to the compartment in such a way that it is located, during normal driving, at eye level in front of an occupant who sits in a front seat of the vehicle. Different types of SAEDPs may be utilized in different embodiments.
[0095] In one example, the SAEDP comprises a passive material that is less stiff than a standard automotive glass window. The passive material is configured to protect the occupant’s head against hitting the inner side of the vehicle compartment during a collision. Optionally, the passive material has thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm. Optionally, the thickness of the passive material may refer to the average thickness of the SAEDP across the portion of the SAEDP at eye level. Alternatively, the thickness may refer to the maximal thickness at some position of the SAEDP (which is at least one of the values mentioned above).
[0096] In another example, the SAEDP comprises a pneumatic pad that is configured to inflate in order to protect the occupant’s head against hitting the inner side of the vehicle compartment during a collision. In some examples, the pneumatic pads may be formed from an elastomeric material providing chambers containing air or another gas. Optionally, the chambers are retained in a compressed, deflated condition until being inflated by the admission of gas pressure controlled by the vehicle’s autonomous-driving control system, which is responsible for estimating the probability and severity of an imminent collision. Additionally or alternatively, the chambers may be provided with restricted passages limiting the flow out of the chambers to provide shock-absorbing energy dissipation and reduce the rebound effect. US patent number 5,382,051 discloses examples of pneumatic pads that can be used in some cases.
[0097] In yet another example, the SAEDP comprises an automotive airbag, which is configured to protect the occupant’s head against hitting the inner side of the vehicle compartment during collision. In one example, during normal driving, the airbag is in a stowed state. The airbag is coupled to an inflator configured to inflate the airbag with gas to an inflated state, upon receiving an indication indicative of a probability of an impact of the vehicle exceeding a threshold. In this example, the airbag is located, when in the stowed state, at eye level in front of the occupant.
[0098] In some examples, the compartment may include a door, and the SAEDP is physically coupled to the door from the inside, such that the SAEDP moves with the door as the door opens and/or closes.
[0099] In some examples, the vehicle may include a second SAEDP coupled to the outer front of the vehicle to minimize damage to a pedestrian during a pedestrian-vehicle collision.
[0100] In one example, the stiff element that supports the SAEDP is nontransparent. In other examples, the stiff element may be automotive laminated glass or automotive tempered glass. Optionally, the structure of the vehicle comprises a crumple zone located at eye level between the stiff element and the outside environment.
[0101] The representation of the outside environment is intended to provide the occupant with some details describing the outside environment. In some examples, the representation of the outside environment is generated from the point of view of the occupant, and it represents how a view of the outside environment would look to the occupant, had there been a transparent window at eye level instead of the SAEDP and/or the stiff element. Optionally, a display is utilized to present the representation to the occupant.
[0102] Various types of displays may be utilized to present the representation of the outside environment to the occupant. In one example, the display is comprised in an HMD, and the vehicle further comprises a communication system configured to transmit the representation to the HMD. For example, the HMD may be a virtual reality system, an augmented reality system, or a mixed-reality system. In one example, the display is supported by at least one of the SAEDP and the stiff element. For example, the display is physically coupled to the SAEDP and/or the stiff element. Optionally, the display is a flexible display. For example, the flexible display may be based on at least one of the following technologies and their variants: OLED, organic thin film transistors (OTFT), electronic paper (e-paper), rollable display, and flexible AMOLED. In one example, the display is flexible enough such that it does not degrade the performance of the SAEDP by more than 20% during a collision. In one example, the performance of the SAEDP is measured by hitting a crash test dummy head against the SAEDP and measuring the head’s deceleration using sensors embedded in the head.
[0103] FIG. 12a, FIG. 12b, and FIG. 13 illustrate various examples of the vehicle described above. Each figure shows a cross-section view of a vehicle that includes a compartment 145 for a single occupant (in FIG. 12b) or more (in FIG. 12a and FIG. 13). In the figures, much of the compartment is lined with the SAEDP 140, which is nontransparent and comprises a soft passive material (cushiony in nature). Supporting the SAEDP 140 is a stiff element 141, which in the illustrations comprises portions of the exterior (hull) of the vehicle, which may optionally be made of one or more of the following materials: fiber-reinforced polymer, carbon fiber reinforced polymer, steel, and aluminum. The vehicles also include a camera (such as camera 142 and/or structure 147 that houses multiple cameras), which is positioned to capture a front view of the outside environment of the vehicle. Additionally, the vehicles include a computer 143, which may be positioned at various locations in the vehicle. Optionally, the computer may comprise multiple processors and/or graphics processors that may be located at various locations in the vehicle.
[0104] The figures illustrate various types of displays that may be utilized to present the occupant with the representation of the outside environment generated by the computer 143 based on the video taken by the camera 142. In FIG. 12a the representation is presented via an HMD 144, which may be, for example, a virtual reality HMD. In FIG. 12b the representation is presented via an HMD 146, which may be, for example, a mixed-reality headset. And in FIG. 13 the representation may be provided via one or more of the displays 150, which are coupled to the compartment. It is to be noted that in the figures described above not all of the described elements appear in each figure.
[0105] The figures also illustrate various structural alternatives that may be implemented in different examples described herein. For example, FIG. 12a illustrates a vehicle that includes window 148, which may optionally be an automotive tempered glass window, located in a location in which the head of a belted occupant is not expected to hit during collision. FIG. 12b illustrates a vehicle that includes crumple zone 149, which is located at the front of the vehicle at eye level. The figure also illustrates the structure 147 that houses multiple cameras directed to multiple directions around the vehicle.
[0106] The representation of the outside environment may be manipulated in order to improve how the outside environment looks to the occupant. Optionally, this may be done utilizing the computer. In one example, manipulating the representation includes at least one of the following manipulations: converting captured video of a winter day to video of a sunny day by preserving the main items in the captured video (such as vehicles and buildings) and applying the effects of a sunny day, converting an unpleasant environment to a pleasant one, converting standard vehicles to futuristic or old-fashioned vehicles, and adding fans standing outside and waving to the occupant.
[0107] In one example, the manipulation maintains the main items in the environment, such that the occupant would still know from the manipulated representation where he/she is traveling. In another example, the manipulated representation maintains the main objects in the video of the outside environment, such that the main objects presented in the manipulated video essentially match the main objects that would have been seen without the manipulation.
[0108] In some cases, the vehicle compartment may include an automotive laminated glass window or automotive tempered glass window located in a location where the head of a belted occupant is not expected to hit as a result of a collision while traveling at a velocity of less than 50 km/h, as illustrated by the dotted rectangle 148 in FIG. 12a.
[0109] In one example, the structure of the vehicle is such that a crumple zone is located at eye level between the stiff element and the outside environment.
[0110] Various types of vehicles may benefit from utilization of the nontransparent SAEDP supported by the stiff element and in conjunction with the camera and computer, as described above. The following are some examples of different characterizations of vehicles in different examples. In one example, the vehicle weighs less than 1,500 kg without batteries, and it is designed to carry up to five occupants. In another example, the vehicle weighs less than 1,000 kg without batteries, and it comprises an engine that is able to sustain continuously at most 80 horsepower. In yet another example, the vehicle weighs less than 1,000 kg and it is designed to carry up to two occupants. In still another example, the vehicle weighs less than 800 kg without batteries, and it comprises an engine that is able to sustain continuously at most 60 horsepower. In yet another example, the vehicle weighs less than 500 kg without batteries and it comprises an engine that is able to sustain continuously at most 40 horsepower. And in still another example, the vehicle weighs less than 400 kg without batteries and is designed to carry up to two occupants.
[0111] In one example, an autonomous on-road vehicle includes a compartment, which one or more occupants may occupy while traveling in the vehicle (e.g., by sitting in seats). Coupled to the compartment is a Shock-Absorbing Energy Dissipation Padding (SAEDP) and a stiff element that supports the SAEDP. Optionally, the SAEDP is nontransparent. The SAEDP is located, during normal driving, at eye level to the left of the occupant who sits in a front seat of the vehicle. The stiff element is located, during normal driving, at eye level between the SAEDP and the outside environment. Optionally, the stiff element is nontransparent. Optionally, the stiff element may be automotive laminated glass or automotive tempered glass.
[0112] The vehicle also includes a camera (such as camera 161) that is configured to take video of the outside environment to the left of the occupant, and a computer that is configured to generate, based on the video, a representation of the outside environment to the left of the occupant at eye level. Optionally, the camera comprises multiple cameras directed to multiple directions around the vehicle, and the multiple cameras support generating multiple representations of the outside environment from different points of view.
[0113] FIG. 14 illustrates one example of the autonomous on-road vehicle described above, which shows how an SAEDP protects the occupant during a collision. In the figure, SAEDP 160 (which may comprise a passive material) is coupled to the stiff element 141. When another vehicle collides with the side of the vehicle, the occupant’s head strikes the soft SAEDP 160, instead of a glass window (which would be positioned there in many conventional vehicles).
[0114] The SAEDP is coupled to the compartment in such a way that it is located, during normal driving, at eye level to the left of the occupant who sits in a front seat of the vehicle. Optionally, due to its location, the SAEDP obstructs at least 30 degrees out of the horizontal unaided field of view (FOV) to the outside environment to the left of the occupant at eye level. Optionally, the SAEDP obstructs at least 45 degrees or at least 60 degrees out of the horizontal unaided FOV to the outside environment to the left of the occupant at eye level. In one example of a standard vehicle, such as Toyota® Camry® model 2015, the frontal horizontal unaided FOV extends from the left door through the windshield to the right door.
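For intuition, the obstruction angle can be related to the pad geometry with simple trigonometry. The sketch below computes the horizontal angle subtended by a flat pad of a given width centered at a given distance from the occupant’s eyes; the dimensions in the usage comment are illustrative, not taken from any embodiment.

    # Illustrative geometry: horizontal FOV (degrees) obstructed by a flat
    # SAEDP of width w centered at distance d from the occupant's eyes.
    import math

    def obstructed_fov_deg(width_m, distance_m):
        return math.degrees(2.0 * math.atan((width_m / 2.0) / distance_m))

    # e.g., a 0.7 m wide pad at 0.6 m from the eyes subtends about 60 degrees:
    # obstructed_fov_deg(0.7, 0.6) -> ~60.5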
[0115] In some cases, the SAEDP is fixed to the left door of the vehicle. In one example, the vehicle has a single seat (occupied by the occupant). In another example, the vehicle has two or more front seats and the occupant occupies the leftmost of the two or more front seats.
[0116] Different types of SAEDPs may be utilized in different examples. In one example, the SAEDP comprises a passive material, which is less stiff than a standard automotive glass window, having a thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm. In other examples, the SAEDP may include an automotive airbag or a pneumatic pad that is configured to inflate in order to protect the occupant’s head against hitting the inner side of the vehicle compartment during collision.
[0117] In a similar fashion to how the SAEDP and stiff element are utilized to help protect the left side of the occupant, the same setup may be applied to the right side of the vehicle, in order to help protect that side. Thus, in some examples, the vehicle may further include a second SAEDP located at eye level to the right of the occupant who sits in the front seat, and a second stiff element located at eye level between the second SAEDP and the outside environment. Optionally, the second SAEDP obstructs at least 20 degrees out of the horizontal unaided FOV to the outside environment to the right of the occupant at eye level, and the computer is further configured to generate a second representation of the outside environment to the right of the occupant.
[0118] In one example, an autonomous on-road vehicle includes side window 170, nontransparent SAEDP (e.g., SAEDP 171), motor 172, and processor 175.
[0119] The processor 175 is configured to receive, from an autonomous-driving control system (such as autonomous-driving control system 65), an indication indicating that a probability of an imminent collision reaches a threshold, and to command the motor 172 to move the SAEDP 171 from the first state to the second state. In the first state the SAEDP 171 does not block the occupant’s eye level view to the outside environment, and in the second state the SAEDP 171 blocks the occupant’s eye level view to the outside environment in order to protect the occupant’s head against hitting the side window during a collision. Optionally, the processor is configured to command the motor 172 to start moving the SAEDP 171 to the second state at least 0.2 second, 0.5 second, 1 second, or 2 seconds before the expected time of the collision. [0120] The motor 172 is configured to move the SAEDP 171 over a sliding mechanism 173 between the first and second states multiple times without having to be repaired. For example, during the same voyage, the SAEDP 171 may go up and down multiple times without a need for the occupant or anyone else to repair the SAEDP 171 and/or other components (such as the motor 172 or the window 170) in order for the SAEDP 171 to be able to continue operating correctly (i.e., continue moving up and down when needed). In some examples, the motor 172 is designed to move the SAEDP 171 more than twice, more than 100 times, and/or more than 10,000 times without being replaced.
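The following is a minimal sketch (in Python) of this deploy logic; the class, method, and motor interface names are illustrative assumptions, and the retraction counterpart is sketched after paragraph [0136] below.

    # Minimal sketch of the deploy decision; interface names are assumed.
    class SaedpController:
        FIRST, SECOND = "first_state", "second_state"

        def __init__(self, motor, deploy_threshold):
            self.motor = motor
            self.deploy_threshold = deploy_threshold
            self.state = self.FIRST

        def on_indication(self, collision_probability):
            """Raise the SAEDP when the reported probability of an imminent
            collision reaches the threshold. The indication is assumed to
            arrive early enough to leave the SAEDP at least 0.2 s of travel
            time before the expected time of the collision."""
            if (collision_probability >= self.deploy_threshold
                    and self.state == self.FIRST):
                self.motor.move_to(self.SECOND)  # block window, protect head
                self.state = self.SECOND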
[0121] The side window 170 is located at eye level of an occupant who sits in the vehicle, which enables the occupant to see the outside environment. In one example, the side window 170 is a power window. In this example, the power window comprises a window regulator that transfers power from a window motor 177 to the side window glass in order to move it up or down. The motor 172 is coupled to an SAEDP regulator that transfers power from the motor 172 to the SAEDP 171 in order to move it up or down. In this example, the SAEDP regulator is located closer to the inner side of the compartment relative to the window regulator. Optionally, the motor 172 and the window motor 177 may be of the same type or of different types.
[0122] In one example, the SAEDP 171 comprises a passive material, which is less stiff than a standard automotive glass window, having thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm. Optionally, the vehicle may include a storage space in a door of the vehicle, which is configured to store the SAEDP 171 in the first state. Additionally or alternatively, the vehicle may include a storage space in the roof of the vehicle, which is configured to store the SAEDP 171 in the first state.
[0123] Optionally, the SAEDP 171 may move upwards when switching between the first and second states, and the top of the SAEDP has a shape (such as a triangle or a quarter sphere) which reduces the risk of catching a part of the occupant (e.g., a finger or limb), or the occupant’s clothing, between the top of the SAEDP 171 and an upper frame when moving the SAEDP 171 to the second state.
[0124] In one example, when switching the SAEDP 171 quickly between the first and second states, the SAEDP 171 is configured not to cover a range of 1 to 5 cm at the top of the window. Optionally, keeping said range unoccupied reduces the risk of catching the occupant’s fingers or limbs at the edge of the SAEDP 171 when moving the SAEDP 171 to the second state. [0125] In some examples, the vehicle may include additional SAEDPs that cover additional regions of the vehicle’s compartment (besides the side window 170). In one example, the vehicle includes an SAEDP 176 that covers at least a portion of the roof of the vehicle.
[0126] In some examples, the vehicle may include a camera (e.g., camera 178a), which is configured to take video of the outside environment while the SAEDP 171 is in the second state. Additionally, the vehicle may include a computer (such as computer 13), which is configured to generate a representation of the outside environment based on the video, and a display configured to present the representation of the outside environment to the occupant. The display may be physically coupled to the compartment and/or belong to an HMD. Optionally, the camera is fixed to the SAEDP, and thus moves along with the SAEDP 171 when it is moved between the first and second states. Optionally, the display is fixed to the SAEDP 171, and thus moves along with the SAEDP 171 when it is moved between the first and second states. Optionally, the display is configured to show, at eye level, a representation of the outside environment when the SAEDP is in the second state. In one example, the display is a flexible display. In another example, the camera comprises multiple cameras directed to multiple directions around the vehicle, and the computer is configured to generate at least two different representations of the outside environment, from at least two different points of view, for two occupants who sit in the vehicle.
[0127] Optionally, in addition to raising the SAEDP 171, one or more of the displays mentioned above is utilized to present to the occupant a video of the threat and the predicted trajectory that could result in the collision, in order to explain why the SAEDP 171 is being moved to the second state.
[0128] FIG. 15a and FIG. 15b illustrate an example of a vehicle in which the side window may be covered by an SAEDP that can move up and down. The figures illustrate cross-sections of the vehicle, which show how the SAEDP 171 may move from the first state (in FIG. 15a) to the second state (in FIG. 15b). The dotted line 179 indicates that the SAEDP 171 does not close the entire gap over the window (e.g., in order to avoid catching the occupant’s hair). The figures also illustrate sliding mechanism 173, which may be utilized to guide and assist in the movement of the SAEDP 171. FIG. 15b also illustrates camera 178a and display 178b, which are connected to a processor that generates, based on the video received from the camera, a view of the outside environment when an SAEDP (on the right side of the vehicle) is in the second state. The view of the outside environment is presented to the occupant on the display 178b. The camera and display on the left SAEDP 171, which correspond to camera 178a and display 178b, are not shown in the figure in order to keep it clear; however, it is to be understood that such a camera and display may be implemented with any relevant moving SAEDP.
[0129] In one example, the mechanism that moves the SAEDP 171 between the first and second states (referred to as the “SAEDP mechanism”) is similar to a power window regulator that moves an automobile window up and down. As with automobile power windows, the SAEDP regulator may be powered by an electric motor, which may come with the SAEDP regulator as one unit, or as a system that enables the motor or regulator to be replaced separately. The SAEDP mechanism includes a control system, a motor, a gear reduction, a sliding mechanism and the SAEDP, which are usually fixed on the door, but may alternatively be fixed on the roof as disclosed below. The sliding mechanism may have different architectures, such as Bowden type, double Bowden type, cable spiral, or crossed levers.
[0130] In a first example, the SAEDP mechanism is similar to a double Bowden power window mechanism, in which the SAEDP 171 is fixed on two supports respectively constrained along two rails. The control system drives the motor that wraps two Bowden cables, which move two supports and, consequently, the SAEDP 171. A Bowden cable transmits mechanical force through the movement of an inner cable relative to an outer housing, and in the case of a DC motor, the basic operations of the motor are accomplished by reversing the polarity of its power and ground input.
[0131] In a second example, the SAEDP mechanism is similar to a gear-drive type power window regulator; in this case, the SAEDP mechanism includes an SAEDP motor to power the mechanism, gear drive and geared arm to move the SAEDP 171 between the first and second states, and an SAEDP holding bracket to hold the SAEDP 171.
[0132] In a third example, the SAEDP mechanism is similar to a cable type power window regulator; in this case, the SAEDP regulator includes an SAEDP motor that drives a wire cable through a mechanism, a series of pulleys that guides the cable, and a regulator carriage that attaches to the cable and to the SAEDP 171 and slides on the regulator track. One or more tracks may be mounted vertically inside the door panel to serve as guide pieces when the SAEDP 171 slides up and down. Depending on the design, the setup may have one main regulator track in the center of the door, or a track on each side of the SAEDP.
[0133] In a fourth example, the SAEDP regulator is similar to a scissor power window regulator; in this case, a motor operates a gear wheel that raises and lowers the SAEDP 171 by the use of a scissor action of rigid bars.
[0134] The motor that moves the SAEDP 171 over the sliding mechanism may be any suitable motor, such as a DC electric motor, an AC electric motor, or a pneumatic motor.
[0135] In one example, the indication that the probability of an imminent collision reaches a threshold is received from the autonomous-driving control system 65, which calculates the probability based on the trajectory of the vehicle and information about the road. Optionally, the information about the road may be received from one or more of the following sources: a sensor mounted to the vehicle, a sensor mounted on a nearby vehicle, a road map, a stationary traffic controller near the vehicle, and a central traffic controller that communicates with the vehicle via a wireless channel.
[0136] In one example, the processor 175 is further configured to receive an updated indication that the probability of the imminent collision does not reach a second threshold, and to command the motor to move the SAEDP to the first state. In this example, the second threshold denotes a probability of a collision that is equal to or lower than the threshold.
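Together, the two thresholds form a simple hysteresis, which prevents the SAEDP from oscillating when the estimated probability hovers near the deploy threshold. A self-contained sketch, with illustrative threshold values:

    # Two-threshold hysteresis for the deploy/retract decision; the retract
    # threshold is assumed equal to or lower than the deploy threshold.
    def next_state(current_state, probability, deploy_thr=0.5, retract_thr=0.2):
        """Return 'second' (deployed) or 'first' (retracted)."""
        if current_state == "first" and probability >= deploy_thr:
            return "second"
        if current_state == "second" and probability < retract_thr:
            return "first"
        return current_state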
[0137] FIG. 16a illustrates one embodiment of an autonomous on-road vehicle that includes outer nontransparent SAEDP 190, which is mounted to the front side of the vehicle during normal driving, such that the SAEDP 190 is in front of and at eye level of an occupant who sits in a front seat of the vehicle. The SAEDP 190 is less stiff than a standard automotive glass window and is designed to absorb some of the crashing energy transmitted to a pedestrian during a pedestrian-vehicle collision. Additionally, the vehicle includes a camera (such as camera 142), which is mounted to the vehicle and is configured to take video of the outside environment in front of the occupant, and a computer (such as computer 143), which is configured to generate, based on the video, a representation of the outside environment at eye level for the occupant. Optionally, the representation is generated from the point of view of the occupant. Optionally, the vehicle includes a display configured to present the representation to the occupant. For example, the display may belong to an HMD worn by the occupant. In another example, the display may be coupled to the compartment of the vehicle, and may be a flexible display.
[0138] The SAEDP 190 may be implemented utilizing various approaches in different embodiments described herein. In one embodiment, the SAEDP 190 comprises a passive material. Optionally, the SAEDP 190 has thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm.
[0139] In another embodiment, the SAEDP 190 comprises an automotive airbag configured to inflate in order to protect the pedestrian. FIG. 16b illustrates an outer SAEDP 190 that includes two airbags 192 configured to absorb some of the crashing energy transmitted to a pedestrian during a pedestrian-vehicle collision. Optionally, the airbag has a stowed condition and an inflated condition. The airbag is coupled to an inflator configured to inflate the airbag with gas, and the airbag is located, in the stowed condition, at eye level in front of the occupant. In this embodiment, the vehicle further includes an autonomous-driving control system, such as autonomous-driving control system 65, which is configured to calculate a probability of a pedestrian-vehicle collision, based on measurements of sensors mounted to the vehicle, and to command the airbag to inflate before the pedestrian’s head hits the vehicle.
[0140] In yet another embodiment, the SAEDP 190 comprises a pneumatic pad configured to inflate in order to protect the pedestrian. In this embodiment, the vehicle further includes an autonomous-driving control system, such as autonomous-driving control system 65, which is configured to calculate a probability of a pedestrian-vehicle collision, based on measurements of sensors mounted to the vehicle, and to command the pneumatic pad to start inflating at least 0.5 seconds before the expected time of the collision in order to protect the pedestrian. Optionally, the pneumatic pad is reusable, and can be used multiple times without the need to be repaired. For example, the vehicle comprises a mechanism to deflate and/or stow the pneumatic pad, without requiring its repair and/or replacement.
[0141] FIG. 17a and FIG. 17b illustrate a motorized external SAEDP 121 that can move between first and second states multiple times. The figures illustrate how the SAEDP 121 can move from the first state (in FIG. 17a) to the second state (FIG. 17b) by having the motor 122 move the SAEDP 121 over sliding mechanism 123. Additionally, the figures illustrate optional camera 126 that is embedded in the SAEDP 121, and which may provide video to a processor configured to generate a representation of the outside environment when the SAEDP 121 is in the second state.
[0142] In one example, an autonomous on-road vehicle includes window 120, reusable SAEDP 121, motor 122, and processor 124. The window 120, which is located at eye level of an occupant who sits in a front seat of the vehicle, and which may be a windshield, enables the occupant to see the outside environment. The SAEDP 121 is reusable, i.e., it may be moved multiple times without the need to replace it or repair it after each use. The SAEDP 121 may be implemented utilizing various approaches. In one example, the SAEDP 121 comprises a passive material. Optionally, the SAEDP 121 has thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm. In another example, the SAEDP 121 comprises a pneumatic pad configured to inflate in order to protect the pedestrian. Optionally, the pneumatic pad is reusable, and the processor 124 is configured to command the pneumatic pad to start inflating at least 0.5 seconds before the expected time of the pedestrian-vehicle collision.
[0143] The motor 122 is configured to move the SAEDP 121 over a sliding mechanism 123 between first and second states multiple times without having to be repaired. In the first state the SAEDP 121 does not block the occupant’s eye level frontal view to the outside environment, and in the second state the SAEDP 121 blocks the occupant’s eye level frontal view to the outside environment. When in the second state, the SAEDP 121 is configured to absorb some of the crashing energy transmitted to a pedestrian during a pedestrian-vehicle collision.
[0144] The processor 124 is configured to receive, from an autonomous-driving control system (such as autonomous-driving control system 65), an indication indicative of whether a probability of an imminent pedestrian-vehicle collision reaches a threshold. Optionally, most of the time the vehicle travels, the processor 124 does not receive an indication that the probability reaches the threshold. Responsive to receiving an indication of an imminent collision (e.g., within less than 2 seconds), the processor 124 is configured to command the motor 122 to move the SAEDP 121 from the first state to the second state. Optionally, the processor 124 is configured to command the motor to start moving the SAEDP 121 to the second state at least 0.2 second, 0.5 second, 1 second, or 2 seconds before the pedestrian-vehicle collision in order to protect the pedestrian.
[0145] In one example, the vehicle includes a sensor configured to detect the distance and angle between the vehicle and a pedestrian, and the autonomous-driving control system calculates the probability of the imminent pedestrian-vehicle collision based on the data obtained from the sensor, the velocity of the vehicle, and the possible maneuvers.
[0146] In one example, the processor 124 is further configured to receive an updated indication that the probability of the imminent pedestrian-vehicle collision does not reach a second threshold, and to command the motor 122 to move the SAEDP to the first state. Optionally, the second threshold denotes a probability of a pedestrian-vehicle collision that is equal to or lower than the threshold.
[0147] In one example, the vehicle includes a camera (such as camera 126), which is configured to take video of the outside environment while the SAEDP 121 is in the second state. Additionally, the vehicle may further include a computer configured to generate, based on the video, a representation of the outside environment, and a display configured to present the representation of the outside environment to the occupant while the SAEDP 121 is in the second state. Optionally, the camera 126 is fixed to the SAEDP 121 from the outer side, and thus moves with the SAEDP 121 when it moves between the first and second states. Optionally, the display is fixed to the SAEDP 121 from the inner side, and thus also moves with the SAEDP 121 when it moves between the first and second states; the occupant can see the display via the window 120 when the SAEDP is in the second state. Alternatively, the display may be physically coupled to the compartment (such as a windshield that also functions as a display) and/or comprised in an HMD worn by the occupant.
[0148] In one example, an autonomous on-road vehicle designed for lying down includes a closed compartment 210, a mattress 211, an SAEDP 212 covering portions of the compartment 210, a camera (e.g., the structure 147 that houses multiple cameras), a computer (e.g., the computer 143), and a display 215. FIG. 18 illustrates a vehicle compartment 210 in which an occupant may lie down. In the figure, the occupant is lying down on mattress 211, which covers the floor of the compartment 210, and is watching a movie on the display 215. The SAEDP 212 covers the front, roof, and back of the compartment 210. It is to be noted that the SAEDP 212 also covers portions of the side walls of the compartment 210; however, this is not illustrated, to enable a clearer image of the example. The figure also includes an airbag 216, which may be inflated below the SAEDP 212 in order to protect the occupant and restrain his/her movement in the case of a collision.
[0149] The mattress 211 covers at least 50% of the compartment floor. Optionally, the mattress 211 covers at least 80% of the compartment floor. In one example, the mattress 211 has an average thickness of at least 3 cm. In other examples, the average thickness of the mattress 211 is greater than at least one of the following thicknesses: 5 cm, 7 cm, 10 cm, 20 cm, and 30 cm.
[0150] The SAEDP 212 is a nontransparent SAEDP, having an average thickness of at least 1 cm. Optionally, the SAEDP 212 covers at least 50% of the compartment side walls and at least 60% of the compartment front wall during normal driving. In one example, the average thickness of the SAEDP 212 is greater than at least one of the following thicknesses: 2 cm, 3 cm, 5 cm, 10 cm, 15 cm, and 20 cm. In another example, the SAEDP 212 covers at least 80% of the compartment side walls and at least 80% of the compartment front wall. In yet another example, the SAEDP covers at least 50% of the compartment roof. In still another example, the mattress and the SAEDP cover essentially the entire compartment interior.
[0151] In addition to the SAEDP 212, additional measures may be employed in order to improve the safety of the occupant. In one example, the vehicle includes an automotive airbag configured to deploy in front of the SAEDP 212 in order to protect the occupant, in addition to the SAEDP 212, against hitting the inner side of the vehicle compartment during a collision. It is noted that an airbag deploying “in front of the SAEDP” means that the airbag deploys towards the inner side of the compartment. Optionally, the airbag has a stowed condition and an inflated condition, and the airbag is coupled to an inflator configured to inflate the airbag with gas upon computing a predetermined impact severity. The stowed airbags may be stored in various positions, such as essentially in the middle of the front wall, essentially in the middle of the rear wall, in the side walls (possibly two or more horizontally spaced airbags), and in the roof (possibly one or more airbags towards the front of the compartment and one or more airbags towards the rear of the compartment).
[0152] In some cases, various additional safety measures may be utilized to improve the safety of the occupant while traveling, such as a sleeping net and/or a safety belt, as described for example in US patent numbers 5,536,042 and 5,375,879.
[0153] Stiff element 213 is configured to support the SAEDP 212 and to resist deformation during a collision in order to reduce compartment intrusion. Part of the stiff element 213 is located, during normal driving, at eye level between the SAEDP 212 and the outside environment. Optionally, the stiff element covers, from the outside, more than 80% of the SAEDP on the compartment side walls. Optionally, the vehicle also includes a crumple zone located at eye level between the stiff element 213 and the outside environment.
[0154] In another example, the vehicle includes a pneumatic pad configured to inflate in order to protect the occupant, in addition to the SAEDP 212, against hitting the inner side of the vehicle compartment during a collision. Optionally, the pneumatic pad is configured to deploy in front of the SAEDP 212 towards the inner side of the compartment. Alternatively, the pneumatic pad is located between the SAEDP 212 and the stiff element 213, and is configured to deploy behind the SAEDP 212. The pneumatic pad may be mounted to various locations, such as mounted to the front wall, mounted to the rear wall, mounted to the side walls, and/or mounted to the roof.
[0155] The camera is configured to take video of the outside environment. The computer is configured to generate a representation of the outside environment based on the video. Optionally, the representation is generated from the point of view of the occupant. The display 215 is configured to present the representation to the occupant. In one example, the display 215 is comprised in an HMD, and the vehicle further comprises a communication system configured to transmit the representation to the HMD. In another example, the display 215 is physically coupled to at least one of the SAEDP 212 and the stiff element 213 at eye level of the occupant. Optionally, the display 215 is a flexible display. For example, the display 215 may be a flexible display that is based on at least one of the following technologies and their variants: OLED, organic thin film transistors (OTFT), electronic paper (e-paper), rollable display, and flexible AMOLED. Optionally, the display 215 is flexible enough such that it does not degrade the performance of the SAEDP by more than 20% during a collision.
[0156] Having a vehicle compartment that is designed to allow an occupant to lie down comfortably can be done using various compartment designs, which may be different from the designs used in standard vehicles, in which occupants primarily sit up. In one example, the vehicle does not have an automotive seat with a backrest and safety belt, which enables the occupant to sit straight in the front two thirds of the compartment. In another example, the vehicle is designed for a single occupant, and the average distance between the mattress and the compartment roof is below 80 cm. In still another example, the vehicle is designed for a single occupant, and the average distance between the mattress and the compartment roof is below 70 cm. In still another example, the vehicle is designed for a single occupant, and the average distance of the compartment roof from the road is less than 1 meter. And in yet another example, the vehicle is designed for a single occupant, and the average distance of the compartment roof from the road is less than 80 cm.
[0157] It is to be noted that the use of the terms “floor”, “roof”, “side walls”, and “front wall” with respect to the compartment is to be viewed in their common meaning when one considers the compartment to be a mostly convex hull in 3D, such as having a shape that resembles a cuboid. Thus, for example, an occupant whose face faces forward will see the front wall ahead, the floor when looking below, the roof when looking above, and a side wall when looking to one of the sides (left or right). In embodiments that do not resemble cuboids, alternative definitions for these terms may be used based on the relative region (in 3D space) that each of the portions of the compartment occupies. For example, the floor of the compartment may be considered to be any portion of the compartment that is below at least 80% of the volume of the compartment. Similarly, the roof may be any portion of the compartment that is above at least 80% of the volume of the compartment. The front wall may be any portion of the compartment that is ahead of at least 80% of the volume of the compartment, etc. Note that using this alternative definition, some portions of the compartment may be characterized as belonging to two different regions (e.g., the front wall and the roof).
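For illustration only, this alternative definition can be made concrete by sampling the compartment volume with interior points and classifying a surface point by the fraction of sampled volume above, below, or behind it; the sampling representation and function name are assumptions, while the 80% figure follows the text above.

    # Classify a compartment surface point by how much of the compartment
    # volume lies above, below, or behind it (x forward, z up).
    import numpy as np

    def classify(point, interior_points, frac=0.8):
        """interior_points: Nx3 array sampling the compartment volume.
        Returns the (possibly overlapping) regions the point belongs to."""
        pts = np.asarray(interior_points)
        regions = set()
        if np.mean(pts[:, 2] > point[2]) >= frac:   # below 80% of the volume
            regions.add("floor")
        if np.mean(pts[:, 2] < point[2]) >= frac:   # above 80% of the volume
            regions.add("roof")
        if np.mean(pts[:, 0] < point[0]) >= frac:   # ahead of 80% of the volume
            regions.add("front wall")
        return regions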
[0159] Various embodiments described herein include a processor and/or a computer. For example, the autonomous-driving control system may be implemented using a computer, and generation of a representation of the outside environment is done using a processor or a computer. The following are some examples of various types of computers and/or processors that may be utilized in some of the embodiments described herein.
[0160] FIG. 19a and FIG. 19b are schematic illustrations of possible embodiments for computers (400, 410) that are able to realize one or more of the embodiments discussed herein. The computer (400, 410) may be implemented in various ways, such as, but not limited to, a server, a client, a personal computer, a network device, a handheld device (e.g., a smartphone), and/or any other computer form capable of executing a set of computer instructions.
[0161] The computer 400 includes one or more of the following components: processor 401, memory 402, computer readable medium 403, user interface 404, communication interface 405, and bus 406. In one example, the processor 401 may include one or more of the following components: a general-purpose processing device, a microprocessor, a central processing unit, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a special-purpose processing device, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a distributed processing entity, and/or a network processor. Continuing the example, the memory 402 may include one or more of the following memory components: CPU cache, main memory, read-only memory (ROM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), flash memory, static random access memory (SRAM), and/or a data storage device. The processor 401 and the one or more memory components may communicate with each other via a bus, such as bus 406.
[0162] The computer 410 includes one or more of the following components: processor 411, memory 412, and communication interface 413. In one example, the processor 411 may include one or more of the following components: a general-purpose processing device, a microprocessor, a central processing unit, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a special-purpose processing device, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a distributed processing entity, and/or a network processor. Continuing the example, the memory 412 may include one or more of the following memory components: CPU cache, main memory, read-only memory (ROM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), flash memory, static random access memory (SRAM), and/or a data storage device. [0163] Still continuing the examples, the communication interface (405, 413) may include one or more components for connecting to one or more of the following: an inter-vehicle network, Ethernet, intranet, the Internet, a fiber communication network, a wired communication network, and/or a wireless communication network. Optionally, the communication interface (405, 413) is used to connect with the network 408. Additionally or alternatively, the communication interface 405 may be used to connect to other networks and/or other communication interfaces. Still continuing the example, the user interface 404 may include one or more of the following components: (i) an image generation device, such as a video display, an augmented reality system, a virtual reality system, and/or a mixed reality system, (ii) an audio generation device, such as one or more speakers, and (iii) an input device, such as a keyboard, a mouse, an electronic pen, a gesture based input device that may be active or passive, and/or a brain-computer interface.
[0164] It is to be noted that when a processor (computer) is disclosed in one embodiment, the scope of the embodiment is intended to also cover the use of multiple processors (computers). Additionally, in some embodiments, a processor and/or computer disclosed in an embodiment may be part of the vehicle, while in other embodiments, the processor and/or computer may be separate from the vehicle. For example, the processor and/or computer may be in a device carried by the occupant and/or remote from the vehicle (e.g., a server).
[0165] As used herein, references to “one embodiment” (and its variations) mean that the feature being referred to may be included in at least one embodiment of the invention. Moreover, separate references to “one embodiment”, “some embodiments”, “another embodiment”, “still another embodiment”, etc., may refer to the same embodiment, may illustrate different aspects of an embodiment, and/or may refer to different embodiments.
[0166] Some embodiments may be described using the verb “indicating”, the adjective “indicative”, and/or variations thereof. Herein, sentences in the form of “X is indicative of Y” mean that X includes information correlated with Y, up to the case where X equals Y. For example, sentences in the form of “thermal measurements indicative of a physiological response” mean that the thermal measurements include information from which it is possible to infer the physiological response. Additionally, sentences in the form of “provide/receive an indication indicating whether X happened” refer herein to any indication method, including but not limited to: sending/receiving a signal when X happened and not sending/receiving a signal when X did not happen, not sending/receiving a signal when X happened and sending/receiving a signal when X did not happen, and/or sending/receiving a first signal when X happened and sending/receiving a second signal when X did not happen.
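To make the three indication conventions concrete, the following minimal Python sketch is offered as an editor's illustration only, not part of the disclosure; the function names and the send callback are hypothetical:

    # Illustrative only; "send" is any hypothetical signaling callback.
    def indicate_when_happened(x_happened, send):
        # Convention 1: send a signal when X happened, nothing otherwise.
        if x_happened:
            send("X")

    def indicate_when_not_happened(x_happened, send):
        # Convention 2: send a signal when X did not happen, nothing otherwise.
        if not x_happened:
            send("not-X")

    def indicate_with_two_signals(x_happened, send):
        # Convention 3: a first signal when X happened, a second when it did not.
        send("X" if x_happened else "not-X")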
[0167] “Most” of something is defined herein as above 51% of the something (including 100% of the something). For example, most of an ROI refers to at least 51% of the ROI. A “portion” of something refers herein to 0.1% to 100% of the something (including 100% of the something). Sentences of the form “a portion of an area” refer herein to 0.1% to 100% of the area.
[0168] As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having”, or any other variation thereof, indicate an open claim language that does not exclude additional limitations. The articles “a” and “an” are employed to describe one or more, and the singular also includes the plural unless it is obviously meant otherwise.
[0169] Certain features of some of the embodiments, which may have been, for clarity, described in the context of separate embodiments, may also be provided in various combinations in a single embodiment. Conversely, various features of some of the embodiments, which may have been, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
[0170] Embodiments described in conjunction with specific examples are presented by way of example, and not limitation. Moreover, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the appended claims and their equivalents.

Claims (12)

WE CLAIM:

1. An autonomous on-road vehicle, comprising: a window located at eye level of an occupant who sits in a front seat of the vehicle, whereby the window enables the occupant to see the outside environment; a reusable nontransparent Shock-Absorbing Energy Dissipation Padding (SAEDP); a motor configured to move the SAEDP over a sliding mechanism between first and second states multiple times without having to be repaired; and a processor configured to receive, from an autonomous-driving control system, an indication that a probability of an imminent pedestrian-vehicle collision reaches a threshold, and to command the motor to move the SAEDP from the first state to the second state; wherein in the first state the SAEDP does not block the occupant’s eye-level frontal view of the outside environment, and in the second state the SAEDP blocks the occupant’s eye-level frontal view of the outside environment; whereby in the second state the SAEDP is configured to absorb some of the crashing energy transmitted to a pedestrian during a pedestrian-vehicle collision.

2. The autonomous on-road vehicle of claim 1, wherein the processor is further configured to receive an updated indication that the probability of the imminent pedestrian-vehicle collision does not reach a second threshold, and to command the motor to move the SAEDP to the first state; whereby the second threshold denotes a probability for a pedestrian-vehicle collision that is equal to or lower than the threshold.

3. The autonomous on-road vehicle of claim 1, wherein the processor is configured to command the motor to start moving the SAEDP to the second state at least 0.2 second, 0.5 second, 1 second, or 2 seconds before the pedestrian-vehicle collision in order to protect the pedestrian.

4. The autonomous on-road vehicle of claim 1, wherein the SAEDP comprises a pneumatic pad configured to inflate in order to protect the pedestrian.

5. The autonomous on-road vehicle of claim 4, wherein the pneumatic pad is reusable, and the processor is configured to command the pneumatic pad to start inflating at least 0.5 second before the pedestrian-vehicle collision.

6. The autonomous on-road vehicle of claim 1, wherein the SAEDP comprises a passive material having a thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm.

7. The autonomous on-road vehicle of claim 1, further comprising a camera configured to take video of the outside environment while the SAEDP is in the second state, a computer configured to generate a representation of the outside environment based on the video, and a display configured to present the representation of the outside environment to the occupant while the SAEDP is in the second state.

8. The autonomous on-road vehicle of claim 7, wherein the camera is fixed to the SAEDP, and thus moves with the SAEDP when it is moved between the first and second states.

9. The autonomous on-road vehicle of claim 7, wherein the display is fixed to the SAEDP on the inner side, and thus moves with the SAEDP when it is moved between the first and second states; whereby the occupant can watch the display via the window when the SAEDP is in the second state.

10. The autonomous on-road vehicle of claim 7, wherein the display is at least one of the following displays: a display that is physically coupled to the compartment, and a display comprised in a head-mounted display.

AMENDMENTS TO THE CLAIMS HAVE BEEN FILED AS FOLLOWS

CLAIMS:
1. An autonomous on-road vehicle, comprising:
a reusable nontransparent Shock-Absorbing Energy Dissipation Padding (SAEDP); a motor configured to move the SAEDP between first and second states multiple times without having to be repaired;
a processor configured to command the motor to move the SAEDP from the first state to the second state responsive to receiving a first indication that a probability of an imminent pedestrian-vehicle collision reaches a first threshold; wherein in the first state the SAEDP does not block the eye-level frontal view of the outside environment of an occupant who sits in a front seat of the vehicle, and in the second state the SAEDP blocks the occupant’s eye-level frontal view of the outside environment; and the processor is further configured to command the motor to move the SAEDP to the first state responsive to receiving a second indication that the probability of the imminent pedestrian-vehicle collision does not reach a second threshold.
2. The autonomous on-road vehicle of claim 1, wherein the second threshold denotes a probability for a pedestrian-vehicle collision that is equal to or lower than the first threshold; and the processor is configured to receive at least one of the first and second indications from an autonomous-driving control system.
3. The autonomous on-road vehicle of claim 1, wherein the processor is configured to command the motor to start moving the SAEDP to the second state at least 0.2 second, 0.5 second, 1 second, or 2 seconds before the pedestrian-vehicle collision in order to protect the pedestrian.
4. The autonomous on-road vehicle of claim 1, wherein the SAEDP comprises a pneumatic pad configured to inflate in order to protect the pedestrian.
5. The autonomous on-road vehicle of claim 4, wherein the pneumatic pad is reusable, and the processor is configured to command the pneumatic pad to start inflating at least 0.5 second before the pedestrian-vehicle collision.
6. The autonomous on-road vehicle of claim 1, wherein the SAEDP comprises a passive material having a thickness greater than at least one of the following thicknesses: 1 cm, 2 cm, 5 cm, 10 cm, 15 cm, and 20 cm.
7. The autonomous on-road vehicle of claim 1, further comprising a camera configured to take video of the outside environment while the SAEDP is in the second state, a computer configured to generate a representation of the outside environment based on the video, and a display configured to present the representation of the outside environment to the occupant while the SAEDP is in the second state.
8. The autonomous on-road vehicle of claim 7, wherein the camera is fixed to the SAEDP, and thus moves with the SAEDP when it is moved between the first and second states.
9. The autonomous on-road vehicle of claim 7, wherein the display is fixed to the SAEDP on the inner side, and thus moves with the SAEDP when it is moved between the first and second states; whereby the occupant can watch the display via the windshield when the SAEDP is in the second state.
10. The autonomous on-road vehicle of claim 7, wherein the display is at least one of the following displays: a display that is physically coupled to the compartment, and a display comprised in a head-mounted display.
11. The autonomous on-road vehicle of claim 1, wherein the processor is configured to receive at least one of the first and second indications from an autonomous-driving control system.
12. The autonomous on-road vehicle of claim 1, wherein in the second state the SAEDP is configured to absorb some of the crashing energy released in a pedestrian-vehicle collision.
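By way of non-limiting illustration, the two-threshold behavior recited in claims 1-3 amounts to a hysteresis control loop. The Python sketch below is an editor's sketch under assumed names (the state labels, the threshold values, and the motor.move method are all hypothetical), not an implementation disclosed by the applicant:

    # Illustrative sketch of the claimed two-threshold (hysteresis) logic.
    FIRST_STATE = "view_unblocked"   # SAEDP stowed; frontal view clear
    SECOND_STATE = "view_blocked"    # SAEDP deployed over the frontal view

    # Per claim 2, the second threshold is equal to or lower than the first.
    FIRST_THRESHOLD = 0.7    # hypothetical deployment probability
    SECOND_THRESHOLD = 0.3   # hypothetical retraction probability

    def control_saedp(probability, state, motor):
        # "probability" would come from the autonomous-driving control
        # system (claim 11); motor.move(...) stands in for commanding
        # the motor that moves the SAEDP between states (claim 1).
        if state == FIRST_STATE and probability >= FIRST_THRESHOLD:
            motor.move(SECOND_STATE)   # per claim 3, started at least
            return SECOND_STATE        # 0.2 s before the predicted impact
        if state == SECOND_STATE and probability < SECOND_THRESHOLD:
            motor.move(FIRST_STATE)    # restore the occupant's frontal view
            return FIRST_STATE
        return state

Keeping the retraction threshold below the deployment threshold means the padding does not oscillate when the estimated collision probability hovers near a single value.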
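Similarly, the camera-computer-display arrangement of claim 7 can be pictured as a capture-render-present loop. Again, this is only an illustrative sketch; camera, renderer, display, and saedp_is_blocking are hypothetical placeholders rather than interfaces disclosed in the specification:

    def present_outside_environment(camera, renderer, display, saedp_is_blocking):
        # While the SAEDP is in the second state (frontal view blocked),
        # take video of the outside environment, generate a representation
        # of it, and present that representation to the occupant (claim 7).
        while saedp_is_blocking():
            frame = camera.capture()
            representation = renderer.represent(frame)
            display.show(representation)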
GB1717339.4A 2015-12-20 2016-12-20 Autonomous vehicle having an external movable shock-absorbing energy dissipation padding Expired - Fee Related GB2558361B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562270010P 2015-12-20 2015-12-20
US201662369127P 2016-07-31 2016-07-31
GB1621783.8A GB2547532B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external shock-absorbing energy dissipation padding

Publications (3)

Publication Number Publication Date
GB201717339D0 GB201717339D0 (en) 2017-12-06
GB2558361A true GB2558361A (en) 2018-07-11
GB2558361B GB2558361B (en) 2019-09-25

Family

ID=57963520

Family Applications (4)

Application Number Title Priority Date Filing Date
GB1618138.0A Withdrawn GB2545547A (en) 2015-12-20 2016-10-27 A mirroring element used to increase perceived compartment volume of an autonomous vehicle
GB1621125.2A Expired - Fee Related GB2547512B (en) 2015-12-20 2016-12-12 Warning a vehicle occupant before an intense movement
GB1621783.8A Expired - Fee Related GB2547532B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external shock-absorbing energy dissipation padding
GB1717339.4A Expired - Fee Related GB2558361B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external movable shock-absorbing energy dissipation padding

Family Applications Before (3)

Application Number Title Priority Date Filing Date
GB1618138.0A Withdrawn GB2545547A (en) 2015-12-20 2016-10-27 A mirroring element used to increase perceived compartment volume of an autonomous vehicle
GB1621125.2A Expired - Fee Related GB2547512B (en) 2015-12-20 2016-12-12 Warning a vehicle occupant before an intense movement
GB1621783.8A Expired - Fee Related GB2547532B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external shock-absorbing energy dissipation padding

Country Status (1)

Country Link
GB (4) GB2545547A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017218444B4 (en) * 2017-10-16 2020-03-05 Audi Ag Method for operating a safety system for a seat system of a motor vehicle and safety system for a seat system of a motor vehicle
CN108995590A (en) * 2018-07-26 2018-12-14 广州小鹏汽车科技有限公司 A kind of people's vehicle interactive approach, system and device
US11221741B2 (en) * 2018-08-30 2022-01-11 Sony Corporation Display control of interactive content based on direction-of-view of occupant in vehicle
DE102019118854A1 (en) * 2019-07-11 2021-01-14 Bayerische Motoren Werke Aktiengesellschaft Head-mounted display for use in dynamic application areas

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006016052A2 (en) * 2004-07-16 2006-02-16 Universite Louis Pasteur, U.L.P. Active safety device comprising a damping plate covering a motor vehicle windscreen in case of collision with a pedestrian
US20070102126A1 (en) * 2005-11-07 2007-05-10 Toyoda Gosei Co., Ltd. Occupant protection apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6585384B2 (en) * 2001-06-29 2003-07-01 N-K Enterprises Llc Wireless remote controlled mirror
JP4160848B2 (en) * 2003-03-20 2008-10-08 本田技研工業株式会社 Collision protection device for vehicle
US20090174774A1 (en) * 2008-01-03 2009-07-09 Kinsley Tracy L Video system for viewing vehicle back seat
US8629784B2 (en) * 2009-04-02 2014-01-14 GM Global Technology Operations LLC Peripheral salient feature enhancement on full-windshield head-up display
DE102010016113A1 (en) * 2010-03-24 2011-09-29 Krauss-Maffei Wegmann Gmbh & Co. Kg Method for training a crew member of a particular military vehicle
KR101679881B1 (en) * 2011-12-28 2016-11-28 현대자동차주식회사 An indoor system in vehicle which having a function of assistance for make-up of user's face
DE102013014210A1 (en) * 2013-08-26 2015-02-26 GM Global Technology Operations LLC Motor vehicle with multifunctional display instrument
WO2016044820A1 (en) * 2014-09-19 2016-03-24 Kothari Ankit Enhanced vehicle sun visor with a multi-functional touch screen with multiple camera views and photo video capability
EP3240711B1 (en) * 2014-12-31 2020-09-02 Robert Bosch GmbH Autonomous maneuver notification for autonomous vehicles


Also Published As

Publication number Publication date
GB2545547A (en) 2017-06-21
GB2547532B (en) 2019-09-25
GB2547532A (en) 2017-08-23
GB201618138D0 (en) 2016-12-14
GB201621783D0 (en) 2017-02-01
GB201621125D0 (en) 2017-01-25
GB2558361B (en) 2019-09-25
GB2547512B (en) 2019-09-18
GB2547512A (en) 2017-08-23
GB201717339D0 (en) 2017-12-06

Similar Documents

Publication Publication Date Title
GB2545958B (en) Moveable internal shock-absorbing energy dissipation padding in an autonomous vehicle
US10059347B2 (en) Warning a vehicle occupant before an intense movement
US10717406B2 (en) Autonomous vehicle having an external shock-absorbing energy dissipation padding
US10710608B2 (en) Provide specific warnings to vehicle occupants before intense movements
US11970104B2 (en) Unmanned protective vehicle for protecting manned vehicles
GB2558361B (en) Autonomous vehicle having an external movable shock-absorbing energy dissipation padding
CN110316381B (en) Apparatus and method for providing a vehicle occupant with an attitude reference
CN104781873B (en) Image display device, method for displaying image, mobile device, image display system
WO2018057987A1 (en) Augmented reality display
CN104883554A (en) Virtual see-through instrument cluster with live video
JP2014201197A (en) Head-up display apparatus
CN106029416A (en) Sun shield
EP3869302A1 (en) Vehicle, apparatus and method to reduce the occurence of motion sickness

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20201220