WO2018128946A1 - Method for providing vulnerable road user warnings in a blind spot of a parked vehicle - Google Patents

Info

Publication number
WO2018128946A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
pedestrian
parked
trajectory
location
Application number
PCT/US2017/069097
Other languages
French (fr)
Inventor
Mikko Tarkiainen
Jani Mantyjarvi
Jussi RONKAINEN
Original Assignee
Pcms Holdings, Inc.
Application filed by Pcms Holdings, Inc. filed Critical Pcms Holdings, Inc.
Publication of WO2018128946A1 publication Critical patent/WO2018128946A1/en

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/161Decentralised systems, e.g. inter-vehicle communication
    • G08G1/162Decentralised systems, e.g. inter-vehicle communication event-triggered
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/167Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/26Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
    • B60Q1/50Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
    • B60Q1/525Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking automatically indicating risk of collision between vehicles in traffic or with pedestrians, e.g. after risk assessment using the vehicle sensor data
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q5/00Arrangement or adaptation of acoustic signal devices
    • B60Q5/005Arrangement or adaptation of acoustic signal devices automatically actuated
    • B60Q5/006Arrangement or adaptation of acoustic signal devices automatically actuated indicating risk of collision between vehicles or with pedestrians

Definitions

  • This disclosure relates to systems and methods for connected vehicles. More specifically, this disclosure relates to systems and methods for using the sensors of parked connected vehicles to improve safety of road users.
  • A vulnerable road user (VRU) is particularly vulnerable in traffic situations where there is a potential conflict with another road user.
  • The traffic conflict point is the intersection of the trajectories of the VRU and the other road user.
  • A conflict, or collision, occurs if both the VRU and the other road user reach the conflict point at about the same time. The collision can be avoided if either or both respond with an emergency maneuver and appropriately adapt their speed or path.
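The timing condition above can be sketched numerically. A minimal sketch, assuming simple straight-line travel; the function names, units, and the 2-second margin are illustrative assumptions, not values from the disclosure.

```python
def time_to_point(distance_m: float, speed_mps: float) -> float:
    """Time (s) to cover distance_m at speed_mps; infinite if stopped."""
    return distance_m / speed_mps if speed_mps > 0 else float("inf")

def is_conflict(vru_dist_m: float, vru_speed_mps: float,
                veh_dist_m: float, veh_speed_mps: float,
                margin_s: float = 2.0) -> bool:
    """True if the VRU and the vehicle reach the shared conflict point
    within margin_s seconds of each other (margin is an assumption)."""
    t_vru = time_to_point(vru_dist_m, vru_speed_mps)
    t_veh = time_to_point(veh_dist_m, veh_speed_mps)
    return abs(t_vru - t_veh) <= margin_s
```

For example, a pedestrian 7 m from the crossing at 1.4 m/s and a car 70 m away at 14 m/s both arrive after 5 s, which this sketch flags as a conflict.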
  • The following types of road users are considered Vulnerable Road Users (see ETSI (2016), "VRU Study", DTR/ITS-00165):
  • Collision prevention systems based on vehicle-to-pedestrian (V2P) communication have been presented, for example, in US Patent No. 9064420, US Patent App. No. 2014/0267263, US Patent App. No. 2016/0179094, US Patent No. 8340894, and US Patent App. No. 2010/0039291.
  • Systems have also been presented which utilize road side units (RSUs), e.g., with cameras and short-range communication, to send alerts to approaching vehicles if there is a potential for collision with a pedestrian.
  • Implementation of new (intelligent) roadside equipment imposes significant costs on road operators and the cities that deploy it on their road networks. These costs include, for example, software and hardware of the roadside equipment, cabling, installation, training, maintenance, upgrading, etc. A cost/benefit analysis is needed for decision making when large investments are planned.
  • Examples of these accident scenarios are illustrated in FIGS. 1-3.
  • FIG. 1 illustrates the scenario of a pedestrian crossing the road (on the same side as a road user), where the pedestrian is occluded by a parked car.
  • FIG. 2 illustrates the scenario of a pedestrian crossing the road (on the opposite side from a road user), where the pedestrian is occluded by a parked car.
  • FIG. 3 illustrates the scenario of a vehicle turning at an intersection, with a pedestrian crossing the road (on the same side as the turning road user), where the pedestrian is occluded by a parked car.
  • Any V2P communication approach for preventing collisions with a VRU is largely dependent on the accuracy of the position and heading information of the VRU.
  • Research has shown that the GPS accuracy of a smartphone is not very high, and there is strong degradation of the position quality if the smartphone is stored in a pocket, bag, or the like, which may make the position information unusable as a pedestrian detection device in inner city (as well as other) scenarios.
  • The heading of a VRU is difficult to estimate, as the velocity of a pedestrian, for example, is very low, and a pedestrian may also change direction very quickly and/or unexpectedly.
  • Thus, V2P-communication-based pedestrian collision prevention systems will not work optimally in occluded situations. Battery consumption of mobile devices with GPS and WiFi or similar communication is also still a problem. Furthermore, it will take time before all pedestrians (and other VRUs), as well as cars on the road, have V2P communication capabilities.
  • Described herein are systems and methods related to monitoring of blind spots of moving vehicles by parked vehicles to provide warnings to VRUs.
  • a method comprising determining, at a first parked vehicle, a pedestrian location and pedestrian trajectory of a first pedestrian detected by the first parked vehicle.
  • the method also comprises determining a vehicle location and vehicle trajectory of a first moving vehicle detected by the first parked vehicle.
  • the method also comprises responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message from the first parked vehicle to the first moving vehicle comprising information about the pedestrian trajectory of the first pedestrian.
  • the method also comprises responsive to receiving, at the first parked vehicle, a response from the first moving vehicle indicating intent of the first moving vehicle to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
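The four steps of this claimed method might be sketched as one pure function. This is an illustrative sketch only: the message shapes, field names, and the placeholder risk test are assumptions, not the claimed implementation.

```python
def parked_vehicle_step(pedestrian, vehicle, response):
    """One pass of the claimed method at the parked vehicle.

    pedestrian / vehicle: dicts with 'location' and 'trajectory'
    (assumed shapes). response: the moving vehicle's reply to the V2V
    message, or None. Returns (v2v_message_or_None, indication_or_None).
    """
    v2v_msg = None
    if pedestrian is not None and vehicle is not None:
        # Placeholder risk determination: a real system would project
        # both trajectories and test for a shared conflict point in time.
        v2v_msg = {"type": "blind-spot-warning",
                   "pedestrian_trajectory": pedestrian["trajectory"]}
    indication = None
    if response is not None and response.get("intent") == "yield":
        indication = "safe to cross"
    return v2v_msg, indication
```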
  • a method comprising detecting, at a first stationary vehicle, a pedestrian location and a pedestrian trajectory of a first pedestrian.
  • the method also comprises detecting, at the first stationary vehicle, a vehicle location and a vehicle trajectory of at least a first moving vehicle.
  • the method also comprises broadcasting a V2V message comprising information about the pedestrian trajectory of the first pedestrian.
  • the method also comprises communicating from the first stationary vehicle to the first pedestrian, based at least in part on a response to the broadcast V2V message received by the first stationary vehicle, an indication of whether it is safe for the first pedestrian to cross at a road crossing monitored by the first stationary vehicle.
  • a method comprising pre-calculating, at a first parked vehicle, an estimated blind spot area created by the first parked vehicle's position relative to a vulnerable road user (VRU) crossing location and an estimated approaching vehicle.
  • the method also comprises detecting, by a first sensor of the first parked vehicle, a first VRU approaching the VRU crossing location within the estimated blind spot area.
  • the method also comprises determining a location and trajectory of the first VRU.
  • the method also comprises detecting, at the first parked vehicle, at least a first oncoming vehicle.
  • the method also comprises determining a location and trajectory of the first oncoming vehicle.
  • the method also comprises responsive to a determination that the first oncoming vehicle is a risk to the first VRU, communicating from the first parked vehicle to the first VRU, an indication of whether it is safe for the first VRU to cross at the VRU crossing location.
  • a method comprising broadcasting, from a digital device associated with a first vulnerable road user (VRU), at least a V2P basic safety message (BSM) and a VRU warning request.
  • the method also comprises receiving, at the digital device from a first parked vehicle, a responsive VRU warning message.
  • the method also comprises visualizing the received VRU warning message for display to the first VRU.
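The VRU-device side of this method (broadcast a V2P BSM plus a warning request, then visualize any responsive warning) could look roughly as follows; the message types and field names are assumptions for illustration.

```python
def vru_device_cycle(position, heading_deg, receive):
    """One cycle of the claimed VRU-device method.

    position / heading_deg: the device's own estimates (assumed shapes).
    receive: callable returning a responsive VRU warning dict, or None.
    Returns (messages_to_broadcast, text_to_display_or_None).
    """
    out = [
        {"type": "V2P-BSM", "position": position, "heading_deg": heading_deg},
        {"type": "VRU-warning-request", "position": position},
    ]
    warning = receive()
    display = warning.get("text") if warning else None  # visualize if present
    return out, display
```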
  • a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including, but not limited to, those set forth above.
  • FIGS. 1-3 illustrate exemplary VRU road crossing scenarios with accident potential.
  • FIG. 4 depicts an example of a first step in a first exemplary scenario, where a first vehicle is parked near a crossing location.
  • FIG. 5 depicts an example of a second step in a first exemplary scenario, where an oncoming vehicle and a VRU approach the crossing location monitored by the first parked vehicle.
  • FIG. 6 depicts an example of an exemplary architecture of the systems and methods disclosed herein.
  • FIG. 7 depicts an example of a second exemplary scenario, where a first vehicle is parked near a crossing location, and a first and second vehicle are oncoming as a first VRU approaches the crossing.
  • FIG. 8 depicts a sequence diagram for one embodiment of a monitoring scenario utilizing the system and methods disclosed herein.
  • FIG. 9A illustrates one embodiment of pre-calculations by the parked vehicle, prior to detection of an approaching vehicle.
  • FIG. 9B illustrates the embodiment of FIG. 9A, where the parked vehicle has updated the precalculations after detection of an approaching vehicle.
  • FIG. 10 depicts one embodiment of a visualization of a scenario at a first step, as a VRU with AR goggles approaches a crossing monitored by a parked vehicle, where the VRU is warned to wait.
  • FIG. 11 depicts one embodiment of a visualization of a scenario at a second step, as a VRU with AR goggles proceeds at a crossing monitored by a parked vehicle, where the VRU is notified to proceed with caution.
  • FIG. 12 depicts one embodiment of a visualization of a scenario utilizing a VRU's mobile device.
  • FIG. 13 depicts one embodiment of a visualization of a jaywalking scenario with visualization and warning for a VRU with AR goggles.
  • FIG. 14 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be employed as a digital device or in vehicle terminal device in some embodiments.
  • FIG. 15 illustrates an exemplary network entity that may be employed in some embodiments.
  • Various elements described herein may be implemented as modules that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules.
  • a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation.
  • Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as commonly referred to as RAM, ROM, etc.
  • The systems and methods disclosed herein utilize the communication, computing, and sensor capabilities of a parked vehicle, which may itself cause an occlusion of a VRU, in order to provide information and possible warnings to pedestrians (or other VRUs) crossing a street.
  • the parked vehicle can accurately measure the pedestrian (or other VRU) intention to cross the street, accurately position the pedestrian and approaching vehicle, communicate with each of them, and issue warnings if needed.
  • the parked connected vehicle may also be able to handle situations where various vehicles (e.g., manual/automated, connected/not connected, etc.) and VRUs (with or without V2P) are on potential collision courses.
  • the possible blind spot edge may be visualized to a VRU by using augmented reality (AR), as well as, or alternatively, in other manners.
  • Herein, a pedestrian is used as an example of a VRU crossing a street from a blind spot; this is not meant to limit the types of VRU that may be involved in a given embodiment.
  • FIG. 4 illustrates one possible scenario where a pedestrian has a V2P communication device and AR goggles, and an approaching car has V2V communication able to receive and handle detected objects from other vehicles (Collective Perception).
  • a first vehicle 405 may be parked in a parking place near an intersection 410 or other location of concern for VRUs.
  • the first vehicle 405 may be an autonomous vehicle capable of self-parking.
  • the first vehicle may be a non-autonomous vehicle having environmental sensors capable of monitoring the area around the first vehicle.
  • the first vehicle may operate to evaluate whether there is any blind spot area 415 being caused by the first vehicle 405 for any vehicles coming from behind the first vehicle.
  • the first vehicle 405 may cause limited visibility for pedestrians approaching the intersection 410, restricting pedestrians from readily seeing oncoming vehicles through the parked first vehicle.
  • the first vehicle 405 may begin monitoring any blinded areas with its sensors.
  • FIG. 5 illustrates the scenario of FIG. 4 at a subsequent time, as another vehicle 505 and a pedestrian 510 approach the crosswalk 410.
  • a second vehicle 505 may approach the crosswalk 410.
  • the second vehicle 505 may be broadcasting a V2V BSM message, including, for example, coordinates, speed, heading, etc.
  • the parked first vehicle 405 may, based on the received BSM, determine that the approaching second vehicle 505 needs blind spot detection assistance, and send a V2V message including detected objects, and accurate measurements for location and heading of detected objects (e.g., VRUs).
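One way the parked vehicle might decide, from a received BSM, that the approaching vehicle needs blind spot detection assistance is a simple range-and-bearing test against the monitored crossing. The field names and both thresholds below are assumptions for illustration.

```python
import math

def needs_assistance(bsm, crossing_xy, heading_tol_deg=30.0, max_range_m=100.0):
    """Rough test: the BSM sender is within range of the monitored
    crossing and its heading points toward it. BSM fields (x, y,
    heading_deg, in a shared planar frame) are assumed for the sketch."""
    dx = crossing_xy[0] - bsm["x"]
    dy = crossing_xy[1] - bsm["y"]
    if math.hypot(dx, dy) > max_range_m:
        return False                      # too far away to matter yet
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # smallest signed angle between heading and bearing
    diff = abs((bsm["heading_deg"] - bearing + 180) % 360 - 180)
    return diff <= heading_tol_deg
```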
  • the second vehicle 505 may receive the V2V message, indicating the objects in the blind spot area that were detected by the parked first vehicle 405, and based on relevant computations determine whether to yield or not. This decision may be communicated to the parked first vehicle 405, which may then communicate information related to the approaching second vehicle's decision to the pedestrian 510, such as via a V2P message.
  • the pedestrian's AR goggles may display a visualization of the approaching second vehicle 505, based on information received from the parked first vehicle. This may allow the pedestrian to "see” the approaching second vehicle “through” the parked first vehicle.
  • the visualization to the user may be a visual notification or indicator of the approaching second vehicle's decision to yield or not (e.g., notification of "oncoming vehicle yielding", etc.).
  • the first parked vehicle may visualize the situation and warning to the (non-communicating) pedestrian with other means (e.g., sounds, images, lights, projections on the ground, etc.). Visualization can be provided to the pedestrian's mobile device, alternatively to or in addition to the AR goggles.
  • This system can also be used for pedestrians (VRUs) crossing a street between vehicles in a location where there is no crosswalk, including but not limited to jaywalking. In such situations, the pedestrians are often also in a blind spot.
  • FIG. 6 illustrates a block diagram of one embodiment of an architecture for the herein disclosed systems and methods.
  • systems and methods are organized around three main actors: a first parked vehicle 602, at least a second approaching vehicle 630, and at least one VRU / VRU device 650 (e.g., pedestrian with smartphone, etc.).
  • the first parked (or stationary) vehicle 602 may be an autonomous or manually driven vehicle.
  • the first parked vehicle may comprise, but is not limited to: an in-vehicle terminal and an in-vehicle terminal device.
  • the in-vehicle terminal may comprise a crosswalk blind spot monitoring application 604.
  • the crosswalk blind spot monitoring application 604 may calculate the dimensions of the blind spot (606) when the vehicle has parked, as discussed more fully below.
  • the parked vehicle is generally not aware beforehand of the types, measurements (such as camera/radar heights, etc.), speeds, and detailed driving paths of approaching vehicles, all of which affect the details of the blind spot.
  • the parked vehicle may pre-calculate parameters for determining the blind spot area details for various types of vehicles (e.g., trucks, semis, passenger cars, etc.).
  • Once an approaching vehicle is detected (either by the parked vehicle's sensors or via a received V2V message, such as with a situational awareness module 610) and the parked vehicle communicates with it, the parameters for calculating the blind spot for that particular approaching vehicle are defined.
  • the crosswalk blind spot monitoring application may monitor the blind spot area (608) and approaching vehicles while parked by utilizing all vehicle sensors and V2X communication messages. It may also determine how detected (via sensors or communication) objects are moving, what kind of objects they are, and whether warnings (and/or other information) need to be provided.
  • the crosswalk blind spot monitoring application may include a message generator 612 which builds messages or warnings to be sent out.
  • the application may also communicate (614) locally with other vehicles and road users, e.g., deliver warnings. It also determines whether messages have been replied to or not.
  • Communication channels 625 may be any short-range (e.g., V2V, V2P, V2X, DSRC, etc.) or wide area wireless communication (e.g., cellular, etc.) system.
  • the in-vehicle terminal device 616 may comprise a vehicle sensor data access 618, a high definition map 620 with accurate lane level details, and optionally an external screen or projector for communication with pedestrians.
  • Approaching vehicles 630 may be manually driven or automated vehicles, and may have one or more of the following components or capabilities: visualization 632 of VRUs in a blind spot; automated driving control 634 which adjusts the automated driving based on information of the surroundings, e.g., it may make a determination to yield a pedestrian on a crosswalk; collaborative perception message handling 636 which incorporates the object detections from other vehicles into the Local Dynamic Map (LDM) of the vehicle; a communication unit 638 utilizing any short-range (e.g., V2V, V2P, V2X, DSRC, etc.) or wide area wireless communication (e.g., cellular, etc.) system; and an in-vehicle terminal device 640 having a human machine interface (HMI) (or HUD) (642), a vehicle sensor data access 644, and a high definition map 646 with accurate lane level details and LDM.
  • VRUs 650 may include, but are not limited to, pedestrians with mobile devices and/or AR goggles. Devices of VRUs may or may not have the following components or capabilities: visualization 652 of the situation and warnings; V2P message handling 654; communication module 656, e.g., V2P communication based on DSRC or cellular 4G, 5G, etc.; a HMI 658; and a map 660.
  • The system may provide information or warnings to pedestrians (or other VRUs) crossing a street about an approaching vehicle, including the kind of vehicle approaching and whether it has received information about the pedestrian in the blind spot.
  • New opportunities may be created for autonomous vehicle owners, such as selling traffic monitoring data where it is needed when a vehicle is not in use by its owner.
  • One possible scenario is illustrated in FIG. 7, and a sequence diagram for the scenario of FIG. 7 is depicted in FIG. 8.
  • the line of sight between the approaching cars 705, 710 and the pedestrian 720 is occluded by a parked vehicle 715.
  • the first car 705 is manually driven and has only basic V2V communication supported (e.g., is not able to handle Collective Perception Messages).
  • the second car 710, approaching behind the first car 705, is in automated mode and can handle all V2V messages.
  • the parked vehicle 715 may be a connected vehicle which has parked itself (or been manually parked) at a curb-side parking place, where it creates a blind spot 725 relative to approaching vehicles and a pedestrian crossing.
  • the blind spot 725 occludes the view from approaching vehicles to part of the sidewalk and crosswalk, as well as the view from the pedestrian 720 to approaching vehicles.
  • the parked vehicle 715 which has a crosswalk blind spot monitoring application, may check after it has parked if it is next to (or near) a crosswalk (or other location of concern), and then calculate if it is causing a blind spot (810).
  • the vehicle 715 may calculate the maximum size and volumetric dimensions of the blind spot 725 it is creating for any approaching vehicles by utilizing, for example, one or more of the following parameters: dimensions of the parked vehicle; crosswalk parameters, e.g., width, length, etc.; road geometry from an HD map and/or a 3D model of the surroundings from the vehicle sensors; estimated driving path of approaching vehicles; typical pedestrian (or VRU) detection sensor setup in approaching vehicles, e.g., location (especially height of the sensor installations) and field of view; and typical height of possible VRUs, e.g., adult/child pedestrians, wheelchairs, etc. In some cases, depending on these parameters, the calculations determine that there is a blind spot for certain types of approaching vehicles, e.g., passenger cars, but not for taller vehicles (e.g., trucks) with sensors installed higher up.
  • the parked vehicle 715 may predictively calculate (or estimate) the blind spot based on estimated average parameters of a typical oncoming vehicle.
  • estimated average parameters may include, but are not limited to: height of a sensor of an oncoming vehicle; side-to-side position of a sensor of an oncoming vehicle; a speed of an oncoming vehicle; and a location within the lane of an oncoming vehicle.
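These per-class estimates could be kept as a simple lookup merged with road context before any specific vehicle is observed. Every number and key name below is an illustrative assumption, not a value from the disclosure.

```python
# Assumed per-class defaults for the pre-calculation stage.
DEFAULT_SENSOR_PARAMS = {
    "passenger_car": {"sensor_height_m": 1.3, "lateral_offset_m": 0.0},
    "suv":           {"sensor_height_m": 1.6, "lateral_offset_m": 0.0},
    "truck":         {"sensor_height_m": 2.4, "lateral_offset_m": 0.0},
}

def estimated_params(vehicle_class, speed_limit_mps, lane_center_y):
    """Merge class defaults with road context into one estimate dict.
    Unknown classes fall back to passenger-car defaults."""
    p = dict(DEFAULT_SENSOR_PARAMS.get(vehicle_class,
                                       DEFAULT_SENSOR_PARAMS["passenger_car"]))
    p["speed_mps"] = speed_limit_mps   # assume travel at the speed limit
    p["trajectory_y"] = lane_center_y  # assume driving in the lane center
    return p
```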
  • the parked vehicle 715 may monitor (812) the blind spot 725 and any approaching vehicles (such as 705 and 710).
  • the parked vehicle 715 may constantly scan for pedestrians (and other VRUs) in the blind spot. Additional actions may be triggered by, for example, detection by the parked vehicle of an approaching vehicle or a pedestrian (or other VRU), or receipt of a message at the parked vehicle from an approaching vehicle or a pedestrian (or other VRU).
  • An approaching vehicle, such as vehicle 705 may, via V2V (or the like), send a Basic Safety Message (BSM) (818) including location, speed, heading, etc.
  • BSM Basic Safety Message
  • an approaching vehicle such as vehicle 710 may send information about its driving mode (816) (e.g., manually driven, manually driven with pedestrian detection, automated driving mode, etc.).
  • the approaching vehicle may be detected by the parked vehicle's sensors.
  • a pedestrian or VRU may send a V2P BSM and/or send a VRU warning support request (814), either of which may be detected by the parked vehicle.
  • the parked vehicle's sensors may detect an approaching vehicle (which may or may not be a connected vehicle), or detect a pedestrian's intention to cross the street in the blind spot (e.g., no V2P needed).
  • the parked vehicle may detect and accurately measure the pedestrian's (or VRU's) position and intention to cross the street (820).
  • the parked vehicle may then send out a V2V collective perception message (CPM) (822, 828) with accurate information about the detected pedestrian in the blind spot, including detected object details, coordinates, speed, heading, etc.
  • the parked vehicle may continue sending the message with updated information until the approaching vehicle passes the crosswalk or stops/yields to the pedestrian.
  • the V2V message may include information about the trajectory of the pedestrian (e.g., that the pedestrian is heading towards the crosswalk), but not the exact detected trajectory of the pedestrian.
  • the information about the trajectory of the pedestrian may comprise the detected trajectory of the pedestrian (or other VRU).
  • An approaching vehicle may, in some embodiments, be adapted to receive and handle the CPM from the parked vehicle. If the approaching vehicle can receive and handle CPMs, it may include the detected pedestrian details in a Local Dynamic Map (LDM) (824). The approaching vehicle may also visualize the situation or warn the driver (e.g., in a manually driven vehicle). This may also comprise sending a BSM with additional information (e.g., that a driver warning was presented) (826). In an automated driving vehicle, the approaching vehicle may factor the updated LDM information into driving path planning (e.g., decide to yield), and send a BSM with the intention (826) (e.g., approaching vehicle is about to stop and yield to the pedestrian). If the approaching vehicle cannot handle CPMs (828), the vehicle will not respond (830).
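The CPM-handling branch described above (steps 824-830 of FIG. 8) might be sketched as follows for a CPM-capable vehicle: merge the detection into the Local Dynamic Map, then answer with a BSM stating the vehicle's reaction. The message field names and LDM shape are assumptions; a non-capable vehicle simply never runs this handler, which matches the "no response" case (830).

```python
def handle_cpm(cpm, ldm, automated):
    """Sketch of CPM handling in a capable approaching vehicle.

    cpm: collective perception message with a 'pedestrian' detection.
    ldm: the vehicle's Local Dynamic Map, as a plain dict here.
    automated: True for automated driving mode, False for manual.
    Returns the responsive BSM dict.
    """
    ldm.setdefault("objects", []).append(cpm["pedestrian"])  # step 824
    if automated:
        # Factor the pedestrian into path planning: decide to yield.
        return {"type": "BSM", "intent": "yield"}            # step 826
    # Manually driven: warn the driver and report that a warning was shown.
    return {"type": "BSM", "driver_warned": True}            # step 826
```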
  • the parked vehicle 715 may wait for a response to the CPM and act according to whether or not a CPM response from the approaching vehicle(s) is received (832).
  • the parked vehicle determines if a pedestrian warning or other notification is needed (834), and may build the message accordingly. If there is no approaching vehicle, an indication of safe passage may be sent to the pedestrian (which may be filtered, for example, at the pedestrian end according to his or her preferences). If the approaching vehicle is CPM capable, relevant information may be communicated to the pedestrian by the parked vehicle. For example, if the approaching vehicle is in autonomous mode and makes a determination to yield, the parked vehicle may inform the pedestrian it is safe to cross. Or, if the approaching vehicle is slowing down significantly, the parked vehicle may inform the pedestrian to proceed with caution in crossing. In all other cases, the parked vehicle may warn the pedestrian to wait to cross.
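The decision rules in the preceding paragraph can be written directly as a small decision function; the return strings are paraphrases of the notifications described above, and the flag names are assumptions.

```python
def pedestrian_notice(approaching, cpm_capable=False, yielding=False,
                      slowing=False):
    """Decision table from the text: no approaching vehicle -> safe;
    CPM-capable and yielding -> safe; CPM-capable and slowing
    significantly -> caution; all other cases -> wait."""
    if not approaching:
        return "safe to cross"
    if cpm_capable:
        if yielding:
            return "safe to cross"
        if slowing:
            return "proceed with caution"
    return "wait"
```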
  • the parked vehicle may send a warning or information V2P message to the pedestrian (or VRU) (836).
  • the parked vehicle may continue sending the message with updated information until the approaching vehicle(s) pass the crosswalk or stop/yield to the pedestrian.
  • the pedestrian's device(s) may verify that the V2P message concerns the pedestrian by comparing its own evaluation of the surroundings to the accurate coordinates for the detected pedestrian provided by the parked vehicle, such as by combining GPS coordinates, compass heading, detected V2P messages from nearby pedestrians, recognition of road objects from local map (e.g., crosswalk lines or buildings), parked vehicle description, etc.
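A minimal version of this self-verification is a position-and-heading comparison against the coordinates the parked vehicle attached to the warning; the field names and tolerances below are assumptions, and a real device would additionally fuse the other cues listed above (nearby V2P messages, recognized road objects, the parked vehicle description).

```python
import math

def message_is_about_me(my_xy, my_heading_deg, msg,
                        pos_tol_m=2.0, heading_tol_deg=45.0):
    """True if the device's own position/heading estimate matches the
    detected-pedestrian coordinates carried in the V2P message."""
    d = math.hypot(my_xy[0] - msg["ped_x"], my_xy[1] - msg["ped_y"])
    h = abs((my_heading_deg - msg["ped_heading_deg"] + 180) % 360 - 180)
    return d <= pos_tol_m and h <= heading_tol_deg
```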
  • the pedestrian may have an AR device which detects the parked vehicle's origin, and establishes AR tracking based on, for example and without limitation, camera feed and object recognition.
  • the pedestrian's mobile device, AR glasses or goggles, wearable device, or the like shows the received warning or notification (838), including information on one or more of the following topics:
  • the parked vehicle may visualize the situation and warning to the non-communicating pedestrian with other means (e.g., sounds, images or text on a screen, lights, projections on the ground, etc.).
  • non-connected warnings may be particularly emphasized when there is both a non-connected pedestrian (or VRU) and a non-connected approaching vehicle, requiring only the parked vehicle to have advanced sensing capabilities.
  • the warning or notification may be communicated to a VRU's mobile device, such as via WiFi, Bluetooth, or the like.
  • a VRU's wearable device may be utilized as part of the notification process.
  • a smart watch may vibrate to indicate if a pedestrian should stop (or go).
  • WALK/DON'T WALK traffic-light-style vibrations and/or audio may also be used.
  • further scenarios may be addressed by the messaging of the parked vehicle.
  • the first approaching vehicle may communicate with the parked vehicle and indicate an intent to yield to the pedestrian, but the second approaching vehicle may be a non-communicating car, resulting in a warning to the pedestrian to not cross at the moment.
  • the parked vehicle may determine or otherwise evaluate the blind spot.
  • the parked vehicle may maintain data related to itself, such as, but not limited to: type of vehicle (van, sedan, SUV, hatchback, roadster, sports car, station wagon, pickup truck, etc.); location (or coordinates), e.g., a rough location from GPS in the vehicle and the vehicle uses its own sensors to scan road dimensions and high definition map data to determine a more accurate location; dimensions of the vehicle (width, length, height); and/or the like.
  • the parked vehicle may calculate the coordinates of its own corners, especially the front corner (denoted (fc_x, fc_y)) of the parked vehicle on the driving-lane side, and the back corner (denoted (bc_x, bc_y)) on the sidewalk side.
  • These corner coordinates may be calculated, for example, by adding the parked vehicle's dimensions to the vehicle location coordinates.
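The corner calculation described above can be sketched as follows. This is a minimal illustration in a local metric frame; the function name, the vehicle-centered frame, and the corner labels are assumptions for illustration, not details from the disclosure:

```python
import math

def corner_coordinates(cx, cy, heading_deg, length, width):
    """Given a parked vehicle's center (cx, cy) in a local metric frame,
    its heading in degrees, and its dimensions, return the four corner
    coordinates by adding half the dimensions along the vehicle's axes."""
    h = math.radians(heading_deg)
    ux, uy = math.cos(h), math.sin(h)      # unit vector along the long axis
    vx, vy = -math.sin(h), math.cos(h)     # unit vector across the vehicle
    hl, hw = length / 2.0, width / 2.0
    return {
        "front_left":  (cx + hl * ux + hw * vx, cy + hl * uy + hw * vy),
        "front_right": (cx + hl * ux - hw * vx, cy + hl * uy - hw * vy),
        "rear_left":   (cx - hl * ux + hw * vx, cy - hl * uy + hw * vy),
        "rear_right":  (cx - hl * ux - hw * vx, cy - hl * uy - hw * vy),
    }
```

Depending on orientation and which side faces the road, one of these corners corresponds to (fc_x, fc_y) and another to (bc_x, bc_y).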
  • the parked vehicle may then pre-calculate the blind spot area by using average parameters of unknown/estimated approaching vehicles. For example, the parked vehicle may estimate the height of a camera/radar on an approaching vehicle, which is usually located on the top edge of the windshield, in the middle, using the average height for various classes of vehicle (e.g., sedan, SUV, van, etc.). Also, the parked vehicle may estimate the side-to-side position of the camera/radar on an approaching vehicle, which will usually be in the middle of the approaching vehicle. Generally, the coordinates of the assumed approaching-vehicle camera may be denoted as (avc_x, avc_y).
  • the parked vehicle may also estimate the speed of a potential approaching vehicle according to the speed limit of the relevant road (e.g., 25 mph, 30 mph, 35 mph, 40 mph, etc.).
  • the parked vehicle may also estimate the driving trajectory on the lane (e.g., in the middle of the lane) of a potential approaching vehicle.
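The pre-calculation assumptions above can be collected as in the following sketch. The class-average camera heights, the default values, and all names are illustrative assumptions, not figures from the disclosure:

```python
# Illustrative class-average camera heights in meters (assumed values).
AVG_CAMERA_HEIGHT_M = {"sedan": 1.3, "suv": 1.6, "van": 1.9}

def assumed_approaching_vehicle(lane_center, speed_limit_mps, vehicle_class="sedan"):
    """Stand-in for an unseen approaching vehicle during pre-calculation:
    camera laterally centered on the lane centerline (avc_x, avc_y),
    height from the class average, speed equal to the speed limit,
    trajectory assumed to follow the middle of the lane."""
    avc_x, avc_y = lane_center
    return {
        "camera": (avc_x, avc_y, AVG_CAMERA_HEIGHT_M.get(vehicle_class, 1.5)),
        "speed_mps": speed_limit_mps,
        "trajectory": "lane_center",
    }
```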
  • FIG. 9A illustrates one embodiment of the pre-calculations by the parked vehicle, prior to an approaching vehicle.
  • the parked vehicle has pre-calculated the blind spot front and back corner edges (thick lines fc_e and bc_e), with the pre-calculation based on the average parameter assumptions.
  • the calculation of the front corner blind spot edge fc_e may generally be as follows: let fc denote the coordinates of the front corner, and avc the approaching vehicle camera coordinates (initially based on the parked vehicle's pre-calculations). In some embodiments, the calculation of the back corner blind spot edge bc_e may be performed in the same manner.
  • the blind spot area may be re-calculated, for example using the equations set forth above.
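The equations referenced above are not reproduced in this text. One plausible geometric reading, offered only as a hedged sketch, is that the edge fc_e lies on the ray from the assumed camera position avc through the parked vehicle's front corner fc, starting at fc and extending a chosen length beyond it:

```python
import math

def blind_spot_edge(avc, fc, edge_length):
    """Sketch of the front-corner blind spot edge: the segment starting
    at the front corner fc and extending edge_length along the ray from
    the approaching vehicle's camera avc through fc. Returns the edge's
    start point, end point, and unit vector."""
    dx, dy = fc[0] - avc[0], fc[1] - avc[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm          # unit vector of the edge
    end = (fc[0] + edge_length * ux, fc[1] + edge_length * uy)
    return fc, end, (ux, uy)
```

The returned unit vector corresponds to the edge unit vectors later used to position the AR visualization.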
  • An example of the recalculated blind spot is illustrated in FIG. 9B, where additional details have been received or measured by the parked vehicle. Note that the speed and in-lane position of the approaching vehicle differ from the pre-calculated assumptions (and the camera height may also vary), so the updated fc_e and bc_e sections differ from the estimated edges in FIG. 9A.
  • Unit vectors of the front and back corner blind spot edges (fc_e and bc_e) may be used to position the visualization for a VRU user interface (e.g., pedestrian AR goggles).
  • the coordinates of the roadside corner of the parked vehicle may be defined as (fc_x, fc_y).
  • the edge of the blind spot area may be formed for AR visualization purposes.
  • the sidewalk-side blind spot edge may be formed similarly (if needed).
  • the length of the blind spot edge may be either fixed (e.g., equal to the length of the parked vehicle) or dynamic.
  • the fixed length may be substantially the same as the length of the parked vehicle (for example, ±10%, ±5%, etc.), a scalar multiplier of the length of the parked vehicle, or a fixed length regardless of the length of the parked vehicle. If dynamic, for example with one VRU, the length may be twice the distance of the VRU from the parked vehicle.
  • the parked vehicle may define the length of edge fc_e as twice or three times (or some other scalar multiplier) the group size or group distance from the parked vehicle (e.g., the distance to the furthest member of the group, parallel to the road).
  • dynamic lengths may be substantially the same as the types of lengths set forth herein (e.g., ±10%, ±5%, etc.).
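The fixed and dynamic edge-length options above might be sketched as follows. The fallback behavior and the default multiplier of 2 are illustrative choices taken from the examples, not a definitive rule:

```python
def edge_length(parked_vehicle_length, vru_distances=None, multiplier=2.0):
    """Return the blind spot edge length. With no tracked VRUs, fall
    back to a fixed length equal to the parked vehicle's length; with
    one or more VRUs, use a scalar multiple of the furthest VRU's
    road-parallel distance from the parked vehicle."""
    if not vru_distances:
        return parked_vehicle_length            # fixed length
    return multiplier * max(vru_distances)      # dynamic length
```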
  • alternative multipliers may be used to define the length of the blind spot edge for single or multiple VRU situations.

Visualization for pedestrians (or VRUs)
  • Visualization of the situation and warnings for pedestrians or VRUs can take various different forms in different embodiments. For example, if a pedestrian is wearing AR goggles, the above mentioned scenarios and warnings may be presented to the user with augmented reality images on top of a real world view.
  • a first step of an AR visualization example is presented in FIG. 10, where a pedestrian with AR goggles is approaching the crosswalk 1005 and receives warnings about the blind spot area (1010), edge of the blind spot area in the crosswalk (1015), a command/notification to stop (or otherwise yield to oncoming vehicles) (1020), and in some embodiments information about the approaching vehicle(s).
  • a first oncoming vehicle from the left may be a manually driven car which is not connected (1025), and a second oncoming vehicle behind it may be an autonomous car that is V2V equipped (1030), which has responded that it will yield and let the pedestrian cross the street.
  • the blind spot area (1010) and the edge (1015) may be visualized in the AR display for the pedestrian (or VRU) using different colors (e.g., blind spot area highlighted in yellow, edge indicated in red, etc.).
  • the blind spot area (1110) may be visualized differently from the situation as in FIG. 10 (e.g., may be green rather than yellow, or the like), indicating that blind spot detection is supported by the camera of the parked vehicle (1050) and that the oncoming vehicle supports the blind spot system as well (such as relocated visualization 1130).
  • the oncoming vehicle may be an autonomous car which has responded with an indication that it will yield (e.g., via V2V).
  • the pedestrian may be instructed or notified to "walk with caution" (1120) (or the like), and they may cross the street.
  • this kind of information may be provided to a personal mobile device of the pedestrian about to cross a street (in place of an AR system).
  • a device based application visualization is illustrated in FIG. 12.
  • a pedestrian may utilize an application on a smartphone and be looking at the screen while walking towards a crosswalk.
  • This application can bring a warning or notification 1220 to the screen of the device, for example with similar information as described above in relation to an AR visualization (e.g., a representation of the parked car 1205, the indicator of a manually driven car 1210, and the indicator of the yielding automated vehicle 1215).
  • the visualization may be presented as a map view where the user's location in the blind spot area 1207 is shown with an indicator 1202 (for example, a circle or dot).
  • the herein disclosed systems and methods with AR or mobile device visualization can also be used for pedestrians (or VRUs) crossing a street between vehicles in locations where there is no crosswalk, including jaywalking situations. In such situations, in many cases pedestrians are also in a blind spot.
  • An exemplary embodiment of the non-crosswalk scenario is illustrated in FIG. 13.
  • the exemplary AR visualization in FIG. 13 illustrates how similar AR objects as in FIG. 10 may be utilized (e.g., highlighted portions for blind spot area 1310, edge of blind spot 1315, indicator for approaching vehicle 1325, warning/notification 1320, etc.).
  • One type of message may be a V2P BSM + VRU warning request.
  • BSM as defined in SAE J2735 may include position, speed, heading, etc. (see Dedicated Short Range Communications (DSRC) Message Set Dictionary. SAE J2735.).
  • these messages may include an optional VRU warning request, which may provide information from the VRU to parked or approaching vehicles about intentions of the user including:
  • This intent may be based on a navigation application or a typical walking pattern (e.g., built on routines of the pedestrian, such as a commute, etc.).
  • the coordinates may be from a map or satellite positioning, or the like.
  • VRU type e.g. walking adult/child, wheelchair, baby stroller, cyclist, person pushing a bike, etc.
  • These may be incorporated to provide an enhanced UI experience in an approaching vehicle.
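The V2P BSM with its optional VRU warning request might be modeled as in the following sketch. The field names and types are illustrative only and do not follow the SAE J2735 encoding:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VruWarningRequest:
    """Optional extension carrying the VRU's intentions to parked or
    approaching vehicles (illustrative fields)."""
    intends_to_cross: bool
    crossing_coordinates: Optional[Tuple[float, float]] = None  # (lat, lon)
    vru_type: str = "walking_adult"  # e.g. "wheelchair", "cyclist"

@dataclass
class V2pBsm:
    """Core BSM elements (position, speed, heading) per SAE J2735,
    plus the optional VRU warning request described above."""
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    warning_request: Optional[VruWarningRequest] = None
```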
  • V2V BSM + automation mode: BSM as defined in SAE J2735 may include vehicle size, position, speed, heading, acceleration, brake system status, etc.
  • the message may include an indication of automation mode, for example whether the vehicle is manually driven, fully- or semi-automated, autonomous, etc.
  • vehicle sensor info such as: sensor types, locations in vehicle, etc.
  • V2V Collective Perception Messages: Another type of message utilized in the systems and methods disclosed herein may be V2V Collective Perception Messages (CPMs).
  • more detailed information about detected objects can be sent to vehicles, for example indicating walking adult/child, wheelchair, etc.
  • V2V BSM + intention: BSM as defined in SAE J2735 may be utilized. Intention may also be included in these messages, for example with the BSM Vehicle Safety Extension including Path Prediction, or an indication that the vehicle is going to stop before a crosswalk (e.g., TRUE / FALSE).
  • V2P warning messages: Such messages may, for example, include a warning message (e.g., WALK/DON'T WALK, etc.). They may also include details about approaching vehicles, such as, for each approaching vehicle, whether it is in an automation mode, an intended action (e.g., yield or not), sensor abilities, etc.
  • the V2P warning messages may also include a blind spot description, for example including the blind spot boundaries as latitude/longitude/elevation coordinates, an origin with respect to the parked vehicle (e.g., coordinates of the vehicle's roadside corner towards the VRU), a parked vehicle description (size, orientation, location, etc.), and/or the like.
  • the messages may also include VRU info, such as an exact location of the VRU as detected by the parked vehicle, for example in latitude/longitude/elevation coordinates and, in some embodiments, with a motion direction as a compass heading.
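The V2P warning message contents listed above might be modeled as follows. The field names are illustrative assumptions, not a standardized encoding:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ApproachingVehicleInfo:
    """Per-vehicle details carried in the warning (illustrative fields)."""
    automated: bool
    will_yield: Optional[bool]  # None if the vehicle did not respond
    sensor_abilities: List[str] = field(default_factory=list)

@dataclass
class V2pWarningMessage:
    """Sketch of a V2P warning message: warning text, approaching-vehicle
    details, blind spot description, and VRU info as detected by the
    parked vehicle."""
    warning: str                                           # e.g. "WALK" / "DON'T WALK"
    approaching_vehicles: List[ApproachingVehicleInfo]
    blind_spot_boundary: List[Tuple[float, float, float]]  # lat/lon/elevation
    origin: Tuple[float, float]                            # roadside corner toward VRU
    vru_location: Optional[Tuple[float, float, float]] = None
    vru_heading_deg: Optional[float] = None
```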
  • the parked vehicle may determine that this first oncoming car is a manually driven (old) car, and therefore it most likely is not able to determine that there is someone in the blind spot.
  • a second oncoming car is autonomous, and has received information about him (e.g., via V2P and V2V), and will yield and let him cross. He waits until the first car goes by (as notified/warned in his AR visualization via the parked van), and then his AR goggles indicate it is safe to cross the street. While he crosses the street, he sees the approaching autonomous car which has slowed down as he crosses.
  • Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
  • FIG. 14 is a system diagram of an exemplary WTRU 102, which may be employed as a digital device or in-vehicle terminal device in embodiments described herein.
  • the WTRU 102 may include a processor 118, a communication interface 119 including a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, a non-removable memory 130, a removable memory 132, a power source 134, a GPS chipset 136, and other peripherals 138.
  • the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 14 depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples.
  • the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • the WTRU 102 may receive location information over the air interface 116 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
  • Peripherals may also include in-vehicle sensors such as cameras, radar, lidar, combination sensors, and the like.
  • the processor may also have access to high-definition (HD) maps and local dynamic map (LDM) data.
  • FIG. 15 depicts an exemplary network entity 190 that may be used in embodiments of the present disclosure.
  • network entity 190 includes a communication interface 192, a processor 194, and non-transitory data storage 196, all of which are communicatively linked by a bus, network, or other communication path 198.
  • Communication interface 192 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication, communication interface 192 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 192 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 192 may be equipped at a scale and with a configuration appropriate for acting on the network side (as opposed to the client side) of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 192 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
  • Processor 194 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
  • Data storage 196 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used.
  • data storage 196 contains program instructions 197 executable by processor 194 for carrying out various combinations of the various network-entity functions described herein.
  • there is a method of warning pedestrians of traffic by a parked vehicle comprising: determining, at a first parked vehicle, a pedestrian location and pedestrian trajectory of a first pedestrian detected by the first parked vehicle; determining a vehicle location and vehicle trajectory of a first moving vehicle detected by the first parked vehicle; responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message from the first parked vehicle to the first moving vehicle comprising information about the pedestrian trajectory of the first pedestrian; and responsive to receiving, at the first parked vehicle, a response from the first moving vehicle indicating intent of the first moving vehicle to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
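The claimed sequence might be sketched as the following control flow, where the callables stand in for the parked vehicle's V2V/V2P stack. All names are hypothetical, and this is a sketch of the described flow rather than a definitive implementation:

```python
def warning_cycle(pedestrian_trajectory, vehicle_is_risk, send_v2v, wait_response, notify):
    """Control-flow sketch of the warning method: if the moving vehicle
    is a risk, send its pedestrian-trajectory V2V message, then indicate
    to the pedestrian whether crossing is safe based on the response."""
    if not vehicle_is_risk:
        return "NO_ACTION"
    # Inform the moving vehicle about the detected pedestrian's trajectory.
    send_v2v({"pedestrian_trajectory": pedestrian_trajectory})
    response = wait_response()
    if response and response.get("will_yield"):
        notify("SAFE_TO_CROSS")   # e.g., delivered as a V2P warning message
        return "SAFE_TO_CROSS"
    # No response, or the vehicle will not yield.
    notify("DO_NOT_CROSS")
    return "DO_NOT_CROSS"
```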
  • the method may also include wherein determining that the first moving vehicle is a risk to the first pedestrian comprises determining that the first pedestrian is in a blind spot caused by the first parked vehicle with respect to the first moving vehicle.
  • the method may further comprise pre-calculating, at the first parked vehicle, the blind spot caused by the first parked vehicle with respect to an estimated oncoming vehicle, based at least in part on estimated average parameters of the oncoming vehicle.
  • the method may include wherein the estimated average parameters comprise: height of a sensor of the oncoming vehicle; side-to-side position of the sensor of the oncoming vehicle; a speed of the oncoming vehicle; and a location within the lane of the oncoming vehicle.
  • the method may further comprise recalculating the blind spot based at least in part on details of the first moving vehicle.
  • the method may include wherein the details of the first moving vehicle are determined at least in part by at least one sensor of the first parked vehicle.
  • the method may include wherein the details of the first moving vehicle are determined at least in part from a V2V basic safety message (BSM) broadcast by the first moving vehicle and received by the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P basic safety message (BSM) broadcast from a device associated with the first pedestrian.
  • the method may include wherein the V2P BSM further comprises a VRU warning request.
  • the method may include wherein the communication of the indication from the first parked vehicle comprises a V2P warning message.
  • the method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of the blind spot area; and information about the first pedestrian as detected by the first parked vehicle.
  • the method may also include wherein the response from the first moving vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first moving vehicle will yield to the first pedestrian.
  • the method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on at least one measurement by at least a first sensor of the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on at least one measurement by at least one sensor of the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P message broadcast from the first pedestrian and received by the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on a V2V message broadcast from the first moving vehicle and received by the first parked vehicle.
  • the method may also include wherein the V2V message broadcast from the first parked vehicle comprises a collective perception message.
  • a method comprising: detecting, at a first stationary vehicle, a pedestrian location and a pedestrian trajectory of a first pedestrian; detecting, at the first stationary vehicle, a vehicle location and a vehicle trajectory of at least a first moving vehicle; broadcasting a V2V message comprising information about the pedestrian trajectory of the first pedestrian; and communicating from the first stationary vehicle to the first pedestrian, based at least in part on a response to the broadcast V2V message received by the first stationary vehicle, an indication of whether it is safe for the first pedestrian to cross at a road crossing monitored by the first stationary vehicle.
  • the method may also include wherein determining that the first moving vehicle is a risk to the first pedestrian comprises determining that the first pedestrian is in a blind spot caused by the first stationary vehicle with respect to the first moving vehicle.
  • the method may further comprise pre-calculating, at the first stationary vehicle, the blind spot caused by the first stationary vehicle with respect to an estimated oncoming vehicle, based at least in part on estimated average parameters of the oncoming vehicle.
  • the method may include wherein the estimated average parameters comprise: height of a sensor of the oncoming vehicle; side-to-side position of the sensor of the oncoming vehicle; a speed of the oncoming vehicle; and a location within the lane of the oncoming vehicle.
  • the method may further comprise recalculating the blind spot based at least in part on details of the first moving vehicle.
  • the method may include wherein the details of the first moving vehicle are determined at least in part by at least one sensor of the first stationary vehicle.
  • the method may include wherein the details of the first moving vehicle are determined at least in part from a V2V basic safety message (BSM) broadcast by the first moving vehicle and received by the first stationary vehicle.
  • the method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P basic safety message (BSM) broadcast from a device associated with the first pedestrian.
  • the method may include wherein the V2P BSM further comprises a VRU warning request.
  • the method may also include wherein the communication of the indication from the first stationary vehicle comprises a V2P warning message.
  • the method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of a blind spot area; and information about the first pedestrian as detected by the first stationary vehicle.
  • the method may further comprise: receiving, at the first stationary vehicle, a response from the first moving vehicle indicating an intent of the first moving vehicle to yield to the first pedestrian; and communicating, from the first stationary vehicle, an indication that it is safe for the first pedestrian to cross at the road crossing.
  • the method may include wherein the communication of the indication from the first stationary vehicle comprises a V2P warning message.
  • the method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of a blind spot area; and information about the first pedestrian as detected by the first stationary vehicle.
  • the method may include wherein the response from the first moving vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first moving vehicle will yield to the first pedestrian.
  • the method may also further comprise: determining, at the first stationary vehicle, that either no response from the first moving vehicle has been received or that a response was received from the first moving vehicle indicating an intent of the first moving vehicle not to yield to the first pedestrian; and communicating, from the first stationary vehicle, an indication that it is not safe for the first pedestrian to cross at the road crossing.
  • the method may further comprise, prior to communicating the indication to the first pedestrian, recalculating the position and trajectory of each of the first pedestrian and the first moving vehicle.
  • the method may include wherein the communication of the indication from the first stationary vehicle comprises a V2P warning message.
  • the method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of a blind spot area; and information about the first pedestrian as detected by the first stationary vehicle.
  • the method may include wherein communicating the indication from the first stationary vehicle comprises generating, from the first stationary vehicle, at least one of a visual or audio warning indicator to the first pedestrian.
  • the method may further comprise, responsive to a determination that no response was received from the first moving vehicle, generating, from the first stationary vehicle, at least one of a visual or audio warning indicator to the first pedestrian.
  • the method may include wherein a visual indicator to the first pedestrian comprises a visualization projected by the first stationary vehicle onto the road.
  • the method may also include wherein the response from the first moving vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first moving vehicle will yield to the first pedestrian.
  • the method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on at least one measurement by at least a first sensor of the first stationary vehicle.
  • the method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on at least one measurement by at least one sensor of the first stationary vehicle.
  • the method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P message broadcast from the first pedestrian and received by the first stationary vehicle.
  • the method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on a V2V message broadcast from the first moving vehicle and received by the first stationary vehicle.
  • the method may also include wherein the V2V message broadcast from the first stationary vehicle comprises a collective perception message.
  • a method comprising: pre-calculating, at a first parked vehicle, an estimated blind spot area created by the first parked vehicle's position relative to a vulnerable road user (VRU) crossing location and an estimated approaching vehicle; detecting, by a first sensor of the first parked vehicle, a first VRU approaching the VRU crossing location within the estimated blind spot area; determining a location and trajectory of the first VRU; detecting, at the first parked vehicle, at least a first oncoming vehicle; determining a location and trajectory of the first oncoming vehicle; and responsive to a determination that the first oncoming vehicle is a risk to the first VRU, communicating from the first parked vehicle to the first VRU, an indication of whether it is safe for the first VRU to cross at the VRU crossing location.
  • the method may also include wherein determining that the first oncoming vehicle is a risk to the first VRU comprises determining that the first VRU is located in the blind spot area caused by the first parked vehicle with respect to the first oncoming vehicle.
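The determination that a VRU is located in the blind spot area might be sketched as a point-in-region test. Here the blind spot is approximated as the triangle bounded by the assumed camera position, the parked vehicle's front corner, and the far end of the blind spot edge; the triangle approximation is an assumption for illustration, not the disclosure's exact region:

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_blind_spot(p, avc, fc, fc_edge_end):
    """Return True if point p lies inside the triangle (avc, fc,
    fc_edge_end), using the standard same-side sign test."""
    d1 = _cross(avc, fc, p)
    d2 = _cross(fc, fc_edge_end, p)
    d3 = _cross(fc_edge_end, avc, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)
```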
  • the method may also include wherein the precalculation is based at least in part on estimated average parameters of an oncoming vehicle.
  • the method may include wherein the estimated average parameters comprise: height of a sensor of the oncoming vehicle; side-to-side position of the sensor of the oncoming vehicle; a speed of the oncoming vehicle; and a location within the lane of the oncoming vehicle.
  • the method may include wherein the estimated blind spot area comprises a front corner blind spot edge.
  • the method may include wherein the estimated blind spot area further comprises a back corner blind spot edge.
  • the method may include wherein the front corner blind spot edge has a fixed length.
  • the method may include wherein the fixed length is substantially equal to a length of the first parked vehicle.
  • the method may include wherein the front corner blind spot edge has a dynamic length.
  • the method may include wherein the dynamic length is based at least in part on a number of detected VRUs.
  • the method may include wherein if only the first VRU is detected, the dynamic length comprises a scalar multiple of the first VRU's distance from the first parked vehicle.
  • the method may include wherein if multiple VRUs are detected, the dynamic length is based at least in part on either: the number of VRUs detected; or a group distance from the parked vehicle.
  • the method may also further comprise recalculating the blind spot area based at least in part on details of the first oncoming vehicle.
  • the method may include wherein the details of the first oncoming vehicle are determined at least in part by at least one sensor of the first parked vehicle.
  • the method may include wherein the details of the first oncoming vehicle are determined at least in part from a V2V basic safety message (BSM) broadcast by the first oncoming vehicle and received by the first parked vehicle.
  • the method may include wherein the V2V BSM broadcast by the first oncoming vehicle further comprises an indication of an automation mode of the first oncoming vehicle.
  • the method may also include wherein the location and trajectory of the first VRU are determined based at least in part on a V2P basic safety message (BSM) broadcast from a device associated with the first VRU.
  • the method may include wherein the V2P BSM further comprises a VRU warning request.
  • the method may also further comprise: receiving, at the first parked vehicle, a response from the first oncoming vehicle indicating an intent of the first oncoming vehicle to yield to the first VRU; and communicating, from the first parked vehicle, an indication that it is safe for the first VRU to cross at the crossing location.
  • the method may include wherein the communication of the indication from the first parked vehicle comprises a V2P warning message.
  • the method may include wherein the V2P warning message comprises: a warning message; details of the first oncoming vehicle; a description of the blind spot area; and information about the first VRU as detected by the first parked vehicle.
  • the method may include wherein the response from the first oncoming vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first oncoming vehicle will yield to the first VRU.
  • the method may also further comprise: determining, at the first parked vehicle, that either no response from the first oncoming vehicle has been received or that a response was received from the first oncoming vehicle indicating an intent of the first oncoming vehicle not to yield to the first VRU; and communicating, from the first parked vehicle, an indication that it is not safe for the first VRU to cross at the crossing location.
  • the method may further comprise, prior to communicating the indication to the first VRU, recalculating the position and trajectory of each of the first VRU and the first oncoming vehicle.
  • the method may include wherein the communication of the indication from the first parked vehicle comprises a V2P warning message.
  • the method may include wherein the V2P warning message comprises: a warning message; details of the first oncoming vehicle; a description of the blind spot area; and information about the first VRU as detected by the first parked vehicle.
  • the method may include wherein communicating the indication from the first parked vehicle comprises generating, from the first parked vehicle, at least one of a visual or audio warning indicator to the first VRU.
  • the method may further comprise, responsive to a determination that no response was received from the first oncoming vehicle, generating, from the first parked vehicle, at least one of a visual or audio warning indicator to the first VRU.
  • the method may include wherein a visual indicator to the first VRU comprises a visualization projected by the first parked vehicle onto the street.
  • the method may also further comprise updating the determined location and trajectory of each of the first VRU and the first oncoming vehicle.
  • the method may also include wherein the location and trajectory of the first VRU are determined based at least in part on at least one measurement by at least the first sensor of the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first oncoming vehicle are determined based at least in part on at least one measurement by at least one sensor of the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first VRU are determined based at least in part on a V2P message broadcast from the first VRU and received by the first parked vehicle.
  • the method may also include wherein the location and trajectory of the first oncoming vehicle are determined based at least in part on a V2V message broadcast from the first oncoming vehicle and received by the first parked vehicle.
  • the method may also include wherein the V2V message broadcast from the first parked vehicle comprises a collective perception message.
  • the method may also include wherein the VRU crossing location comprises a crosswalk.
  • the method may also include wherein the VRU crossing location comprises a space between parked vehicles.
  • a method comprising: broadcasting, from a digital device associated with a first vulnerable road user (VRU), at least a V2P basic safety message (BSM) and a VRU warning request; receiving, at the digital device from a first parked vehicle, a responsive VRU warning message; and visualizing the received VRU warning message for display to the first VRU.
  • the method may also include wherein visualizing the received VRU warning message comprises presenting, by the digital device, a map view indicating a warning message, the first VRU's position, a blind spot area as determined by the first parked vehicle, and an indication of at least a first oncoming vehicle.
  • the method may also include wherein visualizing the received VRU warning message comprises presenting, by an augmented reality (AR) device associated with the first VRU, an AR overlay indicating a warning message, a blind spot area as determined by the first parked vehicle, an edge of the blind spot area, and an indication of at least a first oncoming vehicle.
  • the method may include wherein the blind spot area may be rendered in a first color associated with a "don't walk" warning message or a second color associated with a "walk" warning message.
  • the method may include wherein the VRU warning request comprises at least one of: an indication of a street crossing location; a request for blind spot assistance; and a VRU type.
  • the method may also include wherein a wearable device is associated with the first VRU, the method further comprising generating a vibration associated with the VRU warning message.
  • a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: determining, at a first parked vehicle, a pedestrian location and pedestrian trajectory of a first pedestrian detected by the first parked vehicle; determining a vehicle location and vehicle trajectory of a first moving vehicle detected by the first parked vehicle; responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message from the first parked vehicle to the first moving vehicle comprising information about the pedestrian trajectory of the first pedestrian; and responsive to receiving, at the first parked vehicle, a response from the first moving vehicle indicating intent of the first moving vehicle to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
  • a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: detecting, at a first stationary vehicle, a pedestrian location and a pedestrian trajectory of a first pedestrian; detecting, at the first stationary vehicle, a vehicle location and a vehicle trajectory of at least a first moving vehicle; broadcasting a V2V message comprising information about the pedestrian trajectory of the first pedestrian; and communicating from the first stationary vehicle to the first pedestrian, based at least in part on a response to the broadcast V2V message received by the first stationary vehicle, an indication of whether it is safe for the first pedestrian to cross at a road crossing monitored by the first stationary vehicle.
  • a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: pre-calculating, at a first parked vehicle, an estimated blind spot area created by the first parked vehicle's position relative to a vulnerable road user (VRU) crossing location and an estimated approaching vehicle; detecting, by a first sensor of the first parked vehicle, a first VRU approaching the VRU crossing location within the estimated blind spot area; determining a location and trajectory of the first VRU; detecting, at the first parked vehicle, at least a first oncoming vehicle; determining a location and trajectory of the first oncoming vehicle; and responsive to a determination that the first oncoming vehicle is a risk to the first VRU, communicating from the first parked vehicle to the first VRU, an indication of whether it is safe for the first VRU to cross at the VRU crossing location.
  • a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: broadcasting, from a digital device associated with a first vulnerable road user (VRU), at least a V2P basic safety message (BSM) and a VRU warning request; receiving, at the digital device from a first parked vehicle, a responsive VRU warning message; and visualizing the received VRU warning message for display to the first VRU.
  • Examples of computer-readable storage media include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Abstract

Vulnerable road users (VRUs) in blind spots of approaching vehicles at crossing locations may be provided warnings by a parked vehicle monitoring the location. In one embodiment, a parked vehicle detects a pedestrian location and a pedestrian trajectory of a first pedestrian. The parked vehicle also detects a vehicle location and a vehicle trajectory of at least a first moving vehicle. If the parked vehicle determines that the first moving vehicle is a risk to the first pedestrian, it may send to the first moving vehicle a V2V message comprising information about the trajectory of the first pedestrian. In response to receiving a response from the first moving vehicle indicating intent to yield to the first pedestrian, the parked vehicle may indicate to the first pedestrian that it is safe to cross.

Description

METHOD FOR PROVIDING VULNERABLE ROAD USER WARNINGS IN A BLIND SPOT OF A
PARKED VEHICLE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. §119(e) from, U.S. Provisional Patent Application Serial No. 62/443,480, filed January 6, 2017, entitled "METHOD FOR PROVIDING VULNERABLE ROAD USER WARNINGS IN A BLIND SPOT OF A PARKED VEHICLE", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] This disclosure relates to systems and methods for connected vehicles. More specifically, this disclosure relates to systems and methods for using the sensors of parked connected vehicles to improve safety of road users.
BACKGROUND
[0003] While advances in safety systems and technology over past decades have greatly improved driver and passenger safety, there has been relatively little new technology to ensure the safety of pedestrians. Safety systems that focus on the people around the car will become even more important as we move closer to a future of autonomous vehicles.
[0004] A vulnerable road user is particularly vulnerable in traffic situations where there is a potential conflict with another road user. The traffic conflict point is the intersection of the trajectories of the VRU and the other road user. A conflict, or collision, occurs if both the VRU and the other road user reach the conflict point at about the same time. The collision can be avoided if either or both respond with an emergency maneuver and appropriately adapt their speed or path. The following types of road users are considered as Vulnerable Road Users (see ETSI (2016). "VRU Study", DTR/ITS-00165):
[0005] - Pedestrians (including children, elderly, joggers, etc.)
[0006] - Emergency responders, safety workers, road workers, etc.
[0007] - Animals, such as horses and dogs, as well as wild animals
[0008] - Wheelchair users, strollers
[0009] - Skaters, skateboards, Segways
[0010] - Bikes and e-bikes, with speed limited to 25 km/h.
[0011] - High speed e-bikes with speeds higher than 25 km/h, class L1e-A.
[0012] - PTW, mopeds (scooters), class L1e.
[0013] - PTW, motorcycles, class L3e.
[0014] - PTW, tricycles, class L2e, L4e and L5e limited to 45 km/h.
[0015] - PTW, quadricycles, class L5e and L6e limited to 45 km/h.
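The conflict condition defined above can be expressed concretely: estimate each road user's arrival time at the intersection of their trajectories, and flag a conflict when those arrival times fall within a small margin. The following Python sketch is purely illustrative; the straight-line, constant-speed motion model, the function names, and the 2-second margin are assumptions and are not drawn from any standard.

```python
def time_to_point(position, speed, point):
    """Time for a road user moving in a straight line at constant
    speed to reach `point`, given current `position` (2-D tuples, meters)."""
    dx = point[0] - position[0]
    dy = point[1] - position[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return distance / speed if speed > 0 else float("inf")

def is_conflict(vru_pos, vru_speed, vehicle_pos, vehicle_speed,
                conflict_point, margin_s=2.0):
    """Flag a potential collision when both road users reach the
    conflict point within `margin_s` seconds of each other."""
    t_vru = time_to_point(vru_pos, vru_speed, conflict_point)
    t_veh = time_to_point(vehicle_pos, vehicle_speed, conflict_point)
    return abs(t_vru - t_veh) <= margin_s

# Pedestrian 5 m from the conflict point at 1.4 m/s; car 50 m away
# at 14 m/s. Both arrive at about 3.6 s -> conflict.
print(is_conflict((0, -5), 1.4, (-50, 0), 14.0, (0, 0)))  # True
```

In practice both trajectories would come from sensor tracks rather than straight-line assumptions, but the timing comparison is the core of the risk determination.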
[0016] ETSI is currently working on a VRU study in which it will define cooperative systems for Vulnerable Road Users (VRUs). The study of use cases and standardization perspectives will provide an overview of the relevant VRU use cases enabled by Cooperative ITS, identify relevant ITS application and/or facilities layer features to be standardized to support VRU applications, and make recommendations for further specifications, whether revisions of existing standards or development of new standards.
[0017] Blind spots are a major concern for traffic safety, especially in urban areas. V2V technology and autonomous driving are expected to reduce the annual traffic fatality rate (approximately 40,000 in the US and 1 million worldwide).
[0018] VRU collision prevention systems based on vehicle-to-pedestrian (V2P) communication have been presented, for example, in US Patent No. 9064420, US Patent App. No. 2014/0267263, US Patent App. No. 2016/0179094, US Patent No. 8340894, and US Patent App. No. 2010/0039291.
[0019] There are some systems which utilize road side units (RSUs) (e.g., cameras and short-range communication) to send alerts to approaching vehicles if there is a potential for collision with a pedestrian. However, implementation of new (intelligent) roadside equipment imposes significant costs on the road operators and cities that deploy it on their road networks. These costs include, for example, the software and hardware of the roadside equipment, cabling, installation, training, maintenance, and upgrading. A cost/benefit analysis is needed for decision making when large investments are planned.
[0020] There are many pedestrian detection and collision avoidance technologies already on the market which are based on vehicle sensor systems. Such sensor-based approaches, however, do not perform well in poor visibility conditions, e.g., at night, in bad weather, or if the pedestrian is not close enough (e.g., within a few tens of meters) or is in a non-line-of-sight (NLOS) position. The NLOS problem occurs, for example, when there are obstacles that partially or fully degrade the detection of a VRU by a vehicle's sensors. These obstacles may also partially or fully degrade the V2P communication between the vehicle and the VRU's communication devices. The ETSI VRU Study draft includes VRU accident scenarios where a pedestrian is occluded by a parked vehicle. Examples of these accident scenarios are illustrated in FIGS. 1-3. FIG. 1 illustrates the scenario of a pedestrian crossing the road (on the same side as a road user), where the pedestrian is occluded by a parked car. FIG. 2 illustrates the scenario of a pedestrian crossing the road (on the opposite side from a road user), where the pedestrian is occluded by a parked car. FIG. 3 illustrates the scenario of a vehicle turning at an intersection, with a pedestrian crossing the road (on the same side as the turning road user), where the pedestrian is occluded by a parked car.
[0021] Any V2P communication approach for preventing collisions with a VRU is largely dependent on the accuracy of the position and heading information of the VRU. Research has shown that the GPS accuracy of a smartphone is not very high, and the position quality degrades strongly if the smartphone is stored in a pocket, bag, or the like, which may make the position information unusable for pedestrian detection in inner-city (as well as other) scenarios. In addition to position accuracy, the heading of a VRU is difficult to estimate, as the velocity of a pedestrian, for example, is very low, and a pedestrian may also change direction very quickly and/or unexpectedly.
[0022] Until VRU positioning accuracy is further improved, V2P communication pedestrian collision prevention systems will not work optimally in occluded situations. Battery consumption of mobile devices using GPS and WiFi or similar communication is also still a problem. Furthermore, it will take time before all pedestrians (and other VRUs), as well as cars on the road, have V2P communication capabilities.
[0023] There is a need for intermediate solutions which can work in mixed traffic where some road users (including VRUs) have highly advanced V2X and V2P systems, some have simple (e.g., V2V) or basic communication solutions, and some have nothing.
[0024] The systems and methods disclosed herein address these issues, and others.
SUMMARY
[0025] Described herein are systems and methods related to monitoring of blind spots of moving vehicles by parked vehicles to provide warnings to VRUs.
[0026] In one embodiment, there is a method comprising determining, at a first parked vehicle, a pedestrian location and pedestrian trajectory of a first pedestrian detected by the first parked vehicle. The method also comprises determining a vehicle location and vehicle trajectory of a first moving vehicle detected by the first parked vehicle. The method also comprises responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message from the first parked vehicle to the first moving vehicle comprising information about the pedestrian trajectory of the first pedestrian. The method also comprises responsive to receiving, at the first parked vehicle, a response from the first moving vehicle indicating intent of the first moving vehicle to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
[0027] In one embodiment, there is a method comprising detecting, at a first stationary vehicle, a pedestrian location and a pedestrian trajectory of a first pedestrian. The method also comprises detecting, at the first stationary vehicle, a vehicle location and a vehicle trajectory of at least a first moving vehicle. The method also comprises broadcasting a V2V message comprising information about the pedestrian trajectory of the first pedestrian. The method also comprises communicating from the first stationary vehicle to the first pedestrian, based at least in part on a response to the broadcast V2V message received by the first stationary vehicle, an indication of whether it is safe for the first pedestrian to cross at a road crossing monitored by the first stationary vehicle.
[0028] In one embodiment, there is a method comprising pre-calculating, at a first parked vehicle, an estimated blind spot area created by the first parked vehicle's position relative to a vulnerable road user (VRU) crossing location and an estimated approaching vehicle. The method also comprises detecting, by a first sensor of the first parked vehicle, a first VRU approaching the VRU crossing location within the estimated blind spot area. The method also comprises determining a location and trajectory of the first VRU. The method also comprises detecting, at the first parked vehicle, at least a first oncoming vehicle. The method also comprises determining a location and trajectory of the first oncoming vehicle. The method also comprises responsive to a determination that the first oncoming vehicle is a risk to the first VRU, communicating from the first parked vehicle to the first VRU, an indication of whether it is safe for the first VRU to cross at the VRU crossing location.
[0029] In one embodiment, there is a method comprising broadcasting, from a digital device associated with a first vulnerable road user (VRU), at least a V2P basic safety message (BSM) and a VRU warning request. The method also comprises receiving, at the digital device from a first parked vehicle, a responsive VRU warning message. The method also comprises visualizing the received VRU warning message for display to the first VRU.
[0030] In various embodiments, there may be a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including, but not limited to, those set forth above.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] A more detailed understanding may be had from the following description, presented by way of example in conjunction with the accompanying drawings in which like reference numerals in the figures indicate like elements, and wherein:
[0032] FIGS. 1-3 illustrate exemplary VRU road crossing scenarios with accident potential.
[0033] FIG. 4 depicts an example of a first step in a first exemplary scenario, where a first vehicle is parked near a crossing location.
[0034] FIG. 5 depicts an example of a second step in a first exemplary scenario, where an oncoming vehicle and a VRU approach the crossing location monitored by the first parked vehicle.
[0035] FIG. 6 depicts an example of an exemplary architecture of the systems and methods disclosed herein.
[0036] FIG. 7 depicts an example of a second exemplary scenario, where a first vehicle is parked near a crossing location, and a first and second vehicle are oncoming as a first VRU approaches the crossing.
[0037] FIG. 8 depicts a sequence diagram for one embodiment of a monitoring scenario utilizing the system and methods disclosed herein.
[0038] FIG. 9A illustrates one embodiment of pre-calculations by the parked vehicle, prior to detection of an approaching vehicle.
[0039] FIG. 9B illustrates the embodiment of FIG. 9A, where the parked vehicle has updated the pre-calculations after detection of an approaching vehicle.
[0040] FIG. 10 depicts one embodiment of a visualization of a scenario at a first step, as a VRU with AR goggles approaches a crossing monitored by a parked vehicle, where the VRU is warned to wait.
[0041] FIG. 11 depicts one embodiment of a visualization of a scenario at a second step, as a VRU with AR goggles proceeds at a crossing monitored by a parked vehicle, where the VRU is notified to proceed with caution.
[0042] FIG. 12 depicts one embodiment of a visualization of a scenario utilizing a VRU's mobile device.
[0043] FIG. 13 depicts one embodiment of a visualization of a jaywalking scenario with visualization and warning for a VRU with AR goggles.
[0044] FIG. 14 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be employed as a digital device or in-vehicle terminal device in some embodiments.
[0045] FIG. 15 illustrates an exemplary network entity that may be employed in some embodiments.
DETAILED DESCRIPTION
[0046] A detailed description of illustrative embodiments will now be provided with reference to the various Figures. Although this description provides detailed examples of possible implementations, it should be noted that the provided details are intended to be by way of example and in no way limit the scope of the application.
[0047] Note that various hardware elements of one or more of the described embodiments are referred to as "modules" that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.
[0048] The systems and methods disclosed herein utilize the communication, computing, and sensor capabilities of a parked vehicle, which may cause an occlusion of a VRU, in order to provide information and possible warnings to pedestrians (or other VRUs) crossing a street. The parked vehicle can accurately measure the pedestrian's (or other VRU's) intention to cross the street, accurately position the pedestrian and the approaching vehicle, communicate with each of them, and issue warnings if needed. The parked connected vehicle may also be able to handle situations where various vehicles (e.g., manual/automated, connected/not connected, etc.) and VRUs (with or without V2P) are on potential collision courses. The possible blind spot edge may be visualized to a VRU by using augmented reality (AR), as well as, or alternatively, in other manners.
[0049] In various embodiments set forth herein, while specific reference to a pedestrian is sometimes used, a pedestrian is used as an example of a VRU which is crossing a street from a blind-spot, and is not meant to limit the types of VRU that may be involved in a given embodiment.
[0050] In-vehicle sensing technologies, computing power (including advanced algorithms), and communication capabilities (especially cellular connectivity) of vehicles are developing quickly. Autonomous and (semi-)automated vehicles are coming to the market with state-of-the-art environment perception, computing, and communication systems. These vehicles may also be electric or hybrid (or, e.g., hydrogen, etc.) vehicles which may have substantial in-vehicle electric energy storage. This may enable use of their in-vehicle sensing, computing, and communication systems while parked. Computing power requirements for stationary (parked) vehicle environment sensing are not as high as they are for vehicles moving at any speed.
[0051] FIG. 4 illustrates one possible scenario where a pedestrian has a V2P communication device and AR goggles, and an approaching car has V2V communication able to receive and handle detected objects from other vehicles (Collective Perception).
[0052] As shown in FIG. 4, a first vehicle 405 may be parked in a parking place near an intersection 410 or other location of concern for VRUs. In some embodiments, the first vehicle 405 may be an autonomous vehicle capable of self-parking. In other embodiments, the first vehicle may be a non-autonomous vehicle having environmental sensors capable of monitoring the area around the first vehicle. After the first vehicle is parked, the first vehicle may operate to evaluate whether there is any blind spot area 415 being caused by the first vehicle 405 for any vehicles coming from behind the first vehicle. Similarly, the first vehicle 405 may cause limited visibility for pedestrians approaching the intersection 410, restricting pedestrians from readily seeing oncoming vehicles through the parked first vehicle. After the blind spot area 415 is determined, the first vehicle 405 may begin monitoring any blinded areas with its sensors, and communicate any information gathered from the monitoring to other road users (vehicular or pedestrian).
[0053] FIG. 5 illustrates the scenario of FIG. 4 at a subsequent time, as another vehicle 505 and a pedestrian 510 approach the crosswalk 410. As shown in FIG. 5, a second vehicle 505 may approach the crosswalk 410. The second vehicle 505 may be broadcasting a V2V BSM message, including, for example, coordinates, speed, heading, etc. The parked first vehicle 405 may, based on the received BSM, determine that the approaching second vehicle 505 needs blind spot detection assistance, and send a V2V message including detected objects, and accurate measurements of the location and heading of detected objects (e.g., VRUs). The second vehicle 505 may receive the V2V message, indicating the objects in the blind spot area that were detected by the parked first vehicle 405, and based on relevant computations determine whether or not to yield. This decision may be communicated to the parked first vehicle 405, which may communicate information related to the approaching second vehicle's decision to the pedestrian 510, such as via a V2P message. As the pedestrian 510 begins to enter the crosswalk 410, in the blind spot 415 created by the parked first vehicle 405, the pedestrian's AR goggles may display a visualization of the approaching second vehicle 505, based on information received from the parked first vehicle. This may allow the pedestrian to "see" the approaching second vehicle "through" the parked first vehicle. In some other embodiments, the visualization to the user may be a visual notification or indicator of the approaching second vehicle's decision to yield or not (e.g., a notification of "oncoming vehicle yielding", etc.). In still further embodiments, the first parked vehicle may visualize the situation and warning to a (non-communicating) pedestrian by other means (e.g., sounds, images, lights, projections on the ground, etc.). Visualization can be provided to the pedestrian's mobile device, alternatively or in addition to the AR goggles.
[0054] This system can also be used for pedestrians (VRUs) crossing a street between vehicles in a location where there is no crosswalk, including but not limited to jaywalking. In such situations, the pedestrians are often also in a blind spot.
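The parked vehicle's choice of what to communicate to the pedestrian, based on the approaching vehicle's reply (or the absence of one), can be sketched as follows. The reply format and the "intention" field below are hypothetical and not drawn from any V2X message standard; a missing reply, e.g., from a non-connected vehicle, is conservatively treated as a refusal to yield.

```python
def pedestrian_indication(reply):
    """Choose the V2P indication the parked vehicle sends to the
    pedestrian, based on the approaching vehicle's reply to the
    V2V message (None models a timeout, i.e., a non-connected or
    non-responding vehicle)."""
    if reply is not None and reply.get("intention") == "yield":
        return {"message": "walk", "detail": "oncoming vehicle yielding"}
    # No reply, or an explicit refusal to yield: warn the pedestrian.
    return {"message": "dont_walk", "detail": "oncoming vehicle not yielding"}

print(pedestrian_indication({"intention": "yield"})["message"])  # walk
print(pedestrian_indication(None)["message"])                    # dont_walk
```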
[0055] FIG. 6 illustrates a block diagram of one embodiment of an architecture for the herein disclosed systems and methods. In general, systems and methods are organized around three main actors: a first parked vehicle 602, at least a second approaching vehicle 630, and at least one VRU / VRU device 650 (e.g., pedestrian with smartphone, etc.).
[0056] The first parked (or stationary) vehicle 602 may be an autonomous or manually driven vehicle. Generally, the first parked vehicle comprises, but is not limited to, an in-vehicle terminal and an in-vehicle terminal device. The in-vehicle terminal may comprise a crosswalk blind spot monitoring application 604.
[0057] The crosswalk blind spot monitoring application 604 may calculate the dimensions of the blind spot (606) when the vehicle has parked, as discussed more fully below. The parked vehicle is generally not aware beforehand of the types, measurements (such as camera/radar heights, etc.), speeds, and detailed driving paths of approaching vehicles that affect the details of the blind spot. The parked vehicle may pre-calculate parameters for determining the blind spot area details for various types of vehicles (e.g., trucks, semis, passenger cars, etc.). When an approaching vehicle is detected (either by the parked vehicle's sensors or a received V2V message, such as with a situational awareness module 610) and the parked vehicle communicates with it, the parameters for calculating the blind spot for the particular approaching vehicle are then defined.
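As an illustration of such a pre-calculation, the sketch below models the blind spot in two dimensions (overhead view) as the "shadow" cast by a corner of the parked vehicle from the viewpoint of an assumed approaching vehicle's sensor. The coordinates, the flat-road model, and the function names are illustrative assumptions, not part of the disclosed system.

```python
def blind_spot_edge(sensor_pos, corner_pos, sidewalk_y):
    """Project a sight line from the approaching vehicle's sensor
    through a corner of the parked vehicle onto the sidewalk line
    (y = sidewalk_y); everything between the parked vehicle and the
    returned point is hidden from the sensor. Overhead 2-D model
    with x along the road and y across it."""
    sx, sy = sensor_pos
    cx, cy = corner_pos
    if cy == sy:
        return None  # sight line parallel to the sidewalk; no intersection
    t = (sidewalk_y - sy) / (cy - sy)
    return (sx + t * (cx - sx), sidewalk_y)

# Pre-calculation for a generic passenger car assumed 30 m behind the
# parked vehicle, whose front corner is at (0, 2); sidewalk at y = 3.
edge = blind_spot_edge((-30.0, 1.5), (0.0, 2.0), 3.0)
```

Once a particular approaching vehicle is detected, `sensor_pos` can be replaced with that vehicle's measured sensor position to refine the pre-calculated area.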
[0058] The crosswalk blind spot monitoring application may monitor the blind spot area (608) and approaching vehicles while parked by utilizing all vehicle sensors and V2X communication messages. It may also determine how detected (via sensors or communication) objects are moving, what kind of objects they are, and if warnings (and/or other information) need to be provided. The crosswalk blind spot monitoring application may include a message generator 612 which builds messages or warnings to be sent out. The application may also communicate (614) locally with other vehicles and road users, e.g., deliver warnings. It also determines whether messages have been replied to or not. Communication channels 625 may be any short-range (e.g., V2V, V2P, V2X, DSRC, etc.) or wide area wireless communication (e.g., cellular, etc.) system.
[0059] The in-vehicle terminal device 616 may comprise a vehicle sensor data access 618, a high definition map 620 with accurate lane level details, and optionally an external screen or projector for communication with pedestrians.
[0060] Approaching vehicles 630 may be manually driven or automated vehicles, and may have one or more of the following components or capabilities: visualization 632 of VRUs in a blind spot; automated driving control 634 which adjusts the automated driving based on information of the surroundings, e.g., it may make a determination to yield to a pedestrian on a crosswalk; collaborative perception message handling 636 which incorporates the object detections from other vehicles into the Local Dynamic Map (LDM) of the vehicle; a communication unit 638 utilizing any short-range (e.g., V2V, V2P, V2X, DSRC, etc.) or wide area wireless communication (e.g., cellular, etc.) system; and an in-vehicle terminal device 640 having a human machine interface (HMI) (or HUD) (642), a vehicle sensor data access 644, and a high definition map 646 with accurate lane level details and LDM.
[0061] VRUs 650 may include, but are not limited to, pedestrians with mobile devices and/or AR goggles. Devices of VRUs may or may not have the following components or capabilities: visualization 652 of the situation and warnings; V2P message handling 654; communication module 656, e.g., V2P communication based on DSRC or cellular 4G, 5G, etc.; an HMI 658; and a map 660.
[0062] Advantages of the herein disclosed systems and methods may include, but are not limited to:
[0063] - Increased traffic safety, especially for VRUs crossing a road from a blind spot occluded by a parked vehicle.
[0064] - Provide information or warnings to pedestrians (or other VRUs) crossing a street about an approaching vehicle, including the kind of vehicle approaching and if it has received information about the pedestrian in the blind spot.
[0065] - Help pedestrians (or VRUs in general) know if an approaching vehicle is in an automated or manually driven mode.
[0066] - Calculations for the blind spot may be based on various parameters, which adapt to situations as they occur.
[0067] - Intuitive visualization for VRUs, with augmented reality while the VRU is about to cross or while crossing a street (or the like).
[0068] - Traffic monitoring systems based on parked connected vehicles, as disclosed herein, may be widely utilized and save cities substantial resources/funds, by reducing the need to equip as many locations with RSUs.
[0069] - Capabilities of connected vehicles may be fully (or more fully) utilized while parked.
[0070] - New opportunities may be created for autonomous vehicle owners, such as selling traffic monitoring data where it is needed when a vehicle is not in use by its owner.
[0071] One possible scenario is illustrated in FIG. 7, and a sequence diagram for the scenario of FIG. 7 is depicted in FIG. 8. In the scenario of FIG. 7 and sequence of FIG. 8, there are two cars (705, 710) approaching a crosswalk where a pedestrian 720 is starting to cross the street from a blind spot 725. The line of sight between the approaching cars 705, 710 and the pedestrian 720 is occluded by a parked vehicle 715. The first car 705 is manually driven and supports only basic V2V communication (e.g., is not able to handle Collective Perception Messages). The second car 710, approaching behind the first car 705, is in automated mode and can handle all V2V messages.
[0072] The parked vehicle 715 may be a connected vehicle which has parked itself (or been manually parked) at a curb-side parking place, where it creates a blind spot 725 relative to approaching vehicles and a pedestrian crossing. The blind spot 725 occludes the view from approaching vehicles to part of the sidewalk and crosswalk, as well as from the pedestrian 720 to approaching vehicles. The parked vehicle 715, which has a crosswalk blind spot monitoring application, may check after it has parked if it is next to (or near) a crosswalk (or other location of concern), and then calculate if it is causing a blind spot (810).
[0073] If it is determined that the parked vehicle 715 is causing a blind spot, the vehicle 715 may calculate the maximum size and volumetric dimensions of the blind spot 725 it is creating for any approaching vehicles by utilizing, for example, one or more of the following parameters: Dimensions of the parked vehicle; Crosswalk parameters, e.g., width, length, etc.; Road geometry from an HD map and/or a 3D model of the surroundings from the vehicle sensors; Estimated driving path of approaching vehicles; Typical pedestrian (or VRU) detection vehicle sensor setup in approaching vehicles, e.g., location (especially height of the sensor installations) and field-of-view; Typical height of possible VRUs, e.g., adult/child pedestrians, wheelchairs, etc. In some cases, depending on these parameters, the calculations may determine that there is a blind spot for certain types of approaching vehicles, e.g., passenger cars, but not for taller vehicles (e.g., trucks) with sensors installed higher up.
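As an illustrative sketch (not part of the original disclosure), the per-vehicle-type determination described above can be approximated in two dimensions with a similar-triangles line-of-sight test; the function name and all parameter values here are assumptions:

```python
def is_occluded(sensor_h, roof_h, dist_to_parked, dist_behind, vru_h):
    """Return True if a VRU of height vru_h (metres), standing dist_behind
    metres beyond the parked vehicle's roof edge, is hidden from an
    approaching vehicle's sensor mounted at height sensor_h, located
    dist_to_parked metres in front of that roof edge (2D sketch)."""
    if sensor_h <= roof_h:
        # Sensor cannot see over the parked vehicle at all
        return True
    # Height of the sight line grazing the roof edge, at the VRU's position
    line_h = roof_h - (sensor_h - roof_h) * dist_behind / dist_to_parked
    return vru_h < line_h

# A passenger-car sensor (1.4 m) cannot see over a parked van (1.9 m roof),
# while a truck sensor (2.5 m) may still see a 1.8 m adult close behind it
print(is_occluded(1.4, 1.9, 10.0, 2.0, 1.8))  # True
print(is_occluded(2.5, 1.9, 10.0, 2.0, 1.8))  # False
```

This is consistent with the observation above that a blind spot may exist for passenger cars but not for taller vehicles with higher-mounted sensors.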
[0074] In some embodiments, the parked vehicle 715 may predictively calculate (or estimate) the blind spot based on estimated average parameters of a typical oncoming vehicle. In various embodiments, estimated average parameters may include, but are not limited to: height of a sensor of an oncoming vehicle; side-to-side position of a sensor of an oncoming vehicle; a speed of an oncoming vehicle; and a location within the lane of an oncoming vehicle.
[0075] The parked vehicle 715 may monitor (812) the blind spot 725 and any approaching vehicles (such as 705 and 710). The parked vehicle 715 may constantly scan for pedestrians (and other VRUs) in the blind spot. Additional actions may be triggered by, for example, detection by the parked vehicle of an approaching vehicle or a pedestrian (or other VRU), or receipt of a message at the parked vehicle from an approaching vehicle or a pedestrian (or other VRU). An approaching vehicle, such as vehicle 705, may, via V2V (or the like), send a Basic Safety Message (BSM) (818) including location, speed, heading, etc. In addition to the BSM, in some cases an approaching vehicle, such as vehicle 710, may send information about its driving mode (816) (e.g., manually driven, manually driven with pedestrian detection, automated driving mode, etc.). In some cases, the approaching vehicle may be detected by the parked vehicle's sensors. In some cases, a pedestrian (or VRU) may send a V2P BSM and/or send a VRU warning support request (814), either of which may be detected by the parked vehicle. Alternatively, or in addition, in some cases the parked vehicle's sensors may detect an approaching vehicle (which may or may not be a connected vehicle), or detect a pedestrian's intention to cross the street in the blind spot (e.g., no V2P needed).
[0076] Once the monitoring application is triggered (e.g., by parked vehicle detection, received V2V message, received V2P message, etc.), the parked vehicle may detect and accurately measure the pedestrian's (or VRU's) position and intention to cross the street (820).
[0077] The parked vehicle may then send out a V2V collective perception message (CPM) (822, 828) with accurate information about the detected pedestrian in the blind spot, including detected object details, coordinates, speed, heading, etc. The parked vehicle may continue sending the message with updated information until the approaching vehicle passes the crosswalk or stops/yields to the pedestrian. In some embodiments, the V2V message may include information about the trajectory of the pedestrian (e.g., that the pedestrian is heading towards the crosswalk), but not the exact detected trajectory of the pedestrian. In other embodiments, the information about the trajectory of the pedestrian may comprise the detected trajectory of the pedestrian (or other VRU).
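A minimal sketch of the CPM contents described in this paragraph, including the coarse-versus-exact trajectory option, might look as follows (illustrative Python; the field names are assumptions and do not follow the ETSI CPM format):

```python
def build_cpm(sender_id, obj, include_exact_trajectory=False):
    """Build an illustrative collective perception message describing a
    detected object (e.g., a pedestrian in the blind spot)."""
    msg = {
        "sender": sender_id,
        "object_type": obj["type"],          # e.g., "pedestrian"
        "coordinates": obj["coordinates"],
        "speed": obj["speed"],
        "heading": obj["heading"],
    }
    # Trajectory may be reported coarsely ("heading towards the crosswalk")
    # or as the exact detected trajectory, per the two embodiments above
    if include_exact_trajectory:
        msg["trajectory"] = obj["trajectory"]
    else:
        msg["trajectory_hint"] = "heading_towards_crosswalk"
    return msg
```

The parked vehicle would rebuild and resend this message with updated detections until the approaching vehicle passes or yields.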
[0078] An approaching vehicle may, in some embodiments, be adapted to receive and handle the CPM from the parked vehicle. If the approaching vehicle can receive and handle CPMs, it may include the detected pedestrian details in a Local Dynamic Map (LDM) (824). The approaching vehicle may also visualize the situation or warn the driver (e.g., in a manually driven vehicle). This may also comprise sending a BSM with additional information (e.g., that a driver warning was presented) (826). In an automated driving vehicle, the approaching vehicle may factor the updated LDM information into driving path planning (e.g., decide to yield), and send a BSM with the intention (826) (e.g., approaching vehicle is about to stop and yield to the pedestrian). If the approaching vehicle cannot handle CPMs (828), the vehicle will not respond (830).
[0079] The parked vehicle 715 may wait for a response to the CPM and act according to whether or not a CPM response from the approaching vehicle(s) is received (832). The parked vehicle determines if a pedestrian warning or other notification is needed (834), and may build the message accordingly. If there is no approaching vehicle, an indication of safe passage may be sent to the pedestrian (which may be filtered, for example, at the pedestrian end according to his or her preferences). If the approaching vehicle is CPM capable, relevant information may be communicated to the pedestrian by the parked vehicle. For example, if the approaching vehicle is in autonomous mode and makes a determination to yield, the parked vehicle may inform the pedestrian that it is safe to cross. Or, if the approaching vehicle is slowing down significantly, the parked vehicle may inform the pedestrian to proceed with caution in crossing. In all other cases, the parked vehicle may warn the pedestrian to wait to cross.
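The warning selection in this paragraph can be sketched as a simple decision function (illustrative Python; the message strings are assumptions, not from the disclosure):

```python
def pedestrian_notification(vehicle_approaching, cpm_response, will_yield, slowing_down):
    """Select the pedestrian notification based on the CPM exchange:
    no approaching vehicle -> safe passage; CPM-capable vehicle that
    yields -> safe; slowing significantly -> caution; otherwise wait."""
    if not vehicle_approaching:
        return "SAFE_TO_CROSS"
    if cpm_response:
        if will_yield:
            return "SAFE_TO_CROSS"
        if slowing_down:
            return "PROCEED_WITH_CAUTION"
    # No CPM response (non-communicating vehicle) or no yield indication
    return "WAIT"
```

In a multi-vehicle situation this check would be applied across all approaching vehicles, with the most cautious result winning.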
[0080] Once the message is built/composed, the parked vehicle may send a warning or information V2P message to the pedestrian (or VRU) (836). The parked vehicle may continue sending the message with updated information until the approaching vehicle(s) passes the crosswalk or stops/yields to the pedestrian.
[0081] The pedestrian's device(s) may verify that the V2P message concerns the pedestrian by comparing its own evaluation of the surroundings to the accurate coordinates for the detected pedestrian provided by the parked vehicle, such as by combining GPS coordinates, compass heading, detected V2P messages from nearby pedestrians, recognition of road objects from local map (e.g., crosswalk lines or buildings), parked vehicle description, etc. In some cases, the pedestrian may have an AR device which detects the parked vehicle's origin, and establishes AR tracking based on, for example and without limitation, camera feed and object recognition.
[0082] In some embodiments, the pedestrian's mobile device, AR glasses or goggles, wearable device, or the like shows the received warning or notification (838), including information on one or more of the following topics:
[0083] - Is the pedestrian in the blind spot of the approaching vehicle?
[0084] - Is it safe to cross the street now, or should the pedestrian stop?
[0085] - Is there a vehicle(s) approaching from the occluded direction?
[0086] - What kind of vehicle is approaching (e.g., manually driven / automated, etc.)?
[0087] - Has the approaching vehicle received information about the pedestrian?
[0088] - Is the approaching vehicle going to yield (automated)?
[0089] - Was a driver warning provided (manual)?
[0090] - Is there a non-communicating vehicle approaching?
[0091] In some embodiments, if the pedestrian does not have V2P communication or AR goggles (or optionally even if they do) the parked vehicle may visualize the situation and warning to the non-communicating pedestrian with other means (e.g., sounds, images or text on a screen, lights, projections on the ground, etc.). In some scenarios, such non-connected warnings may be particularly emphasized when there is both a non-connected pedestrian (or VRU) and a non-connected approaching vehicle, requiring only the parked vehicle to have advanced sensing capabilities.
[0092] In some embodiments, optionally, the warning or notification may be communicated to a VRU's mobile device, such as via WiFi, Bluetooth, or the like.
[0093] In some embodiments, a VRU's wearable device (e.g., smart watch) may be utilized as part of the notification process. For example, a smart watch may vibrate to indicate if a pedestrian should stop (or go). In some cases, WALK/DON'T WALK traffic-light-style vibrations and/or audio may be used.
[0094] In some additional embodiments, further scenarios may be addressed by the messaging of the parked vehicle. For example, in multi-lane situations, there could be two approaching cars from behind the parked vehicle, one in each of two lanes. The first approaching vehicle may communicate with the parked vehicle and indicate an intent to yield to the pedestrian, but the second approaching vehicle may be a non-communicating car, resulting in a warning to the pedestrian to not cross at the moment.
[0095] In various embodiments, the parked vehicle may determine or otherwise evaluate the blind spot. The parked vehicle may maintain data related to itself, such as, but not limited to: type of vehicle (van, sedan, SUV, hatchback, roadster, sports car, station wagon, pickup truck, etc.); location (or coordinates), e.g., a rough location from the vehicle's GPS, which the vehicle may refine into a more accurate location by using its own sensors to scan road dimensions together with high definition map data; dimensions of the vehicle (width, length, height); and/or the like.
[0096] The parked vehicle may calculate the coordinates of its own corners, especially the front corner (denoted as fcx,fcy) of the parked vehicle on the driving lane side, and the back corner (denoted as bcx,bcy) on the sidewalk side. These corner coordinates may be calculated, for example, by adding the parked vehicle's dimensions to the vehicle location coordinates.
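Assuming a local Cartesian frame, the corner calculation might be sketched as follows (illustrative Python; the convention that the driving lane is on the vehicle's left side is an assumption for the example):

```python
import math

def parked_corners(center_x, center_y, heading_rad, length, width):
    """Return the front corner on the driving-lane side (fc) and the back
    corner on the sidewalk side (bc), computed by adding the vehicle's
    dimensions to its location, per paragraph [0096]."""
    fwd = (math.cos(heading_rad), math.sin(heading_rad))    # along the vehicle
    left = (-math.sin(heading_rad), math.cos(heading_rad))  # driving-lane side
    fc = (center_x + fwd[0] * length / 2 + left[0] * width / 2,
          center_y + fwd[1] * length / 2 + left[1] * width / 2)
    bc = (center_x - fwd[0] * length / 2 - left[0] * width / 2,
          center_y - fwd[1] * length / 2 - left[1] * width / 2)
    return fc, bc
```

For a 4 m x 2 m vehicle centred at the origin and heading along the x-axis, this places fc at (2, 1) and bc at (-2, -1).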
[0097] The parked vehicle may then pre-calculate the blind spot area by using average parameters of unknown/estimated approaching vehicles. For example, the parked vehicle may estimate the average height of a camera/radar on an approaching vehicle, which is usually located on the top edge of the windshield, in the middle, and so may be about the average height for various classes of vehicle (e.g., sedan, SUV, van, etc.). Also, the parked vehicle may estimate the side-to-side position of the camera/radar on an approaching vehicle, which will usually be in the middle of the approaching vehicle. Generally, the coordinates of the assumed approaching vehicle camera may be denoted as avcx,avcy. The parked vehicle may also estimate the speed of a potential approaching vehicle according to the speed limit of the relevant road (e.g., 25 mph, 30 mph, 35 mph, 40 mph, etc.). The parked vehicle may also estimate the driving trajectory on the lane (e.g., in the middle of the lane) of a potential approaching vehicle. FIG. 9A illustrates one embodiment of the pre-calculations by the parked vehicle, prior to an approaching vehicle. As shown in FIG. 9A, the parked vehicle has pre-calculated the blind spot front and back corner edges (thick lines fce and bce), with the pre-calculation based on the average parameter assumptions.
[0098] In some embodiments, the calculation of the front corner blind spot edge fce may generally be as follows. Let fc denote the coordinates of the front corner, and avc the approaching vehicle camera coordinates (initially based on the parked vehicle's pre-calculations). A unit vector for the front corner blind spot edge may be defined as:

fce = ( (fcx - avcx) / dist(fc, avc), (fcy - avcy) / dist(fc, avc) )

In some embodiments, the calculation of the back corner (bc) blind spot edge bce may generally be as follows. Let bc denote the coordinates of the back corner, and avc the approaching vehicle camera coordinates (initially based on the parked vehicle's pre-calculations). A unit vector for the back corner blind spot edge may be defined as:

bce = ( (bcx - avcx) / dist(bc, avc), (bcy - avcy) / dist(bc, avc) )
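The fce and bce unit vectors defined above can be computed directly (illustrative Python sketch; the function name is an assumption):

```python
import math

def edge_unit_vector(corner, avc):
    """Unit vector from the approaching vehicle's camera position avc
    through a corner of the parked vehicle (fc or bc), per the fce/bce
    definitions above."""
    d = math.dist(corner, avc)  # Euclidean distance dist(corner, avc)
    return ((corner[0] - avc[0]) / d, (corner[1] - avc[1]) / d)
```

For a corner at (3, 4) and a camera at the origin, this yields the unit vector (0.6, 0.8); the same function serves for both the front and back corner edges.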
[0099] In response to the parked vehicle receiving details (including location, speed, heading, dimensions, etc.) of the approaching vehicle by V2V, or detecting and measuring the details of the approaching vehicle itself, the blind spot area may be re-calculated, for example using the equations set forth above. An example of the recalculated blind spot is illustrated in FIG. 9B, where additional details have been received or measured by the parked vehicle. Note that the speed and position in the lane of the approaching vehicle differ from the pre-calculated assumptions (and the height of the camera may also vary), and accordingly the updated fce and bce sections differ from the estimated edges in FIG. 9A.
[0100] Unit vectors of the front and back corner blind spot edges (fce and bce) may be used to position the visualization for a VRU user interface (e.g., pedestrian AR goggles). For example:
[0101] The roadside corner of the parked vehicle coordinates may be defined as (fcx,fcy). By placing the origin of the fce unit vector at (fcx,fcy), the edge of the blind spot area may be formed for AR visualization purposes. The sidewalk-side blind spot edge may be formed similarly (if needed).
[0102] The length of the blind spot edge may be either fixed (e.g., equal to the length of the parked vehicle) or dynamic. In some embodiments, the fixed length may be substantially the same as the length of the parked vehicle (for example, ±10%, ±5%, etc.), a scalar multiplier of the length of the parked vehicle, or a fixed length regardless of the length of the parked vehicle. If dynamic, for example with one VRU the length may be twice the distance of the VRU from the parked vehicle. With multiple VRUs, after determining the size of the group of VRUs (if possible) by the parked vehicle's sensors, the parked vehicle may define the length of edge fce as twice or three times (or some other scalar multiplier) the group size or group distance from the parked vehicle (e.g., the distance to the furthest member of the group, parallel to the road). In some embodiments, dynamic lengths may be substantially the same as the types of lengths set forth herein (e.g., ±10%, ±5%, etc.). In other embodiments, alternative multipliers may be used to define the length of the blind spot edge for single or multiple VRU situations.

Visualization for pedestrians (or VRUs)
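The fixed/dynamic edge-length rule above can be sketched as follows (illustrative Python; the default multiplier of 2 follows the single-VRU example, and the function name is an assumption):

```python
def blind_spot_edge_length(parked_length, vru_distances, fixed=False, multiplier=2.0):
    """Length of the blind spot edge per paragraph [0102].

    parked_length: length of the parked vehicle (fixed option).
    vru_distances: distances of detected VRUs from the parked vehicle,
                   parallel to the road (dynamic option).
    multiplier:    scalar, e.g., 2x or 3x per the source."""
    if fixed or not vru_distances:
        # Fixed length, e.g., equal to the length of the parked vehicle
        return parked_length
    # Dynamic: multiplier times the distance of the single VRU, or times
    # the distance to the furthest member of a group of VRUs
    return multiplier * max(vru_distances)
```

For a 4.5 m vehicle, a single VRU 3 m away gives a 6 m edge; a group whose furthest member is 5 m away with a 3x multiplier gives a 15 m edge.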
[0103] Visualization of the situation and warnings for pedestrians or VRUs can take various different forms in different embodiments. For example, if a pedestrian is wearing AR goggles, the above-mentioned scenarios and warnings may be presented to the user with augmented reality images on top of a real-world view. A first step of an AR visualization example is presented in FIG. 10, where a pedestrian with AR goggles is approaching the crosswalk 1005 and receives warnings about the blind spot area (1010), the edge of the blind spot area in the crosswalk (1015), a command/notification to stop (or otherwise yield to oncoming vehicles) (1020), and in some embodiments information about the approaching vehicle(s). For example, a first oncoming vehicle from the left may be a manually driven car which is not connected (1025), and a second oncoming vehicle behind it may be an autonomous car that is V2V equipped (1030), which has responded that it will yield and let the pedestrian cross the street. In some embodiments, the blind spot area (1010) and the edge (1015) may be visualized in the AR display for the pedestrian (or VRU) using different colors (e.g., blind spot area highlighted in yellow, edge indicated in red, etc.).
[0104] In a second step of the visualization example, as shown in FIG. 11 the first manually driven vehicle has passed, and the warning 1025 has been cleared from the pedestrian's AR view. The blind spot area (1110) may be visualized differently from the situation as in FIG. 10 (e.g., may be green rather than yellow, or the like), indicating that blind spot detection is supported by the camera of the parked vehicle (1050) and that the oncoming vehicle supports the blind spot system as well (such as relocated visualization 1130). The oncoming vehicle may be an autonomous car which has responded with an indication that it will yield (e.g., via V2V). The pedestrian may be instructed or notified to "walk with caution" (1120) (or the like), and they may cross the street.
[0105] In a similar fashion, this kind of information may be provided to a personal mobile device of the pedestrian about to cross a street (in place of an AR system). One exemplary embodiment of a device based application visualization is illustrated in FIG. 12. A pedestrian may utilize an application on a smartphone and be looking at the screen while walking towards a crosswalk. This application can bring a warning or notification 1220 to the screen of the device, for example with similar information as described above in relation to an AR visualization (e.g., a representation of the parked car 1205, the indicator of a manually driven car 1210, and the indicator of the yielding automated vehicle 1215). In this application embodiment, for example, the visualization may be presented as a map view where the user's location in the blind spot area 1207 is shown with an indicator 1202 (for example, a circle or dot).
[0106] The herein disclosed systems and methods with AR or mobile device visualization can also be used for pedestrians (or VRUs) crossing a street between vehicles in locations where there is no crosswalk, including jaywalking situations. In such situations, in many cases pedestrians are also in a blind spot. An exemplary embodiment of the non-crosswalk scenario is illustrated in FIG. 13. The exemplary AR visualization in FIG. 13 illustrates how similar AR objects as in FIG. 10 may be utilized (e.g., highlighted portions for blind spot area 1310, edge of blind spot 1315, indicator for approaching vehicle 1325, warning/notification 1320, etc.).
[0107] Messages utilized in the herein disclosed systems and methods may take a variety of forms. Exemplary instances where various message types may be utilized may be found in the sequence diagram of FIG. 8.
[0108] One type of message may be a V2P BSM + VRU warning request. For example, BSM as defined in SAE J2735 may include position, speed, heading, etc. (see Dedicated Short Range Communications (DSRC) Message Set Dictionary. SAE J2735.). In some embodiments, these messages may include an optional VRU warning request, which may provide information from the VRU to parked or approaching vehicles about intentions of the user including:
[0109] - Intent to cross a street at specified coordinates: This intent may be based on a navigation application or a typical walking pattern (e.g., built on routines of the pedestrian, such as a commute, etc.). For example, the coordinates may be from a map or satellite positioning, or the like.
[0110] - Request for support in blind spot (may be a TRUE / FALSE value)
[0111] - Optionally, VRU type (e.g. walking adult/child, wheelchair, baby stroller, cyclist, person pushing a bike, etc.). These may be incorporated to provide an enhanced UI experience in an approaching vehicle.
[0112] Another type of message utilized in the systems and methods disclosed herein may be V2V BSM + automation mode. For example, BSM as defined in SAE J2735 may include vehicle size, position, speed, heading, acceleration, brake system status, etc. The message may include an indication of automation mode, for example whether the vehicle is manually driven, fully- or semi-automated, autonomous, etc. This message may further include vehicle sensor info, such as: sensor types, locations in vehicle, etc.
[0113] Another type of message utilized in the systems and methods disclosed herein may be V2V Collective Perception Messages. Collective Perception Messages (CPMs) are defined in the early draft of the ETSI standard (see ETSI TS 103 324 V<0.0.7> (2016-06), Intelligent Transport System (ITS); Collective Perception Service [Release 2]). Optionally, more detailed information about detected objects (e.g., VRUs) can be sent to vehicles, for example indicating walking adult/child, wheelchair, etc.
[0114] Another type of message utilized may be V2V BSM + intention. For example, BSM as defined in SAE J2735 may be utilized. Intention may also be included in these messages, for example with the BSM Vehicle Safety Extension including Path Prediction, or an indication that the vehicle is going to stop before a crosswalk (e.g., TRUE / FALSE).
[0115] Another type of message utilized may be V2P warning messages. Such messages may, for example, include a warning message (e.g., WALK/DON'T WALK, etc.). They may also include details about each approaching vehicle, such as its automation mode, an intended action (e.g., yield or not), sensor abilities, etc. The V2P warning messages may also include a blind spot description, for example including the blind spot boundaries as latitude/longitude/elevation coordinates, an origin with respect to the parked vehicle (e.g., coordinates of the vehicle's roadside corner towards the VRU), a parked vehicle description (size, orientation, location, etc.), and/or the like. The messages may also include VRU info, such as an exact location of the VRU as detected by the parked vehicle, for example in latitude/longitude/elevation coordinates and, in some embodiments, with a motion direction as a compass heading.
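An illustrative sketch of a V2P warning message carrying the fields listed in paragraph [0115] might look as follows (Python; the structure and field names are assumptions, not a published message format):

```python
def v2p_warning(walk, approaching, blind_spot_corners, parked_desc, vru_location):
    """Assemble an illustrative V2P warning message.

    approaching:        list of dicts, one per approaching vehicle, e.g.,
                        {"automation_mode": ..., "intent": ..., "sensors": [...]}
    blind_spot_corners: blind spot boundaries as lat/lon/elevation tuples
    parked_desc:        parked vehicle description (size, orientation, location)
    vru_location:       exact VRU location as detected by the parked vehicle"""
    return {
        "warning": "WALK" if walk else "DONT_WALK",
        "approaching_vehicles": approaching,
        "blind_spot": {
            "boundaries": blind_spot_corners,
            "parked_vehicle": parked_desc,
        },
        "vru": {"location": vru_location},
    }
```

A receiving VRU device could compare the reported VRU location against its own position estimate to verify the message concerns it, as described in paragraph [0081].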
[0116] An exemplary scenario utilizing the herein disclosed systems and methods is described below. Tom is walking on a sidewalk in the city with his new augmented reality goggles on. He is running a navigation application which provides instructions/directions. As he approaches a point where he needs to cross the street, he notices that he steps into a blind spot area, which is visualized in his AR goggles. The blind spot is created by a van parked curb-side very near the crosswalk, and he cannot see if there are any cars coming from that direction. When he approaches the curb a warning pops up in his goggles about an oncoming car behind the van. Through V2V communication, the parked vehicle may determine that this first oncoming car is a manually driven (old) car, and therefore it most likely is not able to determine that there is someone in the blind spot. A second oncoming car is autonomous, and has received information about him (e.g., via V2P and V2V), and will yield and let him cross. He waits until the first car goes by (as notified/warned in his AR visualization via the parked van), and then his AR goggles indicate it is safe to cross the street. While he crosses the street, he sees the approaching autonomous car which has slowed down as he crosses.
[0117] Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
[0118] FIG. 14 is a system diagram of an exemplary WTRU 102, which may be employed as a digital device or in vehicle terminal device in embodiments described herein. As shown in FIG. 14, the WTRU 102 may include a processor 118, a communication interface 119 including a transceiver 120, a
transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, a non-removable memory 130, a removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and sensors 138. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0119] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 14 depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0120] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0121] In addition, although the transmit/receive element 122 is depicted in FIG. 14 as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0122] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
[0123] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0124] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. As examples, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
[0125] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0126] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands-free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like. Peripherals may also include in-vehicle sensors such as cameras, radar, lidar, combination sensors, and the like. The processor may also have access to high-definition (HD) maps and LDM data.
[0127] FIG. 15 depicts an exemplary network entity 190 that may be used in embodiments of the present disclosure. As depicted in FIG. 15, network entity 190 includes a communication interface 192, a processor 194, and non-transitory data storage 196, all of which are communicatively linked by a bus, network, or other communication path 198.
[0128] Communication interface 192 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication, communication interface 192 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 192 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 192 may be equipped at a scale and with a configuration appropriate for acting on the network side (as opposed to the client side) of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 192 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
[0129] Processor 194 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
[0130] Data storage 196 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used. As depicted in FIG. 15, data storage 196 contains program instructions 197 executable by processor 194 for carrying out various combinations of the various network-entity functions described herein.
[0131] In one embodiment, there is a method of warning pedestrians of traffic by a parked vehicle, comprising: determining, at a first parked vehicle, a pedestrian location and pedestrian trajectory of a first pedestrian detected by the first parked vehicle; determining a vehicle location and vehicle trajectory of a first moving vehicle detected by the first parked vehicle; responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message from the first parked vehicle to the first moving vehicle comprising information about the pedestrian trajectory of the first pedestrian; and responsive to receiving, at the first parked vehicle, a response from the first moving vehicle indicating intent of the first moving vehicle to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross. The method may also include wherein determining that the first moving vehicle is a risk to the first pedestrian comprises determining that the first pedestrian is in a blind spot caused by the first parked vehicle with respect to the first moving vehicle. The method may further comprise pre-calculating, at the first parked vehicle, the blind spot caused by the first parked vehicle with respect to an estimated oncoming vehicle, based at least in part on estimated average parameters of an oncoming vehicle. The method may include wherein the estimated average parameters comprise: height of a sensor of the oncoming vehicle; side-to-side position of the sensor of the oncoming vehicle; a speed of the oncoming vehicle; and a location within the lane of the oncoming vehicle. The method may further comprise recalculating the blind spot based at least in part on details of the first moving vehicle. The method may include wherein the details of the first moving vehicle are determined at least in part by at least one sensor of the first parked vehicle.
The method may include wherein the details of the first moving vehicle are determined at least in part from a V2V basic safety message (BSM) broadcast by the first moving vehicle and received by the first parked vehicle. The method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P basic safety message (BSM) broadcast from a device associated with the first pedestrian. The method may include wherein the V2P BSM further comprises a VRU warning request. The method may include wherein the communication of the indication from the first parked vehicle comprises a V2P warning message. The method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of the blind spot area; and information about the first pedestrian as detected by the first parked vehicle. The method may also include wherein the response from the first moving vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first moving vehicle will yield to the first pedestrian. The method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on at least one measurement by at least the first sensor of the first parked vehicle. The method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on at least one measurement by at least one sensor of the first parked vehicle. The method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P message broadcast from the first pedestrian and received by the first parked vehicle. 
The method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on a V2V message broadcast from the first moving vehicle and received by the first parked vehicle. The method may also include wherein the V2V message broadcast from the first parked vehicle comprises a collective perception message.
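The decision flow of this first embodiment — detect the pedestrian and the moving vehicle, assess risk, negotiate over V2V, then indicate to the pedestrian — can be sketched as below. The time-gap risk test, the 3-second threshold, and all names are illustrative assumptions, not details from the filing.

```python
import math
from dataclasses import dataclass


@dataclass
class Track:
    x: float   # position east (m)
    y: float   # position north (m)
    vx: float  # velocity east (m/s)
    vy: float  # velocity north (m/s)


def time_to_reach(track, px, py):
    """Straight-line time for a track to reach a point; inf if stationary."""
    speed = math.hypot(track.vx, track.vy)
    if speed < 1e-6:
        return float("inf")
    return math.hypot(px - track.x, py - track.y) / speed


def warn_pedestrian(pedestrian, vehicle, crossing, send_v2v, await_yield, indicate):
    """Parked-vehicle decision flow: detect, assess risk, negotiate, indicate."""
    t_ped = time_to_reach(pedestrian, *crossing)
    t_veh = time_to_reach(vehicle, *crossing)
    # Illustrative risk test: both parties reach the crossing within 3 s
    # of each other.
    if abs(t_ped - t_veh) < 3.0:
        send_v2v({"pedestrian_trajectory": (pedestrian.vx, pedestrian.vy)})
        if await_yield():             # V2V BSM carrying an intention element
            indicate("safe to cross")
        else:
            indicate("do not cross")  # no response, or refusal to yield
```

In a deployment, `send_v2v`, `await_yield`, and `indicate` would be bound to the vehicle's DSRC/C-V2X stack and its external indicators; here they are injected callables so the flow itself stays testable.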
[0132] In one embodiment, there is a method comprising: detecting, at a first stationary vehicle, a pedestrian location and a pedestrian trajectory of a first pedestrian; detecting, at the first stationary vehicle, a vehicle location and a vehicle trajectory of at least a first moving vehicle; broadcasting a V2V message comprising information about the pedestrian trajectory of the first pedestrian; and communicating from the first stationary vehicle to the first pedestrian, based at least in part on a response to the broadcast V2V message received by the first stationary vehicle, an indication of whether it is safe for the first pedestrian to cross at a road crossing monitored by the first stationary vehicle. The method may also include wherein determining that the first moving vehicle is a risk to the first pedestrian comprises determining that the first pedestrian is in a blind spot caused by the first stationary vehicle with respect to the first moving vehicle.
The method may further comprise pre-calculating, at the first stationary vehicle, the blind spot caused by the first stationary vehicle with respect to an estimated oncoming vehicle, based at least in part on estimated average parameters of an oncoming vehicle. The method may include wherein the estimated average parameters comprise: height of a sensor of the oncoming vehicle; side-to-side position of the sensor of the oncoming vehicle; a speed of the oncoming vehicle; and a location within the lane of the oncoming vehicle. The method may further comprise recalculating the blind spot based at least in part on details of the first moving vehicle. The method may include wherein the details of the first moving vehicle are determined at least in part by at least one sensor of the first stationary vehicle. The method may include wherein the details of the first moving vehicle are determined at least in part from a V2V basic safety message (BSM) broadcast by the first moving vehicle and received by the first stationary vehicle.
The method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P basic safety message (BSM) broadcast from a device associated with the first pedestrian. The method may include wherein the V2P BSM further comprises a VRU warning request.
The method may also include wherein the communication of the indication from the first stationary vehicle comprises a V2P warning message. The method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of a blind spot area; and information about the first pedestrian as detected by the first stationary vehicle. The method may further comprise: receiving, at the first stationary vehicle, a response from the first moving vehicle indicating an intent of the first moving vehicle to yield to the first pedestrian; and communicating, from the first stationary vehicle, an indication that it is safe for the first pedestrian to cross at the road crossing. The method may include wherein the communication of the indication from the first stationary vehicle comprises a V2P warning message. The method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of a blind spot area; and information about the first pedestrian as detected by the first stationary vehicle. The method may include wherein the response from the first moving vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first moving vehicle will yield to the first pedestrian. The method may also further comprise: determining, at the first stationary vehicle, that either no response from the first moving vehicle has been received or that a response was received from the first moving vehicle indicating an intent of the first moving vehicle not to yield to the first pedestrian; and communicating, from the first stationary vehicle, an indication that it is not safe for the first pedestrian to cross at the road crossing. The method may further comprise, prior to communicating the indication to the first pedestrian, recalculating the position and trajectory of each of the first pedestrian and the first moving vehicle. 
The method may include wherein the communication of the indication from the first stationary vehicle comprises a V2P warning message.
The method may include wherein the V2P warning message comprises: a warning message; details of the first moving vehicle; a description of a blind spot area; and information about the first pedestrian as detected by the first stationary vehicle. The method may include wherein communicating the indication from the first stationary vehicle comprises generating, from the first stationary vehicle, at least one of a visual or audio warning indicator to the first pedestrian. The method may further comprise, responsive to a determination that no response was received from the first moving vehicle, generating, from the first stationary vehicle, at least one of a visual or audio warning indicator to the first pedestrian. The method may include wherein a visual indicator to the first pedestrian comprises a visualization projected by the first stationary vehicle onto the road. The method may also include wherein the response from the first moving vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first moving vehicle will yield to the first pedestrian. The method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on at least one measurement by at least the first sensor of the first stationary vehicle. The method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on at least one measurement by at least one sensor of the first stationary vehicle. The method may also include wherein the location and trajectory of the first pedestrian are determined based at least in part on a V2P message broadcast from the first pedestrian and received by the first stationary vehicle. 
The method may also include wherein the location and trajectory of the first moving vehicle are determined based at least in part on a V2V message broadcast from the first moving vehicle and received by the first stationary vehicle. The method may also include wherein the V2V message broadcast from the first stationary vehicle comprises a collective perception message.
[0133] In one embodiment, there is a method comprising: pre-calculating, at a first parked vehicle, an estimated blind spot area created by the first parked vehicle's position relative to a vulnerable road user (VRU) crossing location and an estimated approaching vehicle; detecting, by a first sensor of the first parked vehicle, a first VRU approaching the VRU crossing location within the estimated blind spot area; determining a location and trajectory of the first VRU; detecting, at the first parked vehicle, at least a first oncoming vehicle; determining a location and trajectory of the first oncoming vehicle; and responsive to a determination that the first oncoming vehicle is a risk to the first VRU, communicating from the first parked vehicle to the first VRU, an indication of whether it is safe for the first VRU to cross at the VRU crossing location. The method may also include wherein determining that the first oncoming vehicle is a risk to the first VRU comprises determining that the first VRU is located in the blind spot area caused by the first parked vehicle with respect to the first oncoming vehicle. The method may also include wherein the precalculation is based at least in part on estimated average parameters of an oncoming vehicle. The method may include wherein the estimated average parameters comprise: height of a sensor of the oncoming vehicle; side-to-side position of the sensor of the oncoming vehicle; a speed of the oncoming vehicle; and a location within the lane of the oncoming vehicle. The method may also include wherein the estimated blind spot area comprises a front corner blind spot edge. The method may include wherein the estimated blind spot area further comprises a back corner blind spot edge. The method may include wherein the front corner blind spot edge has a fixed length. The method may include wherein the fixed length is substantially equal to a length of the first parked vehicle. 
The method may include wherein the front corner blind spot edge has a dynamic length. The method may include wherein the dynamic length is based at least in part on a number of detected VRUs. The method may include wherein if only the first VRU is detected, the dynamic length comprises a scalar multiple of the first VRU's distance from the first parked vehicle. The method may include wherein if multiple VRUs are detected, the dynamic length is based at least in part on either: the number of VRUs detected; or a group distance from the parked vehicle. The method may also further comprise recalculating the blind spot area based at least in part on details of the first oncoming vehicle. The method may include wherein the details of the first oncoming vehicle are determined at least in part by at least one sensor of the first parked vehicle. The method may include wherein the details of the first oncoming vehicle are determined at least in part from a V2V basic safety message (BSM) broadcast by the first oncoming vehicle and received by the first parked vehicle. The method may include wherein the
V2V BSM broadcast by the first oncoming vehicle further comprises an indication of an automation mode of the first oncoming vehicle. The method may also include wherein the location and trajectory of the first VRU are determined based at least in part on a V2P basic safety message (BSM) broadcast from a device associated with the first VRU. The method may include wherein the V2P BSM further comprises a VRU warning request. The method may also further comprise: receiving, at the first parked vehicle, a response from the first oncoming vehicle indicating an intent of the first oncoming vehicle to yield to the first VRU; and communicating, from the first parked vehicle, an indication that it is safe for the first VRU to cross at the crossing location. The method may include wherein the communication of the indication from the first parked vehicle comprises a V2P warning message. The method may include wherein the V2P warning message comprises: a warning message; details of the first oncoming vehicle; a description of the blind spot area; and information about the first VRU as detected by the first parked vehicle. The method may include wherein the response from the first oncoming vehicle comprises a V2V BSM including an intention element, wherein the intention element indicates whether the first oncoming vehicle will yield to the first
VRU. The method may also further comprise: determining, at the first parked vehicle, that either no response from the first oncoming vehicle has been received or that a response was received from the first oncoming vehicle indicating an intent of the first oncoming vehicle not to yield to the first VRU; and communicating, from the first parked vehicle, an indication that it is not safe for the first VRU to cross at the crossing location. The method may further comprise, prior to communicating the indication to the first VRU, recalculating the position and trajectory of each of the first VRU and the first oncoming vehicle. The method may include wherein the communication of the indication from the first parked vehicle comprises a V2P warning message. The method may include wherein the V2P warning message comprises: a warning message; details of the first oncoming vehicle; a description of the blind spot area; and information about the first VRU as detected by the first parked vehicle. The method may include wherein communicating the indication from the first parked vehicle comprises generating, from the first parked vehicle, at least one of a visual or audio warning indicator to the first VRU. The method may further comprise, responsive to a determination that no response was received from the first oncoming vehicle, generating, from the first parked vehicle, at least one of a visual or audio warning indicator to the first VRU. The method may include wherein a visual indicator to the first VRU comprises a visualization projected by the first parked vehicle onto the street. The method may also further comprise updating the determined location and trajectory of each of the first VRU and the first oncoming vehicle. The method may also include wherein the location and trajectory of the first VRU are determined based at least in part on at least one measurement by at least the first sensor of the first parked vehicle. 
The method may also include wherein the location and trajectory of the first oncoming vehicle are determined based at least in part on at least one measurement by at least one sensor of the first parked vehicle. The method may also include wherein the location and trajectory of the first VRU are determined based at least in part on a V2P message broadcast from the first VRU and received by the first parked vehicle. The method may also include wherein the location and trajectory of the first oncoming vehicle are determined based at least in part on a V2V message broadcast from the first oncoming vehicle and received by the first parked vehicle. The method may also include wherein the V2V message broadcast from the first parked vehicle comprises a collective perception message. The method may also include wherein the VRU crossing location comprises a crosswalk. The method may also include wherein the VRU crossing location comprises a space between parked vehicles.
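The fixed and dynamic rules for the front-corner blind-spot edge length described in this embodiment can be collapsed into a single function. The constants (default parked-vehicle length, scalar multiple, per-VRU margin) are illustrative assumptions; the filing does not specify values.

```python
def front_edge_length(vru_distances, parked_length=4.5, k=2.0, per_vru_margin=1.5):
    """Length (m) of the front-corner blind-spot edge.

    No VRU detected: fall back to a fixed length substantially equal to
    the parked vehicle's length. One VRU: a scalar multiple of that VRU's
    distance from the parked vehicle. Several VRUs: extend to the farthest
    member of the group plus a margin that grows with the group size.
    """
    if not vru_distances:
        return parked_length
    if len(vru_distances) == 1:
        return k * vru_distances[0]
    return max(vru_distances) + per_vru_margin * len(vru_distances)
```

The edge would then be traced from the parked vehicle's front corner toward the crossing, and recalculated as VRUs enter or leave the detected set.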
[0134] In one embodiment, there is a method comprising: broadcasting, from a digital device associated with a first vulnerable road user (VRU), at least a V2P basic safety message (BSM) and a VRU warning request; receiving, at the digital device from a first parked vehicle, a responsive VRU warning message; and visualizing the received VRU warning message for display to the first VRU. The method may also include wherein visualizing the received VRU warning message comprises presenting, by the digital device, a map view indicating a warning message, the first VRU's position, a blind spot area as determined by the first parked vehicle, and an indication of at least a first oncoming vehicle. The method may also include wherein visualizing the received VRU warning message comprises presenting, by an augmented reality (AR) device associated with the first VRU, an AR overlay indicating a warning message, a blind spot area as determined by the first parked vehicle, an edge of the blind spot area, and an indication of at least a first oncoming vehicle. The method may include wherein the blind spot area may be rendered in a first color associated with a "don't walk" warning message or a second color associated with a "walk" warning message. The method may also include wherein the VRU warning request comprises at least one of: an indication of a street crossing location; a request for blind spot assistance; and a VRU type. The method may also include wherein a wearable device is associated with the first VRU, and further comprising generating a vibration associated with the VRU warning message.
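The V2P BSM plus VRU warning request broadcast in this embodiment could be modeled as below. The field names and the JSON stand-in for the wire format are illustrative assumptions; deployed BSMs are ASN.1-encoded per SAE J2735, and the filing does not prescribe an encoding.

```python
import json
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class VruWarningRequest:
    crossing_location: tuple  # (lat, lon) of the intended street crossing
    blind_spot_assist: bool   # request blind-spot assistance from parked vehicles
    vru_type: str             # e.g. "pedestrian", "cyclist"


@dataclass
class V2pBsm:
    lat: float
    lon: float
    heading_deg: float
    speed_mps: float
    warning_request: Optional[VruWarningRequest] = None


def encode(msg: V2pBsm) -> str:
    """Serialize for broadcast (JSON stands in for the real encoding)."""
    return json.dumps(asdict(msg))
```

A parked vehicle receiving this payload would answer with the VRU warning message (warning text, oncoming-vehicle details, blind-spot description) that the digital device then visualizes.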
[0135] In one embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: determining, at a first parked vehicle, a pedestrian location and pedestrian trajectory of a first pedestrian detected by the first parked vehicle; determining a vehicle location and vehicle trajectory of a first moving vehicle detected by the first parked vehicle; responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message from the first parked vehicle to the first moving vehicle comprising information about the pedestrian trajectory of the first pedestrian; and responsive to receiving, at the first parked vehicle, a response from the first moving vehicle indicating intent of the first moving vehicle to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
[0136] In one embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: detecting, at a first stationary vehicle, a pedestrian location and a pedestrian trajectory of a first pedestrian; detecting, at the first stationary vehicle, a vehicle location and a vehicle trajectory of at least a first moving vehicle; broadcasting a V2V message comprising information about the pedestrian trajectory of the first pedestrian; and communicating from the first stationary vehicle to the first pedestrian, based at least in part on a response to the broadcast V2V message received by the first stationary vehicle, an indication of whether it is safe for the first pedestrian to cross at a road crossing monitored by the first stationary vehicle.
[0137] In one embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: pre-calculating, at a first parked vehicle, an estimated blind spot area created by the first parked vehicle's position relative to a vulnerable road user (VRU) crossing location and an estimated approaching vehicle; detecting, by a first sensor of the first parked vehicle, a first VRU approaching the VRU crossing location within the estimated blind spot area; determining a location and trajectory of the first VRU; detecting, at the first parked vehicle, at least a first oncoming vehicle; determining a location and trajectory of the first oncoming vehicle; and responsive to a determination that the first oncoming vehicle is a risk to the first VRU, communicating from the first parked vehicle to the first VRU, an indication of whether it is safe for the first VRU to cross at the VRU crossing location.
[0138] In one embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: broadcasting, from a digital device associated with a first vulnerable road user (VRU), at least a V2P basic safety message (BSM) and a VRU warning request; receiving, at the digital device from a first parked vehicle, a responsive VRU warning message; and visualizing the received VRU warning message for display to the first VRU.
[0139] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims

WO 2018/128946 PCT/US2017/069097

What is Claimed:
1. A method of operating a parked vehicle to warn pedestrians of traffic, comprising:
detecting a location and trajectory of a first pedestrian;
detecting a location and trajectory of at least a first moving vehicle;
responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message to the first moving vehicle comprising information about the trajectory of the first pedestrian; and
in response to receiving from the first moving vehicle a response indicating intent to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
2. The method of claim 1, further comprising:
detecting a location and trajectory of a second moving vehicle;
responsive to a determination that the second moving vehicle is a risk to the first pedestrian, sending a V2V message to the second moving vehicle comprising information about the trajectory of the first pedestrian; and
upon determining that no response was received from the second moving vehicle, warning the first pedestrian.
3. The method of claim 2, further comprising generating, from the parked vehicle, at least one of a visual or audio warning indicator to the first pedestrian.
4. The method of claim 3, wherein a visual indicator to the first pedestrian comprises a visualization projected by the parked vehicle onto the road.
5. The method of any of claims 1-4, wherein detecting the location and trajectory of the first moving vehicle comprises detecting the location and trajectory of the first moving vehicle at least in part by at least one sensor of the parked vehicle.
6. The method of any of claims 1-5, wherein detecting the location and trajectory of the first moving vehicle comprises receiving at the parked vehicle a V2V basic safety message (BSM) broadcast by the first moving vehicle.
7. The method of any of claims 1-6, wherein the location and trajectory of the first pedestrian are detected at least in part based on a V2P basic safety message (BSM) broadcast from a device associated with the first pedestrian and received by the parked vehicle.
8. The method of any of claims 1-7, wherein the location and trajectory of the first pedestrian are detected at least in part by at least one sensor of the parked vehicle.
9. The method of any of claims 1-8, wherein determining that the first moving vehicle is a risk to the first pedestrian comprises determining that the first pedestrian is in a blind spot caused by the parked vehicle with respect to the first moving vehicle.
10. The method of any of claims 1-9, further comprising pre-calculating, at the parked vehicle, the blind spot caused by the parked vehicle with respect to an estimated oncoming vehicle based at least in part on estimated average parameters of an oncoming vehicle.
11. The method of claim 10, further comprising recalculating the blind spot based at least in part on details of the first moving vehicle.
12. The method of any of claims 1-11, wherein receiving from the first moving vehicle the response indicating intent to yield to the first pedestrian comprises receiving a V2V BSM including an intention element, wherein the intention element indicates that the first moving vehicle will yield to the first pedestrian.
13. The method of any of claims 1-12, further comprising, prior to indicating to the first pedestrian that it is safe to cross:
detecting again the location and trajectory of each of the first pedestrian and the first moving vehicle; and
determining that no other moving vehicle poses a risk to the first pedestrian.
14. The method of any of claims 1-13, wherein indicating to the first pedestrian that it is safe to cross comprises communicating from the parked vehicle at least one of a visual or audio indicator to the first pedestrian.
15. A system comprising a processor and a non-transitory computer-readable storage medium storing instructions operative, when executed on the processor, to perform functions including:
detecting a location and trajectory of a first pedestrian;
detecting a location and trajectory of at least a first moving vehicle;
responsive to a determination that the first moving vehicle is a risk to the first pedestrian, sending a V2V message to the first moving vehicle comprising information about the trajectory of the first pedestrian; and
in response to receiving from the first moving vehicle a response indicating intent to yield to the first pedestrian, indicating to the first pedestrian that it is safe to cross.
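Taken together, the functions recited in claim 15 form a detect / warn / await-yield / signal sequence. The sketch below strings them into one cycle; the collaborator objects (`sensors`, `v2v`, `hmi`) and the message fields are invented for illustration and do not appear in the application.

```python
def run_warning_cycle(sensors, v2v, hmi, poses_risk):
    """One pass of the claimed sequence: detect the pedestrian and
    vehicles, warn risky vehicles over V2V, and signal the pedestrian
    only if every risky vehicle answers with an intent to yield."""
    pedestrian = sensors.pedestrian()        # location and trajectory
    for vehicle in sensors.vehicles():
        if poses_risk(vehicle, pedestrian):
            # V2V message carrying the pedestrian's trajectory.
            v2v.send(vehicle, {"pedestrian_trajectory": pedestrian})
            reply = v2v.await_response(vehicle)
            if reply.get("intent") != "yield":
                return False                 # no yield: stay silent
    hmi.indicate_safe()                      # e.g. a visual or audio cue
    return True
```

The early `return False` reflects the claim's conditioning: the safe-to-cross indication is given only after a yield response is received.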
PCT/US2017/069097 2017-01-06 2017-12-29 Method for providing vulnerable road user warnings in a blind spot of a parked vehicle WO2018128946A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762443480P 2017-01-06 2017-01-06
US62/443,480 2017-01-06

Publications (1)

Publication Number Publication Date
WO2018128946A1 true WO2018128946A1 (en) 2018-07-12

Family ID=61054504

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/069097 WO2018128946A1 (en) 2017-01-06 2017-12-29 Method for providing vulnerable road user warnings in a blind spot of a parked vehicle

Country Status (1)

Country Link
WO (1) WO2018128946A1 (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006031443A (en) * 2004-07-16 2006-02-02 Denso Corp Collision avoidance notification system
JP2008007079A (en) * 2006-06-30 2008-01-17 Aisin Seiki Co Ltd Device and method for road surface projection
US20100039291A1 (en) 2008-08-15 2010-02-18 Harrison Michael A Vehicle/Crosswalk Communication System
US8340894B2 (en) 2009-10-08 2012-12-25 Honda Motor Co., Ltd. Method of dynamic intersection mapping
WO2012131871A1 (en) * 2011-03-28 2012-10-04 パイオニア株式会社 Information display device, control method, program, and storage medium
US8954252B1 (en) * 2012-09-27 2015-02-10 Google Inc. Pedestrian notifications
KR101354049B1 (en) * 2012-10-30 2014-02-05 현대엠엔소프트 주식회사 Method for pedestrians jaywalking information notification system
US20140267263A1 (en) 2013-03-13 2014-09-18 Honda Motor Co., Ltd. Augmented reality heads up display (hud) for left turn safety cues
US20140267398A1 (en) * 2013-03-14 2014-09-18 Honda Motor Co., Ltd Augmented reality heads up display (hud) for yield to pedestrian safety cues
US9064420B2 (en) 2013-03-14 2015-06-23 Honda Motor Co., Ltd. Augmented reality heads up display (HUD) for yield to pedestrian safety cues
US20160179094A1 (en) 2014-12-17 2016-06-23 Bayerische Motoren Werke Aktiengesellschaft Communication Between a Vehicle and a Road User in the Surroundings of a Vehicle

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10589625B1 (en) 2015-12-11 2020-03-17 Disney Enterprises, Inc. Systems and methods for augmenting an appearance of an actual vehicle component with a virtual vehicle component
US10969748B1 (en) 2015-12-28 2021-04-06 Disney Enterprises, Inc. Systems and methods for using a vehicle as a motion base for a simulated experience
US11524242B2 (en) 2016-01-20 2022-12-13 Disney Enterprises, Inc. Systems and methods for providing customized instances of a game within a virtual space
US11727495B1 (en) 2016-04-11 2023-08-15 State Farm Mutual Automobile Insurance Company Collision risk-based engagement and disengagement of autonomous control of a vehicle
US11656094B1 (en) 2016-04-11 2023-05-23 State Farm Mutual Automobile Insurance Company System for driver's education
US10585471B2 (en) 2017-10-03 2020-03-10 Disney Enterprises, Inc. Systems and methods to provide an interactive space based on predicted events
US10970560B2 (en) 2018-01-12 2021-04-06 Disney Enterprises, Inc. Systems and methods to trigger presentation of in-vehicle content
US10841632B2 (en) 2018-08-08 2020-11-17 Disney Enterprises, Inc. Sequential multiplayer storytelling in connected vehicles
CN113170295A (en) * 2018-10-17 2021-07-23 诺基亚技术有限公司 Virtual representation of unconnected vehicles in a vehicle-to-everything (V2X) system
CN112889097B (en) * 2018-10-17 2023-02-21 梅赛德斯-奔驰集团股份公司 Road crossing channel visualization method
CN112889097A (en) * 2018-10-17 2021-06-01 戴姆勒股份公司 Road crossing channel visualization method
EP3869470A4 (en) * 2018-11-26 2021-12-22 Huawei Technologies Co., Ltd. Vehicle early warning method and related apparatus
US11447148B2 (en) 2018-11-26 2022-09-20 Huawei Cloud Computing Technologies Co., Ltd. Vehicle warning method and related apparatus
CN111223331A (en) * 2018-11-26 2020-06-02 华为技术有限公司 Vehicle early warning method and related device
US10878698B2 (en) 2018-12-13 2020-12-29 Qualcomm Incorporated Interactive vehicular communication
WO2020123823A1 (en) * 2018-12-13 2020-06-18 Qualcomm Incorporated Interactive vehicular communication
US11749104B2 (en) 2018-12-19 2023-09-05 Samsung Electronics Co., Ltd. Electronic device and method for providing V2X service using same
EP3879509A4 (en) * 2018-12-19 2022-01-19 Samsung Electronics Co., Ltd. Electronic device and method for providing v2x service using same
WO2020149714A1 (en) * 2019-01-18 2020-07-23 엘지전자 주식회사 Cpm message division method using object state sorting
WO2020160748A1 (en) * 2019-02-04 2020-08-13 Nokia Technologies Oy Improving operation of wireless communication networks for detecting vulnerable road users
US11715376B2 (en) 2019-02-04 2023-08-01 Nokia Technologies Oy Improving operation of wireless communication networks for detecting vulnerable road users
CN113453956A (en) * 2019-02-28 2021-09-28 深圳市大疆创新科技有限公司 Apparatus and method for transmitting vehicle information
US20220255223A1 (en) * 2019-05-07 2022-08-11 Bao Tran Cellular system
US11658407B2 (en) * 2019-05-07 2023-05-23 Bao Tran Cellular system
US20230253705A1 (en) * 2019-05-07 2023-08-10 Bao Tran Cellular system
US11646492B2 (en) * 2019-05-07 2023-05-09 Bao Tran Cellular system
US20230335893A1 (en) * 2019-05-07 2023-10-19 Bao Tran Cellular system
WO2020254283A1 (en) * 2019-06-21 2020-12-24 Volkswagen Aktiengesellschaft Communication apparatus for non-autonomous motor vehicles
US10785621B1 (en) 2019-07-30 2020-09-22 Disney Enterprises, Inc. Systems and methods to provide an interactive space based on vehicle-to-vehicle communications
WO2021040352A1 (en) * 2019-08-23 2021-03-04 엘지전자 주식회사 Method by which device transmits and receives cpm in wireless communication system for supporting sidelink, and device therefor
US11532232B2 (en) * 2019-11-01 2022-12-20 Lg Electronics Inc. Vehicle having dangerous situation notification function and control method thereof
EP3826335A1 (en) * 2019-11-22 2021-05-26 Volkswagen AG Control unit, vehicle and method for adjustment of a parameter of a vehicle
WO2021141448A1 (en) * 2020-01-09 2021-07-15 엘지전자 주식회사 Method for transmitting, by apparatus, cpm in wireless communication system supporting sidelink, and apparatus therefor
US11605298B2 (en) 2020-01-29 2023-03-14 Toyota Motor Engineering & Manufacturing North America, Inc. Pedestrian navigation based on vehicular collaborative computing
US11485377B2 (en) 2020-02-06 2022-11-01 Toyota Motor Engineering & Manufacturing North America, Inc. Vehicular cooperative perception for identifying a connected vehicle to aid a pedestrian
US11062610B1 (en) 2020-03-06 2021-07-13 Toyota Motor Engineering & Manufacturing North America, Inc. Methods and systems for using parked vehicles to notify rideshare drivers of passenger pickup locations
US11076276B1 (en) 2020-03-13 2021-07-27 Disney Enterprises, Inc. Systems and methods to provide wireless communication between computing platforms and articles
CN111546981A (en) * 2020-05-08 2020-08-18 新石器慧通(北京)科技有限公司 Vehicle warning device and method and automatic driving vehicle
WO2021228405A1 (en) * 2020-05-15 2021-11-18 Toyota Motor Europe Road safety warning system for pedestrian
EP3933803A1 (en) * 2020-06-29 2022-01-05 Beijing Baidu Netcom Science And Technology Co. Ltd. Method, apparatus and electronic device for early-warning
US11645909B2 (en) 2020-06-29 2023-05-09 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus and electronic device for early-warning
CN111907520A (en) * 2020-07-31 2020-11-10 东软睿驰汽车技术(沈阳)有限公司 Pedestrian posture recognition method and device and unmanned automobile
WO2022074638A1 (en) * 2020-10-08 2022-04-14 Sony Group Corporation Vehicle control for user safety and experience
US11830347B2 (en) 2020-10-08 2023-11-28 Sony Group Corporation Vehicle control for user safety and experience
CN112562404B (en) * 2020-11-24 2022-02-11 中国联合网络通信集团有限公司 Vehicle early warning method and device, computer equipment and medium
CN112562404A (en) * 2020-11-24 2021-03-26 中国联合网络通信集团有限公司 Vehicle early warning method and device, computer equipment and medium
EP4020429A1 (en) * 2020-12-22 2022-06-29 Mitsubishi Electric R&D Centre Europe B.V. Vru parameter update
WO2022137611A1 (en) * 2020-12-22 2022-06-30 Mitsubishi Electric Corporation Method to relay vru application server for updating vru, and ue, vru application server, and ue crient
EP4270353A4 (en) * 2021-01-27 2024-01-03 NEC Corporation Vehicle-mounted device, processing method, and program
DE102021202955B4 (en) 2021-03-25 2023-02-23 Robert Bosch Gesellschaft mit beschränkter Haftung Procedure for averting danger within a traffic network system
DE102021202955A1 (en) 2021-03-25 2022-09-29 Robert Bosch Gesellschaft mit beschränkter Haftung Procedure for averting danger within a traffic network system
US11994333B2 (en) 2021-11-17 2024-05-28 Whirlpool Corporation Appliance fan assembly
WO2023241521A1 (en) * 2022-06-14 2023-12-21 虹软科技股份有限公司 Blind area monitoring system and method
US12027780B2 (en) * 2023-03-22 2024-07-02 Bao Tran Cellular system

Similar Documents

Publication Publication Date Title
WO2018128946A1 (en) Method for providing vulnerable road user warnings in a blind spot of a parked vehicle
CN110392336B (en) Method, system, and computer readable medium for providing collaborative awareness between connected vehicles
US10730512B2 (en) Method and system for collaborative sensing for updating dynamic map layers
US10019898B2 (en) Systems and methods to detect vehicle queue lengths of vehicles stopped at a traffic light signal
CN108399792B (en) Unmanned vehicle avoidance method and device and electronic equipment
CN111284487B (en) Lane line display method and electronic device for executing same
WO2021147637A1 (en) Lane recommendation method and apparatus, and vehicular communication device
US10832577B2 (en) Method and system for determining road users with potential for interaction
US11113969B2 (en) Data-to-camera (D2C) based filters for improved object detection in images based on vehicle-to-everything communication
CN111724616B (en) Method and device for acquiring and sharing data based on artificial intelligence
US20150304817A1 (en) Mobile communication device and communication control method
CN111292351A (en) Vehicle detection method and electronic device for executing same
CN102490673A (en) Vehicle active safety control system based on internet of vehicles and control method of vehicle active safety control system
CN109196557A (en) Image processing apparatus, image processing method and vehicle
US20140368330A1 (en) Mobile body communication device and travel assistance method
CN106448263B (en) Vehicle driving safety management system and method
CN105336216A (en) Unsignalized intersection anti-collision early warning method and terminal
CN111902321A (en) Driver assistance for a motor vehicle
CN110962744A (en) Vehicle blind area detection method and vehicle blind area detection system
WO2020214359A1 (en) Real-world traffic model
US20230245564A1 (en) System and Method for Intersection Collision Avoidance
CN114041176A (en) Security performance evaluation device, security performance evaluation method, information processing device, and information processing method
CN113516861A (en) Collaborative safety driving model for autonomous vehicles
JP5104372B2 (en) Inter-vehicle communication system, inter-vehicle communication device
CN114450211A (en) Traffic control system, traffic control method, and control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17835921

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17835921

Country of ref document: EP

Kind code of ref document: A1