WO2021076734A1 - Method for aligning camera and sensor data for augmented reality data visualization - Google Patents

Method for aligning camera and sensor data for augmented reality data visualization Download PDF

Info

Publication number
WO2021076734A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
data
target area
vehicle
processor
Prior art date
Application number
PCT/US2020/055743
Other languages
French (fr)
Inventor
Jesse Aaron Hacker
Original Assignee
Continental Automotive Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Automotive Systems, Inc. filed Critical Continental Automotive Systems, Inc.
Publication of WO2021076734A1 publication Critical patent/WO2021076734A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method (500) for displaying an augmented image (34) using an augmented image system (10) includes recording data with at least one sensor (22) at a target area (20). A positioning system (23) is associated with the at least one sensor (22). Data is transmitted to the augmented image system (10) in a vehicle (14, 14a, 14b) proximate to the target area (20). A processor (16, 18) analyzes the data to determine a location of an object (15) proximate to the target area (20). The method also includes utilizing the data to define a bounded area (36) within the augmented image (34) and augmenting the image (34) by bounding the defined bounded area (36), where the coordinates of the bounded area (36) correspond to a location of the object (15). The augmented image (34) is displayed to the vehicle operator when the object (15) is in the target area (20).

Description

METHOD FOR ALIGNING CAMERA AND SENSOR DATA FOR AUGMENTED
REALITY DATA VISUALIZATION
FIELD OF THE INVENTION
[0001] The invention relates generally to a system for warning a driver of a vehicle, and more particularly to warning a driver that there may be a potential danger hidden from the driver’s line of sight.
BACKGROUND OF THE INVENTION
[0002] Signalized and unsignalized intersections and cross-walks for pedestrians present some of the most dangerous areas where accidents may occur, such as an automobile hitting a pedestrian. Additionally, pedestrians are also distracted by cell phones, tablet computers, billboards, other pedestrians, and the like, which may limit the ability of the pedestrian to be fully aware of any dangers resulting from vehicles that may be driving unsafely. Further, the driver of a vehicle may not be able to see around other vehicles or buildings to oncoming traffic or traffic about to turn a corner.
[0003] Currently, there are many types of systems in place, which are part of a vehicle, to make a driver of the vehicle aware of potential dangers with regard to collisions with pedestrians, other vehicles, and other objects along the side of a road. Some crosswalks also have systems in place which provide blinking lights to alert drivers of approaching vehicles that at least one vulnerable road user is crossing the crosswalk. However, these systems can only alert the driver to objects or potential collisions that can be directly sensed by the vehicle sensors.
[0004] Accordingly, there exists a need for a warning system, which may be part of the infrastructure of an urban environment, to alert the driver of a vehicle to potential dangers not visible by the driver and/or not sensed by the vehicle.
[0005] Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
SUMMARY
[0006] One general aspect includes a method for displaying an augmented image using an augmented image system. The method also includes recording data with at least one sensor at a target area where a positioning system is associated with the at least one sensor. The method also includes transmitting the data to the augmented image system in a vehicle proximate to the target area. The method also includes analyzing the data with a processor to determine a location of an object proximate to the target area. The method also includes utilizing the sensor position data, the object data, and a location of the augmented image to define a bounded area within the augmented image system. The method also includes augmenting an image by bounding the defined bounded area, where coordinates of the bounded area correspond to a location of the object in the target area. The method also includes displaying the augmented image on a display in view of a vehicle operator when the object is in the target area.
[0007] Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0008] Implementations may include one or more of the following features. The method where the at least one sensor is moving and is proximate to the target area.
[0009] The at least one sensor has a digital GPS.
[0010] The first communication device and the second communication device are dedicated short range communication devices.
[0011] The method may include detecting obstacles in the image between the vehicle and the object.
[0012] The bounded portion is a first color if the object is visible and a second color if the object is obstructed.
[0013] The at least one sensor may be one selected from the group consisting of long-range radar, short-range radar, lidar, ladar, camera, ultrasound, and sonar.
[0014] The processor is configured to: determine at least one of a speed, an acceleration, and a heading for each object based on data from the at least one sensor; and estimate a trajectory for each object based on at least one of the speed, acceleration, and heading for each object.
[0015] The method where the display is one of: a screen, a touch screen, a heads-up display, a helmet visor, and a windshield.
[0016] The processor for analyzing the data is part of the vehicle.
[0017] One general aspect includes an augmented visualization system in a vehicle proximate to a target area configured to receive data from at least one sensor at the target area. The system also includes a processor for the vehicle configured with instructions for: receiving data from the at least one sensor including object data and position data; analyzing the object data with a processor to determine a location of an object proximate to the target area to define a bounding area within the augmented image system; augmenting an image by bounding the defined bounding area, where coordinates of the bounded portion correspond to a location of the object in the target area; and displaying the augmented image on a display in view of a vehicle operator when the object is in the target area.
[0018] Implementations may include one or more of the following features. The system where the at least one sensor has a digital GPS.
[0019] The processor is further configured with instructions for detecting obstacles in the image between the vehicle and the object.
[0020] The bounded portion is a first color if the object is visible and a second color if the object is obstructed.
[0021] The at least one sensor is one selected from the group consisting of long-range radar, short-range radar, lidar, ladar, camera, ultrasound, and sonar.
[0022] The processor is further configured with instructions to: determine at least one of a speed, an acceleration, and a heading for each object based on data from the at least one sensor; and estimate a trajectory for each object based on at least one of the speed, acceleration, and heading for each object.
[0023] The display is one of: a screen, a touch screen, a heads-up display, a helmet visor, and a windshield.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:
[0025] FIG. 1 is a perspective view of a traffic target area having a warning system being part of an infrastructure component, according to embodiments of the present invention;
[0026] FIG 2A is a schematic illustration of a vehicle having a first embodiment of an augmented visualization system, according to embodiments of the present invention;
[0027] FIG 2B is a schematic illustration of an exemplary display screen of the first embodiment of an augmented visualization system, according to embodiments of the present invention;
[0028] FIG 2C is a perspective view of an object detected by a non-stationary remote sensor and displayed by the augmented visualization system, according to embodiments of the present invention;
[0029] FIG 3A is a schematic illustration of a second embodiment of an augmented visualization system, illustrating an object in a first position detected by a non-stationary remote sensor, according to embodiments of the present invention;
[0030] FIG 3B is a schematic illustration of a second embodiment of an augmented visualization system, illustrating an object in a second position detected by a non-stationary remote sensor, according to embodiments of the present invention;
[0031] FIG 4A is a schematic illustration of a second embodiment of an augmented visualization system, illustrating the augmented display of the object in the first position based upon FIG 3A, according to embodiments of the present invention;
[0032] FIG 4B is a schematic illustration of a second embodiment of an augmented visualization system, illustrating the augmented display of the object in the second position based upon FIG 3B, according to embodiments of the present invention; and
[0033] FIG 5 is a flow diagram of an exemplary arrangement of operations of a method for displaying an augmented image using an augmented image system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0034] The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. Like reference symbols in the various drawings indicate like elements.
[0035] In one embodiment, an augmented visualization system 10 provides a display 12 of information from a monitoring system(s) 11 which may be augmented to provide select safety information. The monitoring system 11 may provide intelligent intersections or other target areas 20 which may be enabled with a communication device 24, such as a dedicated short range communication (DSRC) device. The monitoring system 11 may detect objects 15, including vehicles and vulnerable road users, proximate to the target area 20 and broadcast information about them as a basic safety message (BSM) to another communication device 26. The first communication device 24 may be part of the monitoring system 11 or may be part of another vehicle or smart device proximate to the target area 20. The information broadcast by the first communication device 24 may be received by a second communication device 26, possibly another DSRC device, in communication enabled vehicles 14a, allowing the communication enabled vehicles 14a to warn their drivers of various situations which may be potentially dangerous.
[0036] The augmented visualization system 10 analyzes and provides a modified display of information from the monitoring system 11 to augment safety information that is shown in the display 12. In one embodiment, an augmented vehicle 14b uses the augmented visualization system 10 to visually alert a driver to a potential danger. The augmented visualization system 10 can be displayed on a display screen within the vehicle, on a heads-up display, or otherwise overlaid on a vehicle windshield.
[0037] Since emergency brake assist (EBA) systems have sensors that may accurately determine the location, speed, and direction of objects (pedestrians, cyclists, etc.), and may be equipped with V2X technologies and communicate with smart city infrastructures, key information may be shared to allow for localized warnings. Additionally, this information may be used by the augmented visualization system 10 to determine when to alert a driver to a potential danger.
[0038] Figure 1 illustrates a visualization system 10 with a first embodiment of a monitoring system 11. The monitoring system 11 is associated with a target area 20 having at least one sensor 22 and at least one first communication device 24. The at least one sensor 22 is a non-stationary sensor 22. That is, the sensor is moving within and/or proximate to the target area 20 and is able to sense objects 15 within the target area. The sensor 22 may be equipped with a positioning device, such as a digital GPS, to accurately define the location of the sensor 22 relative to the monitored area 20 and, more particularly, to orient the location of the sensor relative to the vehicle having the augmented visualization system 10. The sensor may, for example, be attached to another vehicle driving in and/or proximate to the target area 20, be mounted to a drone flying in and/or proximate to the target area 20, be mounted to a bicycle, be part of a pedestrian’s personal device, etc.
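For illustration only (not part of the claimed subject matter): a minimal sketch of how a receiving system might convert the sensor’s GPS fix into local offsets from the vehicle’s own fix, assuming an equirectangular approximation that is adequate over the short ranges involved. All names and coordinate values are hypothetical.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius; sufficient at intersection scale

def gps_to_local_enu(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Return (east, north) offsets in meters of a GPS fix from a reference fix,
    using an equirectangular approximation (sub-meter error over a few hundred m)."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    east = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ref_lat_deg))
    north = EARTH_RADIUS_M * d_lat
    return east, north

# Hypothetical fixes: a drone-mounted sensor 22 located relative to vehicle 14b.
east, north = gps_to_local_enu(40.00090, -83.00120, 40.00000, -83.00000)
print(f"sensor 22 is {east:+.1f} m east, {north:+.1f} m north of the vehicle")
```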
[0039] In this embodiment, the communication device 24 is enabled with dedicated short range communication (DSRC) to share information sensed by the at least one sensor 22 by broadcasting it to vehicles 14, 14a, 14b proximate to the target area, or to other devices capable of receiving such a communication, such as a smart phone.
[0040] In this embodiment, proximate may be interpreted according to known dictionary definitions or other definitions known by those skilled in the art, as within a distance to receive the communication from the first communication device 24, or as within a physical distance of the target area 20 predetermined for the monitoring system 11.
[0041] In this embodiment, the sensor 22 and communication device 24 are integrated into a single component; alternatively, the sensor 22 and communication device 24 may be separate components in different locations, or multiple types of sensors 22 may be linked to one communication device 24. The sensor 22 in this embodiment is able to detect objects in a detection area, shown generally at 20. In one embodiment, the sensor 22 is a long-range radar sensor 22, but it is within the scope of the invention that other types of sensors may be used, such as, but not limited to, long-range radar, short-range radar, LIDAR (Light Imaging, Detection, and Ranging), LADAR (Laser Imaging, Detection, and Ranging), other types of radar, a camera, ultrasound, or sonar. The sensor 22 is equipped with a positioning system 23.
[0042] In the Figures, the sensor 22 is able to detect the location, as well as the speed and direction, of each object 15, including the location, speed, and direction of vehicles and pedestrians 15. While the example shown in Figure 1 includes one object/pedestrian 15, which is walking, it is within the scope of the invention that the sensor 22 is able to detect whether each object is walking or traveling by bicycle, scooter, skateboard, rollerblades, or the like, and the sensor may be able to detect many more objects and vehicles 14.
[0043] Once the sensor 22 detects the location, speed, and direction of each vehicle 14 and the location, speed, and direction of each object 15, the first communication device 24 broadcasts the information to any communication enabled objects/vehicles 15a having a second communication device 26, such as a common DSRC device, or otherwise able to receive the information.
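For illustration only: a simplified, hypothetical stand-in for the kind of per-object payload such a broadcast could carry. The field names are illustrative and do not follow the actual SAE J2735 BSM wire format, which uses ASN.1 encoding rather than JSON.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ObjectReport:
    """Hypothetical per-object payload for a BSM-style broadcast."""
    object_id: int
    object_type: str      # e.g. "pedestrian", "bicycle", "car"
    latitude: float       # degrees, from positioning system 23
    longitude: float      # degrees
    speed_mps: float      # meters per second
    heading_deg: float    # 0 = north, clockwise
    timestamp_ms: int     # epoch milliseconds

def encode_report(report: ObjectReport) -> bytes:
    # JSON keeps the sketch readable; a real DSRC stack would not use JSON.
    return json.dumps(asdict(report)).encode("utf-8")

report = ObjectReport(15, "pedestrian", 40.00012, -83.00034, 1.4, 90.0, 1602770400000)
print(len(encode_report(report)), "bytes handed to communication device 24")
```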
[0044] The augmented visualization system 10 also includes a visualization system processor 18. The visualization system processor 18 may include at least one of a microprocessor, a microcontroller, an application specific integrated circuit (“ASIC”), a digital signal processor, etc., as is readily appreciated by those skilled in the art. The visualization system processor 18 is capable of performing calculations, executing instructions (i.e., running a program), and otherwise manipulating data, as is also appreciated by those skilled in the art. The monitoring system 11 also has a sensor system processor 16.
[0045] The sensor system processor 16 is in communication with the at least one sensor 22. As such, the sensor system processor 16 may receive data from the various sensors 22. The sensor system processor 16 is configured to determine various characteristics of the object 15 based on the data provided by the sensors 22. These characteristics include, but are not limited to, type of object 15 (e.g., motorcycle, truck, pedestrian, car, etc.), size of each object 15, position of each object 15, weight of each object 15, travel speed of each object 15, acceleration of each object 15, and heading for each object 15.
[0046] The sensor system processor 16 is also configured to estimate the trajectory for each object 15. This estimation is calculated based on at least one of the speed, acceleration, and heading for each object 15. That is, the sensor system processor 16 is configured to estimate potential future locations of the object 15 based on current and past location, speed, and/or acceleration. The communication device 24 associated with the sensor 22 and sensor system processor 16 then broadcasts the information to the area proximate to the target area 20, to all vehicles having a second DSRC/communication device 26 or another communication device able to receive the information.
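For illustration only: one plausible reading of this trajectory estimation, sketched as a constant-acceleration, constant-heading motion model. The ObjectTrack fields and sample values are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class ObjectTrack:
    x: float        # meters east of a local origin
    y: float        # meters north
    speed: float    # m/s along the heading
    heading: float  # degrees, 0 = north, clockwise
    accel: float    # m/s^2 along the heading

def predict_positions(track, horizon_s=3.0, step_s=0.5):
    """Sample future (x, y) positions under a constant-acceleration,
    constant-heading model."""
    hdg = math.radians(track.heading)
    points, t = [], step_s
    while t <= horizon_s + 1e-9:
        dist = track.speed * t + 0.5 * track.accel * t * t
        points.append((track.x + dist * math.sin(hdg),
                       track.y + dist * math.cos(hdg)))
        t += step_s
    return points

pedestrian = ObjectTrack(x=2.0, y=-5.0, speed=1.4, heading=270.0, accel=0.0)
print(predict_positions(pedestrian))  # drifting west across the crosswalk
```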
[0047] The sensor system processor 16 and/or visualization system processor 18 are configured to predict a possibility that the object 15 is not seen by the driver of the vehicle 14a, 14b and, thus, that there is a potential danger of collision or accident present. This probability is based, at least in part, on the estimated trajectory for each object 15 that was received from the monitoring system 11. The probability may be a number corresponding to a likelihood of collision based on various factors including the potential future locations of the object 15.
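For illustration only: a crude sketch of how such a probability could be scored from predicted trajectories, here using the closest predicted approach between the vehicle and the object. The danger radius and scoring function are hypothetical.

```python
import math

def collision_risk(traj_vehicle, traj_object, danger_radius_m=2.0):
    """Map the closest approach between two equal-length trajectories
    (lists of (x, y) points at matching time steps) to a 0..1 risk score."""
    closest = min(math.dist(pv, po) for pv, po in zip(traj_vehicle, traj_object))
    return 1.0 if closest <= danger_radius_m else min(1.0, danger_radius_m / closest)

steps = (0.5, 1.0, 1.5, 2.0)
vehicle_path = [(0.0, 8.0 * t) for t in steps]            # 8 m/s northbound
pedestrian_path = [(2.0 - 1.4 * t, 12.0) for t in steps]  # crossing from the east
print(f"risk = {collision_risk(vehicle_path, pedestrian_path):.2f}")
```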
[0048] Additionally, the visualization system 10 combines the data from the sensor 22 with information from the vehicle systems, position, cameras, etc., to create a display which orients the object 15 identified by the sensor 22 to the viewpoint of the driver and/or vehicle and/or display.
[0049] Referring to FIGS 3A and 3B, an object 15 identified by the sensor 22 is shown through the processor 16, 18, which is processing sensor data, in a first sensed position 13A and a second sensed position 34B. This is combined with the camera image and data to provide the display 12, which augments the camera image to draw focus toward the object 15 shown in the first position 13A and the second position 34B. Although shown at only two locations in the process, the system 10 would continuously and/or repeatedly monitor and update the sensor data and the integration of the data to provide the augmented display 12. Thus, in the embodiment shown, the augmented display would track the path of the pedestrian as they move from one side of the display toward the other.
[0050] Aligning sensor measurements with real-world images is a helpful method of validating measurements, especially when the real-world images are not involved in the measurement process but exist simply as ground truth. By creating a visualization with a full 3D space and tying the “camera” of the visualization to high resolution GPS data for the position and heading of the ground truth camera, the alignment of the images can be automated, or the work at least severely reduced, in such a way that visualization images can be generated in real time.
[0051] The resulting real-time imagery can be used to validate measurements, sensor alignments, or even algorithm results that can be visualized alongside the data (such as whether or not an object occupies a specified zone).
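For illustration only: a minimal sketch of tying a visualization camera to a GPS-derived position and heading, assuming a level pinhole camera (a full solution would also account for pitch and roll). NumPy is assumed; the intrinsics and coordinates are hypothetical.

```python
import numpy as np

def world_to_camera(cam_e, cam_n, cam_up, heading_deg):
    """4x4 transform from world ENU coordinates to a level camera looking
    along the given compass heading (camera axes: x right, y down, z forward)."""
    h = np.radians(heading_deg)
    right   = np.array([np.cos(h), -np.sin(h), 0.0])
    down    = np.array([0.0, 0.0, -1.0])
    forward = np.array([np.sin(h), np.cos(h), 0.0])
    R = np.stack([right, down, forward])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ np.array([cam_e, cam_n, cam_up])
    return T

def project(T, K, world_pt):
    """Pinhole projection of a world (E, N, U) point into pixel coordinates."""
    x, y, z = (T @ np.append(np.asarray(world_pt, float), 1.0))[:3]
    return K[0, 0] * x / z + K[0, 2], K[1, 1] * y / z + K[1, 2]

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
T = world_to_camera(cam_e=0.0, cam_n=0.0, cam_up=1.5, heading_deg=0.0)
print(project(T, K, (0.0, 20.0, 1.5)))  # 20 m due north -> image center (640, 360)
```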
[0052] High resolution GPS sensors attached to non-fixed cameras, such as drones, can be used to create unique viewpoints for validating infrastructure sensors. The visualization component, if properly programmed, could serve as a stand-alone tool even without the overlay process. The GPS alignment of the viewport within the visualization could allow a simulated environment to replace the real-world images for the generation of promotional materials, among other things.
[0053] The sensor system processor 16 may have access to information regarding traffic signals (not shown) at the target area 20. The communications may be achieved, for example, by vehicle-to-vehicle communication (“V2V”) techniques and/or vehicle-to-X (“V2X”) techniques. In one embodiment, the sensor system processor 16 may be in communication with a signal controller (not shown) to determine the state of the various traffic signals (e.g., “green light north and southbound, red light east and westbound”, etc.). In another embodiment, the sensor system processor 16 may determine the state of the traffic signals based on data provided by the sensors 22. This information can be included in the broadcast from the DSRC 24 to vehicles 14a in the vicinity of the target area 20. The vehicle processor 18 may then utilize the information regarding traffic signals in predicting the probability of a collision between objects 15, in particular between the vehicle 14a and other objects 15.
[0054] The images and other data from the monitoring system 11 are sent from the DSRC 24 to the vehicle/second DSRC 26. The sensor system processor 16 and/or visualization system processor 18 uses the data to determine that there is at least one object 15 that possibly cannot be seen, or that can be seen but is a potential danger to which the driver’s attention should be directed. The vehicle 14b has a user interface 30 for the augmented visualization system 10, including at least one type of display 12. The augmented visualization system 10 displays an image 34 on the display 12. The user interface 30 and display 12 may include a screen, a touch screen, a heads-up display, a helmet visor, a phone display, a windshield, etc. The image 34 may be one captured by an on-vehicle camera 28, as shown in Fig 2B, or may be from a camera 22 that is acting as a sensor for the monitoring system, as shown in Fig 2C.
[0055] The augmented visualization system 10 provides a graphic overlay/bounded area 36 to highlight and direct the driver’s attention to the location of the detected object 15, such as a bounded area 36 of the image 34 around the portion which corresponds to the obstructed object 15. The positioning system 23 information is used to align the overlay/bounded area 36 on the image 34 with the data provided by the sensor 22.
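For illustration only: a sketch of drawing the bounded area 36 onto a camera frame, assuming OpenCV and NumPy are available. The frame, box coordinates, and colors are hypothetical; the green/red choice anticipates the visible/obstructed scheme described in paragraph [0060] below.

```python
import cv2
import numpy as np

def draw_bounded_area(image, box, visible):
    """Draw the bounded area 36: one color for a directly visible object,
    another for an obstructed one (BGR colors, purely illustrative)."""
    x, y, w, h = box
    color = (0, 255, 0) if visible else (0, 0, 255)  # green vs. red
    cv2.rectangle(image, (x, y), (x + w, y + h), color, thickness=3)
    return image

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for camera image 34
# Box placed where the projected object coordinates landed (see earlier sketch).
draw_bounded_area(frame, (600, 280, 80, 160), visible=False)
cv2.imwrite("augmented_frame.png", frame)
```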
[0056] In this manner, the driver of the augmented vehicle 14b is alerted to a potential danger and can take action to minimize the risk of collision or accident. The object 15 presenting the potential danger can be a pedestrian about to use the crosswalk, as shown in the Figures, or another type of potential danger. For example, other situations may be approaching or turning vehicles that are blocked from view by other vehicles or buildings, etc. One skilled in the art would be able to determine possible situations when a driver may be unable to view, or may have difficulty viewing, objects that may be sensed by sensors 22 that are remote from the vehicle, but in the area of a target area 20.
[0057] Additional information may be provided in the form of text or color-coded graphics to display the state of objects 15. Alternatively, there are various concept HUDs 12 integrated into vehicles 14b that would show a similar visualization. The augmented visualization system 10 could also be implemented into a bicyclist helmet or motorcycle helmet, as well as in smart glass windscreens 14b, as shown in Figs 4A and 4B, where the overlay of the bounded area 36 is added on the windscreen 12 through which the driver of vehicle 14b is looking.
[0058] The augmented visualization system 10 allows for far greater spatial perception by the driver and awareness of the data. The augmented visualization system 10 scales with the real-life view and leaves far less about a driving scenario open to interpretation. Referring to Figure 5, a method 500 for displaying an augmented image using an augmented image system 10 includes: recording data with at least one sensor 22 at a target area 20, where a positioning system 23 is associated with the at least one sensor 22. The method also includes transmitting the data to the augmented image system 10 in a vehicle 14, 14a, 14b proximate to the target area 20. The method also includes analyzing the data with one of the processors 16, 18 to determine a location of an object 15 proximate to the target area 20. The method also includes utilizing the sensor position data, the object data, and a location of the augmented image to define a bounding area 36 within the augmented image system 10. The method also includes augmenting an image 34 by bounding the defined bounding area 36, where coordinates of the bounded area 36 correspond to a location of the object 15 in the target area 20. The method also includes displaying the augmented image 34 on a display 12 in view of a vehicle 14, 14a, 14b operator when the object 15 is in the target area 20.
[0059] Implementations may include one or more of the following features.
The method 500 where the at least one sensor 22 is moving and is proximate to the target area 20. The method 500 where the at least one sensor 22 has a digital GPS. The method 500 further including transmitting the data via a first communication device 24 to a second communication device 26 in the vehicle, where the first communication device 24 and the second communication device 26 are dedicated short range communication devices. The method 500 further including detecting obstacles in the image 34 between the vehicle 14, 14a, 14b and the object 15. The method 500 where the bounded area 36 is a first color if the object 15 is visible and a second color if the object 15 is obstructed. The method 500 where the at least one sensor 22 is one selected from the group consisting of long-range radar, short-range radar, lidar, ladar, camera 22, ultrasound, and sonar. The method 500 where the processor 16, 18 is configured to determine at least one of a speed, an acceleration, and a heading for each object 15 based on data from the at least one sensor 22. The method may also include estimating a trajectory for each object 15 based on at least one of the speed, acceleration, and heading for each object 15. The method 500 where the display 12 is one of: a screen, a touch screen, a heads-up display 12, a helmet visor, and a windshield. The method 500 where the processor 16, 18 for analyzing the data is part of the vehicle 14, 14a, 14b.
[0060] Additionally, the sensor system processor 16 and/or visualization system processor 18 identifies seen objects 15 in the image 34 which are identified as areas of obstructed view. The sensor system processor 16 and/or visualization system processor 18 can then identify objects 15 that are behind other objects 15 based on the data from the sensors 22. In this embodiment, a truck is parked in the road. The bounded area 36 obstructed by the obstacle is shown in shading for illustrative purposes in Figures 2B and 2C, but would not be displayed on the display 12. Additionally, the sensor system processor 16 determines whether the object 15 can be seen, illustrated to the driver by a first bounding color 36a, e.g. green, as shown in Fig 2C. If the object 15 is obstructed, it may be illustrated to the driver by a second bounding color 36b, e.g. red, as shown in Fig 2B. Alternatively, or in addition, a different pattern of bounding can be displayed, as also shown, e.g. cross-hatching vs. solid highlighting.
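For illustration only: a sketch of one way the visible/obstructed decision could be made on the ground plane, testing whether the straight line of sight from the vehicle to the object crosses a known obstacle footprint (a standard slab test). The footprints and positions are hypothetical.

```python
def segment_hits_box(p0, p1, box):
    """True if the 2-D segment p0->p1 crosses the axis-aligned box
    (xmin, ymin, xmax, ymax) -- a slab intersection test."""
    (x0, y0), (x1, y1) = p0, p1
    t_lo, t_hi = 0.0, 1.0
    for origin, delta, lo, hi in ((x0, x1 - x0, box[0], box[2]),
                                  (y0, y1 - y0, box[1], box[3])):
        if abs(delta) < 1e-12:
            if not lo <= origin <= hi:
                return False
            continue
        ta, tb = sorted(((lo - origin) / delta, (hi - origin) / delta))
        t_lo, t_hi = max(t_lo, ta), min(t_hi, tb)
        if t_lo > t_hi:
            return False
    return True

def bounding_style(vehicle_xy, object_xy, obstacle_boxes):
    """Pick the first or second bounding color: green when the line of sight
    is clear, red when any obstacle footprint blocks it."""
    blocked = any(segment_hits_box(vehicle_xy, object_xy, b) for b in obstacle_boxes)
    return "red (obstructed)" if blocked else "green (visible)"

parked_truck = (-2.0, 8.0, 2.0, 14.0)  # hypothetical footprint, meters
print(bounding_style((0.0, 0.0), (0.0, 20.0), [parked_truck]))  # red (obstructed)
```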
[0061] While this information was explained by example with one vehicle 14b, one target area 20, and the monitoring system 11, any vehicles 14, 14a, 14b in the proximity of the target area 20 with the ability to receive and implement the method could benefit in the same manner.
[0062] For the purposes of this application, proximate shall mean within the target area 20, within a predefined distance from the target area 20, and/or within communication range of any devices associated with the target area 20 or otherwise located within the target area 20. One skilled in the art would be able to select a predefined distance from the target area 20 to which the monitoring system 11 shall apply.
[0063] Additionally, while the location and trajectory information is disclosed as being processed by the monitoring system 11, and the potential danger probability and image processing are described as completed by the sensor system processor 16 and/or visualization system processor 18, other processors may perform the described method in its entirety or in a different combination of processing than illustrated in the example. One skilled in the art would be able to determine which assigned steps should be completed by which processor 16, 18.
[0064] The description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the invention. Such variations are not to be regarded as a departure from the spirit and scope of the invention.

Claims

CLAIMS
What is claimed is:
1. A method (500) for displaying an augmented image (34) using an augmented image system (10), comprising:
recording data with at least one sensor (22) at a target area (20), wherein a positioning system (23) is associated with the at least one sensor (22);
transmitting the data to the augmented image system (10) in a vehicle (14, 14a, 14b) proximate to the target area (20);
analyzing the data with a processor (16, 18) to determine a location of an object (15) proximate to the target area (20);
utilizing the sensor position (13A) data, the object (15) data and a location of the augmented image (34) to define a bounded area (36) within the augmented image system (10);
augmenting an image (34) by bounding the defined bounded area (36), wherein coordinates of the bounded area (36) correspond to a location of the object (15) in the target area (20); and
displaying the augmented image (34) on a display (12) in view of a vehicle (14, 14a, 14b) operator when the object (15) is in the target area (20).
2. The method (500) of claim 1, wherein the at least one sensor (22) is moving and is proximate to the target area (20).
3. The method (500) of claim 2, wherein the at least one sensor (22) has a digital GPS.
4. The method (500) of claim 1, further comprising transmitting the data via a first communication device (24) to a second communication device (26) in the vehicle (14, 14a, 14b), wherein the first communication device (24) and the second communication device (26) are dedicated short range communication devices.
5. The method (500) of claim 1, further comprising detecting obstacles in the image (34) between the vehicle (14, 14a, 14b) and the object (15).
6. The method (500) of claim 5, wherein the bounded portion is a first color if the object (15) is visible and a second color if the object (15) is obstructed.
7. The method (500) of claim 1, the at least one sensor (22) being one selected from the group consisting of long-range radar, short-range radar, LIDAR, LADAR, camera, ultrasound, and sonar.
8. The method (500) of claim 1, wherein the processor (16, 18) is configured to: determine at least one of a speed, an acceleration, and a heading for each object (15) based on data from the at least one sensor (22); and estimate a trajectory for each object (15) based on at least one of the speed, acceleration, and heading for each object (15).
9. The method (500) of claim 1, wherein the display (12) is one of: a screen, a touch screen, a heads-up display, a helmet visor, and a windshield.
10. The method (500) of claim 1, wherein the processor (16, 18) for analyzing the data is part of the vehicle (14, 14a, 14b).
11. An augmented visualization system (10) for a vehicle (14, 14a, 14b) comprising:
a communication device (24, 26) in a vehicle (14, 14a, 14b) proximate to a target area (20) configured to receive data from at least one sensor (22) at the target area (20);
a processor (16, 18) for the vehicle (14, 14a, 14b) configured with instructions for:
receiving data from the at least one sensor (22) including object data and position data;
analyzing the object data with a processor (16, 18) to determine a location of an object (15) proximate to the target area (20);
utilizing sensor position data, the object data and a location of the augmented image (34) to define a bounding area (36) within the augmented image system (10);
augmenting an image (34) by bounding the defined bounding area (36), wherein coordinates of the bounded portion (36) correspond to a location of the object (15) in the target area (20); and
displaying the augmented image (34) on a display (12) in view of a vehicle (14, 14a, 14b) operator when the object (15) is in the target area (20).
12. The system of claim 11, wherein the at least one sensor (22) has a digital GPS.
13. The system of claim 11, wherein the processor (16, 18) is further configured with instructions for detecting obstacles in the image (34) between the vehicle (14, 14a, 14b) and the object (15).
14. The system of claim 11, wherein the bounded portion is a first color if the object (15) is visible and a second color if the object (15) is obstructed.
15. The system of claim 11, wherein the at least one sensor (22) is one selected from the group consisting of long-range radar, short-range radar, LIDAR (Light Imaging, Detection, and Ranging), LADAR (Laser Imaging, Detection, and Ranging), camera (22), ultrasound, and sonar.
16. The system of claim 11, wherein the processor (16, 18) is further configured with instructions to: determine at least one of a speed, an acceleration, and a heading for each object (15) based on data from the at least one sensor (22); and estimate a trajectory for each object (15) based on at least one of the speed, acceleration, and heading for each object (15).
17. The system of claim 11, wherein the display (12) is one of: a screen, a touch screen, a heads-up display (12), a helmet visor, and a windshield.
PCT/US2020/055743 2019-10-15 2020-10-15 Method for aligning camera and sensor data for augmented reality data visualization WO2021076734A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962915406P 2019-10-15 2019-10-15
US62/915,406 2019-10-15

Publications (1)

Publication Number Publication Date
WO2021076734A1 true WO2021076734A1 (en) 2021-04-22

Family

ID=73449163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/055743 WO2021076734A1 (en) 2019-10-15 2020-10-15 Method for aligning camera and sensor data for augmented reality data visualization

Country Status (1)

Country Link
WO (1) WO2021076734A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190052842A1 * 2017-08-14 2019-02-14 GM Global Technology Operations LLC System and Method for Improved Obstacle Awareness in Using a V2x Communications System
WO2019060891A1 (en) * 2017-09-25 2019-03-28 Continental Automotive Systems, Inc. Augmented reality dsrc data visualization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KUUTTI SAMPO ET AL: "A Survey of the State-of-the-Art Localization Techniques and Their Potentials for Autonomous Vehicle Applications", IEEE INTERNET OF THINGS JOURNAL, IEEE, USA, vol. 5, no. 2, 1 April 2018 (2018-04-01), pages 829 - 846, XP011680881, DOI: 10.1109/JIOT.2018.2812300 *

Similar Documents

Publication Publication Date Title
US20190244515A1 (en) Augmented reality dsrc data visualization
JP6635428B2 (en) Car peripheral information display system
US9965957B2 (en) Driving support apparatus and driving support method
US10293690B2 (en) Vehicle information projecting system and vehicle information projecting method
US11827274B2 (en) Turn path visualization to improve spatial and situational awareness in turn maneuvers
US9514650B2 (en) System and method for warning a driver of pedestrians and other obstacles when turning
US20160191840A1 (en) User interface method for terminal for vehicle and apparatus thereof
US9505346B1 (en) System and method for warning a driver of pedestrians and other obstacles
CN109733283B (en) AR-based shielded barrier recognition early warning system and recognition early warning method
JP2016095697A (en) Attention evocation apparatus
CN112771592B (en) Method for warning a driver of a motor vehicle, control device and motor vehicle
JP2007323556A (en) Vehicle periphery information notifying device
JP2008293099A (en) Driving support device for vehicle
US20150022426A1 (en) System and method for warning a driver of a potential rear end collision
US10488658B2 (en) Dynamic information system capable of providing reference information according to driving scenarios in real time
JP7006235B2 (en) Display control device, display control method and vehicle
JP2010146459A (en) Driving support device
JP2009154775A (en) Attention awakening device
CN116935695A (en) Collision warning system for a motor vehicle with an augmented reality head-up display
JP5354193B2 (en) Vehicle driving support device
US10864856B2 (en) Mobile body surroundings display method and mobile body surroundings display apparatus
WO2021076734A1 (en) Method for aligning camera and sensor data for augmented reality data visualization
JP5200990B2 (en) Driving assistance device
EP3857530A1 (en) Augmented reality dsrc data visualization
CN113396314A (en) Head-up display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 20807546
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 20807546
Country of ref document: EP
Kind code of ref document: A1