CN101876751A - Car to car communication device on full-windscreen head-up display - Google Patents

Car to car communication device on full-windscreen head-up display

Info

Publication number
CN101876751A
CN101876751A (application CN2010101962759A / CN201010196275A)
Authority
CN
China
Prior art keywords
vehicle, display, information, vehicles, determining
Prior art date
Legal status
Granted
Application number
CN2010101962759A
Other languages
Chinese (zh)
Other versions
CN101876751B (en)
Inventor
T. A. Seder
J. F. Szczerba
D. Cui
Current Assignee
GM Global Technology Operations LLC
Original Assignee
GM Global Technology Operations LLC
Priority date
Filing date
Publication date
Priority claimed from US12/417,077 (US8629903B2)
Application filed by GM Global Technology Operations LLC filed Critical GM Global Technology Operations LLC
Publication of CN101876751A
Application granted
Publication of CN101876751B
Status: Active
Anticipated expiration

Classifications

    • G02B27/01 Head-up displays
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G02B5/20 Filters
    • G01C21/365 Guidance using head up displays or projectors, e.g. virtual vehicles or arrows projected on the windscreen or on the road itself
    • G01S13/723 Radar-tracking systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar, by using numerical data
    • G01S13/87 Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S2013/9316 Anti-collision radar combined with communication equipment with other vehicles or with base stations
    • G01S2013/9318 Controlling the steering
    • G01S2013/93185 Controlling the brakes
    • G01S2013/9319 Controlling the accelerator
    • G01S2013/932 Using own vehicle data, e.g. ground speed, steering wheel direction
    • G01S2013/9321 Velocity regulation, e.g. cruise control
    • G01S2013/9322 Using additional data, e.g. driver condition, road state or weather data
    • G01S2013/9325 Inter-vehicle distance regulation, e.g. navigating in platoons
    • G01S2013/93271 Sensor installation details in the front of the vehicles
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Instrument Panels (AREA)

Abstract

The present invention relates to a vehicle-to-vehicle communication device on a full-windscreen head-up display. A substantially transparent windscreen head-up display includes one of light emitting particles or microstructures over a predetermined area of the windscreen, permitting luminescent display while permitting a field of view through the windscreen. A method of displaying a graphic depicting traffic information based on vehicle-to-vehicle communication on the substantially transparent windscreen head-up display of a host vehicle includes: monitoring communication between vehicles; determining traffic information based on the inter-vehicle communication; determining a graphic depicting the traffic information for display on the substantially transparent windscreen head-up display; and displaying the graphic on the substantially transparent windscreen head-up display.

Description

Vehicle-to-vehicle communication device on full windshield head-up display
Cross Reference to Related Applications
This application is a continuation-in-part of U.S. Application No. 12/417,077, filed on April 2, 2009, which is incorporated herein by reference.
Technical Field
The present disclosure relates to graphic imaging on windshields in motor vehicles.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Providing information to the operator of a vehicle in an effective manner is desirable and reduces operator fatigue. Display technologies are known in which light is projected onto a screen and converted into a viewable display on that screen. Such displays used for transportation purposes are known as head-up displays, in which information is projected onto a sun visor, onto a screen between the operator and the windshield, or directly onto the windshield. However, known systems that project light directly onto the windshield typically require coatings or materials that substantially reduce the transparency of the windshield. As a result, head-up displays have typically been confined to a limited area of the windshield.
Vehicle systems monitor a large amount of information. In particular, vehicle systems using driving assistance such as adaptive cruise control (ACC), automatic lateral control, collision avoidance systems, or collision preparation systems, as well as lane keeping assistance, monitor and process information related to the operating environment around the vehicle. Further, information from a variety of information sources may be used to locate the vehicle relative to a 3D map database, plan a travel route for the vehicle to a destination, and correlate the travel route with available information about the route. In addition, on-board vehicle systems provide a wide variety of information that can be used to improve vehicle control. Additionally, vehicle-to-vehicle communication is known, in which data collected in one vehicle is communicated to vehicles at other locations along the roadway.
Disclosure of Invention
A substantially transparent windscreen head-up display includes one of light emitting particles or microstructures over a predetermined area of the windscreen, permitting luminescent display while permitting vision through the windscreen. A method of displaying a graphic depicting traffic information based on inter-vehicle communication on a substantially transparent windscreen head-up display of a host vehicle includes monitoring inter-vehicle communication, determining traffic information based on the inter-vehicle communication, determining a graphic depicting the traffic information for display on the substantially transparent windscreen head-up display, and displaying the graphic on the substantially transparent windscreen head-up display.
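Purely as an illustration of the method steps summarized above, the following Python sketch walks through one display cycle: monitor inter-vehicle messages, derive traffic information, build graphic requests, and hand them to the HUD. All class names, message fields, and thresholds are hypothetical assumptions for illustration and are not taken from the patent.

```python
# Hypothetical sketch of one display cycle of the described method; names are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class V2VMessage:
    vehicle_id: str
    position: Tuple[float, float]   # (ahead, left) offset from the host vehicle, meters
    speed_mps: float
    lane_change_signal: bool

def determine_traffic_information(messages: List[V2VMessage]):
    """Derive simple traffic information from inter-vehicle communication."""
    info = []
    for msg in messages:
        if msg.position[0] > 0.0 and msg.speed_mps < 5.0:
            info.append(("traffic_slowdown", msg))       # slow vehicle ahead of the host
        if msg.lane_change_signal and abs(msg.position[1]) < 4.0:
            info.append(("merge_request", msg))          # neighbor signalling into the host lane
    return info

def determine_graphics(info):
    """Translate traffic information into display requests for the HUD."""
    return [{"type": kind, "anchor": msg.position} for kind, msg in info]

def display_cycle(monitor_v2v, hud):
    """Monitor, determine traffic information, determine graphics, display."""
    messages = monitor_v2v()
    for graphic in determine_graphics(determine_traffic_information(messages)):
        hud.project(graphic)
```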
The present invention also provides the following solutions.
Scheme 1: a method of displaying a graphic depicting traffic information on a substantially transparent windscreen head-up display of a host vehicle based on inter-vehicle communication, the method comprising: monitoring communication between vehicles; determining traffic information based on the inter-vehicle communication; determining a graphic depicting traffic information displayed on the substantially transparent windscreen head-up display; and displaying the graphic on the substantially transparent windscreen head-up display; wherein the substantially transparent windscreen head-up display includes one of light emitting particles or microstructures over a predetermined area of the windscreen permitting luminescent display while permitting vision through the windscreen.
Scheme 2: the method of scheme 1, wherein determining traffic information based on the inter-vehicle communication comprises detecting an impending merge maneuver involving the host vehicle and another vehicle.
Scheme 3: the method of scheme 2, wherein detecting an impending merge maneuver involving the host vehicle and another vehicle comprises: analyzing the motion of the other vehicle transmitted in the inter-vehicle communication; and determining movement of the other vehicle intruding into an area adjacent to the host vehicle.
Scheme 4: the method of scheme 2, wherein detecting an impending merge maneuver involving the host vehicle and another vehicle comprises: analyzing the motion of the other vehicle transmitted in the inter-vehicle communication; determining movement of the other vehicle indicative of aggressive operation; and determining a probable cut-off maneuver based on the motion of the other vehicle and the indication of aggressive operation; wherein determining the graphic depicting traffic information comprises determining a cut-off alert based on the probable cut-off maneuver.
Scheme 5: the method of scheme 2, wherein detecting an impending merge maneuver involving the host vehicle and another vehicle comprises: monitoring a proximate position of the other vehicle relative to the host vehicle; monitoring a lane change signal of the other vehicle transmitted in the inter-vehicle communication; and determining that the lane change signal is a merge request based on the lane change signal and the proximate position of the other vehicle relative to the host vehicle.
Scheme 6: the method of scheme 2, wherein the host vehicle is manually operated; and wherein determining the graphic depicting traffic information includes determining a trajectory alert graphic corresponding to the trajectory of the other vehicle.
Scheme 7: the method of scheme 2, wherein determining the graphic depicting traffic information includes determining a merge negotiation graphic corresponding to an impending merge of the other vehicle.
Scheme 8: the method of scheme 7, wherein the host vehicle is semi-automatically operated; and wherein the merge negotiation graphic includes an optional request to coordinate the impending merge of the other vehicle.
Scheme 9: the method of scheme 7, wherein the host vehicle is automatically operated; and wherein the merge negotiation graphic includes a notification of the impending merge of the other vehicle.
Scheme 10: the method of scheme 1, wherein determining traffic information based on the inter-vehicle communication comprises detecting an impending traffic deceleration; and wherein determining the graphic depicting traffic information comprises determining an alert of the impending traffic deceleration.
Scheme 11: the method of scheme 10, further comprising commanding an automatic braking operation based on detecting the impending traffic deceleration.
Scheme 12: the method of scheme 1, further comprising: monitoring a speed of the host vehicle; and determining movement of another vehicle in front of the host vehicle from information transmitted in the inter-vehicle communication; wherein determining traffic information based on the inter-vehicle communication includes determining an undesirable following condition based on the speed of the host vehicle and the movement of the other vehicle; and wherein determining the graphic depicting traffic information comprises determining an alert of the undesirable following condition.
Scheme 13: the method of scheme 1, wherein monitoring inter-vehicle communication includes monitoring travel routes of other vehicles; and wherein determining the graphic depicting traffic information comprises illustrating the travel routes of the other vehicles.
Scheme 14: the method of scheme 1, wherein monitoring inter-vehicle communication includes monitoring a planned route transmitted by another vehicle; and wherein determining the graphic depicting traffic information comprises illustrating the planned route.
Scheme 15: the method of scheme 1, wherein monitoring inter-vehicle communication comprises monitoring an adverse driving condition ahead of the host vehicle; and wherein determining the graphic depicting traffic information comprises determining an adverse driving condition alert.
Scheme 16: the method of scheme 15, wherein monitoring the adverse driving condition comprises monitoring traffic deceleration along a planned route of the host vehicle; and wherein determining the adverse driving condition alert comprises determining a suggested route change alert.
Scheme 17: the method of scheme 15, wherein monitoring the adverse driving condition comprises monitoring a slippery road condition ahead of the host vehicle; and wherein determining the adverse driving condition alert comprises determining a slippery road ahead alert.
Scheme 18: the method of scheme 15, wherein monitoring the adverse driving condition comprises monitoring a road obstacle ahead of the host vehicle; and wherein determining the adverse driving condition alert comprises determining a road obstacle ahead alert.
Scheme 19: the method of scheme 1, further comprising monitoring an eye position of an operator of the host vehicle; and wherein determining the graphic depicting traffic information for display on the substantially transparent windscreen head-up display comprises determining a graphic registered to a driving scene feature visible to the operator through the substantially transparent windscreen head-up display.
Scheme 20: a system for displaying a graphic describing a requested vehicle response based on inter-vehicle communication on a substantially transparent windscreen head-up display of a host vehicle, the system comprising: an inter-vehicle communication device; a sensor describing an eye position of a host vehicle occupant; the substantially transparent windscreen head-up display, including one of light emitting particles or microstructures over a predetermined area of the windscreen permitting luminescent display while permitting vision through the windscreen; an enhanced vision system manager that monitors the inter-vehicle communication device, monitors the sensor describing the eye position of the host vehicle occupant, determines movement of another vehicle based on communication through the inter-vehicle communication device, evaluates the movement of the other vehicle to determine a requested host vehicle response, and determines a registered display requirement based on the requested host vehicle response and data from the sensor describing the eye position of the host vehicle occupant; a graphics system that generates a graphic describing the requested host vehicle response in accordance with the registered display requirement; and a graphics projection system in communication with the graphics system and displaying the graphic describing the requested host vehicle response on the substantially transparent windscreen head-up display.
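As a further illustration only, the following is a minimal sketch of the merge and cut-off detection logic outlined in schemes 2 through 5 above; the thresholds, field names, and labels are hypothetical assumptions for illustration, not values from the patent.

```python
# Hypothetical classification of a nearby vehicle's broadcast motion (cf. schemes 2-5).
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemoteVehicleState:
    lateral_offset_m: float      # lateral distance from the host lane centerline
    lateral_speed_mps: float     # positive when moving toward the host lane
    gap_ahead_m: float           # longitudinal gap to the host vehicle
    closing_speed_mps: float     # positive when the gap is shrinking
    lane_change_signal: bool     # turn signal state broadcast over V2V

def classify_merge(s: RemoteVehicleState) -> Optional[str]:
    """Return a traffic-information label for one nearby vehicle, or None."""
    intruding = abs(s.lateral_offset_m) < 2.5 and s.lateral_speed_mps > 0.3
    aggressive = s.closing_speed_mps > 3.0 and s.gap_ahead_m < 15.0
    if intruding and aggressive:
        return "cut_off_alert"       # scheme 4: probable cut-off maneuver
    if s.lane_change_signal and s.gap_ahead_m < 30.0:
        return "merge_request"       # scheme 5: lane change signal treated as a merge request
    if intruding:
        return "impending_merge"     # schemes 2-3: movement intruding into the adjacent area
    return None
```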
Drawings
One or more embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 shows an exemplary vehicle equipped with an EVS system according to the present disclosure;
FIG. 2 shows an exemplary schematic diagram of a substantially transparent display in accordance with the present invention;
FIG. 3 illustrates an exemplary graphical projection onto a surface in accordance with the present invention;
FIG. 4 shows a schematic diagram of the use of excitation light to emit visible light from a HUD in accordance with the present invention;
fig. 5 shows an exemplary arrangement of luminescent particles on a substrate according to the present invention;
FIG. 6 shows different types of light emitting materials laminated on a substrate according to the present invention;
FIG. 7 shows an exemplary plot of excitation versus emission for different luminescent materials according to the present invention;
FIG. 8 shows an exemplary pattern of microstructures dispersed within a substantially transparent or translucent substrate in accordance with the present invention;
FIG. 9 shows an exemplary view, similar to FIG. 8, of a pattern of microstructures disposed on a surface of a substantially transparent or translucent substrate in accordance with the present invention;
FIG. 10 shows an exemplary view of an angled pattern of microstructures dispersed in a substantially transparent or translucent substrate, similar to FIG. 8, in accordance with the present invention;
FIG. 11 illustrates an exemplary embodiment of a two-dimensional beam based FC display subsystem according to the present invention;
FIG. 12 shows a schematic view of a vehicle system 10 that has been configured with an object tracking system in accordance with the present invention;
FIG. 13 illustrates the flow of information used in creating a track list in accordance with the present invention;
FIG. 14 illustrates an exemplary data fusion method according to the present invention;
FIG. 15 illustrates an exemplary data flow enabling joint tracking and sensor registration in accordance with the present invention;
FIG. 16 schematically illustrates an exemplary system in which sensor inputs are fused into object trajectories suitable for use in a collision preparation system, in accordance with the present invention;
FIG. 17 schematically shows an exemplary image fusion module according to the present invention;
FIG. 18 schematically depicts an exemplary Kalman filter bank that operates to estimate the position and velocity of group objects in accordance with the present invention;
FIG. 19 illustrates exemplary range data overlaid on respective image planes for system internal analysis of various target objects in accordance with the present invention;
FIG. 20 illustrates an exemplary vehicle using sensors to acquire road geometry data ahead of the vehicle in accordance with the present invention;
FIG. 21 illustrates an exemplary forward lane estimation process in accordance with the present invention;
FIG. 22 illustrates an exemplary process in which information from a map database may be used to construct a geometric model of a roadway within a vehicle region in accordance with the present invention;
FIG. 23 illustrates an exemplary iterative method for finding an approximate position of a vehicle relative to an estimated road geometry in accordance with the present invention;
FIG. 24 illustrates an exemplary vehicle attitude determination process in accordance with the present invention;
FIG. 25 illustrates an exemplary determination made within a lateral model of a vehicle in accordance with the present invention;
FIG. 26 illustrates an exemplary use of waypoints along a projected lane forward of a vehicle to estimate lane geometry in accordance with the present invention;
FIGS. 27-29 illustrate exemplary applications of contextual information to sensed object data to determine whether the sensed data is critical information, in accordance with the present invention;
FIG. 27 shows a vehicle including three consecutive data points depicting a target object in front of the vehicle;
FIG. 28 shows exemplary states in which the corresponding data points would correctly indicate information critical to the operator; and
FIG. 29 shows an exemplary state in which the corresponding data points may incorrectly indicate information critical to the operator;
FIGS. 30 and 31 schematically illustrate exemplary applications of a limited pixelated field of view configuration in accordance with the present invention;
FIG. 30 shows an exemplary emitter capable of emitting light to a limited field of view;
FIG. 31 depicts the process of creating the necessary structure of emitters aligned with a polymer substrate to enable viewing of a limited field of view;
FIGS. 32-37 show selected exemplary displays of key information that may be projected on a HUD in accordance with the present invention;
FIG. 32 depicts an exemplary unenhanced exterior view including features that a vehicle operator desires to see easily;
FIG. 33 shows an exemplary view obstructed by a dense fog and an exemplary enhanced visual display that may be used to compensate for the effects of the fog;
FIG. 34 illustrates an exemplary graphical display for improving safety during a lane change;
FIG. 35 shows an exemplary state in which a peripheral salient feature emphasis is used with an estimated operator gaze location to alert the operator to critical information;
FIG. 36 shows an exemplary view of a display depicting navigation directions on a HUD;
FIG. 37 depicts additional exemplary views depicting key information that may be displayed on the HUD;
FIG. 38 schematically depicts an exemplary information flow for implementing the above-described method in accordance with the present invention;
FIG. 39 depicts an additional exemplary graphic of an upcoming merge maneuver that may be displayed on the HUD; and
FIG. 40 depicts an exemplary diagram of an unsafe following scenario that may be displayed on a HUD.
Detailed Description
Referring now to the drawings, wherein the showings are for the purpose of illustrating certain exemplary embodiments only and not for the purpose of limiting the same, a method of displaying graphical images upon a windshield of a vehicle that depict the operating environment of the vehicle using an Enhanced Vision System (EVS) is disclosed. The graphical images are derived from sensors and/or data inputs describing the operating environment and include processing of those inputs in order to convey critical information to the operator or occupants of the vehicle. The graphical images to be displayed upon the windshield are additionally registered to visually relevant features observable through the windshield, so that an intended occupant can view the relevant feature and the registered graphical image as a single discernible input.
Fig. 1 shows an exemplary vehicle equipped with an EVS system according to the present invention. The vehicle 100 includes an EVS system manager 110; a vehicle sensor system including a camera system 120 and a radar system 125; vehicle operation sensors, including a vehicle speed sensor 130; information systems including a GPS device 140 and a wireless communication system 145; a head-up display (HUD) 150; an EVS graphics system 155; a graphics projection system 158; and an occupant eye position sensing system 160. The EVS system manager 110 includes a programmable processor with programming to monitor various inputs and determine what information is appropriate to display on the HUD. The EVS system manager may communicate directly with the various systems and components, or may alternatively or additionally communicate over a LAN/CAN system 115. The EVS system manager uses information related to the vehicle operating environment obtained from a number of inputs. The camera system 120 includes a camera or image acquisition device that acquires periodic or sequential images representing a view from the vehicle. The radar system 125 includes devices known in the art that use electromagnetic radiation to detect other vehicles or objects in the vicinity of the vehicle. A number of known on-board sensors are widely used in vehicles to monitor vehicle speed, engine speed, wheel slip, and other parameters describing the operation of the vehicle. The exemplary vehicle speed sensor 130 is depicted as representative of such on-board sensors describing vehicle operation, but the present invention is intended to include any such sensor used by the EVS system. The GPS device 140 and the wireless communication system 145 are devices known in the art that communicate with off-board sources of information, such as satellite system 180 and cellular communication tower 190. The GPS device 140 may be used with a 3D map database that includes detailed information relating to the global coordinates received by the GPS device 140 describing the current location of the vehicle. The HUD 150 includes a windshield fitted with features capable of displaying an image projected onto it while remaining transparent or substantially transparent, so that occupants of the vehicle can clearly observe the exterior of the vehicle through the windshield. It should be understood that while the HUD 150 includes the windshield at the front of the vehicle, other surfaces within the vehicle could be used for projection, including side windows and the rear window. Furthermore, the graphics on the front windshield could continue onto the vehicle's front "A-pillars" and onto the side windows as a continuous image. The EVS graphics engine 155 includes display software or programming that translates requests from the EVS system manager 110 to display information into graphical representations of the information. The EVS graphics engine 155 includes programming that compensates for the curved and angled surface of the windshield and for any other surfaces onto which graphics are projected. The EVS graphics engine 155 controls the graphics projection system 158, which includes a laser or projection device that produces excitation light to project the graphical representations.
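For illustration of the data flow just described, here is a hedged sketch of how an EVS system manager of this kind might gather its inputs and forward display requests to the graphics engine and projection system. None of these class or method names come from the patent; they are assumptions made only to show the wiring.

```python
# Hypothetical wiring of the EVS components described above; illustrative only.
class EVSSystemManager:
    def __init__(self, camera, radar, speed_sensor, gps, v2v, eye_sensor,
                 graphics_engine, projection_system):
        self.inputs = {"camera": camera, "radar": radar, "speed": speed_sensor,
                       "gps": gps, "v2v": v2v, "eyes": eye_sensor}
        self.graphics_engine = graphics_engine        # compensates for windshield curvature
        self.projection_system = projection_system    # laser/projector emitting excitation light

    def update(self):
        """Poll every input, decide what is critical, and request graphics."""
        readings = {name: source.read() for name, source in self.inputs.items()}
        for item in self.select_critical_information(readings):
            request = self.graphics_engine.build_graphic(item, readings["eyes"])
            self.projection_system.project(request)

    def select_critical_information(self, readings):
        """Placeholder for the display-decision logic run by the manager."""
        return []
```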
The occupant eye position sensing system 160 includes sensors known in the art to estimate the position of the occupant's head and further estimate the orientation or gaze location of the occupant's eyes. Based on the output of the occupant eye position sensing system 160 and input data tracking position information regarding the environment around the vehicle, the EVS system manager 110 can accurately register the graphic images on the HUD so that the occupant sees the images overlaid on visual features viewed through the windshield.
The EVS described above includes eye-sensing and head-sensing devices that allow the eye position to be estimated, so that images can be registered on the HUD to correspond with the operator's field of vision. However, it should be understood that estimates of head and eye position can be obtained by a number of methods. For example, in a process similar to adjusting a rearview mirror, an operator can use a calibration routine upon entering the vehicle to align graphics with a detected object. In another embodiment, the longitudinal seat position in the vehicle can be used to estimate the position of the driver's head. In another embodiment, manual adjustment of the rearview mirror can be used to estimate the position of the operator's eyes. It should be appreciated that a combination of methods, for example seat position and mirror adjustment angle, can be used to estimate the operator's head position with improved accuracy. Many methods of accomplishing accurate registration of graphics on the HUD are contemplated, and the present invention is not limited to the specific embodiments described herein.
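To make the idea of registration concrete, the following is a hedged geometric sketch: given an estimated eye position and a tracked object position in vehicle coordinates, the intersection of the line of sight with an assumed flat windshield plane gives the point where the graphic should be drawn. A real windshield is curved and needs the compensation described above; the flat-plane simplification and all names below are assumptions for illustration only.

```python
# Hypothetical registration of a graphic to an object along the operator's line of sight.
# The windshield is approximated as the plane x = windshield_x in vehicle coordinates
# (x forward, y left, z up).

def register_graphic(eye_pos, object_pos, windshield_x=0.8):
    """Return the (y, z) point on the windshield plane where the graphic overlays the object."""
    ex, ey, ez = eye_pos
    ox, oy, oz = object_pos
    if ox <= ex:
        raise ValueError("object must be ahead of the operator's eyes")
    t = (windshield_x - ex) / (ox - ex)      # fraction of the way along the line of sight
    return (ey + t * (oy - ey), ez + t * (oz - ez))

# Example: eyes 0.5 m behind the windshield plane, object 40 m ahead and 2 m to the left.
y, z = register_graphic(eye_pos=(0.3, 0.0, 1.2), object_pos=(40.0, 2.0, 0.5), windshield_x=0.8)
```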
An exemplary EVS includes: a wide field of view, full-windshield HUD; a substantially transparent screen including functionality to display graphical images projected upon it; a HUD image engine including one or more lasers capable of projecting images upon the windshield; input sources deriving data concerning the vehicle operating environment; and an EVS system manager including programming to monitor inputs from the input sources, process the inputs and determine critical information related to the operating environment, and create requests for graphical images to be created by the HUD image engine. However, it should be understood that this exemplary EVS is only one of many configurations that an EVS can take. For example, a vision or camera system is useful for various EVS applications that will be described below. However, it should be understood that an exemplary EVS system can operate without a vision system, for example, providing useful information from only a GPS device, a 3D map database, and on-board sensors. In the alternative, it should be understood that an exemplary EVS system can operate without access to a GPS device or wireless network, instead using inputs only from a vision system and a radar system. Many different configurations of the systems and methods disclosed herein are possible, and the present invention is not limited to the exemplary embodiments described herein.
Windshields including HUDs are important to the operation of the EVS. In order to function as a display device on which graphic images can be displayed while also serving as a medium through which relevant features can be seen, the windshield of the vehicle must be transparent and capable of displaying the image projected by the excitation light source. Fig. 2 is an exemplary diagram of a substantially transparent display in accordance with the present invention. An observer 10 can see an arbitrary object (e.g., a cube 12) through a substrate 14. The substrate 14 may be transparent or substantially transparent. While the viewer 10 views an arbitrary object 12 through the substrate 14, the viewer 10 can also view images (e.g., circles 15 and triangles 16) produced on the substrate 14. Substrate 14 may be part of a vehicle windshield, an architectural window, a glass substrate, a plastic substrate, a polymer substrate, or other transparent (or substantially transparent) medium as would occur to one skilled in the art. Other substrates may supplement substrate 14 to provide color, substrate protection, filtering (e.g., filtering external ultraviolet light), and other functions.
Fig. 2 illustrates illumination of a transparent display device according to an embodiment by excitation light (e.g., ultraviolet or infrared light) from a light source (e.g., a projector or laser as shown in device 20). The substrate 14 may receive excitation light from a light source, such as a projector or laser 20. The received excitation light may be absorbed by the luminescent material on the substrate 14. When the luminescent material receives the excitation light, the luminescent material may emit visible light. Accordingly, by selectively illuminating the substrate 14 with excitation light, images (e.g., circles 15 and triangles 16) can be produced on the substrate 14.
According to an embodiment of the present invention, the excitation light may be ultraviolet light. If the excitation light is ultraviolet light, a down-conversion physical phenomenon occurs when the light emitting material emits visible light in response to the ultraviolet light. Specifically, ultraviolet light has a shorter wavelength and higher energy than visible light. Accordingly, when the luminescent material absorbs ultraviolet light and emits lower-energy visible light, the ultraviolet light is down-converted into visible light because the energy level of the ultraviolet light is lowered when it is converted into visible light. In an embodiment, the luminescent material is a fluorescent material.
According to an embodiment of the present invention, the excitation light may be infrared light. If the excitation light is infrared light, an up-conversion physical phenomenon occurs when the luminescent material emits visible light in response to the infrared light. Specifically, infrared light has a longer wavelength and lower energy than visible light. Accordingly, when the light emitting material absorbs infrared light and emits visible light of higher energy, the infrared light is up-converted into visible light because the energy level of the infrared light rises when it is converted into visible light. In an embodiment, the luminescent material is a fluorescent material. In the up-conversion physical phenomenon, absorption of more than one infrared photon is necessary for the emission of each visible photon. Those skilled in the art will appreciate that this need for multi-photon absorption can make infrared light a less desirable choice for the excitation light than ultraviolet light.
In the embodiment shown in fig. 2, the excitation light is output by a device 20 including a projector. The projector may be a digital projector. In an embodiment, the projector is a micro-mirror array (MMA) projector (e.g., a Digital Light Processing (DLP) projector). An MMA projector outputting ultraviolet light may be similar to an MMA projector outputting visible light, except for a color wheel having filters adapted to the ultraviolet spectrum. In other embodiments, the projector is a Liquid Crystal Display (LCD) projector. In an embodiment, the projector may be a Liquid Crystal On Silicon (LCOS) projector. In an embodiment, the projector may be an analog projector (e.g., a slide film projector or a motion picture film projector). Those skilled in the art will recognize other types of projectors that may be used to project ultraviolet light onto the substrate 14.
FIG. 3 shows an exemplary graphical projection onto a surface according to the present invention. The radiation source 310 delivers an intense, collimated beam of invisible (or less visible) radiation. The radiation beam passes through an optical image processor 330, and the modified radiation beam 350 is projected onto a Fluorescence Conversion (FC) display screen 380. Various methods of image display are disclosed. In a first exemplary method, an expanded static radiation beam is applied by an image processor 330 that includes a matrix of on-off switches (e.g., a matrix of tiny mirrors) that produces a dark image, and a fluorescent visible image is produced on the display screen 380 by fluorescence conversion of the dark image. The static image is typically generated from a look-up table. In a second exemplary method, the radiation beam is coupled to an image processor 330 that includes a two-dimensional beam scanner (e.g., a galvanometer, an acousto-optic light deflector (AOLD), or an electro-optic light deflector (EOLD)). Electrical signals are used to control the radiation beam to illuminate a specific spot of the screen at a given time. An exemplary FC screen typically has the following structure: a layer 384 comprising fluorescent nanoparticles or molecules attached to or dispersed in a homogeneous medium; a coating 388 that transmits the invisible radiation while reflecting the visible emitted light; and a substrate layer 390 that absorbs the remaining invisible radiation. Alternatively, the screen comprises: a layer 384 comprising fluorescent nanoparticles or molecules attached to or dispersed in a homogeneous medium; a coating 388 that absorbs the invisible radiation; and a visibly transparent substrate layer 390. Self-adhesive layers and protective layers, such as scratch resistant layers, may also be added to the screen structure.
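As a hedged sketch of the second display method (the two-dimensional beam scanner), the code below turns a grayscale frame into a sequence of (horizontal angle, vertical angle, intensity) commands that steer the modulated excitation beam across the FC screen. The scan ranges, raster order, and function names are assumptions made for illustration, not details from the patent.

```python
# Hypothetical raster-scan command generation for a 2-D beam scanner (e.g., a galvanometer);
# each non-zero pixel becomes one "illuminate this spot at this intensity" command.
from typing import Iterable, List, Tuple

def raster_commands(image: List[List[float]],
                    h_range_deg: float = 30.0,
                    v_range_deg: float = 15.0) -> Iterable[Tuple[float, float, float]]:
    rows, cols = len(image), len(image[0])
    for r, row in enumerate(image):
        for c, intensity in enumerate(row):
            if intensity <= 0.0:
                continue                                  # dark pixel: beam stays off
            h = (c / (cols - 1) - 0.5) * h_range_deg      # horizontal deflection angle
            v = (0.5 - r / (rows - 1)) * v_range_deg      # vertical deflection angle
            yield (h, v, intensity)                       # intensity sets the fluorescence gray level

# Example: a tiny 3x3 frame with one bright spot in the center.
frame = [[0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0]]
commands = list(raster_commands(frame))   # -> [(0.0, 0.0, 1.0)]
```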
Two alternative schemes for FC are disclosed. FIG. 4 shows a scheme for emitting visible light from the HUD using excitation light according to the present invention. As shown in fig. 4, the first scheme is called down-conversion, in which the wavelength of the excitation light is shorter than the fluorescence wavelength. An energy level diagram illustrates the down-conversion molecules or nanoparticles. Photons of the shorter wavelength excitation light have more energy and cause a transition 415 from a lower energy level 410 to a higher energy level 420. The emission includes a transition 425 associated with two energy levels having a smaller energy gap. The second scheme (not shown) is called up-conversion, in which the wavelength of the excitation light is longer than the fluorescence wavelength. In the second scheme, two or more photons from the laser are required to excite the fluorescent particles in order to generate a visible fluorescent photon. The longer wavelength excitation laser results in two transitions from a lower energy state through an intermediate state to a higher energy state. The emission includes a transition associated with two energy levels having an energy gap smaller than the combined energy of the two laser photons. A common implementation of the first scheme is to apply a UV (or blue) light source with a wavelength of less than 500 nm to excite the fluorescent molecules or nanoparticles on the image screen. Suitable UV sources include solid state lasers, semiconductor laser diodes, gas lasers, dye lasers, excimer lasers, and other UV light sources familiar in the art. A common implementation of the second scheme is to use an infrared laser with a wavelength of more than 700 nm to excite the fluorescent molecules or particles on the screen. Suitable IR sources include solid state lasers, semiconductor laser diodes, and other IR sources known in the art. In both approaches, the excitation beam intensity is adjusted to produce visible fluorescence of different intensities or gray levels.
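The energy bookkeeping behind the two schemes can be written out explicitly. Using the photon energy relation, a shorter-wavelength excitation photon carries more energy than the emitted visible photon (down-conversion), while up-conversion needs at least two infrared photons whose combined energy exceeds that of the visible photon. The specific wavelengths below are illustrative examples consistent with the ranges quoted above (UV/blue below 500 nm, IR above 700 nm), not values from the patent.

```latex
% Photon energy vs. wavelength (illustrative numbers):
E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda}
% Down-conversion: one UV/blue photon carries more energy than the visible photon it produces:
E_{\mathrm{exc}}(400\ \mathrm{nm}) \approx 3.10\ \mathrm{eV} \;>\; E_{\mathrm{em}}(550\ \mathrm{nm}) \approx 2.25\ \mathrm{eV}
% Up-conversion: at least two IR photons must jointly supply the visible photon's energy:
2\,E_{\mathrm{exc}}(980\ \mathrm{nm}) \approx 2 \times 1.27\ \mathrm{eV} = 2.53\ \mathrm{eV} \;\ge\; E_{\mathrm{em}}(550\ \mathrm{nm}) \approx 2.25\ \mathrm{eV}
```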
Various fluorescent materials are also disclosed. A common characteristic of these materials is that the size of the fluorescent particles is very small. Generally, nanoparticles or molecules having a size between 0.5nm and 500nm are preferred to have minimal scattering effects that reduce the visible transparency of the screen. These materials fall into four categories: an inorganic nano-sized phosphor; organic molecules and dyes; semiconductor-based nanoparticles; and organometallic molecules.
For the down-conversion, the following materials may be used to form the FC display: 1. Inorganic or ceramic phosphors or nanoparticles, including but not limited to metal oxides, metal halides, metal chalcogenides (e.g., metal sulfides), or mixtures thereof, such as metal oxy-halides and metal oxy-chalcogenides. These inorganic phosphors have wide applications in fluorescent lamps and electronic monitors. These materials can convert shorter wavelength photons (e.g., UV and blue light) into longer wavelength visible light and can be easily deposited on or dispersed in a display screen. 2. Laser dyes and small organic molecules, as well as fluorescent organic polymers. These can also convert shorter wavelength laser photons (e.g., UV and blue light) into longer wavelength visible light and can be easily deposited on a display screen. Because they are in a molecular state in the solid, the screen transparency is maintained, since there is no particle scattering. 3. Nanoparticles of semiconductors, such as II-VI or III-V compound semiconductors, for example, fluorescent quantum dots. Here too, their addition to the screen does not affect the optical transparency. 4. Organometallic molecules. These molecules include at least a metal center, such as rare earth elements (e.g., Eu, Tb, Ce, Er, Tm, Pr, Ho), transition metal elements such as Cr, Mn, Zn, Ir, Ru, V, and main group elements such as B, Al, Ga. The metal elements are chemically bonded to organic groups to prevent quenching of the fluorescence by the host or solvent. Screens filled with such organometallic compounds do not scatter light or affect the transparency of the screen, unlike micro-sized particles.
Of the down-converting FC materials or molecules described above, those that can be excited by long wavelength UV (e.g., > 300 nm) to blue (< 500 nm) lasers and produce visible light emission can be used in embodiments of the present application. For example, the phosphor may be a Ce-doped garnet series phosphor, (YmA1-m)3(AlnB1-n)5O12, wherein 0 ≤ m ≤ 1 and 0 ≤ n ≤ 1; A includes other rare earth elements and B includes B or Ga. In addition, phosphors comprising metal silicate, metal borate, metal phosphate, and metal aluminate hosts are preferably applied to FC displays. Further, nanoparticle phosphors are also preferably used in FC displays, these nanoparticles comprising common rare earth elements (e.g., Eu, Tb, Ce, Dy, Er, Pr, Tm) and transition or main group elements (e.g., Mn, Cr, Ti, Ag, Cu, Zn, Bi, Pb, Sn, Tl) as fluorescence activators. Finally, some undoped materials (e.g., metal tungstates of Ca, Zn, or Cd, metal vanadates, ZnO, etc.) are also preferred FC display materials.
Commercial laser dyes are another exemplary FC display material. A variety of commercial laser dyes are available from a number of laser dye suppliers, including Lambda Physik and Exciton. Some preferred classes of laser dyes include pyrromethene, coumarin, rhodamine, fluorescein, other aromatic hydrocarbons and their derivatives, and the like. In addition, there are a variety of polymers containing unsaturated carbon-carbon bonds that can also be used as fluorescent materials and have a variety of optical and fluorescent applications. For example, MEH-PPV and similar polymers have been used in optoelectronic devices, such as Polymer Light Emitting Diodes (PLEDs). The fluorescent polymers can be used directly as the fluorescent layer of a transparent 2-D display screen. Further, recently developed semiconductor nanoparticles (e.g., quantum dots) are also preferred LIF display materials. The term "semiconductor nanoparticles" refers to inorganic crystallites having a diameter between 1 nm and 1000 nm, preferably between 2 nm and 50 nm. The semiconductor nanoparticles are capable of emitting electromagnetic radiation upon excitation (i.e., the semiconductor nanoparticles are luminescent). The nanoparticles may be uniform nanocrystals or may include multiple shells. For example, a nanoparticle may include one or more "cores" of a first semiconductor material surrounded by a "shell" of a second semiconductor material. The core and/or shell may be a semiconductor material including, but not limited to, group II-VI (ZnS, ZnSe, ZnTe, CdS, CdSe, CdTe, HgS, HgSe, HgTe, MgS, MgSe, MgTe, CaS, CaSe, CaTe, SrS, SrSe, SrTe, BaS, BaSe, BaTe, etc.), group III-V (GaN, GaP, GaAs, GaSb, InN, InP, InAs, InSb, etc.), and group IV (Ge, Si, etc.) materials, as well as alloys or mixtures thereof.
Finally, fluorescent organometallic molecules containing rare earth or transition element cations are also used in the down-conversion phosphor screen. These molecules comprise a metal center of rare earth elements, including Eu, Tb, Er, Tm, Ce, which are externally protected with organic chelating groups. The metal center may also include transition elements such as Zn, Mn, Cr, Ir, etc., and main group elements such as B, Al, Ga. Such organometallic molecules can be readily dissolved in liquid or transparent solid host media and form transparent phosphor screens for 2-D transparent displays with minimal light scattering. Some examples of such fluorescent organometallic molecules include: 1. tris(dibenzoylmethane)mono(phenanthroline)europium(III); 2. tris(8-hydroxyquinoline)erbium; 3. tris(1-phenyl-3-methyl-4-(2,2-dimethylpropan-1-oyl)pyrazolin-5-one)terbium(III); 4. bis(2-methyl-8-hydroxyquinolato)zinc; 5. diphenylborane-8-hydroxyquinolinate.
The up-converting phosphors are similar in chemical composition to the down-converting fluorescent materials. The up-converting phosphors for fluorescence conversion displays also include a selection of the following materials or molecules: 1. Laser dyes, i.e., small organic molecules that can be excited by absorbing at least two infrared photons and emitting visible light. 2. Fluorescent polymers, i.e., polymer species that can be excited by absorbing at least two infrared photons and emitting visible light. 3. Inorganic or ceramic particles or nanoparticles, including conventional up-converting phosphors (e.g., metal fluorides, metal oxides), which can be excited by absorbing at least two infrared photons and emitting visible light. 4. Semiconductor particles, including nanoparticles such as II-VI or III-V compound semiconductors, e.g., quantum dots, described in detail above for the down-conversion materials.
Fluorescent up-converting inorganic phosphors include, but are not limited to, metal oxides, metal halides, metal chalcogenides (e.g., sulfides), or mixtures thereof, such as metal oxy-halides and metal oxy-chalcogenides. They are typically doped with rare earth elements (e.g., Yb3+, Er3+, Tm3+); examples of some hosts include, but are not limited to, NaYF4, YF3, BaYF5, LaF3, La2MoO8, LaNbO4, and Ln2O2S, where Ln is a rare earth element, e.g., Y, La, Gd. These FC display materials may be used to form a variety of FC display objects, including screens, panels, windows, walls, billboards, and other display surfaces. There are several means of applying these fluorescent molecules or materials to a display surface: 1. They can be dissolved (organic dyes) or dispersed (inorganic particles) in a solvent (water or organic solvents). The liquid fluorescent formulation can be coated on a surface and form a solid film or coating upon drying, or it can be sandwiched between two surfaces in liquid form. 2. They can be dissolved (organic dyes) or dispersed (inorganic particles) in a solid host, such as glass, polymers, gels, inorganic-organic hybrid hosts, cloth, paper, films, tape, etc., converting the solid host into a fluorescent object for laser display. 3. Some objects (e.g., cloth, paper, tape, fluorescent polymers) may already contain fluorescent molecules or luminescent functional groups; in that case, they can be used directly as laser display objects.
Returning to the exemplary embodiment shown in FIG. 2, excitation light is output from the device 20, which in this example is a laser. The intensity and/or movement of the laser beam output from the device 20 may be adjusted to produce an image in the substrate 14. In a down-conversion embodiment, the output from the laser may be ultraviolet light. In an up-conversion embodiment, the output from the laser may be infrared light.
Fig. 2 is an exemplary diagram of a luminescent material (e.g., luminescent particles 22) dispersed in a substantially transparent substrate according to an embodiment. When the excitation light is absorbed by the light emitting particles 22, the light emitting particles emit visible light. Accordingly, in a down conversion embodiment, when ultraviolet light is absorbed by the light emitting particles 22, visible light is emitted from the light emitting particles. Similarly, in the up-conversion embodiment, when infrared light is absorbed by the light-emitting particles 22, visible light is emitted from the light-emitting particles.
In some exemplary embodiments, more than one projector or laser may be used for illumination. For example, a first projector may be used for emitting excitation of luminescent material of a first color and a second projector may be used for emitting excitation of luminescent material of a second color. The use of more than one projector may increase the amount of excitation light absorbed by the luminescent material. By increasing the amount of absorbed excitation light, the amount of visible light emitted from the luminescent material can be increased. The greater the amount of visible light emitted, the brighter the display. In an embodiment, a first projector may be designated to cause emission of red light, a second projector may be designated to cause emission of green light, and a third projector may be designated to cause emission of blue light. However, other configurations are conceivable. For example, it is contemplated that two projectors, four projectors, projectors that cause primary colors to be emitted, projectors that cause non-primary colors to be emitted, and lasers may be used in place of the projectors in a similar configuration.
Fig. 2 shows a luminescent material according to an embodiment of the invention comprising luminescent particles 22 dispersed in a substantially transparent substrate. The luminescent particles 22 may be substantially similar to one another throughout FIG. 2 or, as shown in FIG. 2, they may differ in composition. When the excitation light is absorbed by the light emitting particles 22, the particles emit visible light. Accordingly, in a down conversion embodiment, when ultraviolet light is absorbed by a luminescent material, visible light is emitted from the luminescent material. Similarly, in the up-conversion embodiment, when infrared light is absorbed by the light emitting material, visible light is emitted from the light emitting material. In an embodiment, each luminescent material may be a different type of luminescent material that emits visible light in a different wavelength range in response to a different range of wavelengths of the excitation light (e.g., ultraviolet or infrared light).
The luminescent particles 22 may be dispersed throughout the substrate 14. In the alternative, the particles may be disposed on the surface of substrate 14, as shown in fig. 2. The luminescent particles 22 may be integrated with the substrate 14 by coating on the substrate 14. The luminescent material may be a fluorescent material that emits visible light in response to absorption of electromagnetic radiation (e.g., visible light, ultraviolet light, or infrared light) having a different wavelength than the emitted visible light. The particles may be sized smaller than the wavelength of visible light, which reduces or eliminates scattering of visible light by the particles. Examples of particles smaller than the wavelength of visible light are nanoparticles or molecules. According to an embodiment, each luminescent particle has a diameter of less than about 400 nanometers. According to an embodiment, each luminescent particle has a diameter of less than about 300 nanometers. According to an embodiment, each luminescent particle has a diameter of less than about 200 nanometers. According to an embodiment, each luminescent particle has a diameter of less than about 100 nanometers. According to other embodiments, each of the luminescent particles has a diameter of less than about 50 nanometers. The luminescent particle may be a single molecule.
Other methods of integrally forming the luminescent material on the surface of the substrate 14 are conceivable. Similar to the embodiment shown in fig. 2, each luminescent material may be a different type of luminescent material that emits visible light in a different wavelength range in response to a different range of wavelengths of the excitation light (e.g., ultraviolet or infrared light). The luminescent material may be a fluorescent material that emits visible light in response to absorption of electromagnetic radiation (e.g., visible light, ultraviolet light, or infrared light) having a different wavelength than the emitted visible light. The luminescent material may comprise luminescent particles.
In DLP or MMA projector embodiments, the wavelength of the ultraviolet light emitted from the DLP projector may be adjusted using a color wheel with ultraviolet pass filters. Similar adjustment techniques may be applied in other projector implementations or in laser implementations. In an embodiment, multiple projectors and multiple lasers may be used, each associated with a specific ultraviolet wavelength range to excite a specific type of luminescent particle to output a specific color of light.
Fig. 5 shows an exemplary arrangement of luminescent particles on a substrate according to the present invention. Fig. 5 is an exemplary illustration of different types of luminescent particles associated with different visible colors that may be coated on regions of the substrate 14 (e.g., stripe region 32, stripe region 34, and stripe region 36) in a substantially transparent substrate. In an embodiment, the substrate 14 may comprise different regions in which different types of luminescent particles are dispersed. For example, a first type of light-emitting particles (e.g., light-emitting particles associated with red light) may be dispersed in stripe region 32, a second type of light-emitting particles (e.g., light-emitting particles associated with green light) may be dispersed in stripe region 34, and a third type of light-emitting particles (e.g., light-emitting particles associated with blue light) may be dispersed in stripe region 36. The stripe region may be formed in the form of a stripe (i.e., a row). In the alternative, the stripe segments may be subdivided into block matrix patterns (blocks matrix patterns), with alternating colors in each block. In the alternative to coating the striped regions on the surface of the substrate 14, the striped regions may be dispersed through the substrate.
A projector or laser (e.g., projector or laser 20) may use a range of excitation light wavelengths that excite all of the different types of light-emitting particles and selectively illuminate different colors through spatial modulation of the excitation light. For example, in the example of fig. 5, a projector or laser may illuminate portions of stripe region 34 (e.g., which includes light-emitting particles associated with green light) in order to emit green visible light in a given area of substrate 14. In embodiments of spatially separated different types of luminescent particles, the excitation light source is not required to adjust the wavelength of the excitation light to produce different colors, as the color can be selected by spatial modulation of the excitation light.
In an embodiment, the excitation light projected on the substrate 14 in fig. 5 may be wavelength modulated to result in different color emissions. Accordingly, it may not be necessary to spatially modulate the excitation light. When the excitation light incident on the substrate 14 is wavelength modulated, only areas sensitive to a particular wavelength (e.g., stripes or pixels) will be illuminated. In an embodiment, the excitation light may be both spatially and wavelength modulated.
Fig. 6 shows different types of luminescent material arranged in layers on a substrate according to the invention. In an embodiment, the luminescent materials 92, 94, 96 are substantially transparent to light, except that each of the materials 92, 94, and 96 absorbs light in a different specific wavelength range. Accordingly, in embodiments, the excitation light projected on the substrate 14 need not be spatially modulated. Furthermore, the layers may be coated on the substrate with different thicknesses. By applying the different luminescent materials 92, 94 and 96 with different thicknesses, the responsiveness to the excitation light for a particular type of material can be controlled. For example, it may be desirable to balance the emission of the different primary colors, since different luminescent materials may emit different colors with different intensities for the same amount of absorbed light.
In an embodiment, the screen is pixelated using RGB elements. Each pixel comprises three respective R, G, and B parts. A single projected UV beam may impinge on the pixelated screen. To obtain various color mixtures of RGB, the UV beam projected onto a pixel may be shifted to cover a certain area of the RGB elements within that pixel. Accordingly, only one projection beam is required to produce a projected image of all colors. The RGB color balance for a pixel can be calculated and converted into the correct areas of the RGB elements on the screen, and the beam can then be shifted to cover the correct relative area percentage of each RGB element to display the correct color on the pixel.
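For illustration only (this computation is not spelled out in the disclosure), the following Python sketch shows one way the RGB color balance of a pixel could be converted into relative area percentages of the R, G, and B elements to be covered by a single UV beam; the emission-efficiency weights and the function name are hypothetical.

```python
# Minimal sketch (not from the patent text): converting a desired pixel color
# into relative area fractions of the R, G, B sub-elements that a single UV
# beam would need to cover. Emission efficiencies are hypothetical placeholders.

def rgb_to_area_fractions(r, g, b, efficiency=(1.0, 1.0, 1.0)):
    """Return the fraction of beam coverage for the R, G and B sub-elements
    needed to reproduce the requested color (r, g, b each in 0..1)."""
    # Weight each channel by the inverse of its phosphor's emission efficiency,
    # since a less efficient phosphor needs a larger illuminated area.
    weighted = [c / e for c, e in zip((r, g, b), efficiency)]
    total = sum(weighted)
    if total == 0.0:
        return (0.0, 0.0, 0.0)          # black: the beam skips the pixel
    return tuple(w / total for w in weighted)

# Example: a warm white on a screen whose green phosphor is twice as efficient.
print(rgb_to_area_fractions(1.0, 0.9, 0.7, efficiency=(1.0, 2.0, 1.0)))
```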
Fig. 7 is an exemplary graph of the excitation and emission relationships of different luminescent materials according to the present invention. Exemplary region 48 shows the excitation/emission cross section of the first luminescent material. Exemplary region 46 shows the excitation/emission cross section of the second luminescent material. Exemplary region 50 shows the excitation/emission cross-section of the third luminescent material. However, it should be understood that various exemplary excitation/emission cross-sections are contemplated, including: embodiments in which a single excitation frequency range can generate multiple emission ranges, or, conversely, embodiments in which multiple excitation frequency ranges can each generate the same or overlapping emission ranges.
Each of the plurality of light emitting particles may have a diameter of less than about 500 nanometers. Each of the plurality of light emitting particles may have a diameter of less than about 400 nanometers. Each of the plurality of light emitting particles may have a diameter of less than about 300 nanometers. Each of the plurality of light emitting particles may have a diameter of less than about 200 nanometers. Each of the plurality of light emitting particles may have a diameter of less than about 100 nanometers. Each of the plurality of light emitting particles may have a diameter of less than about 50 nanometers. Each of the plurality of luminescent particles may be a single molecule. Each of the plurality of light emitting particles may be a single atom.
The above embodiments describe fluorescent particles as a method of displaying graphical images on different substantially transparent windshields of a vehicle. However, it will be appreciated by those skilled in the art that other known methods of projecting graphical images on a display, which may be different and substantially transparent, are also possible. Fig. 8 is an exemplary diagram of a pattern of microstructures dispersed in a substantially transparent or translucent substrate according to the present invention. Microstructures 26 are selectively dispersed in regions of substrate 14. The width of the regions of microstructures 26 can be in the range of about 1 nanometer to about 10 millimeters. The regions of the microstructures 26 form a pattern (e.g., a mask or grid) such that the light path 30 of the viewer 10 has a restricted cross-section with the microstructures 26. In an embodiment, the pattern is repetitive. The fill factor of the modes may range from about 0.01% to about 99%. However, the light path 28 from the apparatus 20 may be at an angle to the area of the microstructure 26, thereby maximizing the cross section with the microstructure 26, increasing the scattering of the visible image from the apparatus 20 to increase the illumination of the visible image on the substrate 14. The pitch of the regions of microstructures 26 can be in the range of about 1 nanometer to about 10 millimeters. The thickness of the regions of microstructures 26 can range from about 1 micron to about 10 millimeters. The thickness of the regions of microstructures 26 can be less than the width and/or pitch of the regions of microstructures 26.
Fig. 9 is an exemplary diagram, similar to fig. 8, of a pattern of microstructures disposed on a surface of a substantially transparent or translucent substrate in accordance with the present invention. Microstructures 38 may be coated in areas on substrate 14. The region of the microstructure 38 forms a mask so that the light path 30 of the observer 10 with the microstructure 38 has a restricted (e.g. minimized) cross section. However, the light path 28 from the device 20 may be at an angle to the area of the microstructures 38, thereby maximizing the cross-section with the microstructures, increasing the scattering of the visible image from the device 20 to increase the illumination of the visible image on the substrate 14. In embodiments, the cross-section of the surface of substrate 14 of each element having the pattern of microstructures 38 is less than the depth of the pattern substantially perpendicular to substrate 14, which may increase the transparency of substrate 14.
Fig. 10 is an exemplary illustration of a pattern of angled microstructures dispersed in a substantially transparent or translucent substrate, similar to fig. 8, in accordance with the present invention. The inclined regions of microstructures 39 are formed in substrate 14. The angle of the inclined region of microstructure 39 affects the cross-sectional area of both light path 30 for observer 10 and light path 28 for projector 18. By increasing the cross-section of the light path 28, increased scattering of the visual image may be achieved, thereby increasing the illumination of the visual image on the substrate 14. In an embodiment, the sloped region of the microstructure may also be achieved by coating the microstructure region on the substrate 14.
The following embodiments relate to a transparent projection display having a partially or directionally transparent screen. In this display, a conventional full-color optical projector (or monochromatic scanner) can be used with the partially or directionally transparent screen to display an optical image. A partially or directionally transparent screen may have dual characteristics. First, it may be sufficiently transparent to allow visual penetration of ambient light. Second, it may be filled or coated with small reflective particles or microstructures that deflect or scatter the projected optical image, so that it serves as a display screen. Such particles and microstructures do not completely block the visible view through the window.
There are a variety of ways to prepare a partially or directionally transparent screen, depending on the embodiment. Fine particles of 1 nanometer to 10 micrometers may be filled into a transparent or translucent glass or plastic substrate. A transparent or translucent glass or plastic substrate may be coated with fine particles of 1 nanometer to 10 micrometers. A transparent or translucent thin glass sheet or plastic film may be filled with fine particles of 1 nanometer to 10 micrometers. A transparent or translucent thin glass sheet or plastic film may be coated with fine particles of 1 nanometer to 10 micrometers. A diffusing grid may be embedded in or patterned on the surface of a transparent or translucent glass or plastic sheet.
Both organic and inorganic particles or pigments can be applied in or on a partially or directionally transparent screen. Some examples include: titanium oxide, silica, alumina, latex, polystyrene particles. In embodiments, the particles may range in size from about 1 nanometer to about 10 micrometers. In embodiments, the particles may range in size from about 10 nanometers to about 1 micrometer. These light scattering materials may be uniformly dispersed in the glass or plastic body in a suitable concentration, or they may be coated on the glass or plastic surface in a suitable thickness. A protective cover layer or additional body layer may be applied over the particle coating to prevent damage to the surface from physical contact.
The glass used for the partially or directionally transparent screen may comprise an inorganic solid that is transparent or translucent to visible light. Examples of such inorganic solids are oxides and halides. The glass may include silicates, borosilicates, lead crystals, alumina, silica, fused silica, quartz, glass ceramics, metal fluorides, and other similar materials. These types of glass may be used as windows in rooms, buildings, and/or moving vehicles. Plastics for partially or directionally transparent screens may include organic and polymeric solids that are transparent or translucent to visible light. The thermoplastic used for the luminescent screen may comprise a special thermosetting solid, such as a transparent gel. Some examples of such plastics include polyacrylic, polycarbonate, polyethylene, polypropylene, polystyrene, PVC, silicone, and other similar materials. The microstructures may be integrally formed in or on the screen panel to deflect the projected image from an angle while allowing substantial visible transparency at normal viewing angles. The opaque diffusing grid may be embedded in a thin glass or plastic sheet. The area of the light scattering grid from an observer standing in front of the screen is substantially smaller than the area of the light scattering grid from the image projector.
The directional transparent screen structure according to the embodiment may provide many advantages. The directional transparent screen structure may be substantially transparent to an observer perpendicular to the screen or at a slight angle off the normal to the screen. The directional transparent screen structure may have high reflection and deflection of the projected image at an oblique angle to the screen. The columnar regions in the transparent body may be substantially opaque to the projected image at said oblique angle. This strong image scattering can enhance the contrast of the projected image on the display window while not obstructing direct viewing perpendicular to the screen. Directional transparent screen structures may be useful in vehicles where the driver's line of sight is generally orthogonal to the windshield. In an embodiment, the opaque columns penetrate the depth of the transparent body glass or plastic. In an embodiment, the size and density of microstructures on the screen may be varied to balance normal-view transparency and reflected image contrast. The depth and projection angle of the screen can also be varied to adjust contrast and transparency.
In embodiments, the surface of the screen may be molded (patterned) into various non-isotropic structures to achieve a "non-isotropic" screen. For example, a pattern of a cover layer having a certain thickness (e.g., 10 nm to 1 mm) may be applied to the screen surface by various printing, stamping, photolithography methods, micro-contact printing, and other similar methods. Such printing may create a pattern of very fine scattering features and structures on the surface of the screen that may allow for angular scattering and display of the projected image while allowing for substantially direct viewing through the screen at an angle substantially normal to the screen.
FIG. 11 shows an exemplary embodiment of a two-dimensional, beam-based FC display subsystem according to the present invention. The beam from the excitation source 610 preferably passes through a set of beam diameter control optics 612 and a 2-D acousto-optic scanner 615. The scan control interface unit 620 coordinates the functions of the direct digital synthesizer 622, the RF amplifier 625, and the beam diameter control optics 612. The processed image beam is projected onto the FC screen through the angle expander 650. To deliver a consistent and stable image on the FC screen, a beam splitter 632 deflects a portion of the beam into a position sensitive detector (PSD) 635, whose output is processed by the position sensitive detector processor 630 and fed back to the scan control interface unit 620. The closed-loop image feedback formed by 632, 635, 630, and 620 thus maintains the positional accuracy and pointing stability of the laser beam.
It will be apparent to those skilled in the art that various modifications and improvements can be made to the systems, methods, materials and apparatus of the FC-based display disclosed herein without departing from the spirit and scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this disclosure provided they come within the scope of the appended claims and their equivalents.
In an embodiment, a UV lamp or a shorter-wavelength visible lamp is used in the projector, which may be a liquid crystal display (LCD) or DLP projector. The projector may interface with a computer, PDA, DVD, VCR, TV, or other information input device. In embodiments, the luminescent screen may be a transparent or translucent glass or plastic plate filled with a fluorescent organic dye or an inorganic phosphor.
Transparent or substantially transparent displays may have many applications. For example, a transparent or substantially transparent display may display images on transparent or translucent windows of a moving vehicle, such as an automobile, motorcycle, aircraft, and boat; the image may be information about the vehicle state. The directions (e.g., GPS map) currently displayed on the dashboard electronic display may be projected onto a window (e.g., front glass, windshield) of the vehicle. The driver does not need to divert his eyes off the road to see the status and/or direction of the vehicle.
In embodiments, the luminescent screen may be a transparent or translucent glass or plastic plate filled with a fluorescent organic dye or inorganic phosphor. In embodiments, the luminescent screen may be a transparent or translucent glass or plastic plate coated with a fluorescent organic dye or inorganic phosphor. In embodiments, the luminescent screen may be a transparent or translucent thin glass sheet or plastic film filled with a fluorescent organic dye or inorganic phosphor. In embodiments, the luminescent screen may be a transparent or translucent thin glass sheet or plastic film coated with a fluorescent organic dye or inorganic phosphor. The glass used for the phosphor screen may comprise an inorganic solid that is transparent or translucent to visible light. Examples of such inorganic solids are oxides and halides. The glass may include silicates, borosilicates, lead crystals, alumina, silica, fused silica, quartz, glass ceramics, metal fluorides, and other similar materials. These types of glass may be used as windows in rooms, buildings, and/or moving vehicles. Plastics for luminescent screens may include organic and polymeric solids that are transparent or translucent to visible light. The thermoplastic used for the luminescent screen may comprise a special thermosetting solid, such as a transparent gel. Some examples of such plastics include polyacrylic, polycarbonate, polyethylene, polypropylene, polystyrene, PVC, silicone, and other similar materials.
By combining glass and plastic with fluorescent dyes, the glass and plastic can become a fluorescent projection display. Fluorescent dyes are organic molecules or materials that absorb higher-energy photons and emit lower-energy photons. To emit visible light, these molecules can absorb UV light or shorter-wavelength (e.g., violet or blue) visible light, typically in the wavelength range of 190 nm to 590 nm or in the wavelength range of 300 nm to 450 nm. Some examples of fluorescent dyes include, but are not limited to, commercial dye molecules available from various dye suppliers, including Lambda Physik and Exciton. Fluorescent dyes that may be used in transparent displays include pyrromethenes, coumarins, rhodamines, fluoresceins, and other aromatic hydrocarbons and their derivatives. In addition, there are many polymers containing unsaturated bonds that can be used as fluorescent materials in a transparent display. For example, some of them (MEH-PPV, etc.) have been used in optoelectronic devices such as polymer light emitting diodes (PLEDs).
By combining glass or plastic with the phosphor material, the glass or plastic can become a fluorescent projection display. The down-converting phosphor comprises inorganic or ceramic particles or nanoparticles including, but not limited to, metal oxides, metal halides, metal chalcogenides (e.g., metal sulfides), or mixtures thereof such as metal oxy-halides and metal oxy-chalcogenides. These inorganic phosphors have wide applications in fluorescent lamps and electronic monitors. They can be applied in converting shorter wavelength projection light (e.g. UV and blue) into longer wavelength visible light. They may be dispersed or coated on a transparent screen or window and excited by the corresponding shorter wavelength projection light to display a visible image.
Fluorescent phosphor or dye molecules can be excited to emit visible light by projection light in the range from ultraviolet light (e.g., wavelengths greater than 240 nanometers) to blue light (e.g., wavelengths less than 500 nanometers). The lamp for the projector may emit light of wavelengths in this range. Such lamps are commercially available (e.g., those used for tanning purposes). They may be halogen lamps, special incandescent lamps, and arc vapor lamps (e.g., mercury, xenon, deuterium, etc.). These lamps may include phosphors to convert shorter-wavelength UV to longer-wavelength UV.
Phosphors comprising a metal oxide host (e.g., metal silicates, metal borates, metal phosphates, metal aluminates), metal oxyhalides, oxysulfides, metal halides, metal sulfides, and chalcogenides can be applied to the projection fluorescent display. One example of a phosphor that may be used in a fluorescent display is the garnet series of phosphors: Ce-doped (YmA1-m)3(AlnB1-n)5O12, where 0 ≤ m, n ≤ 1, A comprises other rare earth elements, and B comprises B and/or Ga. In addition, phosphors comprising common rare earth elements (e.g., Eu, Tb, Ce, Dy, Er, Pr, and/or Tm) and transition or main group elements (e.g., Mn, Cr, Ti, Ag, Cu, Zn, Bi, Pb, Sn, and/or Tl) as fluorescence activators may be applied to projection fluorescent displays. Some undoped materials (e.g., metal tungstates such as those of Ca, Zn, and Cd, metal vanadates, and ZnO) are also luminescent materials and can be used in projection fluorescent displays.
The organic dye and the inorganic phosphor may be filled in or coated on a body of glass or plastic to prepare a fluorescent transparent screen. The dye molecules, if dissolved in the host, will not scatter visible light, but they will absorb some visible light and add some color hue to the host. In contrast, larger phosphor particles scatter visible light, which affects the optical transparency of the body. Embodiments relate to different methods to reduce scattering of visible light by the phosphor particles. In an embodiment, the size of the phosphor particles is reduced. In an embodiment, the concentration of phosphor particles is reduced and the particles are uniformly dispersed in the body. In an embodiment, a host with a refractive index close to that of the phosphor is selected to reduce scattering, or a phosphor with a refractive index close to that of the host is selected to reduce scattering.
Known vehicle systems use sensors, input from various devices, and onboard or remote processing to build information about the vehicle surroundings. For example, adaptive cruise control systems use sensors, such as radar devices, to track objects, such as target vehicles ahead of a host vehicle, and adjust vehicle speed based on a range and changes in range sensed relative to the target vehicle. A collision avoidance system or collision preparation system analyzes sensed objects in the path of the vehicle and takes action based on the perceived likelihood of a collision between the sensed objects and the vehicle. Lane keeping systems use available sensors and data to keep the vehicle within lane markings.
FIG. 12 shows a schematic diagram of a vehicle 710 system according to the present invention, the vehicle having been configured with an object tracking system. The exemplary vehicle includes a passenger vehicle for use on a highway, but it should be understood that the invention described herein may be applied to any vehicle or other system that seeks to monitor the location and trajectory of remote vehicles and other objects. The vehicle includes a control system that includes various algorithms and calibrations executed at various times. The control system is preferably a subset of an overall vehicle control architecture operable to provide coordinated vehicle system control. The control system is operable to monitor inputs from various sensors, synthesize pertinent information and inputs, and execute algorithms to control various actuators to achieve control targets, including such parameters as collision avoidance and adaptive cruise control. The vehicle control architecture includes a plurality of distributed processors and devices including system controllers that provide functions such as antilock braking, traction control, and vehicle stability.
Referring to fig. 12-14, an exemplary vehicle 710 includes a control system having: an observation module 722; a Data Association and Clustering (DAC) module 724 further comprising a Kalman filter 724A; and a Track Life Management (TLM) module 726 that records a track list 726A, the track list 726A including a plurality of object tracks. More specifically, the observation module includes sensors 714 and 716, their respective sensor processors, and interconnections between the sensors, sensor processors, and DAC module.
The exemplary sensing system preferably includes object-locating sensors comprising at least two forward-looking range sensing devices 714 and 716 and corresponding subsystems or processors 714A and 716A. The object-locating sensors may include a short-range radar subsystem, a long-range radar subsystem, and a forward vision subsystem. The object-locating sensing devices may include any range sensors, such as FM-CW (Frequency Modulated Continuous Wave) radar, pulse and FSK (Frequency Shift Keying) radar, and lidar (Light Detection and Ranging) devices, as well as ultrasonic devices, which rely on effects such as Doppler measurements to locate objects ahead. Possible object-locating devices include charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) video image sensors, as well as other known camera/video image processors, which use digital photographic methods to "view" objects ahead. These sensing systems are used to detect and locate objects in vehicle applications that may be used with such systems, including, for example, adaptive cruise control, collision avoidance, collision preparation, and side object detection. The exemplary vehicle system may also include a global position sensing (GPS) system.
These sensors are preferably arranged in the vehicle 710 in a relatively unobstructed position with respect to the scene in front of the vehicle. It should also be appreciated that each of these sensors provides an estimate of the actual position or state of the target object, where the estimate includes the estimated position and standard deviation. Thus, sensory detection and measurement of the position and state of an object is often referred to as "estimation". It should also be appreciated that the characteristics of these sensors are complementary, with some sensors being more reliable in estimating certain parameters than others. Conventional sensors have different operating ranges and angular coverage and can estimate different parameters within their operating ranges. For example, radar sensors may be commonly used to estimate range, range rate, and azimuth position of an object, but are generally unstable in estimating the extent to which an object is detected. Cameras (video cameras) with vision processors are more stable in estimating the shape and azimuthal position of an object, but are less effective in estimating the range and rate of change of range of an object. Scanning type lidar can effectively and accurately estimate range and azimuth position, but is generally unable to estimate range rate and is therefore inaccurate with respect to acquisition/identification of new objects. Ultrasonic sensors can estimate range but generally cannot estimate or calculate range rate and azimuth position. Furthermore, it should be understood that the performance of each sensor technology is affected by different environmental conditions. Thus, conventional sensors provide differences in parameters, but more importantly, the overlap of the operating ranges of these sensors creates the potential for sensory fusion.
Each object-locating sensor and subsystem provides an output including: the distance R, the change in distance over time R_dot, and the angle Θ, preferably measured relative to the longitudinal axis of the vehicle, which can be recorded as a measurement vector (o), i.e., sensor data. An exemplary short-range radar subsystem has a field of view (FOV) of 160 degrees and a maximum range of 30 meters. The exemplary long-range radar subsystem has a field of view of 17 degrees and a maximum range of 220 meters. The exemplary forward vision subsystem has a field of view of 45 degrees and a maximum range of fifty (50) meters. For each subsystem, the field of view is preferably oriented about the longitudinal axis of the vehicle 710. The vehicle is preferably positioned into a coordinate system, referred to as the XY coordinate system 720, in which the longitudinal axis of the vehicle 710 establishes the X-axis, with its origin (locus) at a point convenient for vehicle and signal processing, and the Y-axis is established by an axis in the horizontal plane perpendicular to the longitudinal axis of the vehicle 710, such that the horizontal plane is parallel to the ground.
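As an informal illustration of the measurement vector described above (not part of the disclosure), the sketch below represents a single sensor output o = (R, R_dot, Θ) and projects it into the vehicle XY coordinate system; the angle convention (Θ measured from the longitudinal X-axis) is an assumption.

```python
# Illustrative sketch only: a sensor measurement o = (R, R_dot, Θ) and its
# projection onto the vehicle XY coordinate system described above.
import math
from dataclasses import dataclass

@dataclass
class Measurement:
    r: float        # distance R to the object, meters
    r_dot: float    # range rate R_dot, meters/second
    theta: float    # azimuth Θ relative to the vehicle longitudinal axis, radians

    def to_xy(self):
        """Project the polar measurement onto the vehicle XY plane."""
        return (self.r * math.cos(self.theta),   # X: along the longitudinal axis
                self.r * math.sin(self.theta))   # Y: lateral, in the horizontal plane

# Example: an object 25 m ahead, closing at 2 m/s, 5 degrees off-axis.
o = Measurement(r=25.0, r_dot=-2.0, theta=math.radians(5.0))
print(o.to_xy())
```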
FIG. 14 illustrates an exemplary data fusion process in accordance with the present invention. As shown in FIG. 14, the illustrated observation module includes a first sensor 714 located and positioned at a discrete point A on the vehicle, a first signal processor 714A, a second sensor 716 located and positioned at a discrete point B on the vehicle, and a second signal processor 716A. The first processor 714A transforms the signals received from the first sensor 714 (represented as measurements OA) to determine the distance (RA), the rate of change of the distance over time (R_dotA), and the azimuth angle (ΘA) estimated at the time of measurement for each target object 730. Similarly, the second processor 716A transforms the signals received from the second sensor 716 (represented as OB) to determine a second set of distances (RB), rates of change of distance over time (R_dotB), and estimated azimuth angles (ΘB) for the object 730.
The exemplary DAC module 724 includes a controller 728 in which algorithms and associated calibrations (not shown) are stored and configured to receive estimated data from each sensor a, B, to cluster the data into similar observation tracks (i.e., observations of temporal consistency of the object 730 by the sensors 714 and 716 over a series of discrete time events), and to fuse these clustered observations to determine true track status. It will be appreciated that fusing data using different sensing systems and techniques will produce stable results. Again, it should be understood that multiple sensors may be employed in the technique. However, it should also be appreciated that an increase in the number of sensors results in an increase in the complexity of the algorithm and requires more computational power to produce these results within the same time frame. The controller 728 may be housed within the host vehicle 710, but may also be located at a remote location. With this in mind, the controller 728 is electrically coupled to the sensor processors 714A, 716A, but may also be wirelessly coupled via RF, LAN, infrared, or other conventional wireless techniques. The TLM module 726 is configured to receive the fused observations and store the observations within a list of traces 726A.
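Purely as a hedged illustration of the clustering/association step performed by the DAC module (the actual algorithm and calibrations are not given here), a minimal nearest-neighbor gating sketch might look as follows; the gate size is a hypothetical tuning parameter.

```python
# Minimal nearest-neighbor gating sketch of data association: each new
# observation is assigned to the closest existing track if it falls inside a
# gate, otherwise it is left to seed a new track. The gate value is illustrative.
import math

def associate(observations, tracks, gate=3.0):
    """observations, tracks: lists of (x, y) positions in the vehicle XY frame."""
    assignments = {}                      # observation index -> track index (or None)
    for i, (ox, oy) in enumerate(observations):
        best, best_d = None, float("inf")
        for j, (tx, ty) in enumerate(tracks):
            d = math.hypot(ox - tx, oy - ty)
            if d < best_d:
                best, best_d = j, d
        assignments[i] = best if best_d <= gate else None   # None -> start new track
    return assignments

# Example: one observation matches the existing track, one starts a new track.
print(associate([(24.8, 2.1), (60.0, -5.0)], [(25.0, 2.0)]))
```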
In multi-target tracking ("MTT") fusion, sensor registration, or "alignment" of sensors, includes determining the position, orientation, and system bias of the sensors along with target state variables. In a general MTT system with sensor registration, a target trajectory (tracking) is generated while the vehicle is in motion. The track (track) represents a physical object and includes a plurality of system state variables including, for example, position and velocity. Measurements from each individual sensor are typically associated with a particular target trajectory. Multiple sensor registration techniques are known in the art and will not be described in detail herein.
The schematic illustration of FIG. 12 includes the aforementioned object-locating sensors 714 and 716 mounted at positions A and B in an exemplary vehicle, preferably at the front of the vehicle 710. The target object 730 moves away from the vehicle, where t1, t2, and t3 represent three consecutive time frames. Lines ra1-ra2-ra3, rf1-rf2-rf3, and rb1-rb2-rb3 represent the position of the target as measured by the first sensor 714, the fusion processor, and the second sensor 716 at times t1, t2, and t3, respectively, using the measurements made by the sensors 714 and 716 located at points A and B, expressed in the form OA = (RA, R_dotA, ΘA) and OB = (RB, R_dotB, ΘB).
A well-known exemplary trajectory fusion process is disclosed, for example, in U.S. Patent No. 7,460,951, entitled "SYSTEM AND METHOD OF TARGET TRACKING USING SENSOR FUSION", which is incorporated herein by reference, and which allows the position of a device relative to the vehicle to be determined in the XY coordinate system. The fusion process includes measuring the target object 730 using the sensors 714 and 716 located at points A and B, in terms of OA = (RA, R_dotA, ΘA) and OB = (RB, R_dotB, ΘB). The fused location of the target object 730 is determined and represented as x = (RF, R_dotF, ΘF, Θ_dotF), where R denotes distance and Θ denotes angle as described above. The position of the forward object 730 is then converted into parameterized coordinates relative to the vehicle's XY coordinate system. The control system preferably uses the fused track trajectory (lines rf1, rf2, rf3), comprising a plurality of fused objects, as a benchmark (i.e., ground truth) to estimate the true sensor positions of the sensors 714 and 716. As shown in FIG. 12, the fused trajectory is given by the target object 730 at the time series t1, t2, t3. Using a large number of associated object correspondences, such as {(ra1, rf1, rb1), (ra2, rf2, rb2), (ra3, rf3, rb3)}, the true positions of sensors 714 and 716 at points A and B, respectively, can be calculated. To minimize the residual, the well-known least-squares calculation method is preferably used. In FIG. 12, the items denoted ra1, ra2, and ra3 represent the object map measured by the first sensor 714. The items denoted rb1, rb2, and rb3 represent the object map observed by the second sensor 716.
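The least-squares idea can be illustrated with a simplified sketch (assumptions: a pure translation bias only, no rotation; all values are invented), in which the fused track points rf are compared with one sensor's observations to estimate that sensor's constant position bias.

```python
# Hedged sketch of the least-squares idea described above: given corresponding
# fused-track points (rf) and raw observations from one sensor (e.g., ra),
# estimate a constant position bias for that sensor by minimizing the residual.
# A full implementation would also solve for rotation; this recovers translation only.

def estimate_sensor_bias(fused_pts, sensor_pts):
    """fused_pts, sensor_pts: equal-length lists of (x, y) correspondences,
    e.g. the rf and ra points at times t1, t2, t3."""
    n = len(fused_pts)
    # For a pure translation, the least-squares solution is the mean residual.
    bx = sum(f[0] - s[0] for f, s in zip(fused_pts, sensor_pts)) / n
    by = sum(f[1] - s[1] for f, s in zip(fused_pts, sensor_pts)) / n
    return bx, by

# Example with three time frames t1, t2, t3 (values are illustrative only).
fused  = [(25.0, 2.0), (27.0, 2.1), (29.0, 2.2)]
sensor = [(24.6, 2.4), (26.6, 2.5), (28.7, 2.6)]
print(estimate_sensor_bias(fused, sensor))
```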
Fig. 13 shows the flow of information used in creating a track list in accordance with the present invention. In fig. 13, the reference trajectory is preferably calculated and determined in the sensor fusion block 728 of fig. 14 described above. The processing of the sensor registration includes determining the relative positions of the sensors 714 and 716, as well as the relationship between the range of the vehicle identified by the XY coordinate system and their coordinate systems. The registration of the single object sensor 716 is now described. All object sensors are preferably treated similarly. For object mapping compensation, a sensor coordinate system or range, i.e. a UV coordinate system, and a vehicle coordinate range, i.e. an XY coordinate system, are preferably used. The sensor coordinate system (u, v) is preferably defined as follows: (1) the origin is at the center of the sensor, (2) the v-axis is along the longitudinal direction (boresight); and (3) the u-axis is perpendicular to the v-axis and points to the right. The vehicle coordinate system, as previously indicated as (x, y), wherein the x-axis represents the vehicle longitudinal axis and the y-axis represents the vehicle transverse axis.
The position of the trajectory (x) may be represented as (r) in the XY coordinate system. The sensor measurement (o) may be represented as (q) in the UV coordinate system. The sensor registration parameters (a) include rotation (R) and translation (R0) of the UV coordinate system.
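As a hedged illustration of applying the registration parameters a (rotation R and translation R0) described above, the following sketch maps a track position r in the XY frame into the sensor UV frame; the exact sign and axis conventions are assumptions chosen only for illustration.

```python
# Illustrative sketch of the registration mapping implied above: a track position
# r in the vehicle XY frame is expressed as a measurement q in the sensor UV
# frame using registration parameters (rotation angle, translation r0).
import math

def xy_to_uv(r, r0, angle):
    """r, r0: (x, y) tuples in the vehicle frame; angle: sensor yaw relative to
    the vehicle X-axis, radians. Returns (u, v) in the sensor frame."""
    dx, dy = r[0] - r0[0], r[1] - r0[1]
    c, s = math.cos(angle), math.sin(angle)
    v = c * dx + s * dy      # v-axis: along the sensor boresight
    u = s * dx - c * dy      # u-axis: perpendicular to boresight (sign is a chosen convention)
    return u, v

# Example: sensor mounted 2 m ahead of the vehicle origin, yawed 10 degrees.
print(xy_to_uv(r=(30.0, 1.5), r0=(2.0, 0.0), angle=math.radians(10.0)))
```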
FIG. 15 shows an exemplary data flow enabling joint tracking and sensor registration in accordance with the present invention. The method begins with the receipt of sensor data. The data correlation module matches the sensor data with the predicted location of the target. The joint tracking and registration module combines the previous estimates (i.e., a priori) and the new data (i.e., the matched measurement-trajectory pairs) and updates the target trajectory estimates and sensor registration data in the database. The time propagation processing module predicts a target trajectory or sensor registration parameter over a next time cycle through a dynamic model based on historical sensor registration, trajectory, and current vehicle kinematics. The sensor registration parameters are typically assumed to be substantially invariant over time. The confidence of the registration parameters accumulates over time. However, when a significant sensor registration change (e.g., vehicle collision) is detected, a priori information about the registration will be reset to zero.
The object trajectory may be used for a variety of purposes, including adaptive cruise control, where the vehicle adjusts speed to maintain a minimum distance from the vehicle in the current path, as described above. Another similar system to which object trajectories may be applied is the Collision Preparation System (CPS), where the identified object trajectories are analyzed to identify a likely imminent or imminent collision based on the trajectory motion relative to the vehicle. The CPS warns the driver of an impending collision and reduces the severity of the collision by automatic braking if the collision is deemed unavoidable. A method is disclosed for utilizing a multi-object fusion module with CPS that provides countermeasures when a collision is determined to be imminent, such as seat belt tightening, throttle idle, automatic braking, airbag preparation, adjustment of head restraint, horn and headlight activation, adjustment of pedals or steering column, adjustment based on estimated relative speed of the collision, suspension control adjustment, and adjustment of stability control system.
FIG. 16 schematically illustrates an exemplary system in which sensor inputs are fused into object trajectories useful in a collision preparation system in accordance with the present invention. Inputs related to objects in the vehicle surroundings are monitored by the data fusion module. The data fusion module analyzes, filters, or prioritizes the various inputs with respect to their reliability, and the prioritized or weighted inputs are added together to produce a trajectory estimate for an object in front of the vehicle. These object trajectories are then input to a collision threat assessment module, where each trajectory is used to assess the likelihood of a collision. Such a likelihood of collision may be evaluated, for example, against a threshold of likelihood of collision, and collision countermeasures may be activated if a collision is determined to be possible.
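For illustration only, a minimal threat-assessment sketch might estimate a time-to-collision from a fused track and compare a derived likelihood against a threshold; the likelihood mapping, horizon, and threshold below are invented placeholders, not values from the disclosure.

```python
# Hypothetical sketch of the threat-assessment step: estimate a time-to-collision
# (TTC) from a fused track's relative position and closing speed, map it to a
# likelihood, and compare against a threshold before activating countermeasures.
import math

def collision_likelihood(x, y, vx, vy, horizon=5.0):
    """x, y: object position relative to the host (m); vx, vy: relative velocity
    (m/s, negative vx means closing). Returns a value in [0, 1]."""
    rng = math.hypot(x, y)
    closing_speed = -(x * vx + y * vy) / rng       # velocity projected onto the line of sight
    if closing_speed <= 0.0:
        return 0.0                                 # object is not approaching
    ttc = rng / closing_speed
    return max(0.0, min(1.0, 1.0 - ttc / horizon)) # shorter TTC -> higher likelihood

likelihood = collision_likelihood(x=12.0, y=0.5, vx=-10.0, vy=0.0)
if likelihood > 0.7:                               # threshold chosen for illustration
    print("activate countermeasures", likelihood)
```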
As shown in fig. 16, the CPS uses its range sensors (e.g., radar and lidar) and cameras to continuously monitor the surrounding environment and take appropriate countermeasures to avoid events or conditions that develop into collisions. The collision threat assessment generates an output for the system actuator to respond.
As shown in fig. 16, the fusion module is adapted to integrate inputs from the various sensing devices and generate a fused trajectory of an object in front of the vehicle. The fused trajectory generated in FIG. 16 includes data estimates of the relative position and trajectory of the object with respect to the vehicle. This data estimate based on radar and other ranging sensor inputs is useful, but includes the inaccuracies and imprecision of the sensor devices used to generate the trajectory. As described above, different sensor inputs may be leveraged to improve the accuracy of the estimates involved in the generated trajectory. In particular, applications with invasive consequences, such as automatic braking and potential airbag deployment, require high accuracy in the prediction of an imminent collision, since a false determination can have a high impact on vehicle drivability and a false indication can render the safety system ineffective.
Vision systems provide an alternative source of sensor input for use in vehicle control systems. Methods of analyzing visual information are well known in the art and include pattern recognition, corner detection, vertical edge detection, vertical object recognition, and other methods. However, it should be understood that the high resolution visual image of the field of view in front of the vehicle, which is refreshed at high speed, needed to identify real-time motion, includes a significant amount of information that needs to be analyzed. Real-time analysis of visual information can be expensive. A method of fusing input from a vision system (with fused trajectories produced by, for example, the exemplary trajectory fusion method described above) is disclosed to focus visual analysis on a portion of the visual information most likely to pose a collision threat and use the fused analysis to warn of a likely impending collision event.
Fig. 17 schematically shows an exemplary image fusion module according to the present invention. The fusion module of fig. 17 monitors the input of distance sensor data (including object trajectory) and camera data. The object trajectory information is used to extract an image patch (image patch) or a defined area of interest in the visual data corresponding to the object trajectory information. Subsequently, the area in the image block is analyzed, and features or patterns in the data indicating the object in the image block are extracted. The extracted features are then classified according to any number of classifiers. Exemplary classifications may include the following: fast moving objects such as moving vehicles; objects moving at low speed such as pedestrians; and stationary objects such as street signs. The data comprising the classifications is then analyzed according to data correlations to form a visual fusion-based trajectory. These trajectories and associated data about the (image) blocks are then stored for iterative comparison with new data and prediction of relative vehicle motion to give a possible or imminent collision event. Furthermore, the region of interest reflecting the previously selected image patch may be communicated to the module performing image patch extraction to provide continuity of the iterative visual data analysis. In this way, distance data (range data) or distance track (range track) information is superimposed on the image plane to improve the prediction or likelihood analysis of a collision event.
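A hedged sketch of the image-patch extraction step follows; the pinhole projection, camera intrinsics, and patch size are assumptions used only to show how a range track could be superimposed on the image plane to select a region of interest.

```python
# Hedged sketch: a fused object track is projected into the image plane using a
# placeholder pinhole model with assumed intrinsics, and a region of interest is
# cropped around the projected point.
import numpy as np

def extract_patch(image, track_xy, focal=800.0, cx=640.0, cy=360.0,
                  cam_height=1.2, patch=64):
    """image: HxWx3 array; track_xy: (x_forward, y_lateral) in meters."""
    x_fwd, y_lat = track_xy
    u = int(cx + focal * y_lat / x_fwd)            # column: lateral offset scaled by depth
    v = int(cy + focal * cam_height / x_fwd)       # row: approximate ground point under the object
    half = patch // 2
    return image[max(0, v - half):v + half, max(0, u - half):u + half]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder camera frame
roi = extract_patch(frame, track_xy=(25.0, 1.5))
print(roi.shape)
```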
Fig. 19 shows a superposition of distance data on the corresponding image plane according to the invention, which is useful in the system-internal analysis of various target objects. The shaded strips are radar tracks superimposed on the image from the forward-view camera. The position and image extraction module extracts image patches around the range sensor tracks. The feature extraction module computes features of each image patch using the following transformations: edges, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), Harris corner detector, or projection of the patch onto a linear subspace. The classification module takes the extracted features as input and provides them to a classifier to determine whether the image patch encloses an object. The classification determines a label for each image patch. For example, in fig. 19, boxes A and B are identified as vehicles, while the unmarked boxes are identified as roadside objects. The prediction processing module uses the historical information of the object (i.e., the position, image patch, and label from the previous cycle) to predict the current values. Data association links the current measurement with the predicted object, or determines that the source of the measurement (i.e., position, image patch, and label) is a specific object. Finally, the object tracker is activated to generate an updated position, which is saved to the object track file.
FIG. 18 schematically depicts an exemplary bank of Kalman filters operating to estimate the position and velocity of a group of objects in accordance with the present invention. Different filters are used for constant-speed (coasting) targets, targets with high longitudinal maneuvers, and stationary targets. A Markov decision process (MDP) model is used to select the filter with the most likely measurement based on the observation and the target's previous velocity profile. This multi-model filtering scheme reduces the tracking reaction time, which is important for the CPS function.
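As an illustrative aside (not the disclosed filter bank itself), a single constant-velocity Kalman filter of the kind that could populate such a bank can be sketched as follows; the noise parameters are arbitrary and the MDP-based model selection is not shown.

```python
# Compact constant-velocity Kalman filter sketch for one filter in the bank
# (state: position and velocity along one axis). Noise values are illustrative.
import numpy as np

def kalman_cv_step(x, P, z, dt=0.1, q=1.0, r=0.5):
    """x: state [position, velocity]; P: 2x2 covariance; z: measured position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([24.0, -2.0]), np.eye(2)           # initial guess: 24 m ahead, closing at 2 m/s
x, P = kalman_cv_step(x, P, z=23.7)
print(x)
```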
The reaction to a potential collision event may be metered based on the increased likelihood. For example, a light autobrake may be used where the likelihood of determination is low, and stronger action may be taken in response to determining a high threshold likelihood.
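The metering idea can be illustrated with a trivial sketch; the tiers and thresholds below are assumptions chosen only to show graduated countermeasures.

```python
# Illustrative mapping of collision likelihood to graduated countermeasures.
def select_countermeasure(likelihood):
    if likelihood > 0.9:
        return ["full automatic braking", "seat belt pretension", "airbag preparation"]
    if likelihood > 0.6:
        return ["light automatic braking", "audible warning"]
    if likelihood > 0.3:
        return ["visual warning"]
    return []

print(select_countermeasure(0.65))
```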
Furthermore, it should be appreciated that improved accuracy of the likelihood judgment may be achieved through repeated training of the alert model. For example, if a warning is issued, a review option may be provided to the driver via an audible prompt, an on-screen inquiry, or any other input method, asking the driver to confirm whether the imminent-collision warning was appropriate. Various methods are known in the art to adapt to proper warnings, false warnings, or missed warnings. For example, machine learning algorithms known in the art may be used to adaptively train the program, assigning weights and emphasis to alternative calculations based on the nature of the feedback. Further, fuzzy logic may be used to adjust the inputs to the system according to a scalable factor based on the feedback. In this way, the accuracy of the system may be improved over time and according to the operator's specific driving habits.
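As a loose illustration of feedback-driven adaptation (not the disclosed learning algorithm), a simple weight update from driver confirmations might look like the following; the update rule and learning rate are invented.

```python
# Hedged sketch of adapting a warning weight from driver feedback: confirmed
# warnings nudge the weight up, rejected (false) warnings nudge it down.
def update_warning_weight(weight, driver_confirmed, learning_rate=0.05):
    target = 1.0 if driver_confirmed else 0.0
    new_weight = weight + learning_rate * (target - weight)
    return min(1.0, max(0.0, new_weight))          # keep the weight in [0, 1]

w = 0.5
for feedback in [True, True, False, True]:         # driver responses to four warnings
    w = update_warning_weight(w, feedback)
print(round(w, 3))
```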
It should be understood that a similar approach to CPS application may be used in collision avoidance systems. Typically such systems include warnings to the operator, automatic brake actuation, automatic lateral vehicle control, changing suspension control systems, or other actions that assist the vehicle in avoiding a perceived potential collision.
Furthermore, various methods are known to achieve lane keeping or to place the vehicle within a lane by means of sensor inputs. For example, one method may analyze visual information including paint lines on the road surface and use these markings to place the vehicle in the lane. Some methods use the trajectories of other vehicles to synthesize or assist in establishing lane geometry relative to the vehicle. A GPS device may be used with a 3D map database to estimate the position of the vehicle from global GPS coordinates and to overlay that position onto known road geometry parameters.
An exemplary method for generating an estimate of a geometric parameter of a lane of vehicle travel on a roadway is disclosed. The method comprises the following steps: monitoring data from a global positioning device; monitoring map waypoint data representing the projected driving route based on the starting point and the destination; monitoring camera data from the vision subsystem; monitoring kinematic data of the vehicle, the data including a vehicle speed and a vehicle yaw rate; determining lane geometry parameters within the vehicle area based on the map waypoint data and the map database; determining a vehicle position relative to the lane geometry based on the lane geometry, data from the global positioning device, and the camera data; determining a road curvature at the vehicle location based on the vehicle location, the camera data, and the vehicle kinematics data; determining a vehicle orientation and a vehicle lateral offset from a center of a driving lane based on road curvature, camera data, and vehicle kinematics; and utilizing the vehicle position, road curvature, vehicle orientation, and vehicle lateral offset in a control scheme of the vehicle.
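Read as a processing pipeline, the steps above can be sketched informally as follows; the simple curvature and lateral-offset formulas, input names, and sign conventions are assumptions standing in for the full method.

```python
# Assumption-laden sketch of the lane-geometry estimation steps listed above.
# curvature ≈ yaw_rate / speed and the marking-based lateral offset are simple
# placeholders for the disclosed fusion of map, GPS, camera and kinematic data.
def estimate_lane_state(speed, yaw_rate, left_marking_m, right_marking_m,
                        heading_to_lane_rad):
    """left_marking_m / right_marking_m: camera-derived lateral distances from
    the vehicle centerline to the lane markings, in meters."""
    lane_width = left_marking_m + right_marking_m
    lateral_offset = (right_marking_m - left_marking_m) / 2.0   # + means left of lane center (assumed sign)
    curvature = yaw_rate / speed if speed > 1.0 else 0.0        # 1/m, kinematic approximation
    return {
        "lane_width_m": lane_width,
        "lateral_offset_m": lateral_offset,
        "road_curvature_1pm": curvature,
        "vehicle_orientation_rad": heading_to_lane_rad,
    }

print(estimate_lane_state(speed=25.0, yaw_rate=0.02,
                          left_marking_m=1.6, right_marking_m=1.9,
                          heading_to_lane_rad=0.01))
```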
FIG. 20 illustrates an exemplary vehicle using sensors to acquire road geometry data ahead of the vehicle in accordance with the present invention. The exemplary vehicle includes a passenger vehicle for use on a highway, but it should be understood that the invention described herein may be applied to any vehicle or other system that seeks to monitor the location and trajectory of remote vehicles and other objects. The vehicle includes a control system that includes various algorithms and calibrations executed at various times. The control system is preferably a subset of an overall vehicle control architecture that provides coordinated vehicle system control. The control system monitors inputs from various sensors, synthesizes pertinent information and inputs, and executes algorithms to control various actuators to achieve control targets, thereby implementing, for example, collision avoidance and adaptive cruise control. The vehicle control architecture includes a plurality of distributed processors and devices, including system controllers, that provide functions such as antilock braking, traction control, and vehicle stability.
In the exemplary embodiment of fig. 20, vehicle 760 includes a vision subsystem 766. The vision subsystem 766 uses a camera or imaging device that generates digital images representing the area in front of the vehicle. Data from vision subsystem 766 is used to describe the state in front of the vehicle and is translated into an XY coordinate system 770 with reference to the center axis of vehicle 760. The field of view of the exemplary vision subsystem is indicated by the dashed line. The driving lanes on the roadway are described in terms of lane markings 775A and 775B and descriptive common features that may be visually discovered and are used to describe lane geometry with respect to vehicle 760. Thus, information derived from analysis of the images or camera data may be used as a status regarding forward travel of vehicle 760 using methods known to those skilled in the art.
Each processor within the system is preferably a general-purpose digital computer, which generally includes: a microprocessor or central processing unit, Read Only Memory (ROM), Random Access Memory (RAM), Electrically Programmable Read Only Memory (EPROM), a high speed clock, analog to digital (A/D) and digital to analog (D/A) circuitry, and input/output circuitry and devices (I/O) and appropriate signal conditioning and buffer circuitry. Each processor has a set of control algorithms, comprising resident program instructions and calibrations stored in ROM and executed to provide the respective functions.
The algorithms described herein are typically executed in predetermined loop cycles such that each algorithm is executed at least once each loop cycle. Algorithms stored in the non-volatile memory devices are executed by one of the central processing units and are operable to monitor inputs from the sensing devices and execute control and diagnostic routines to control operation of the respective device, using preset calibrations. Loop cycles are typically executed at regular intervals, for example each 3, 6.25, 15, 25 and 100 milliseconds during ongoing vehicle operation. Alternatively, algorithms may be executed in response to occurrence of an event.
Sensors used by vehicle 760, such as vision subsystem 766 and other radar or ranging devices, are preferably positioned within vehicle 760 in locations relatively unobstructed with respect to the scene in front of the vehicle. It will be appreciated that each sensor provides an estimate of the actual details of the road or of objects on the road in front of the vehicle. It should be understood that these estimates are not exact locations, and a deviation from nominal is possible for each estimate. It will also be appreciated that the characteristics of these sensors are complementary, with some sensors being more reliable for certain parameter estimates than others. Conventional sensors have different operating ranges and angular coverage and can estimate different parameters within their operating ranges. For example, radar sensors are commonly used to estimate the range, range rate, and azimuth position of objects, but are generally unstable in estimating the extent of detected objects. Cameras with image processors are more stable in estimating the shape and azimuth position of an object, but are less effective in estimating an object's range and range rate. Scanning lidar can effectively and accurately estimate range and azimuth position, but generally cannot estimate range rate and is therefore less accurate with respect to the acquisition/identification of new objects. Ultrasonic sensors can estimate range but generally cannot estimate range rate or azimuth position. Sensors describing vehicle kinematics, such as speed and yaw rate, are not precise enough and may be unreliable when tracking small changes in vehicle motion. Furthermore, it should be understood that the performance of each sensor technology is affected differently by environmental conditions. Thus, conventional sensors differ in the parameters they provide, and the overlap of their operating ranges creates opportunities for sensor fusion.
The preferred control module includes a controller having algorithms and associated calibrations stored therein and configured to receive the estimated data from the available sensors to cluster the data into usable estimates of the vehicle forward state and fuse the clustered observations to determine the desired lane geometry and relative vehicle position estimates. It should be appreciated that fusing data using different sensing systems and techniques produces stable results. It should also be understood that any number of sensors may be used in the technology.
A method of creating and maintaining estimates of road and lane geometry within a system is presented, in which historical measurements are used to estimate or predict subsequent trajectory data. Such an exemplary system makes an estimate based on a function at time T to describe the state of the system at time T + 1. Typically, to support real-time estimation, an array of information representing a Gaussian distribution is used to estimate the effect of unknown errors. Such a system makes fusion and collection of estimates of road conditions ahead of the vehicle possible. However, it should be understood that such systems using historical data and Gaussian distributions include inherent error based on their averaging and normal-distribution assumptions. For example, in a lane geometry estimation operation, an estimated safe driving lane is established for vehicle travel; a straight lane behind the vehicle, however, has no actual mitigating effect on a sharp turn in the road ahead of the vehicle. Divergence of the data describing the lane ahead of the vehicle is not necessarily improved by applying a random vector with a Gaussian distribution to account for it. Methods using historical averaging and normal or Gaussian distributions, such as those relying on Kalman filters, often include an error factor that causes a time lag at transitions or changes in road geometry.
An alternative method is disclosed for generating estimates of lane geometry and of vehicle position and orientation relative to the lane, without incurring errors based on historical data or normal distributions, by fusing current measurements from GPS data, the vision camera subsystem, and vehicle kinematics.
General lane geometry parameters are information that is readily available through the use of GPS devices and a 3D map database. Given an approximate location from the GPS device, the localized road geometry parameters can be converted into a series of road shape points. Similarly, GPS coordinates comprising a global latitude measurement and a global longitude measurement may be obtained from the GPS device. Vehicle kinematics, including at least vehicle speed and yaw rate, may be obtained from sensors monitoring vehicle operation and/or from accelerometer readings. Camera data can be used to position the vehicle within the actual lane of travel. Lane sensing coefficients are defined from the camera data (i.e., y = a + bx + cx^2 + dx^3, where x is the longitudinal offset along the lane and y is the lateral offset from the center of the lane). From this data, the forward lane estimation module can estimate the curvature of the lane, the lateral offset from the lane center, and the vehicle orientation relative to the lane tangent.
Fig. 21 shows an exemplary forward lane estimation process according to the present invention. The exemplary process includes a map geometric model module, a vehicle pose location module, a curvature estimation module, and a vehicle lateral tracking module. The map geometric model module inputs map waypoints, determined by methods known in the art that include determining a generalized path from a starting or current point to a destination or via point in a map database, and outputs lane geometry parameters within the vehicle region. The lane geometry parameters can be described as an arc comprising a geometric representation of the roads in the area. The vehicle pose location module inputs the road geometry from the map geometric model module, GPS coordinates from the GPS device, and camera data from the vision subsystem, and outputs an estimated vehicle position within the vehicle region relative to the lane geometry. The vehicle position relative to the lane geometry or arc can be described by an arc length parameter (s_m). The curvature estimation module inputs the camera data, vehicle kinematic data such as vehicle speed and yaw rate from vehicle sensors, and s_m, and outputs a measure of the curvature (K) of the road at the vehicle location. Finally, the vehicle lateral tracking module inputs the camera data, the vehicle kinematic data, and K, and outputs data regarding the vehicle position with reference to the center of the current lane and the vehicle angular orientation with reference to the current heading of the lane. In this way, current inputs relating to the current position and travel of the vehicle can be used to generate data relating to the lane geometry within the vehicle area and to the vehicle's position and orientation relative to the lane.
As described above, the map geometric model module inputs map waypoints and outputs lane geometric parameters within the vehicle region. Specifically, the map geometric model module monitors inputs of map shape points described in a map database and constructs a geometric model representing the shape points. FIG. 22 illustrates an exemplary process in which information from a map database is used to construct a geometric model of a road in a vehicle region in accordance with the present invention. An exemplary process includes collecting map shape points from a map database describing road geometry parameters. The map database provides map shape points in the form of global coordinates that typically describe a location in the form of a latitude location, a longitude location, and an altitude or elevation. The global coordinates are then converted to a local coordinate system, typically identifying points that approximate the vehicle position as static reference points, and describing any other position as north-going displacements from the reference point and east-going displacements from the reference point. The map shape points are then fitted with splines to produce a geometric shape or arc approximating the road geometry being represented. Finally, the tangent and curvature of the fitted spline function are determined at the estimated position of the vehicle.
An exemplary determination within the map geometric model is described. Let {(lat_i, lon_i) | i = 1, ..., N} be the shape points. A point is picked as a reference point so that the shape points can be converted to local coordinates {(e_i, n_i) | i = 1, ..., N}, representing the eastward and northward displacements from the reference point. Defining the series {(s_i, e_i, n_i) | i = 1, ..., N} with s_1 = 0 and

s_i = s_(i-1) + sqrt((e_i - e_(i-1))^2 + (n_i - n_(i-1))^2) for i >= 2,

a two-dimensional cubic spline function is obtained to fit the shape points as follows:

[e, n]^T = f(s)   [1]

where s is the arc length parameter and e and n are the east and north components of the displacement, respectively. Subsequently, the gradient vector at s is calculated as follows:

[e', n']^T = f'(s)   [2]

and the direction angle is calculated as follows:

ξ = atan2(n', e')   [3]

Finally, the curvature k at s is calculated as follows:

k = (e'n'' - n'e'') / (e'^2 + n'^2)^(3/2)   [4]

where

[e'', n'']^T = f''(s).
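The following Python sketch illustrates equations 1 through 4 under simplifying assumptions: the shape points are assumed to already be converted to local east/north displacements, and the use of SciPy's CubicSpline, the function names, and the sample points are illustrative choices rather than part of the disclosed system.

```python
# Sketch of the map geometric model (equations 1-4), assuming shape points
# already converted to local east/north displacements. Names are illustrative.
import numpy as np
from scipy.interpolate import CubicSpline

def fit_road_spline(e_pts, n_pts):
    """Fit e(s) and n(s) cubic splines over the chord-length parameter s."""
    e_pts = np.asarray(e_pts, dtype=float)
    n_pts = np.asarray(n_pts, dtype=float)
    # s_1 = 0, s_i = s_(i-1) + sqrt((e_i - e_(i-1))^2 + (n_i - n_(i-1))^2)
    ds = np.hypot(np.diff(e_pts), np.diff(n_pts))
    s = np.concatenate(([0.0], np.cumsum(ds)))
    return s, CubicSpline(s, e_pts), CubicSpline(s, n_pts)

def heading_and_curvature(fe, fn, s_query):
    """Equations 2-4: gradient, direction angle xi, and curvature k at s."""
    e1, n1 = fe(s_query, 1), fn(s_query, 1)   # first derivatives e', n'
    e2, n2 = fe(s_query, 2), fn(s_query, 2)   # second derivatives e'', n''
    xi = np.arctan2(n1, e1)                    # [3]
    k = (e1 * n2 - n1 * e2) / (e1**2 + n1**2) ** 1.5   # [4]
    return xi, k

# Example: a gentle arc described by five hypothetical shape points.
s, fe, fn = fit_road_spline([0, 25, 50, 74, 96], [0, 1, 4, 9, 16])
print(heading_and_curvature(fe, fn, s[2]))
```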
as described above, the vehicle attitude location module inputs the lane geometry from the map geometry model module, the GPS coordinates from the GPS device, data from the camera, and outputs an estimated vehicle position within the vehicle region with respect to the lane geometry. Those skilled in the art will appreciate that the problem may be described by a location in a map for monitored GPS data. The map geometric parameters are represented by spline functions, such as the functions expressed by equation 1. The spline describes discrete locations where a lane of the road is present. Points measured by GPS data
Figure GSA00000136952800333
Exemplary form of (1) returns. Some inaccuracy and inaccuracy of bias is normal in GPS devices. Errors are also inherent in spline functions. P is rarely exactly consistent with the map geometric spline. The spline functions describe points in the lane, such as the center of the lane, and the actual vehicle position often deviates from the center of the lane by a measurable amount. The approximate position of the vehicle on the map must be determined based on P and the estimated road geometry in the area. One exemplary solution to correct for deviations between P and the geometric representation of the road is to find the closest point [ e ]m,nm]T=f(sm) So that sm=argsmin P-f(s) l. The exemplary method pairs approximations smAre useful and can be applied iteratively to find the position of the vehicle in the road curve and to improve the estimated position when the monitored data changes.
FIG. 23 illustrates an exemplary iterative method according to the present invention for finding the approximate position of the vehicle relative to the estimated road geometry. Let s_0 be the initial estimate of s_m. The correction of the arc length parameter can be expressed as follows:

Δs = (P - P_m)^T P'_m / ||P'_m||   [5]

where P_m = f(s_0) and P'_m = f'(s_0). In other words, the correction Δs is the projection of (P - P_m) onto the unit gradient vector at the estimated position s_0.
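A minimal Python sketch of this iterative correction follows, reusing the splines fe(s) and fn(s) from the previous sketch; the fixed iteration count and the example values are illustrative assumptions.

```python
# Minimal sketch of the iterative arc-length correction of equation 5.
import numpy as np

def locate_on_curve(P, fe, fn, s0, iterations=5):
    """Refine s so that f(s) is the closest curve point to the GPS point P."""
    s = s0
    for _ in range(iterations):
        Pm = np.array([fe(s), fn(s)])               # P_m = f(s)
        Pm_prime = np.array([fe(s, 1), fn(s, 1)])   # P'_m = f'(s)
        # delta_s = (P - P_m)^T P'_m / ||P'_m||   [5]
        delta_s = float((P - Pm) @ Pm_prime) / np.linalg.norm(Pm_prime)
        s += delta_s
    return s

# Example usage (assumes fe, fn from the previous sketch are in scope):
# s_hat = locate_on_curve(np.array([48.0, 5.0]), fe, fn, s0=40.0)
```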
Those skilled in the art will appreciate that GPS measurements are updated less frequently than typical vehicle sensor readings. An exemplary refresh rate of 1 Hz is common for most on-board GPS receivers. Furthermore, updates are not always received and may be disrupted in urban areas or other areas where the view of the satellite signals is obscured. Filtering techniques may be used to compensate for the slow rate of GPS signal updates.
An exemplary vehicle pose location module uses a Kalman filter. The vehicle pose is modeled as a vector consisting of an east displacement (e), a north displacement (n), an orientation relative to the lane (φ), and the arc length (s). Due to inertia, the vehicle pose does not change abruptly. The following constant turning model is therefore assumed:

e' = e + v cos(φ + ξ) ΔT + w_1
n' = n + v sin(φ + ξ) ΔT + w_2
φ' = φ + ω ΔT - κ ΔT + w_3      [6]
s' = s + v ΔT

where v is the vehicle speed; ω is the vehicle yaw rate; ΔT is the time increment from the previous cycle; ξ is the current orientation of the road (cf. equation 3); κ is the current curvature of the road based on the map curve; and w_1, w_2, and w_3 are process noise terms representing unmodeled disturbances.
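A brief Python sketch of the prediction step implied by equation 6 is shown below; only the mean is propagated (the process noise terms w_1 through w_3 are omitted), and the example values are hypothetical.

```python
# Sketch of the constant turning model prediction step (equation 6), mean only.
import numpy as np

def predict_pose(e, n, phi, s, v, omega, xi, kappa, dT):
    """Propagate the vehicle pose (e, n, phi, s) over one time increment dT."""
    e_next = e + v * np.cos(phi + xi) * dT
    n_next = n + v * np.sin(phi + xi) * dT
    phi_next = phi + omega * dT - kappa * dT   # as written in equation 6
    s_next = s + v * dT
    return e_next, n_next, phi_next, s_next

# Example: a 20 ms kinematic-data cycle at 20 m/s with a mild yaw rate.
print(predict_pose(0.0, 0.0, 0.02, 100.0, 20.0, 0.05, 0.0, 0.001, 0.02))
```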
FIG. 24 illustrates an exemplary vehicle pose determination process in accordance with the present invention. The process is triggered repeatedly as new data from the GPS, the vehicle kinematic sensors, or the camera device is monitored. Exemplary cycle times for the different data sources include 1 second for GPS data, 20 ms for kinematic data, and 50 ms for camera data. The time increment ΔT is calculated from the difference in time stamps between the current and previous cycles. The predicted vehicle pose is then calculated using equation 6. When GPS data is available, the measurement update for the vehicle pose directly uses the following GPS measurement equations:
e_gps = e + k_1   [7]

n_gps = n + k_2   [8]

where (e_gps, n_gps) is the GPS-measured position of the vehicle, and k_1 and k_2 are measurement noise. After updating the vehicle pose using the GPS measurements, the corrected arc length parameter (s) is calculated using equation 5. This step is important for obtaining correct κ and ξ values by eliminating the accumulated error caused by the dead reckoning process of equation 6.
When camera data is available, the following measurement formula can be used by the Kalman filter:
a = d + k_3   [9]

b = φ + k_4   [10]

where a and b are camera lane sensing parameters; d is the perpendicular distance from the current vehicle position to the center of the lane represented by the map curve; and k_3 and k_4 are unmodeled measurement noise. Let P_m be the point on the map curve closest to the current vehicle position denoted by P = (e, n), and let the vector m denote the normal of the map curve at P_m. The perpendicular distance can then be expressed as d = (P - P_m)^T m, where the normal m is calculated by rotating the gradient vector:

m = [0, -1; 1, 0] [e', n']^T.
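The camera measurement geometry can be sketched in a few lines of Python; normalizing the rotated gradient to a unit normal is an added assumption, as are the function names, which reuse the splines from the earlier sketches.

```python
# Sketch of the perpendicular distance d = (P - P_m)^T m to the lane center,
# where m is the map-curve normal obtained by rotating the gradient 90 degrees.
import numpy as np

def lane_offset(P, fe, fn, s_m):
    """Perpendicular distance from P to the map curve at arc length s_m."""
    Pm = np.array([fe(s_m), fn(s_m)])
    grad = np.array([fe(s_m, 1), fn(s_m, 1)])        # [e', n']
    m = np.array([[0.0, -1.0], [1.0, 0.0]]) @ grad   # 90-degree rotation
    m = m / np.linalg.norm(m)                        # unit normal (assumed)
    return float((P - Pm) @ m)

# Example usage (assumes fe, fn, s_hat from the previous sketches):
# d = lane_offset(np.array([48.0, 5.0]), fe, fn, s_hat)
```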
As described above, the curvature estimation module inputs the camera data, the vehicle kinematic data (e.g., vehicle speed and yaw rate) from vehicle sensors, and s_m, and outputs a measure of the curvature (K) of the road at the vehicle location. Once the vehicle is localized on the map curve, denoted by s, the corresponding map curvature K_map can be found using equation 4.
It should be noted that there are three sources of information for estimating road curvature: the map curvature (K_map), the camera curvature (K_cam = 2c, from the lane sensing coefficients), and the curvature based on yaw rate (K_yaw = ω/v). An exemplary process for fusing these three curvatures is described below. Let K_fus denote the fused curvature, with variance σ_fus^2, and let σ_map^2, σ_yaw^2, and σ_cam^2 denote the variances of the map curvature, the yaw-rate-based curvature, and the camera curvature, respectively. We have the following update equations. When the map curvature estimate is available, then

K_fus = (σ_map^2 K_fus + σ_fus^2 K_map) / (σ_map^2 + σ_fus^2)   [11]

and

σ_fus^2 = (σ_map^2 σ_fus^2) / (σ_map^2 + σ_fus^2).   [12]

When the yaw-rate curvature estimate is available, then

K_fus = (σ_yaw^2 K_fus + σ_fus^2 K_yaw) / (σ_yaw^2 + σ_fus^2)   [13]

and

σ_fus^2 = (σ_yaw^2 σ_fus^2) / (σ_yaw^2 + σ_fus^2).   [14]

When the camera curvature estimate is available, then

K_fus = (σ_cam^2 K_fus + σ_fus^2 K_cam) / (σ_cam^2 + σ_fus^2)   [15]

and

σ_fus^2 = (σ_cam^2 σ_fus^2) / (σ_cam^2 + σ_fus^2).   [16]

In the above equations, σ_map^2, σ_yaw^2, and σ_cam^2 reflect the confidence in the curvature information from the respective sources: the map, the on-board sensors, and the camera. The larger the variance of an information source, the less that source contributes to the fused curvature. Some heuristic rules are used to select the different weights for the three information sources. For example, when the yaw rate is high, a small σ_yaw^2 is selected to obtain the fused curvature.
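The following Python sketch shows this inverse-variance style of fusion under the assumption that each available source is folded in with the same update; the numeric variances are illustrative, not calibrations from the patent.

```python
# Minimal sketch of the curvature fusion of equations 11-16: each available
# source (map, yaw rate, camera) updates the fused curvature and its variance,
# weighted by the source confidence (larger variance -> smaller weight).
def fuse_curvature(k_fus, var_fus, k_src, var_src):
    """Blend one curvature source into the running fused estimate."""
    k_new = (var_src * k_fus + var_fus * k_src) / (var_src + var_fus)
    var_new = (var_src * var_fus) / (var_src + var_fus)
    return k_new, var_new

# Example: start from the map curvature, then fold in yaw-rate and camera
# curvatures with heuristic (assumed) variances.
k_fus, var_fus = 0.0010, 1e-6
for k_src, var_src in [(0.0012, 4e-7), (0.0009, 9e-7)]:
    k_fus, var_fus = fuse_curvature(k_fus, var_fus, k_src, var_src)
print(k_fus, var_fus)
```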
As described above, the vehicle lateral tracking module inputs the camera data, the vehicle kinematic data, and K, and outputs data regarding the position of the vehicle relative to the center of the current lane and the angular orientation of the vehicle relative to the current heading of the lane. FIG. 25 illustrates exemplary determinations made within the vehicle lateral model in accordance with the present invention. The vehicle lateral tracking module monitors the vehicle kinematic inputs (wheel speed v and yaw rate ω) and the lane sensing parameters. A Kalman filter may be used to integrate the data from the vehicle kinematics and the lane sensing device. As shown in FIG. 25, the lateral offset y_L is the displacement from the center of the lane, K_road is the estimated curvature, and K_yaw is the curvature estimated from the instantaneous vehicle path, i.e., K_yaw = ω/v. The measurement equations of the Kalman filter are expressed as b = φ and a = y_L. If the update error is greater than a threshold, gating logic is executed: if the difference between the predicted measurement and the actual measurement is greater than the threshold, the actual measurement at the current time is ignored.
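A short Python sketch of that gating logic follows; the scalar form and the example threshold are simplifying assumptions.

```python
# Sketch of the gating logic for the lateral tracking module: if the
# innovation (predicted minus actual measurement) exceeds a threshold,
# the current measurement is ignored rather than used for the update.
def gated_update(predicted, measured, threshold):
    """Return the measurement to use, or None to skip this update."""
    innovation = abs(predicted - measured)
    return None if innovation > threshold else measured

# Example: a lane-sensing lateral offset that jumps implausibly is rejected.
print(gated_update(predicted=0.30, measured=2.10, threshold=1.0))  # None
print(gated_update(predicted=0.30, measured=0.45, threshold=1.0))  # 0.45
```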
FIG. 22 illustrates a method of generating a geometric model representing a road on which a vehicle is traveling. However, it should be understood that other methods of achieving the same purpose are possible. For example, a method of allocating a series of waypoints ahead of a vehicle based on map data and information relating to a projected vehicle destination and forming a projected lane of travel is disclosed. FIG. 26 illustrates an exemplary use of waypoints along a lane projected ahead of a vehicle to estimate lane geometry in accordance with the present invention. Iterative generation of waypoints at successive time intervals (waypoints spaced apart in short distance increments) may be used to augment the estimated lane geometry ahead of the vehicle. As the vehicle passes waypoints, waypoints may be ignored and only waypoints still in front of the vehicle are utilized. In this way, the projection of waypoints ahead of the vehicle along the estimated path may be used to estimate the lane geometry through which the vehicle may travel.
Real-time and reliable information about lane geometry and vehicle position and orientation relative to the lane may be useful in a variety of applications or vehicle control schemes. For example, such information may be used to assist the operator in lane keeping, headlamp adjustment, improved navigation assistance, and drowsy warning applications. However, those skilled in the art will appreciate that a variety of applications may utilize this information and that the present invention is not limited to the specific embodiments described herein.
The foregoing methods describe the use of vision or camera systems. Analysis of such information may be performed by methods known in the art. Image recognition typically includes a procedure to look for changes in contrast or color in an image that indicate vertical lines, edges, corners, or other patterns of an object. In addition, various filtering and analysis techniques related to image recognition are also well known to those skilled in the art and will not be described in detail herein.
It is known in the art to utilize available data, such as image recognition, applied in visual images to determine a clear path ahead of the host vehicle's travel. Such an exemplary system is disclosed in co-pending U.S. patent serial No.12/108,581, entitled "VEHICLE CLEAR path detection" (vehicle unobstructed path detection), which is incorporated herein by reference.
As described above, an exemplary EVS system requires an input source for information indicative of the vehicle operating environment. As described in the exemplary methods above, a variety of sensor devices are known in the art, including but not limited to radar, lidar, ultrasonic devices, and vision systems. Further, it should be appreciated that information regarding the operating environment may be obtained from other types of devices. An infrared sensor or infrared range camera system may be used to detect temperature differences. Such information is useful for viewing objects that would be missed by normal vision or camera systems or that may go unnoticed by the human eye. Methods of converting infrared camera data into the visible spectrum are known, enabling small temperature differences to be displayed to an observer as objects in different colors. As mentioned above, GPS devices used in conjunction with 3D map data can be used not only to locate the vehicle with respect to the classified road geometry parameters but also to locate the vehicle with respect to details of the road, such as road surface type and road surface inclination or grade. In addition, various sensors and monitoring methods are known for quantifying operating parameters within the vehicle. Furthermore, remote processing enabled by a wireless network allows the vehicle location determined by the GPS device to be coordinated with real-time details such as construction, weather, and traffic conditions.
Further, non-road/non-traffic-related details may similarly be accessed over the wireless network, including, for example, network-available data and infotainment services available through online providers. On-board systems may be further integrated with the EVS; for example, maintenance requirements logged by an on-board diagnostic module (e.g., monitoring accumulated oil life or monitoring tire pressure) may be used as inputs to the EVS. The information may be displayed directly based on on-board processing; it may be coordinated with an online service, for example diagnosing a problem with the processor of a selected service provider; or it may be processed in conjunction with a 3D map database, for example identifying the need to stop at a tire store and locating a number of nearby stores based on vehicle location, including hours of operation and customer ratings. A variety of inputs may be used by the EVS and the EVS system manager, and the present invention is not limited to the exemplary inputs described herein.
All of the above mentioned inputs may be used by the exemplary EVS system manager. Further, it should be understood that the EVS system manager may use the methods described above with respect to target tracking, CPS, collision avoidance, lane keeping, and obstacle-free path detection. These methods and related programs enable the EVS system manager to evaluate driving conditions, including object trajectories around the vehicle, lane recognition, and road conditions, and identify information critical to vehicle operation based on a set of key criteria.
The EVS system manager monitors inputs and determines whether discernable information related to the vehicle operating environment authorizes display of information on the windshield. A variety and quantity of information is available to the EVS system manager. However, the operator of the vehicle has a primary responsibility to view the road, and useful additional information is carefully provided in a manner that helps focus the driver's attention on the critical information, but does not distract the driver from the primary task. An exemplary EVS system manager includes routines to monitor inputs from various sources; identifying key information from the input by applying key criteria to the input including a preset threshold, a learning threshold, and/or a selectable threshold, wherein the threshold is set to minimize distracting information that is not critical to the driver; and request graphics for display based on the key information.
The threshold for determining key information from the inputs may be based on a number of criteria. The HUD system manager may access multiple information inputs and include various programmed applications to generate a contextual model of the operating environment in order to determine whether collected information is critical. For example, a collision avoidance system or collision preparation system, as described above, may be utilized to determine a likelihood of collision based on returns from the radar sensing system. The sensed relative trajectory of an object may be used to mark that object as critical information for collision avoidance. However, the contextual content of the input expressing the sensed object is important in determining the threshold for marking the input as critical information. FIGS. 27-29 illustrate exemplary applications of contextual information to sensed object data in determining whether the sensed data is critical information, in accordance with the present invention. FIG. 27 shows a vehicle including three consecutive data points describing a target object in front of the vehicle, each consecutive data point closer to the vehicle than the previous one. Vehicle 500 is depicted collecting information at times T1, T2, and T3 describing the relative distance of the target object from the vehicle. Without contextual connection, such data points converging on the vehicle suggest an imminent collision between the vehicle and the target object. FIG. 28 shows an exemplary situation in which the corresponding data points would correctly indicate information critical to the operator. Vehicle 500 is depicted traveling in lane 510. Vehicle 520 is also depicted traveling in the same lane 510 but in the opposite direction to vehicle 500. In this case, the target object is on a collision path with the host vehicle, and the collected data points indicating the closing distance to the target object would therefore correctly be identified as key information. In this case, identifying the target object on the HUD would not unreasonably affect the vehicle operator. FIG. 29 shows an exemplary situation in which the corresponding data points could incorrectly indicate information critical to the operator. Vehicle 500 is shown traveling within lane 510. A road sign 530 is also shown directly in front of vehicle 500. The returned object tracking data for the vehicle could indicate that sign 530 is on a collision course with vehicle 500. In this case, however, contextual information including the speed of the sign relative to the vehicle speed, indicating a stationary sign, together with information relating to the curve in lane 510, may be used to downgrade the object tracking data from sign 530 below the key information threshold. In the above exemplary determinations, contextual information for the target tracking data may be obtained by a variety of methods, including but not limited to correlating the relative motion of the target with: the speed of the host vehicle; GPS data, including map data describing the lane geometry at the current vehicle position; lane geometry described by visual or camera data; and/or pattern recognition programs analyzing images of the tracked object sufficiently to distinguish an oncoming vehicle from a road sign.
By generating a contextual model for the HUD system manager, based on spatial relationships determined with respect to the vehicle, to evaluate input data points describing target trajectories, determinations can be made regarding the key nature of the information, such as indicating a likelihood of collision. Such a model may be based on complex programs including factors describing a large number of inputs to the determination, for example possible slippery conditions on the road immediately ahead of the vehicle, road grade, a vehicle traveling in the opposite direction above the speed limit, and audio within the host vehicle amplified to a potentially distracting volume. On the other hand, such a contextual model may be as simple as a comparison of the current vehicle speed with a recognized speed limit, or a comparison of the distance to a target vehicle ahead of the host vehicle with a minimum distance threshold.
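A minimal Python sketch of that simplified contextual model follows; the function name and the numeric thresholds are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the simplified context model: flag an input as critical
# from two checks, vehicle speed against the recognized speed limit and
# headway against a minimum distance threshold.
def is_critical(speed_mps, speed_limit_mps, headway_m, min_headway_m):
    """Return True when either simple contextual check is violated."""
    return speed_mps > speed_limit_mps or headway_m < min_headway_m

# Example: 31 m/s in a 27 m/s zone with adequate headway is still critical.
print(is_critical(31.0, 27.0, headway_m=60.0, min_headway_m=30.0))  # True
```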
The above-described embodiments are merely exemplary determinations of a number of contextual associations relating to critical information that may be made by the HUD system manager. Known methods allow GPS data to be used in conjunction with information from a 3D map database to identify a proposed route for determining a destination. Integrating these methods with the use of a HUD allows for the projection of sequential (turn-by-turn) directions on the HUD, including the important advantage of being able to register (register) directions on the actual road surface features visible through the windshield. Using the context correlation model, the vehicle is positioned relative to the visual features, allowing the orientation to be customized for the vehicle's operation and surrounding conditions. This registration on the visible road surface features makes the indication to the driver more accurate than a verbal and/or LCD map display.
Known systems using GPS devices may utilize an input destination to give an operator a route indication. However, known GPS devices include slow sampling rates and inaccurate GPS measurements. As a result, the GPS device is unable to provide input to the operator based on the associated vehicle operation relative to the planned route. The HUD system manager may project indicator arrows on the roads to illustrate the planned route, but the HUD system manager can additionally construct an associated operating environment model of the planned travel route, integrating the information that can be utilized to identify key information inputs from the planned route that represent deviations. The HUD system manager can not only use various information sources to increase the accuracy of the provided information, e.g., using visual or camera information to improve the accuracy of GPS positioning, but the information can additionally give contextual importance to the vehicle surroundings, e.g., including object tracking information or 3D map data. In one example, if the planned route includes the vehicle exiting the highway at an upcoming exit on the right side of the roadway, the GPS data may be used to prompt the operator to utilize the exit. However, GPS data integrated into a correlation model that includes visual information describing the driving lanes may be used to determine the GPS data and corresponding planned routes relative to key information thresholds. For example, if the visual data positions the vehicle in the left lane of a three lane roadway and two lane changes would be required using an upcoming exit, the information indicative of the upcoming exit may be identified as critical information, authorizing the graphical display or increasing the urgency of the graphical display on the HUD. Under the same conditions, upon monitoring the visual information indicating that the vehicle is in the right lane corresponding to the exit and the vehicle information indicating that the blinker for a right turn of the vehicle has been activated, the information indicating the upcoming exit may be determined to be non-critical information that is not graphically displayed or is only minimally displayed on the HUD. In addition, object tracking, weather, visibility, or other sources of information may be used to influence how and when navigation assistance is displayed.
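The lane-aware exit prompting described above can be sketched as a small decision function; the graduated urgency levels, the distance threshold, and the function name are assumptions added for illustration.

```python
# Sketch of lane-aware exit prompting: the urgency of the upcoming-exit
# graphic grows with the number of lane changes still required, and the
# prompt is suppressed when the vehicle is already in the exit lane with
# the turn signal active.
def exit_prompt_level(lane_changes_needed, distance_to_exit_m, signal_on):
    if lane_changes_needed == 0 and signal_on:
        return "none"       # operator is already acting; not key information
    if lane_changes_needed >= 2 or distance_to_exit_m < 500.0:
        return "urgent"     # authorize a prominent graphic on the HUD
    return "normal"

print(exit_prompt_level(2, 1200.0, signal_on=False))  # "urgent"
print(exit_prompt_level(0, 300.0, signal_on=True))    # "none"
```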
Other examples of using key information thresholds for information are contemplated. Address information corresponding to a specific location of a vehicle on a roadway surface may be determined by application of GPS data and a 3D map database. Visual data including an image recognition program may be used to describe the building or range of buildings estimated to include the destination address. Historical data may be monitored and such destination descriptions may be considered critical information if the vehicle has never previously traveled to the destination or the destination is in a particularly closely located building. In the alternative, voice commands from the operator may be used as a key to determine the destination description. In another alternative, the operator head position and eye orientation (azimuth) may be monitored according to the method described below, and the destination description may be treated as key information based on operator head and eye movements indicative of finding the address.
Another example of using key information thresholds may include analysis of current weather conditions. In normal driving conditions, the projection of lines indicating lane boundaries on the HUD may be considered an unauthorized and distracting matter. However, the lane boundary line may be determined as the key information when indicating weather conditions such as fog, snow, rain, sun, or other factors exist or combine to produce conditions that may impede the view of the lane markings. Weather conditions can be identified in a variety of ways. Online data combined with GPS data may be used to estimate the current weather state. The visual data may be analyzed to determine whether the lane markings are visually discernable or whether rainfall or fog is excessively obstructive to a viewing distance sufficient to warrant lane marking projection. Sunrise and sunset times and the sun's position in the sky can be determined from calendar and GPS positioning. The information about the position of the sun may be located in relation to the direction of the vehicle to determine lane marking critical information based on the vehicle's pointing direction towards the sun. In the alternative, the sun position may be estimated based on visual information. In a similar example, if the visual information indicates a state where high-beam (high beam) enabled oncoming vehicles may cause blindness, then lane markings may be displayed as key information to assist the host operator in remaining on the current lane. In these ways, the estimated operator visibility may be used to determine the appropriate lane marker projection on the HUD. In the alternative, the lane marker may be determined to be critical information based on the vehicle position estimated to be associated with the lane, e.g., the lane marker becomes critical information as the host vehicle approaches or crosses the lane marker. The position within the lane further shows a state in which the degree of importance for the key information can be indicated, with increasing importance being displayed as the vehicle approaches and then crosses the lane marker. Increasing the intensity (brightness) of the graphical image projected on the HUD, flashing the graphical image, and the corresponding audible signal to the operator may be utilized based on the indicated importance of the increased key information. Such a location within the lane criteria may be used as a drowsiness indicator (indicator), e.g. a single deviation is considered non-critical information, but repeated deviations from the center of the lane become critical or increasingly important information, e.g. a coordinated text message or an audible warning is prompted. In certain situations, a thermal or infrared camera image covering (superimposing) the road may be utilized or requested by the operator, wherein the visual state prohibits the operator from seeing the correct driving lane, e.g. as a result of inoperative headlights.
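One way to picture the lane-marking decision above is the following hedged Python sketch; the visibility and lane-offset thresholds are illustrative assumptions standing in for the visibility estimates and lane-position criteria described in the text.

```python
# Sketch of lane-marking criticality: project lane boundary graphics only when
# estimated visibility is degraded or the vehicle drifts toward a marking,
# escalating importance as the offset grows.
def lane_marking_importance(visibility_m, lateral_offset_m, half_lane_m=1.8):
    if visibility_m < 50.0:
        return "display"             # fog/rain/glare obscures the markings
    drift = abs(lateral_offset_m) / half_lane_m
    if drift > 1.0:
        return "display-flashing"    # crossing the marking
    if drift > 0.7:
        return "display"             # approaching the marking
    return "suppress"                # normal driving; avoid distraction

print(lane_marking_importance(visibility_m=200.0, lateral_offset_m=1.4))
```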
Another example of applying the critical information threshold may include analysis of pedestrian presence and pedestrian motion. For example, normal movement of a pedestrian on a sidewalk parallel to the direction of vehicle movement may be determined as non-critical information. However, movement of the pedestrian in another direction, for example, perpendicular to the direction of vehicular traffic, may be used as a threshold to identify critical information. In this example, another example may be illustrated indicating an increased importance of the key information. If a pedestrian is walking perpendicular to the vehicle traffic on a given sidewalk, a graphic indicating lesser or moderate importance may be displayed. If a pedestrian is detected walking from the sidewalk to or within the driving lane, a graphic is displayed indicating a serious or increased importance. In another example of identifying key information about pedestrian traffic, the current traffic light pattern may be analyzed and used to identify key information. Pedestrian traffic corresponding to the visual image indicating a "walk" light may be determined to be non-critical information if the host vehicle is at a stop light. However, in the same case, pedestrian traffic corresponding to the "no walk" light indication may be determined as the key information. In the alternative, the visual information and distance information for the target may be used to project an estimated size of the target. This estimated size can be used to identify, for example, that an estimated height of less than 4 feet is critical information in all pedestrians, thereby alerting the driver to the presence of a child in the operating environment. In the alternative, school zones or deaf child zones may be identified by street sign identification applications of GPS data and 3D maps, local wireless frequency transmission or tagging (tagging), etc., where all pedestrians are tagged as critical information. In the event that pedestrian traffic is detected but determined to be invisible, the graphics using thermal or infrared imaging data may be selected to cover the field of view for the pedestrian, thereby enabling the vehicle operator to make better decisions as to the situation.
Other embodiments of critical information discernable by the EVS system manager are disclosed. In one exemplary application, the suggested following distance between the host vehicle and the other vehicle may be compared to the measured distance, and any distance below the minimum suggested distance may be identified as being the key information for display. In another example, where the vehicle is used to train a new operator, the graphics displayed to the passenger/trainer may be used to improve the review of the new operator's actions. In another example, vehicle operation in a semi-autonomous control or ACC state may display key information to communicate the current distance to other vehicles or other information to indicate control system actions to the operator so that the operator can quickly determine whether manual operator intervention is required. In another example, vehicle-to-vehicle communications may be used instantaneously to manage merge operations (merging drivers) between two ACC-using vehicles. The graphics on the HUD may be used to communicate each driver's intent to perform the merge operation in order to inform each driver of the intent to communicate to avoid undesirable changes in vehicle motion and to avoid the feeling of a potential collision. In a similar application, in a vehicle using semi-automatic travel, where automatic vehicle lateral control is utilized by a lane-keeping system associated with an automatic steering mechanism, the graphics on the HUD may be used to inform the operator in advance that a lane change or other action is about to occur, so that the operator is not surprised by the measures that the semi-automatic control subsequently takes.
In another embodiment, vehicle-to-vehicle communication or communication between the vehicle and a remote server may be used to monitor vehicle operation and identify patterns in vehicle operation. For example, deceleration due to an accident may be monitored across the operation of many vehicles, and the information may be propagated to other vehicles in the vicinity. The information may be determined to be critical based on the monitored magnitude of the delays resulting from the deceleration, with an appropriate warning and recommended alternate routes provided to affected vehicles. In another example, wheel slip may be monitored across a plurality of vehicles, and a vehicle approaching the particular stretch of road on which the wheel slip occurred may be shown a graphical patch projected on the road surface indicating a likely slippery road condition. The information may be determined to be critical based on slip events having occurred on that particular stretch of road, or may be based on a comparison of the operation of the host vehicle with the operation of the vehicles that experienced the slip. For example, if three vehicles traveling above 50 miles per hour during the last hour are determined to have slipped on the stretch of road, that information may be determined to be non-critical for a host vehicle traveling at 35 miles per hour. In another embodiment, wildlife may be monitored by the vision system, augmented by the radar system, and key information may be indicated based on the projected classification of the wildlife. The identification of a horse in the field of view may be determined to be non-critical information, while the identification of a white-tailed deer jumping toward the road may be determined to be critical information.
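The speed comparison in the wheel-slip example can be sketched as follows; the 10 mph margin and the function name are assumptions added for illustration.

```python
# Sketch of the wheel-slip relevance comparison: slip reports from other
# vehicles on the road segment ahead are treated as key information only if
# the host vehicle's speed is comparable to the speeds at which slips occurred.
def slip_warning_is_critical(host_speed_mph, slip_event_speeds_mph, margin=10.0):
    if not slip_event_speeds_mph:
        return False
    return host_speed_mph >= min(slip_event_speeds_mph) - margin

print(slip_warning_is_critical(35.0, [52.0, 55.0, 61.0]))  # False
print(slip_warning_is_critical(55.0, [52.0, 55.0, 61.0]))  # True
```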
Related embodiments of the information about the surroundings of the vehicle are conceivable. For example, points of interest (points of interest, monuments) may be selected as key information to be displayed on the HUD. A family visiting an unfamiliar city may receive information about the boundary sign encountered on the route. Similar directions or suggested travel routes identified for the road boundaries may be selected and displayed by the EVS. Sports enthusiasts can select interested teams or sports when passing through a stadium or arena, and access through wireless communication can be used to view game schedules, fares, and current seating conditions, which can be automatically projected on the HUD. An antique collector may request a notification when passing a certain distance within an antique store, a real estate sale, or a flea market, and the graphical direction of the location (place) may be displayed upon request. Occupants seeking new homes may request notification and guidance when a new released house-in-sale meets selection criteria in order to obtain the latest listings. A car enthusiast may request that a car logo or model recognized through visual recognition be recognized by a graphic image. A variety of applications for identifying points of interest are contemplated, and the present invention is not limited to the specific embodiments described above.
Embodiments for use by emergency personnel are also contemplated. For example, an ambulance equipped with the disclosed EVS system may communicate with other vehicles or a remote server to receive relevant information while en route to an emergency scene. For example, the proposed route may be updated by a dispatcher, police on the scene may warn an approaching ambulance of a dangerous situation, or a vehicle at the scene carrying an occupant identified as severely injured may communicate with the ambulance so that, on approach, a graphic identifies the vehicle in which the severely injured patient is located. Police cars may use graphics for communication between police vehicles, for example identifying a target vehicle under pursuit on one vehicle and generating a graphic for a joint pursuit on another police car. In another example, a vehicle may utilize communication with a remote server to receive information associated with vehicles flagged under a particular condition, for example an Amber Alert. For example, a license plate identified as wanted may be recognized by software known in the art in combination with the vision system. This information, which is not made known to the non-emergency vehicle so as not to endanger its occupants, can be communicated to emergency personnel and relayed to the EVS system manager of the nearest police vehicle for graphical display. In addition, police cars may utilize thermal imaging to search hidden locations or a scene being passed for incapacitated persons. Fire engines may use the EVS system to enhance operations, for example by aiding in estimating the time of arrival. For example, if the dispatcher receives a distress call from a person trapped in the northwest corner of the third floor of a building, the dispatcher may enter the address and room information, and that particular room of the building may be identified as critical information warranting a graphical image by the EVS system. In another example, thermal imaging may be relayed to a vehicle parked at the fire scene to assist firefighters in determining the location and development of the fire from a safe location. A variety of such applications are contemplated, and the present invention is not limited to the specific embodiments disclosed herein.
A variety of convenience applications are also contemplated. For example, a limited pixelated field of view structure is disclosed such that an observer viewing the HUD from one direction sees one image, while another observer viewing the HUD from a different direction either does not see that particular image or sees a different image than the first observer. Such a system allows a passenger to see images on the windshield unrelated to driving while the vehicle operator continues to see only images related to vehicle operation. For example, a passenger may view infotainment-type images such as web content or video from a data storage device, or may utilize an on-board camera to use the display as a mirror, without interfering with the driver's view. Such content may be tied into other systems, so that a passenger may browse restaurant menus from the internet along the projected route of the vehicle and select a restaurant as an interim destination in the projected route without interfering with the vehicle operator. Such a system may additionally allow the vehicle operator to see images properly registered on the windshield without the passenger seeing the same images, which would be unregistered from the passenger's viewpoint and potentially annoying to the passenger.
One advantage of HUD applications is to place information in front of the operator in a single field of view, along with other critical information such as the scene through the windshield. In known aerospace applications, HUD devices are utilized to allow a pilot to look at external scenes while providing critical information such as air speed and altitude. This information with visual information in the same field of view provides a reduction in the loss of concentration, distraction, and momentary disorientation associated with moving the eyes from the outside scene to the instrument panel. Similarly, the EVS can provide a vehicle operator with displayed information in a single field of view with the outside view visible through the windshield. Such information may be provided at all times. However, to avoid distraction, information may be filtered according to key information states or according to importance. For example, different information is critical or important at low speeds as compared to high speeds. The critical information displayed on the windshield may be adjusted based on the threshold vehicle speed. The engine speed, when within the normal range, may not be classified as critical information or simply require a minimized, low brightness display. However, when the engine speed increases to a higher level, the display may be activated or recognized to alert the operator of possible damage to the engine. The fuel level status in the vehicle fuel tank may similarly be not displayed or minimally displayed depending on whether the fuel tank is full or nearly full. Different levels of increased importance may be implemented, for example, double in size when the tank is empty to less than a quarter of the tank capacity, with a flashing indication when a certain critical low fuel level is crossed. The level of critical information or the level of importance may be customized by the vehicle operator, for example by means of selectable menus on the vehicle display. In addition, the ranking and display of critical and important information can be adjusted based on operator preferences through a wireless network or through a direct connection of a computer to the vehicle (e.g., through a USB connection). Such customization (customization) may include operator selection of display shape, line thickness, line color, location on the windshield, or other similar preferences. The display theme or the display skin may be selectively switched based on the vehicle speed or the road surface type, for example, the operator sets the expressway theme and the street theme. Themes may be selected as urban themes and rural themes based on GPS location. Designs for customized displays on windshields may be shared from a user's website or commercially available from a vehicle manufacturer or other third party. The display may cooperate with a commercially available device, such as a digital music player, and be integrated into the display theme, e.g., the display of the music player is converted to be displayed at a corner of the HUD. A single vehicle equipped with known methods to determine the identity of the operator may automatically load the operator preferences. Embodiments of multiple displays that may be projected on a windshield are contemplated, and the invention is not limited to the specific embodiments disclosed herein.
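The threshold-based filtering of routine readings described above can be pictured with the following Python sketch; the fuel fractions, the engine-speed ratio, and the state names are illustrative assumptions, not calibrations from the patent.

```python
# Sketch of threshold-based display filtering: routine readings are suppressed
# or minimized, and the graphic escalates as a reading crosses progressively
# more serious thresholds.
def fuel_display_state(fuel_fraction):
    if fuel_fraction > 0.25:
        return "hidden"
    if fuel_fraction > 0.10:
        return "minimized"       # low-brightness icon
    return "flashing"            # critically low fuel level crossed

def engine_speed_display_state(rpm, redline_rpm=6500):
    return "warning" if rpm > 0.9 * redline_rpm else "hidden"

print(fuel_display_state(0.18), engine_speed_display_state(6200))
```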
Other displays may be projected on the windshield to minimize the need for the operator to remove his eyes from the windshield. For example, an adjustable camera at the rear of the vehicle may be used to project a small image of a sleeping baby on the rear vehicle seat of the vehicle, allowing the operator to monitor the child without turning his head to see. A more panoramic view may be implemented to monitor multiple children. Such monitoring functions may be real-time or may include playback functions.
As described above, the passenger can see the infotainment-type information in a specific environment. Clearly and sometimes required by legislation, distractions from the driver must be minimized. When the vehicle is in motion, information such as video content or e-mail correspondence would be undesirable if visible to the operator. However, such an application may be available when permitted, for example, when the vehicle information indicates that the vehicle is at a standstill, or if the vehicle's parking brake is engaged. Other applications are possible that provide limited information to the operator without causing undue distraction, including for example: a score of a game from the internet, a news headline from the internet, or music information currently playing in the vehicle, such as the song name and artist name given in a minimal graphic on the HUD.
Exemplary embodiments of the limited pixelated field of view structure, enabling image viewing from a restricted direction, include the use of microstructures or particle arrangements that receive excitation light as described above and emit light in a restricted direction. FIGS. 30 and 31 illustrate an exemplary application of the limited pixelated field of view structure according to the present invention. FIG. 30 shows an exemplary emitter that can emit light to a limited field of view. The exemplary emitter comprises a UV-transparent encapsulation (e.g., made from SiO2) with a parabolic narrow-band multilayer reflective structure filled with LIF material that fluoresces at visible wavelengths when illuminated by ultraviolet light. In this exemplary embodiment, a thin film of these emitters is deposited onto a polymer. In preparation for the film, parabola-shaped indentations, similar in shape to those formed in the emitters, are imprinted into the polymer material. The emitter material is deposited onto the polymer substrate by chemical vapor deposition, filling the parabolic indentations. FIG. 31 depicts the process of creating the necessary structure of emitters aligned with the polymer substrate to enable limited field of view viewing. Through an exemplary process such as etching, free-standing parabolic reflectors filled with the emitting material are created by releasing them from the substrate. Removal from the polymer substrate may also be accomplished by dissolving the plastic substrate with a suitable solvent. The free-standing parabola bodies are then nested into housings (divots) that have been created in a glass substrate by photolithography or embossing. Matching the parabola bodies to the housings can be accomplished by methods such as fluidic self-assembly, similar to that practiced by Alien Technology, wherein the parabola bodies are flowed over the substrate and mating of the parabola bodies with the housings occurs statistically.
Head and eye sensing devices known in the art are not described in detail herein. For the purposes of the present invention, a camera-based device is used in combination with image recognition software to estimate, on the basis of image recognition, a three-dimensional head position within the vehicle (which can be coordinated with the vehicle coordinate system) and the direction of the operator's gaze. The location of an object relative to the vehicle coordinate system may be determined through sensor input, for example according to the tracking methods described above. Based on the operator head position coordinated with the vehicle coordinate system and the object trajectory coordinated with the vehicle coordinate system, an estimated point of intersection between the tracked object and the operator's eyes can be determined on the windshield, thereby enabling registration of information with relevant features visible through the windshield in accordance with the invention. Similar methods are possible with lane marker projection and with the other methods described herein, allowing accurate registration of information on the HUD. Similarly, the combination of head position with an estimate of the operator's gaze direction allows projection of information according to the described methods, helping to ensure that the operator sees critical information as directly as possible. Similar methods may be implemented for front- or rear-seat occupants of the vehicle, allowing registered projection for vehicle occupants on various surfaces.
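A minimal geometric sketch of the registration step described above follows: it intersects the eye-to-object sight line with the windshield, modeled here as a flat plane in vehicle coordinates. Treating the windshield as a plane, and the particular coordinates in the example, are simplifying assumptions; a real windshield is curved and would use a surface model.

    import numpy as np

    def registration_point(eye_pos, object_pos, plane_point, plane_normal):
        """Return the point where the eye-to-object sight line crosses the windshield plane."""
        eye = np.asarray(eye_pos, dtype=float)
        obj = np.asarray(object_pos, dtype=float)
        p0 = np.asarray(plane_point, dtype=float)
        n = np.asarray(plane_normal, dtype=float)
        direction = obj - eye
        denom = direction.dot(n)
        if abs(denom) < 1e-9:
            return None                      # sight line parallel to the windshield plane
        t = (p0 - eye).dot(n) / denom
        if t <= 0.0:
            return None                      # object is behind the eye
        return eye + t * direction

    if __name__ == "__main__":
        # Eye inside the cabin; tracked object 30 m ahead and 2 m to the left (vehicle coordinates).
        print(registration_point(eye_pos=[-0.5, 0.0, 1.2],
                                 object_pos=[30.0, -2.0, 1.0],
                                 plane_point=[0.5, 0.0, 1.2],
                                 plane_normal=[1.0, 0.0, 0.3]))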
The head and eye sensing device enables the EVS to discern the direction of the operator's gaze. This gaze location may be compared with identified key information. A peripheral salient feature enhancement feature is disclosed, wherein display characteristics are adjusted so as to attract the operator's eyes to key information when the operator's gaze is elsewhere, without unduly distracting the operator when the operator's gaze is already close to the displayed key information. For example, if a vehicle backs out of a space to the left of the field of view and is determined to be on a probable collision course with the host vehicle, and the operator's gaze is determined to be toward the right of the field of view, then a frame may be placed around the offending vehicle and a flashing arrow may be placed at the operator's point of gaze, directing the operator's attention to the frame.
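The decision rule behind this enhancement can be sketched as follows, assuming a 2D windshield coordinate system for both the gaze point and the displayed key information; the 0.3 m threshold and the returned directives are illustrative assumptions.

    import math

    def enhancement_for(gaze_xy, key_info_xy, near_threshold_m=0.3):
        """Frame the threat always; add a flashing arrow at the gaze point only when the gaze is far away."""
        distance = math.dist(gaze_xy, key_info_xy)
        if distance <= near_threshold_m:
            return {"frame": True, "flash_arrow_at_gaze": False}    # gaze already near the key information
        return {"frame": True,
                "flash_arrow_at_gaze": True,
                "arrow_direction": (key_info_xy[0] - gaze_xy[0], key_info_xy[1] - gaze_xy[1])}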
The included EVS can project registered images across the windshield, registering the images to objects or areas visible through the transparent windshield. However, vehicle sensors can also process and identify information pertaining to conditions outside the windshield view. For example, radar devices and/or camera devices observing areas to the side of or behind the vehicle may identify traffic light information, vehicle trajectories, the presence of an emergency vehicle, or other relevant information. The EVS manager, in evaluating the environmental model generated for a piece of critical information, determines whether the critical information can be displayed on the windshield in registration with a relevant feature visible through the windshield corresponding to that critical information. When the evaluation determines, based on the occupant's head and eye position, that the relevant feature is not within the viewable area of the windshield, a graphic may be registered on the windshield, for example at the edge of the windshield closest to the source of the critical information or at a location offset from the occupant's gaze, indicating the direction in which to look for the critical information. For example, if a target vehicle's trajectory and speed indicate that the vehicle may run a red light to the left or right of the host vehicle, the EVS may accurately prompt the operator with an emergency warning to avoid a side collision. Although an exemplary EVS projecting only on the front windshield cannot register graphics to objects that are not within the viewable area of the windshield, the EVS may still alert the vehicle operator to the identified critical information. In the event that critical information is identified behind the vehicle, a prompt may be displayed on the windshield pointing to or depicting the rearview mirror. Alternatively, a virtual rear view mirror may be displayed on the windshield using a rear-pointing camera. Alternatively, a panoramic view may be projected using multiple cameras, for example in a wide vertical slice of the display along the top of the windshield, showing a view of, for example, 180 degrees around the rear of the vehicle, thus eliminating the traditional blind spot caused by known mirror structures. In another example, a HUD may be used in the vehicle rear window to provide full-screen parking assistance through graphical images on that window. Such a rear window display may be selectively displayed in normal or reversed mode, for example by voice recognition software, to enable viewing directly or through the rearview mirror. In another example, a tactical or simulated overhead display may be synthesized and projected on the windshield based on tracking information. For example, in a parking situation, radar and visual information may be used to estimate the relative positions of parking spots, other vehicles, the roadside, and pedestrian traffic, and these estimated positions may be plotted on the graphical display. Similarly, such a tactical display may be generated during a lane change operation, for example becoming key information once the turn signal is activated, and a display showing sensed objects around the vehicle may be presented.
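The registration decision made by the EVS manager in the paragraph above can be sketched as a small placement function. The coordinate convention, the rectangular viewable-area bounds, and the returned modes are illustrative assumptions of this sketch.

    def place_graphic(feature_xy, view_bounds):
        """feature_xy: windshield intersection point, or None if not projectable at all.
        view_bounds: (x_min, x_max, y_min, y_max) of the viewable windshield area."""
        x_min, x_max, y_min, y_max = view_bounds
        if feature_xy is None:
            return {"mode": "mirror_prompt"}                    # e.g. critical information behind the vehicle
        x, y = feature_xy
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return {"mode": "registered", "position": (x, y)}   # register directly to the relevant feature
        edge = (min(max(x, x_min), x_max), min(max(y, y_min), y_max))
        return {"mode": "edge_cue", "position": edge, "points_toward": (x, y)}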
Returning to a parking condition, such as a parallel parking maneuver, a set of criteria may be programmed, such as monitoring no-parking areas and obtaining distance ranges to the roadside and adjacent vehicles. Based on these spatial relationships, reminders or suggestions may be displayed on the windshield, including highlighting available parking spots along a city street near the programmed destination, or recommended steering wheel and pedal controls to navigate into a spot. These exemplary situations and graphical displays are examples of key information that may be displayed to alert the operator to conditions outside the windshield view. However, these examples are merely illustrative of a subset of the contemplated examples, and the invention is not limited thereto.
Various enhancements to the EVS are contemplated to achieve features particularly relevant to automotive applications of such projection technology. Those skilled in the art will appreciate that laser and projector designs used to project complex images often use micro-electromechanical system (MEMS) mirrors to direct the projected pattern to the desired location. Existing MEMS mirror laser projector designs have single-stroke or bitmap architectures, providing limited efficiency and information content. An alternative method is disclosed that continues to use the described stroke embodiments but includes a plurality of MEMS mirrors (MEMS plates) to direct a series of beams. The disclosed method first applies a Galilean telescope to expand the UV laser beam to the point where the multiple mirrors of the MEMS multi-mirror device are illuminated in the x direction. Each of the x mirrors (or x mirror groups) is mapped and matched to a y mirror or y mirror group. The y mirrors are then independently aimed at the appropriate areas of the luminescent material.
Automotive applications involve harsh conditions, including scratching, abrasion, and chemical contamination, that may disfavor the materials used in the HUD described above. Another system enhancement embodiment includes the use of a protective coating over the luminescent material on the HUD. However, the introduction of such a layer can create potential reflection and refraction problems with the excitation light from the projection device, with the emitted light from the windshield, and with light passing through the windshield from outside the vehicle; these reflection and refraction problems can create double or ghost images. A broadband anti-reflection (AR) coating may be applied to the inner surface of the windshield to minimize ghosting. The AR coating may be a single layer of MgF2 or a multi-layer coating. A hard AR coating is needed to protect the emissive materials used in a full-windshield HUD with an organic UV-laser-induced fluorescent coating. Eliminating ghost images requires coatings that effectively manage the optical interfaces so that the refractive index of the material does not mismatch that of air. Various materials may be added to improve the AR and durability properties of the material. Multiple coatings of multiple materials, with various absolute and relative thicknesses, can be used to achieve the AR function. Suitable materials that can be deposited by magnetron sputtering or other physical and chemical vapor deposition methods include SiO2, Si3N4, TiO2, and SiOxNy. The last material, silicon oxynitride, has the advantage that its refractive index can be adjusted through the O/N ratio (stoichiometric ratio).
Projecting images onto a curved and tilted windshield creates potential irregularities in the generated graphical images. One exemplary problem to be avoided is non-uniformity or unexpected differences in the brightness of the graphic, caused by geometric differences in how the excitation light interacts with different portions of the HUD. Brightness correction is a compensation technique required for vector projection displays. One method of implementing luminance correction is to re-parameterize the parametric curves used in graphic rendering so that each portion of the path has the same effective scan length when sparse sampling is performed. This effective scan length can be calculated from the scan-unit-area time rate, which models the energy deposited by the graphic on the display screen. Three-dimensional and non-planar surface factors may be taken into account when calculating the effective length.
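The re-parameterization idea can be sketched numerically as resampling a parametric path so that consecutive samples cover equal effective scan length. Plain arc length stands in here for the scan-unit-area time rate, which would also fold in three-dimensional and non-planar surface factors; that substitution, and the sample counts, are assumptions of this sketch.

    import numpy as np

    def reparameterize_equal_length(path_fn, n_dense=1000, n_out=50):
        """path_fn maps t in [0, 1] to an (x, y) point; returns parameter values with equal scan length."""
        t_dense = np.linspace(0.0, 1.0, n_dense)
        pts = np.array([path_fn(t) for t in t_dense])
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative scan length along the path
        s_target = np.linspace(0.0, s[-1], n_out)
        return np.interp(s_target, s, t_dense)             # parameter values at equal scan-length steps

    if __name__ == "__main__":
        arc = lambda t: (t, 0.5 * t * t)                   # example curved scan path
        print(reparameterize_equal_length(arc)[:5])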
Brightness non-uniformity is one potential irregularity that can make projection on the windshield difficult. Another potential irregularity is distortion of the graphical image, resulting from geometric distortion due to the uneven display surface, three-dimensional effects, and optical aberrations in a large-projection, wide-view system configuration. A two-stage distortion correction scheme is disclosed for correcting the geometric distortion of laser vector projection displays by modeling the scan curves and the projection screen with non-uniform rational B-spline (NURBS) parametric curves/patches. In the first stage, the ideal NURBS curves in object space are transformed into the observation space defined by the viewpoint. They are then mapped to a virtual display plane, taking advantage of their affine and perspective invariance. Finally, if needed, they are mapped to the non-flat display surface with a parametric space mapping. In the second stage, the paths are transformed into the projection space defined by the position of the projector, and the paths are then mapped onto the projection plane. Non-linear distortions are corrected by a calibration method.
Another potential irregularity that may arise in projecting the graphical image on the windshield is inefficiency in the scan loop used to project the image. The scan loop includes primitive paths representing graphics primitives and blanking paths connecting the primitive segments. A poor scan loop design can result in an inefficient display or display failure. When a mirror-based scanner is used, optimization of the paths results in a smooth and efficient scan. Optimizing all scan paths during a scan cycle or scan frame of a vector projection display yields efficient and smooth vector scanning. During insertion into the scan path list, to connect scan paths like cursive script, the invisible blanking paths may be optimized. The optimization may be performed on the blanking paths such that all blanking paths have first- and second-order continuity with their neighbors. Parameterized curve modeling may be used. This method also takes advantage of optimization across all scan paths to obtain efficient and smooth vector scanning during a scan cycle or scan frame of the vector projection display. The entire loop is re-parameterized so that it has the shortest scan length and the largest local radius of curvature.
Displays often require areas of zero intensity, for example in projected images that include dashed lines. A method is disclosed to improve the image quality of a vector projection engine whose light sources are directed with micromirrors. The method applies to laser projection devices in which the beam is directed onto a display screen using micromirrors (x-scan and y-scan mirrors). The output is a light vector positioned across the display screen, and the intensity of the light can be adjusted via the laser current. When zero brightness is desired, an "off state" of the laser is needed. Unfortunately, the response time for switching the laser on and off is slow relative to typical scan speeds, so known methods produce a faint luminance line where zero luminance is desired. A method of producing zero brightness is disclosed that utilizes an object in the path of the light source to controllably interrupt the excitation light projected onto the HUD. For example, the object inserted into the optical path may be a knife edge, a pinhole, or a mirror. This mechanical blanking can be performed on a time scale similar to that of the scan mirrors, so that there is no response time mismatch.
The above disclosure describes a number of different uses that can be achieved by selective projection of information by the EVS on the HUD. FIGS. 32-37 show selected exemplary displays of key information that may be projected on a HUD in accordance with the present invention. FIG. 32 depicts an exemplary, unenhanced exterior view including features that a vehicle operator would desirably see clearly. View 200 includes a roadway surface 206, including a first lane marking 202 and a second lane marking 204; a vehicle 208 also on the roadway; a pedestrian 210; a speed limit sign 214; and a curve 216 in the upcoming road. All objects and features in view 200 are directly visible, and no graphics displayed via the EVS are depicted.
FIG. 33 shows an exemplary view obstructed by dense fog and an exemplary enhanced visual display that may be used to compensate for the effects of the fog. View 220 shows the same view as FIG. 32 except that the view is obscured by fog. View 220 includes fog 221; the first lane marker 202 and the second lane marker 204, both directly visible for a short distance until obscured by the fog 221; projected lane indicators 222 and 224; a vehicle indicator 228; a pedestrian indicator 230; a vehicle speed display 234; and a warning indicator 237. The projected lane indicators 222 and 224 are projections of the lane markings that are not visible and help the operator keep the lane even in the presence of the fog 221. The projected lane indicators 222 and 224 include curved portions 236 to indicate a curve in the upcoming road corresponding to the curve 216 of FIG. 32. It should be appreciated that the lane indicators 222 and 224 are shown as clear lines. Where multiple sensors can be used to extract position data, utilizing, for example, 3D maps or radar returns from distinctive features (e.g., curbs or guard rails), clear lines may be used to convey the location of the upcoming lane geometry with some certainty. However, where fewer sources of information are available, the vehicle position is not precisely established, or the lane geometry is otherwise uncertain, a band of lines or stripes may be used to help guide the operator while conveying that additional caution should be used in visually determining the true road geometry. The vehicle indicator 228 conveys to the operator the location and overall behavior of the vehicle 208. In addition, textual information including factors such as distance and relative-motion estimates may be displayed to assist the operator in correctly compensating for the presence of the vehicle. The pedestrian indicator 230 gives the operator an indication that a pedestrian has been detected, along with the pedestrian's overall position relative to the roadway. According to the methods described above, different graphics or text may be used to describe different pedestrian behaviors and characteristics. The sign 214 shown in FIG. 32 is not visible in FIG. 33 because of the fog 221. However, the speed limit for the stretch of road may be known by other means, for example by a GPS device in conjunction with a 3D map. The vehicle speed indicator 234 provides a listing of the current vehicle speed and the speed limit of the road currently being traveled. As described above, the curve 216 is depicted in FIG. 32, and the curved portions of the projected lane indicators 222 and 224 convey the position of the upcoming curve. In addition, a text display may describe the approach of the curve, including the distance to the curve as shown in FIG. 33. Further, a suggested speed change and some other indication of the severity of the curve may be given in text 237 or in combination with the graphic of the curved portion 236.
FIG. 34 illustrates an exemplary graphical display for improving safety during a lane change. View 240 includes: the first lane marker 202 and the second lane marker 204, directly visible through the windshield; an adjacent lane marker 242, also directly visible; a turn signal indicator 244; a lane-change tactical display 246; and text displays 248 and 249. The turn signal indicator 244 may include a simple arrow, a flashing arrow, a recurring pattern of varying size, color, intensity, or location, or another pattern based on the information to be communicated to the operator. For example, during a lane change, when no threat in the adjacent lane is detected, a simple arrow may be displayed unobtrusively on the HUD to convey that no threat impeding the maneuver is expected. However, in the case shown in FIG. 34, when a vehicle located in the adjacent lane would pose a collision threat if the lane change were performed, the graphic may be changed to convey that the lane change should be stopped, for example by blinking the indicator, changing the indicator to red, placing a cross-out/prohibition graphic over the indicator, or any other acceptable display method that indicates a warning to the viewer. The tactical display 246 is depicted to illustrate the location of the host vehicle and the relative trajectory of the vehicle identified as a threat. Lane marker projections may be indicated on the tactical display to improve recognition of the relative positions of the vehicles. FIG. 34 includes an arrow pointing to the threatening vehicle to make the operator more aware of the situation. In addition, text 248 associated with the tactical display and text 249 independently located on the HUD are depicted, prompting the operator to attend to the situation.
Fig. 35 shows an exemplary situation in which the peripheral salient feature enhancement feature is used together with an estimated operator gaze location to alert the operator to critical information. View 250 includes: the first lane marker 202 and the second lane marker 204, directly visible through the windshield; a distracting sign 254 and the vehicle 208, both directly visible through the windshield; and a number of graphics described below. The operator's gaze location 252 is depicted as the point on which the operator's eyes are evidently concentrated, for example as a result of focusing on the distracting sign 254. The location 252 is shown for illustration only and is unlikely to be displayed on the HUD, since such a display could itself distract the operator. The trajectory of the vehicle 208 indicates motion that has caused the vehicle 208 to be classified as a threat. For example, the vehicle 208 is depicted on a trajectory crossing the lane marking 202 and entering the operator's lane. As a result of the vehicle 208 being identified as a threat, a vehicle indicator box 256 is displayed around the vehicle 208, including a directional arrow and a block of relevant information, such as the direction of travel of the vehicle. In addition, text 259 is displayed describing the threat condition. In order to draw the operator's attention away from the region of the distracting sign 254 and toward the key information regarding the vehicle 208, a text warning accompanied by an arrow is displayed in the vicinity of the operator's gaze. In this way, the operator's attention can be returned to the critical information as quickly as possible.
Fig. 36 shows an exemplary view showing navigation directions on the HUD. The through-windshield view of FIG. 36 includes a complex intersection 262 in which five streets meet. View 260 includes: the intersection 262, directly visible through the windshield; a building 266, directly visible through the windshield; a traffic light 268, directly visible through the windshield; and a number of graphics described below. A navigation arrow 264 is depicted that is registered to the specific street onto which the vehicle is to turn at intersection 262. Further, navigation data including a 3D map is used to identify a particular building 266 as the destination, and a destination indicator 267 including a box and text is depicted. Further, based on vehicle information or on the complexity of the intersection presented to the operator, an indication via the warning text 269 is displayed as key information, conveying as a driving aid the determination that the traffic signal is commanding a stop.
FIG. 37 depicts an additional exemplary view representing key information that may be displayed on the HUD. View 270 depicts a scene through the windshield at night. View 270 includes headlight illumination 271, depicting the two cones of light visible through the windshield. In addition, a virtual rearview mirror 272 is depicted, showing a panoramic view around the sides and rear of the vehicle captured by a camera or a group of cameras. The exemplary view includes the vehicle 208. The view presented in the rearview mirror may be kept as a single image or may include information such as the distance to the target vehicle. Further, a wildlife indicator 274 is depicted, including an overlaid portion of an infrared image, drawn as a cross-hatched square in FIG. 37, to assist the operator in seeing wildlife outside the headlight illumination 271. In addition, the wildlife indicator 274 includes a directional arrow and warning text describing the situation to the operator. Further, a text warning 276 is depicted, describing the detection of an audible siren that has not yet been associated with visual information and indicating the approximate location of the emergency vehicle. In addition, a game score 278 is displayed in textual form, conveying information of interest to the operator in a manner that minimally distracts the driver. In addition, radio information, including the name of the currently playing song and the artist performing it, is displayed to reduce the operator's tendency to shift his or her gaze to the vehicle radio control panel.
In the embodiments described above, graphics may be registered to the occupant's gaze. It will be appreciated that displaying a graphic immediately in the center of the viewer's gaze is distracting. Instead, a graphic may initially be registered at a location offset from the viewer's gaze and fixed at that location. In this way, the graphic is conveniently positioned close to the viewer's current gaze location, and the viewer can then choose to view the graphic directly when the viewer's priorities permit. The location of the graphic may additionally take into account the locations of the relevant features being tracked. For example, the graphic may be positioned so as to avoid distracting conflicts with stop lights, pedestrians, or other important features visible through the windshield.
The flow of information and the process of controlling the above described method may take a variety of embodiments. FIG. 38 schematically depicts an exemplary information flow for implementing the above-described method in accordance with the present invention. The process 900 includes: an EVS system manager 110 that monitors information from various sources and generates display requirements; an EVS graphics system 155 that monitors display requirements from the EVS system manager 110 and generates graphics commands; and a graphics projection system 158 that projects light on the head-up display 150. Various exemplary information sources are described, including: operator inputs, visual information through the camera system 120, radar information from the radar system 125, vehicle information from the exemplary vehicle speed sensor 130, GPS information from the GPS device 140, 3D map information from the 3D map database 910, and web content from the wireless communication system 145. It should be understood that these information sources may take many forms as described throughout this disclosure, and that the present invention is not limited to the specific embodiments described herein. The occupant eye position sensing system described in FIG. 1 may be employed; however, in this particular embodiment, other sources of information, such as operator input, are used to estimate the operator's head and eye positions for image registration purposes. It should be understood that GPS information, 3D map information, and internet content may be interrelated information. The association between these information sources may occur within the EVS system manager 110, or as depicted in FIG. 38, the means for providing information to the EVS system manager 110 may include programming to coordinate the information prior to or simultaneously with providing the information to the system manager. Through this exemplary process, information may be monitored and used to project an image on the HUD.
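The information flow of FIG. 38 can be sketched as a small pipeline: sources feed the EVS system manager, which produces display requirements; a graphics system turns them into commands for the projection system. All class names, method names, and the example decision rules are illustrative assumptions of this sketch, not the patent's implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class EVSSystemManager:
        sources: Dict[str, Callable[[], dict]]             # e.g. camera, radar, GPS, 3D map readers

        def display_requirements(self) -> List[dict]:
            snapshot = {name: read() for name, read in self.sources.items()}
            requirements = []
            if snapshot.get("radar", {}).get("closing_fast"):
                requirements.append({"graphic": "collision_alert", "registered": True})
            if snapshot.get("gps", {}).get("upcoming_turn"):
                requirements.append({"graphic": "navigation_arrow", "registered": True})
            return requirements

    class EVSGraphicsSystem:
        def commands(self, requirements: List[dict]) -> List[str]:
            return ["draw:" + r["graphic"] for r in requirements]

    class GraphicsProjectionSystem:
        def project(self, commands: List[str]) -> None:
            for c in commands:
                print("HUD <-", c)           # stand-in for projecting light on the head-up display

    if __name__ == "__main__":
        manager = EVSSystemManager(sources={
            "radar": lambda: {"closing_fast": True},
            "gps": lambda: {"upcoming_turn": False},
        })
        gfx = EVSGraphicsSystem()
        GraphicsProjectionSystem().project(gfx.commands(manager.display_requirements()))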
The above embodiments describe projecting an image on the windshield of a vehicle. However, the methods described herein may be applied to any suitable surface within the vehicle. For example, the projection system may be used solely on the rear window of the vehicle. In another example, the projection system may be used on any of the vehicle side windows, for example in the second row of the vehicle. Such a system, together with selectable or add-on programs, can be used to entertain children during travel, for example with games in which children look for various landmarks or words on objects outside the vehicle. Information for the passengers may be displayed on such surfaces, for example time to destination, a digital map describing the progress of the journey, entertainment images, or internet content. Vehicles using alternative windshield structures, e.g., round, semi-round, dome-shaped, or other encapsulated canopy designs, may similarly use the windshield as a surface on which graphical images may be displayed.
The information projected on the HUD is described above as covering the full windshield. However, it should be understood that the methods described herein need not be applied across the entire windshield. For example, to avoid requiring the operator to look too far from a position directly ahead, images may be limited to a conical region within the operator's field of view. Alternatively, images may be kept from being projected in front of the passenger to avoid disturbing the passenger. Alternatively, a central region of the operator's field of view may be kept free of images, ensuring that no image draws the operator's attention away from the most critical view of the vehicle path. Alternatively, a zone around the perimeter of the windshield may be used for projected images, leaving the entire middle portion of the windshield clear for the operator's view. In the above cases, the operator's entire field of view is not utilized, but images may still be registered on the windshield, for example using horizontal and vertical tick marks around the non-display area to indicate the location of a pointed-to object or condition. According to the methods described herein, the display configuration may be selected by the operator or occupants, or configured to display different schemes based on a number of criteria, such as time of day, number of occupants in the vehicle, location, or the importance of the information. The areas within the vehicle included in or excluded from display projection may take a number of different embodiments, and the invention is not limited to the specific embodiments described herein.
In one exemplary embodiment of the invention, a method is provided for displaying, on a transparent windscreen head-up display of a host vehicle, a graphic depicting traffic information based on inter-vehicle communication. Known inter-vehicle communication uses wireless protocols that allow two or more vehicles to share vehicle information with one another. The method includes monitoring the inter-vehicle communication and determining traffic information based on the inter-vehicle communication. The traffic information includes information that may be determined by observing other vehicles. The method also includes determining a graphic describing the traffic information for display on the transparent windscreen head-up display and displaying the graphic on the transparent windscreen head-up display.
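The four method steps above can be sketched as a single routine with injected helpers. The message format and the helper names are illustrative assumptions, not the patent's interfaces.

    def display_traffic_graphic(receive_v2v_message, determine_traffic_info, build_graphic, hud_display):
        message = receive_v2v_message()                  # monitor inter-vehicle communication
        traffic_info = determine_traffic_info(message)   # e.g. impending merge, traffic deceleration
        if traffic_info is not None:
            graphic = build_graphic(traffic_info)        # arrow, highlight, text alert, ...
            hud_display(graphic)                         # display on the transparent head-up display

    if __name__ == "__main__":
        display_traffic_graphic(
            receive_v2v_message=lambda: {"sender": "other_vehicle", "lane_change": True},
            determine_traffic_info=lambda m: "impending_merge" if m.get("lane_change") else None,
            build_graphic=lambda info: {"type": "arrow", "label": info},
            hud_display=print)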
Exemplary embodiments relating to traffic information contemplate traffic information that includes detecting an impending merge operation involving the host vehicle and a merging target vehicle (also referred to simply as the merging vehicle or the other vehicle). The impending merge operation is detected by the host vehicle analyzing the motion of the merging vehicle transmitted wirelessly in the inter-vehicle communication. To detect the impending merge operation, the host vehicle must determine motion of the merging vehicle intruding into an area adjacent to the host vehicle. The area adjacent to the host vehicle may be understood as a predefined envelope surrounding the host vehicle, outside of which the probability that an event or situation could develop into a collision is reduced. If an impending merge operation involving the host vehicle and the merging vehicle is detected, a graphic describing the impending merge operation may be displayed on the transparent head-up display. For example, the merging vehicle may be highlighted and an arrow describing the impending merge operation requested by the merging vehicle may be displayed on the host vehicle's transparent head-up display.
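A minimal sketch of the envelope test follows: it checks whether the merging vehicle's reported motion carries it into a predefined envelope around the host vehicle within a short horizon. The envelope dimensions, horizon, time step, and the constant-velocity extrapolation are illustrative assumptions.

    def impending_merge(host_pos, merge_pos, merge_velocity,
                        envelope=(6.0, 3.0), horizon_s=3.0, step_s=0.25):
        """Positions in host-aligned coordinates (x forward, y left), metres and m/s."""
        half_len, half_wid = envelope
        t = 0.0
        while t <= horizon_s:
            x = merge_pos[0] + merge_velocity[0] * t - host_pos[0]
            y = merge_pos[1] + merge_velocity[1] * t - host_pos[1]
            if abs(x) <= half_len and abs(y) <= half_wid:
                return True                              # reported motion intrudes into the envelope
            t += step_s
        return False

    if __name__ == "__main__":
        # Merging vehicle 10 m ahead in the next lane, drifting toward the host lane.
        print(impending_merge(host_pos=(0.0, 0.0), merge_pos=(10.0, 3.5), merge_velocity=(-2.0, -1.0)))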
An impending merge operation involving the host vehicle and the merging vehicle may be detected by the host vehicle analyzing the motion of the merging vehicle transmitted in the inter-vehicle communication. If the analyzed motion of the merging vehicle indicates aggressive operation, a possible cut-off operation is determined. It will be appreciated that the possible cut-off operation is determined in accordance with the motion of the merging vehicle and the indication of aggressive operation. Determining a possible cut-off operation provides detection of an impending merge operation that is more predictive and accurate than the host vehicle operator merely observing the current motion of the merging vehicle to detect the merge. If a possible cut-off operation involving the host vehicle and the merging vehicle is detected, the graphic describing the impending merge operation displays a cut-off alert on the transparent head-up display, the cut-off alert being determined in accordance with the possible cut-off operation. For example, the merging vehicle may be highlighted and an arrow and a text alert displayed on the host vehicle's transparent head-up display to warn the host vehicle operator of a possible cut-off maneuver by the merging vehicle.
An impending merge operation involving the host vehicle and the merging vehicle may also be detected, based on inter-vehicle communication, by monitoring the proximity location of the merging vehicle relative to the host vehicle. Furthermore, the lane change signal of the merging vehicle may be monitored by the host vehicle; it is contemplated that the monitored lane change signal is transmitted wirelessly in the inter-vehicle communication. Based on the monitored lane change signal and the proximity location of the merging vehicle relative to the host vehicle, a merge request may be determined, whereby an impending merge operation may be detected.
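This combination of signal and proximity can be sketched as a single predicate; the proximity window values are illustrative assumptions.

    def merge_request(lane_change_signal_on, signal_toward_host, relative_x_m, relative_y_m):
        """True when the wirelessly reported lane change signal plus proximity imply a merge request."""
        in_proximity = -5.0 <= relative_x_m <= 30.0 and abs(relative_y_m) <= 5.0
        return lane_change_signal_on and signal_toward_host and in_proximity

For example, merge_request(True, True, 10.0, 3.5) would flag a merge request from a signaling vehicle one lane over and slightly ahead of the host vehicle.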
With reference to FIG. 39, embodiments of the host vehicle operating in manual, semi-automatic, and automatic operation when an impending merge operation has been detected by one of the methods described above are contemplated. View 801 includes the host vehicle, a merge-blocking vehicle 813, and a merging vehicle 815. When an impending merge operation has been detected for a manually operated host vehicle, the graphic depicting traffic information includes a merging vehicle trajectory alert, wherein the merging vehicle 815 is highlighted 831. The merging vehicle 815 trajectory alert merely alerts the host vehicle operator that the merging vehicle 815 intends to perform a merge operation. For example, a registered graphic of an arrow 817 may be displayed on the transparent windscreen head-up display, pointing in the direction in which the merging vehicle 815 intends to merge. Under manual operation, the host vehicle operator may decelerate to allow the merging vehicle 815 to merge. For a vehicle operating in a semi-automatic or automatic mode of operation, a merge negotiation graphic 823 may be included when an impending merge operation has been detected by one of the methods described above. The merge negotiation graphic 823 displays the merge request of the merging vehicle 815. As described below, a request made by the merging vehicle 815 may be responded to manually by the host vehicle operator or automatically. An example of semi-automatic operation is adaptive cruise control, in which the merging vehicle 815 requests the host operator's permission to coordinate with the merging vehicle and allow it to merge. If the host vehicle is operated semi-automatically, the merge negotiation graphic 823 includes a selectable request to provide space for the merging vehicle 815. For example, the selectable request may include the host vehicle operator replying "yes" to allow the merge or "no" to deny the merge request of the merging vehicle 815. The selectable request is answered manually, by buttons, voice commands, a touch screen, or other means, to provide space for the merging vehicle 815. If the host vehicle is operated automatically, the merge negotiation graphic 823 includes a notification of the impending merge operation. Automatic operation automatically allows or denies the request made by the merging vehicle 815 for the merge operation. For example, the host vehicle operator is notified of the impending merge operation via view 801 but is not required to accept or decline the merge request made by the merging vehicle 815.
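The mode-dependent handling described above can be sketched as a small dispatcher; the returned graphic and action labels and the operator-prompt mechanism are illustrative assumptions.

    def handle_merge_request(operation_mode, ask_operator=None):
        if operation_mode == "manual":
            # Trajectory alert only: highlight the merging vehicle and point an arrow at it.
            return {"graphic": "trajectory_alert", "action": None}
        if operation_mode == "semi-automatic":
            # Merge negotiation graphic with a selectable yes/no request to the operator.
            accept = ask_operator("Provide space for merging vehicle?") if ask_operator else False
            return {"graphic": "merge_negotiation", "action": "yield" if accept else "hold"}
        if operation_mode == "automatic":
            # Notification only; the vehicle accepts or declines the request itself.
            return {"graphic": "merge_notification", "action": "auto_negotiate"}
        raise ValueError("unknown operation mode: " + operation_mode)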
Other embodiments utilizing adaptive cruise control may detect an impending traffic deceleration based on inter-vehicle communication. It will be appreciated that the speed of the host vehicle, as well as the speeds of the other vehicles, must be continuously monitored. When an impending traffic deceleration is detected, the graphic depicting it includes an impending traffic deceleration alert. Methods known in the art include commanding an automatic braking operation upon detection of an impending traffic deceleration.
Referring to FIG. 40, an exemplary embodiment depicts a view 901 of the host vehicle traveling behind another vehicle 915, illustrating traffic information for an undesirable following state, such as an undesirable inter-vehicle gap or headway time, as determined from inter-vehicle communication. The speed of the host vehicle may be monitored, and the motion of the other vehicle 915 traveling in front of the host vehicle may be determined based on information transmitted in the inter-vehicle communication. From the speed of the host vehicle and the motion of the other vehicle 915, an undesirable following distance condition may be determined. If an undesirable following distance condition is determined, a graphic of an undesirable following distance alert 919, depicting the undesirable following distance condition, may be displayed on the transparent windscreen head-up display for the host vehicle operator. For example, the operator may be operating the host vehicle at a preferred cruising speed when the inter-vehicle communication indicates that the other vehicle 915 is actively decelerating. In this example, an undesirable following condition may be determined, and a graphic of the undesirable following distance alert 919 may be displayed on the transparent windscreen head-up display to warn the operator of the host vehicle to maintain a greater distance or headway time relative to the other vehicle 915. Further, when an undesirable following state is determined, the other vehicle may be highlighted 931.
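A minimal sketch of flagging the undesirable following state from the host speed and the other vehicle's reported motion follows. The two-second headway rule and the extra margin for a decelerating lead vehicle are illustrative assumptions, not values from the patent.

    def undesirable_following(host_speed_mps, gap_m, lead_decel_mps2, min_headway_s=2.0):
        """Flag a following state as undesirable when headway drops below a required margin."""
        if host_speed_mps <= 0.0:
            return False
        headway_s = gap_m / host_speed_mps
        required_s = min_headway_s + (0.5 if lead_decel_mps2 > 1.0 else 0.0)   # extra margin if lead decelerating
        return headway_s < required_s

    if __name__ == "__main__":
        # 25 m gap at 30 m/s behind a lead vehicle that is actively decelerating.
        print(undesirable_following(host_speed_mps=30.0, gap_m=25.0, lead_decel_mps2=2.5))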
Embodiments are envisioned that relate to monitoring communication between vehicles. For example, the source vehicle's travel route may be monitored using inter-vehicle communication. A graphic of a travel route map of the source vehicle may be displayed on the transparent windscreen head-up display. In another example, the planned route transmitted by the source vehicle may be monitored using inter-vehicle communication. A graphic of the planned roadmap for the source vehicle may be displayed on the transparent windscreen head-up display. In yet another example, adverse driving conditions ahead of the host vehicle may be monitored. After the adverse driving condition is determined, an adverse driving condition warning graphic describing the adverse driving condition may be displayed on the transparent windscreen head-up display. For example, the adverse driving condition may include monitoring traffic deceleration along a planned route of the host vehicle and displaying a suggested alternate route alert on a transparent windscreen head-up display. The adverse driving condition may also include monitoring slippery road conditions ahead of the host vehicle, displaying a forward slippery road alert on the transparent windscreen head-up display to alert an operator of the host vehicle of the impending slippery road condition. Further, the adverse driving condition may further include monitoring a road obstacle ahead of the host vehicle, and displaying a front road obstacle alert as a graphic depicting the road obstacle ahead of the host vehicle.
In another exemplary embodiment of the invention, the eye position of the host vehicle operator is monitored. The graphics describing traffic information for display include a particular graphic registered to the particular driving scene viewable through the transparent windscreen head-up display. For example, the merge negotiation graphic provided for a host vehicle operating under semi-automatic operation is a registered graphic describing the impending merge operation.
An exemplary alternative embodiment includes: a system that displays, on the windscreen head-up display of the host vehicle, a graphic describing a requested host vehicle response based on inter-vehicle communication; an inter-vehicle communication device; a sensor describing the eye position of a host vehicle occupant; and the transparent windscreen head-up display. The system also includes an enhanced vision system manager configured to monitor the inter-vehicle communication device, monitor the sensor describing the eye position of the host vehicle occupant, determine the movement of the other vehicle based on communication through the inter-vehicle communication device, evaluate the movement of the other vehicle to determine the requested host vehicle response, and determine registered display requirements based on the requested host vehicle response and data from the sensor describing the eye position of the host vehicle occupant. The system further comprises: a graphics system that generates graphics describing the requested host vehicle response in accordance with the registered display requirements; and a graphics projection system in communication with the graphics system that displays the graphics describing the requested host vehicle response on the transparent windscreen head-up display.
The foregoing describes a substantially transparent head-up display capable of full-screen display. It should be understood that similar methods could be used on windshields employing a substantially full-screen display, a partial windshield display (for example, a display limited to the driver's half of the windshield), or a display focused on or limited to the region directly ahead of the operator's field of view. Many embodiments of displays are contemplated, and this disclosure is not intended to be limited to the particular exemplary embodiments described herein.
The disclosure has described certain preferred embodiments and modifications thereto. Further modifications and alterations may occur to others upon reading and understanding the specification. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A method of displaying a graphic depicting traffic information on a substantially transparent windscreen head-up display of a host vehicle based on inter-vehicle communication, the method comprising:
monitoring communication between vehicles;
determining traffic information based on the communication between vehicles;
determining a graphic depicting the traffic information to be displayed on the substantially transparent windscreen head-up display; and
displaying the graphic on the substantially transparent windscreen head-up display;
wherein the substantially transparent windscreen head-up display includes one of light emitting particles or microstructures over a predetermined area of the windscreen permitting luminescent display while permitting vision through the windscreen.
2. The method of claim 1, wherein determining traffic information based on inter-vehicle communications comprises: detecting an impending merge operation involving the host vehicle and other vehicles.
3. The method of claim 2, wherein detecting impending merging operations involving the host vehicle and other vehicles comprises:
analyzing the motion of the other vehicle transmitted in the inter-vehicle communication; and
determining movement of the other vehicle intruding into an area adjacent to the host vehicle.
4. The method of claim 2, wherein detecting impending merging operations involving the host vehicle and other vehicles comprises:
analyzing the motion of the other vehicle transmitted in the inter-vehicle communication;
determining movement of other vehicles indicative of aggressive operation; and
determining a possible cut-off operation based on the motion of the other vehicle and the indication of aggressive operation;
wherein determining the graphic depicting traffic information comprises determining a cut-off alert based on the possible cut-off operation.
5. The method of claim 2, wherein detecting impending merging operations involving the host vehicle and other vehicles comprises:
monitoring proximity positions of other vehicles relative to the host vehicle;
monitoring lane change signals of other vehicles transmitted in the inter-vehicle communication; and
determining that the lane change signal is a merge request based on the lane change signal and a proximity location of the other vehicle relative to the host vehicle.
6. The method of claim 2, wherein the host vehicle is manually operated;
wherein determining the graphic depicting traffic information includes determining a trajectory alert graphic corresponding to the trajectory of the other vehicle.
7. The method of claim 2, wherein determining a graphic that describes traffic information comprises determining a merge negotiation graphic corresponding to an upcoming merge of the other vehicle.
8. The method of claim 7, wherein the host vehicle is semi-automatically operated;
wherein the merge negotiation graphic includes a selectable request to coordinate the upcoming merge of the other vehicle.
9. The method of claim 7, wherein the host vehicle is automatically operated;
wherein the merge negotiation graphic includes a notification of the impending merge of the other vehicle.
10. A system for displaying a graphic describing a requested host vehicle response based on inter-vehicle communication on a substantially transparent windscreen head-up display of a host vehicle, the system comprising:
a communication device between the vehicles;
a sensor describing an eye position of a host vehicle occupant;
the substantially transparent windscreen head-up display including one of light emitting particles or microstructures over a predetermined area of the windscreen permitting luminescent display while permitting vision through the windscreen;
an enhanced vision system manager:
monitoring a communication device between the vehicles;
monitoring the sensor descriptive of the eye position of the host occupant;
determining the movement of the other vehicle based on the communication through the inter-vehicle communication device;
evaluating the movement of the other vehicle to determine a requested host vehicle response; and
determining registered display requirements based on the requested host vehicle response and data from the sensor describing the host vehicle occupant's eye position;
a graphics system that generates graphics describing the requested host response in accordance with the registered display requirements; and
a graphics projection system in communication with the graphics system and displaying graphics describing the requested host vehicle response on the substantially transparent windscreen head up display.
CN2010101962759A 2009-04-02 2010-04-02 Vehicle-to-vehicle communicator on full-windshield head-up display Active CN101876751B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US12/417,077 US8629903B2 (en) 2009-04-02 2009-04-02 Enhanced vision system full-windshield HUD
US12/417077 2009-04-02
US12/563372 2009-09-21
US12/563,372 US8269652B2 (en) 2009-04-02 2009-09-21 Vehicle-to-vehicle communicator on full-windshield head-up display

Publications (2)

Publication Number Publication Date
CN101876751A true CN101876751A (en) 2010-11-03
CN101876751B CN101876751B (en) 2012-10-03

Family

ID=42825746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101962759A Active CN101876751B (en) 2009-04-02 2010-04-02 Vehicle-to-vehicle communicator on full-windshield head-up display

Country Status (3)

Country Link
US (1) US8269652B2 (en)
CN (1) CN101876751B (en)
DE (1) DE102010013402A1 (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568231A (en) * 2010-12-29 2012-07-11 通用汽车环球科技运作有限责任公司 Roadway condition warning on full windshield head-up display
CN103024214A (en) * 2011-09-23 2013-04-03 福特全球技术公司 Method for incoming call filtration
CN103217165A (en) * 2012-01-19 2013-07-24 沃尔沃汽车公司 Driver assisting system
CN103237674A (en) * 2010-12-08 2013-08-07 丰田自动车株式会社 Vehicle information transmission device
CN104639627A (en) * 2015-01-29 2015-05-20 中国科学院计算技术研究所 Information transmission method for Internet of Vehicles and corresponding vehicle-mounted device and vehicle
CN105806358A (en) * 2014-12-30 2016-07-27 中国移动通信集团公司 Driving prompting method and apparatus
CN106503676A (en) * 2016-11-04 2017-03-15 大连文森特软件科技有限公司 Based on AR augmented realities and the drive assist system of driving details collection
CN106671984A (en) * 2016-11-04 2017-05-17 大连文森特软件科技有限公司 Driving assistance system based on AR augmented reality
CN107179767A (en) * 2016-03-10 2017-09-19 松下电器(美国)知识产权公司 Steering control device, driving control method and non-transient recording medium
CN108961377A (en) * 2018-06-28 2018-12-07 西安电子科技大学 A kind of design method for airborne enhancing synthetic vision system virtual secure face
WO2019169782A1 (en) * 2018-03-05 2019-09-12 南方科技大学 Image display thin film for in-vehicle display and manufacturing method therefor
CN110570665A (en) * 2018-06-06 2019-12-13 德尔福技术有限公司 Vehicle intention communication system
US11097735B1 (en) 2020-03-19 2021-08-24 Toyota Motor North America, Inc. Transport lane usage
US11488424B2 (en) 2020-03-19 2022-11-01 Toyota Motor North America, Inc. Motion-based transport assessment
US11577749B2 (en) 2020-04-23 2023-02-14 Toyota Motor North America, Inc. Driving mode assessment
US11720114B2 (en) 2020-03-19 2023-08-08 Toyota Motor North America, Inc. Safety of transport maneuvering

Families Citing this family (194)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006053809A1 (en) * 2006-11-15 2008-05-21 Robert Bosch Gmbh Method for setting parameters of a brake system in a motor vehicle
US8170787B2 (en) * 2008-04-15 2012-05-01 Caterpillar Inc. Vehicle collision avoidance system
EP2347400B1 (en) * 2008-11-07 2014-03-12 Volvo Lastvagnar AB Method and system for combining sensor data
US8935055B2 (en) * 2009-01-23 2015-01-13 Robert Bosch Gmbh Method and apparatus for vehicle with adaptive lighting system
US8317329B2 (en) * 2009-04-02 2012-11-27 GM Global Technology Operations LLC Infotainment display on full-windshield head-up display
US8358224B2 (en) * 2009-04-02 2013-01-22 GM Global Technology Operations LLC Point of interest location marking on full windshield head-up display
US8704653B2 (en) * 2009-04-02 2014-04-22 GM Global Technology Operations LLC Enhanced road vision on full windshield head-up display
US8427395B2 (en) * 2009-04-02 2013-04-23 GM Global Technology Operations LLC Full-windshield hud enhancement: pixelated field of view limited architecture
US8830141B2 (en) * 2009-04-02 2014-09-09 GM Global Technology Operations LLC Full-windshield head-up display enhancement: anti-reflective glass hard coat
US8330673B2 (en) * 2009-04-02 2012-12-11 GM Global Technology Operations LLC Scan loop optimization of vector projection display
US8072686B2 (en) * 2009-04-02 2011-12-06 GM Global Technology Operations LLC UV laser beamlett on full-windshield head-up display
US8350724B2 (en) * 2009-04-02 2013-01-08 GM Global Technology Operations LLC Rear parking assist on full rear-window head-up display
US8817090B2 (en) * 2009-04-02 2014-08-26 GM Global Technology Operations LLC Luminance uniformity compensation of vector projection display
US8564502B2 (en) 2009-04-02 2013-10-22 GM Global Technology Operations LLC Distortion and perspective correction of vector projection display
US8547298B2 (en) * 2009-04-02 2013-10-01 GM Global Technology Operations LLC Continuation of exterior view on interior pillars and surfaces
US8384531B2 (en) * 2009-04-02 2013-02-26 GM Global Technology Operations LLC Recommended following distance on full-windshield head-up display
US8912978B2 (en) 2009-04-02 2014-12-16 GM Global Technology Operations LLC Dynamic vehicle system information on full windshield head-up display
US7924146B2 (en) * 2009-04-02 2011-04-12 GM Global Technology Operations LLC Daytime pedestrian detection on full-windscreen head-up display
US8344894B2 (en) * 2009-04-02 2013-01-01 GM Global Technology Operations LLC Driver drowsy alert on full-windshield head-up display
US8629784B2 (en) * 2009-04-02 2014-01-14 GM Global Technology Operations LLC Peripheral salient feature enhancement on full-windshield head-up display
US8482486B2 (en) * 2009-04-02 2013-07-09 GM Global Technology Operations LLC Rear view mirror on full-windshield head-up display
US8395529B2 (en) 2009-04-02 2013-03-12 GM Global Technology Operations LLC Traffic infrastructure indicator on head-up display
US20100253595A1 (en) * 2009-04-02 2010-10-07 Gm Global Technology Operations, Inc. Virtual controls and displays by laser projection
US8384532B2 (en) * 2009-04-02 2013-02-26 GM Global Technology Operations LLC Lane of travel on windshield head-up display
US8164543B2 (en) 2009-05-18 2012-04-24 GM Global Technology Operations LLC Night vision on full windshield head-up display
DE102009047407A1 (en) * 2009-12-02 2011-06-09 Robert Bosch Gmbh Method and navigation device for simplifying a description of a route
JP5325765B2 (en) * 2009-12-28 2013-10-23 日立オートモティブシステムズ株式会社 Road shoulder detection device and vehicle using road shoulder detection device
EP2544161B1 (en) * 2010-03-03 2014-12-24 Honda Motor Co., Ltd. Surrounding area monitoring device for vehicle
JP5168601B2 (en) * 2010-03-31 2013-03-21 アイシン・エィ・ダブリュ株式会社 Own vehicle position recognition system
US9165468B2 (en) * 2010-04-12 2015-10-20 Robert Bosch Gmbh Video based intelligent vehicle control system
JP5363407B2 (en) * 2010-04-26 2013-12-11 日立建機株式会社 Construction machine display device
US8346426B1 (en) 2010-04-28 2013-01-01 Google Inc. User interface for displaying internal state of autonomous driving system
US8260482B1 (en) 2010-04-28 2012-09-04 Google Inc. User interface for displaying internal state of autonomous driving system
US8098170B1 (en) 2010-10-08 2012-01-17 GM Global Technology Operations LLC Full-windshield head-up display interface for social networking
US8606430B2 (en) 2010-10-08 2013-12-10 GM Global Technology Operations LLC External presentation of information on full glass display
US8514099B2 (en) * 2010-10-13 2013-08-20 GM Global Technology Operations LLC Vehicle threat identification on full windshield head-up display
US8599027B2 (en) * 2010-10-19 2013-12-03 Deere & Company Apparatus and method for alerting machine operator responsive to the gaze zone
JP2012123628A (en) * 2010-12-08 2012-06-28 Toyota Motor Corp Information transmission device for vehicle
DE102010063420A1 (en) * 2010-12-17 2012-06-21 Bayerische Motoren Werke Aktiengesellschaft Driver assistance system with a sensor arrangement for detecting the distance of the own vehicle to a foreign object
US8098171B1 (en) 2010-12-28 2012-01-17 GM Global Technology Operations LLC Traffic visibility in poor viewing conditions on full windshield head-up display
US8605011B2 (en) 2010-12-29 2013-12-10 GM Global Technology Operations LLC Virtual viewfinder on full windshield head-up display
US8633979B2 (en) 2010-12-29 2014-01-21 GM Global Technology Operations LLC Augmented road scene illustrator system on full windshield head-up display
US8924150B2 (en) 2010-12-29 2014-12-30 GM Global Technology Operations LLC Vehicle operation and control system for autonomous vehicles on full windshield display
US9008904B2 (en) 2010-12-30 2015-04-14 GM Global Technology Operations LLC Graphical vehicle command system for autonomous vehicles on full windshield head-up display
US9057874B2 (en) 2010-12-30 2015-06-16 GM Global Technology Operations LLC Virtual cursor for road scene object selection on full windshield head-up display
US8618952B2 (en) * 2011-01-21 2013-12-31 Honda Motor Co., Ltd. Method of intersection identification for collision warning system
JP2012155655A (en) * 2011-01-28 2012-08-16 Sony Corp Information processing device, notification method, and program
JP5729861B2 (en) * 2011-02-08 2015-06-03 本田技研工業株式会社 Vehicle driving support device
CN103370250A (en) * 2011-02-23 2013-10-23 丰田自动车株式会社 Drive assist apparatus, drive assist method, and drive assist program
KR20120105761A (en) * 2011-03-16 2012-09-26 한국전자통신연구원 Apparatus and method for visualizating external environment
GB2490094B (en) * 2011-03-29 2015-11-18 Jaguar Land Rover Ltd Monitoring apparatus and method
US8655575B2 (en) * 2011-03-31 2014-02-18 International Business Machines Corporation Real time estimation of vehicle traffic
US8947219B2 (en) * 2011-04-22 2015-02-03 Honda Motors Co., Ltd. Warning system with heads up display
EP2700032B1 (en) * 2011-04-22 2016-03-16 F3M3 Companies, Inc. A comprehensive and intelligent system for managing traffic and emergency services
US8730930B2 (en) * 2011-05-31 2014-05-20 Broadcom Corporation Polling using B-ACK for occasional back-channel traffic in VoWIFI applications
WO2013034182A1 (en) * 2011-09-08 2013-03-14 Valeo Schalter Und Sensoren Gmbh Method for creating a vehicle surroundings map, driver assistance device, and vehicle having a driver assistance device
DE102011112578A1 (en) * 2011-09-08 2013-03-14 Continental Teves Ag & Co. Ohg Method and device for an assistance system in a vehicle for carrying out an autonomous or semi-autonomous driving maneuver
JP5690688B2 (en) * 2011-09-15 2015-03-25 クラリオン株式会社 Outside world recognition method, apparatus, and vehicle system
US8692739B2 (en) 2011-09-22 2014-04-08 GM Global Technology Operations LLC Dynamic information presentation on full windshield head-up display
US20130131890A1 (en) * 2011-11-17 2013-05-23 Cartasite Inc. Methods and systems of enhancing acceleration information
US8514101B2 (en) 2011-12-02 2013-08-20 GM Global Technology Operations LLC Driving maneuver assist on full windshield head-up display
US8818708B2 (en) 2011-12-02 2014-08-26 GM Global Technology Operations LLC Optimum driving path on full windshield display
US8781170B2 (en) 2011-12-06 2014-07-15 GM Global Technology Operations LLC Vehicle ghosting on full windshield display
US8660735B2 (en) * 2011-12-14 2014-02-25 General Motors Llc Method of providing information to a vehicle
US9187118B2 (en) * 2011-12-30 2015-11-17 C & P Technologies, Inc. Method and apparatus for automobile accident reduction using localized dynamic swarming
US9443429B2 (en) 2012-01-24 2016-09-13 GM Global Technology Operations LLC Optimum gaze location on full windscreen display
TWI493478B (en) * 2012-03-21 2015-07-21 Altek Corp License plate image-pickup device and image exposure adjustment method thereof
KR102028720B1 (en) * 2012-07-10 2019-11-08 삼성전자주식회사 Transparent display apparatus for displaying an information of danger element and method thereof
US9026300B2 (en) 2012-11-06 2015-05-05 Google Inc. Methods and systems to aid autonomous vehicles driving through a lane merge
DE102012023108A1 (en) * 2012-11-27 2014-06-12 Audi Ag Method for operating driver assistance system of motor car, involves transmitting lane change information wirelessly to other motor car in environment of own motor car
DE102012023361A1 (en) * 2012-11-28 2014-05-28 Audi Ag Method and device for securing a lane change and vehicle
US8825258B2 (en) 2012-11-30 2014-09-02 Google Inc. Engaging and disengaging for autonomous driving
US9063548B1 (en) 2012-12-19 2015-06-23 Google Inc. Use of previous detections for lane marker detection
US20150367859A1 (en) * 2012-12-21 2015-12-24 Harman Becker Automotive Systems Gmbh Input device for a motor vehicle
US9081385B1 (en) 2012-12-21 2015-07-14 Google Inc. Lane boundary detection using images
US9367065B2 (en) * 2013-01-25 2016-06-14 Google Inc. Modifying behavior of autonomous vehicles based on sensor blind spots and limitations
US9049564B2 (en) * 2013-02-04 2015-06-02 Zf Friedrichshafen Ag Vehicle broadcasting system
US9116242B2 (en) 2013-03-06 2015-08-25 Toyota Motor Engineering & Manufacturing North America, Inc. Map aware adaptive automotive radar
US9122933B2 (en) * 2013-03-13 2015-09-01 Mighty Carma, Inc. After market driving assistance system
US9280901B2 (en) * 2013-03-25 2016-03-08 E-Lead Electronic Co., Ltd. Method for displaying the vehicle safety distance
US9297892B2 (en) * 2013-04-02 2016-03-29 Delphi Technologies, Inc. Method of operating a radar system to reduce nuisance alerts caused by false stationary targets
US9632210B2 (en) 2013-05-07 2017-04-25 Google Inc. Methods and systems for detecting weather conditions using vehicle onboard sensors
JP6189523B2 (en) * 2013-04-11 2017-08-30 グーグル インコーポレイテッド Method and system for detecting weather conditions using in-vehicle sensors
FR3005361B1 (en) * 2013-05-06 2018-05-04 Dassault Aviation CLEARANCE ASSISTING DEVICE FOR DISPLAYING ANIMATION AND ASSOCIATED METHOD
US10247854B2 (en) 2013-05-07 2019-04-02 Waymo Llc Methods and systems for detecting weather conditions using vehicle onboard sensors
US9523772B2 (en) 2013-06-14 2016-12-20 Microsoft Technology Licensing, Llc Object removal using lidar-based classification
US9110163B2 (en) 2013-06-14 2015-08-18 Microsoft Technology Licensing, Llc Lidar-based classification of object movement
US9103694B2 (en) 2013-06-24 2015-08-11 Here Global B.V. Method and apparatus for conditional driving guidance
DE102013213039A1 (en) * 2013-07-03 2015-01-08 Continental Automotive Gmbh Assistance system and assistance method for assisting in the control of a motor vehicle
US9645559B1 (en) * 2013-08-09 2017-05-09 Rigminder Operating, Llc Head-up display screen
US9125020B2 (en) 2013-09-18 2015-09-01 Ford Global Technologies, Llc Road trip vehicle to vehicle communication system
US9785231B1 (en) * 2013-09-26 2017-10-10 Rockwell Collins, Inc. Head worn display integrity monitor system and methods
KR101558353B1 (en) * 2013-09-26 2015-10-07 현대자동차 주식회사 Head-up display apparatus for vehicle using augmented reality
KR101588787B1 (en) * 2013-12-02 2016-01-26 현대자동차 주식회사 Method for determining lateral distance of forward vehicle and head up display system using the same
EP3092599B1 (en) * 2013-12-04 2019-03-06 Mobileye Vision Technologies Ltd. Systems and methods for mimicking a leading vehicle
US9613459B2 (en) * 2013-12-19 2017-04-04 Honda Motor Co., Ltd. System and method for in-vehicle interaction
USD760784S1 (en) * 2014-01-03 2016-07-05 Samsung Electronics Co., Ltd. Display screen or portion thereof with icon
US10067341B1 (en) 2014-02-04 2018-09-04 Intelligent Technologies International, Inc. Enhanced heads-up display system
DE102014204002A1 (en) * 2014-03-05 2015-09-10 Conti Temic Microelectronic Gmbh A method of identifying a projected icon on a road in a vehicle, device and vehicle
DE102014003784A1 (en) * 2014-03-15 2015-09-17 Audi Ag Method for driver information in a motor vehicle
US9959838B2 (en) * 2014-03-20 2018-05-01 Toyota Motor Engineering & Manufacturing North America, Inc. Transparent display overlay systems for vehicle instrument cluster assemblies
DE102014107765A1 (en) * 2014-06-03 2015-12-03 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Method and device for automatic or semi-automatic suspension adjustment
KR102219268B1 (en) * 2014-11-26 2021-02-24 한국전자통신연구원 Navigation System Cooperating Routes of Explorer and Controlling Method Thereof
US9701306B2 (en) * 2014-12-23 2017-07-11 Toyota Motor Engineering & Manufacturing North America, Inc. Risk mitigation for autonomous vehicles relative to turning objects
US9713956B2 (en) * 2015-03-05 2017-07-25 Honda Motor Co., Ltd. Vehicle-to-vehicle communication system providing a spatiotemporal look ahead and method thereof
CN105974583B (en) * 2015-03-11 2019-06-18 现代摩比斯株式会社 Head up display and its control method for vehicle
KR102321095B1 (en) * 2015-03-26 2021-11-04 현대모비스 주식회사 Head up display device of a vehicle and the control method thereof
DE102015005696B4 (en) * 2015-05-04 2024-07-18 Audi Ag Displaying an object or event in a motor vehicle environment
EP3090913B1 (en) * 2015-05-08 2021-09-29 Continental Automotive GmbH Vehicle control system and method
KR101750876B1 (en) * 2015-05-28 2017-06-26 엘지전자 주식회사 Display apparatus for vehicle and Vehicle
JP6402684B2 (en) 2015-06-10 2018-10-10 トヨタ自動車株式会社 Display device
US10997570B1 (en) 2015-07-10 2021-05-04 Wells Fargo Bank, N.A. Context-aware, vehicle-based mobile banking
US20170025008A1 (en) * 2015-07-20 2017-01-26 Dura Operating, Llc Communication system and method for communicating the availability of a parking space
IL241446B (en) * 2015-09-10 2018-05-31 Elbit Systems Ltd Adjusting displays on user monitors and guiding users' attention
DE102016205141A1 (en) * 2015-11-04 2017-05-04 Volkswagen Aktiengesellschaft A method and vehicle communication system for determining a driving intention for a vehicle
DE102015223248A1 (en) 2015-11-24 2017-05-24 Continental Automotive Gmbh Method for a driver assistance system
CN108473054B (en) * 2016-02-05 2021-05-28 麦克赛尔株式会社 Head-up display device
JP6369487B2 (en) * 2016-02-23 2018-08-08 トヨタ自動車株式会社 Display device
US10449967B1 (en) 2016-03-01 2019-10-22 Allstate Insurance Company Vehicle to vehicle telematics
JP6500814B2 (en) * 2016-03-07 2019-04-17 トヨタ自動車株式会社 Vehicle lighting system
US9734744B1 (en) * 2016-04-27 2017-08-15 Joan Mercior Self-reacting message board
US10962640B2 (en) * 2016-06-17 2021-03-30 Fujitsu Ten Limited Radar device and control method of radar device
EP3264391A1 (en) * 2016-06-30 2018-01-03 Honda Research Institute Europe GmbH Method and system for assisting a driver in driving a vehicle and vehicle on which such system is mounted
US20180012197A1 (en) 2016-07-07 2018-01-11 NextEv USA, Inc. Battery exchange licensing program based on state of charge of battery pack
US9849842B1 (en) 2016-08-11 2017-12-26 GM Global Technology Operations LLC Resettable transceiver bracket
DE102016122686B4 (en) 2016-11-24 2021-01-07 Henrik Stiewe Method for informing a road user about a traffic situation
US10196058B2 (en) 2016-11-28 2019-02-05 drive.ai Inc. Method for influencing entities at a roadway intersection
JP6642398B2 (en) * 2016-12-06 2020-02-05 トヨタ自動車株式会社 Autonomous driving system
US10261513B2 (en) 2016-12-19 2019-04-16 drive.ai Inc. Methods for communicating state, intent, and context of an autonomous vehicle
KR102438097B1 (en) * 2016-12-30 2022-08-31 젠텍스 코포레이션 Vehicle side mirror assembly
US10082869B2 (en) 2017-02-03 2018-09-25 Qualcomm Incorporated Maintaining occupant awareness in vehicles
JP6465318B2 (en) 2017-03-10 2019-02-06 株式会社Subaru Image display device
JP6465317B2 (en) 2017-03-10 2019-02-06 株式会社Subaru Image display device
JP6429413B2 (en) * 2017-03-10 2018-11-28 株式会社Subaru Image display device
JP6497818B2 (en) 2017-03-10 2019-04-10 株式会社Subaru Image display device
JP6593803B2 (en) 2017-03-10 2019-10-23 株式会社Subaru Image display device
JP6497819B2 (en) 2017-03-10 2019-04-10 株式会社Subaru Image display device
JP6515125B2 (en) 2017-03-10 2019-05-15 株式会社Subaru Image display device
JP6454368B2 (en) 2017-03-15 2019-01-16 株式会社Subaru Vehicle display system and method for controlling vehicle display system
US10365351B2 (en) * 2017-03-17 2019-07-30 Waymo Llc Variable beam spacing, timing, and power for vehicle sensors
US10152884B2 (en) * 2017-04-10 2018-12-11 Toyota Motor Engineering & Manufacturing North America, Inc. Selective actions in a vehicle based on detected ambient hazard noises
DE102017208386A1 (en) * 2017-05-18 2018-11-22 Ford Global Technologies, Llc Method for assisting controlling a vehicle, assistance system and vehicle
US11537134B1 (en) * 2017-05-25 2022-12-27 Apple Inc. Generating environmental input encoding for training neural networks
JP6894301B2 (en) * 2017-06-09 2021-06-30 株式会社小松製作所 Work vehicle
CN109305165B (en) * 2017-07-28 2022-04-12 现代摩比斯株式会社 Intelligent ultrasonic system, vehicle rear collision warning device and control method thereof
US11243532B1 (en) 2017-09-27 2022-02-08 Apple Inc. Evaluating varying-sized action spaces using reinforcement learning
US11555706B1 (en) 2017-09-27 2023-01-17 Apple Inc. Processing graph representations of tactical maps using neural networks
KR20190078664A (en) 2017-12-11 2019-07-05 삼성전자주식회사 Method and apparatus for displaying content
JP2019121107A (en) * 2017-12-28 2019-07-22 トヨタ自動車株式会社 On-vehicle communication device and vehicle
US11884205B2 (en) * 2018-01-10 2024-01-30 Mod Worldwide, Llc Messaging system
KR102506871B1 (en) * 2018-02-19 2023-03-08 현대자동차주식회사 Autonomous driving control apparatus and method for notifying departure of forward vehicle thereof
EP3757513B1 (en) * 2018-02-20 2024-04-10 Mitsubishi Electric Corporation Measurement monitoring apparatus and measurement monitoring program
TWI664438B (en) * 2018-03-09 2019-07-01 Industrial Technology Research Institute Augmented reality device
CN110244459B (en) 2018-03-09 2021-10-08 财团法人工业技术研究院 Augmented reality device
JP2021518558A (en) * 2018-03-19 2021-08-02 アウトサイト Methods and systems for identifying the material composition of objects
US11059421B2 (en) 2018-03-29 2021-07-13 Honda Motor Co., Ltd. Vehicle proximity system using heads-up display augmented reality graphics elements
DE112018007261B4 (en) * 2018-04-20 2021-11-04 Mitsubishi Electric Corporation DRIVE MONITORING DEVICE
DE102018111016A1 (en) * 2018-05-08 2019-11-14 Man Truck & Bus Se (Partial) autonomous motor vehicle and method for operating the same
US20200001779A1 (en) 2018-06-27 2020-01-02 drive.ai Inc. Method for communicating intent of an autonomous vehicle
US10696257B2 (en) * 2018-07-17 2020-06-30 Denso International America, Inc. Automatic crowd sensing and reporting system for road incidents
JP7102275B2 (en) * 2018-07-30 2022-07-19 本田技研工業株式会社 Display control device and program
WO2020026461A1 (en) * 2018-08-03 2020-02-06 日本電気株式会社 Information processing device, information processing method, and information processing program
US10607416B2 (en) * 2018-08-30 2020-03-31 Valeo Comfort And Driving Assistance Conditional availability of vehicular mixed-reality
US10775509B2 (en) * 2018-09-19 2020-09-15 Ford Global Technologies, Llc Sensor field of view mapping
US10300851B1 (en) * 2018-10-04 2019-05-28 StradVision, Inc. Method for warning vehicle of risk of lane change and alarm device using the same
DE102018128634A1 (en) * 2018-11-15 2020-05-20 Valeo Schalter Und Sensoren Gmbh Method for providing visual information about at least part of an environment, computer program product, mobile communication device and communication system
US10836313B2 (en) 2018-11-28 2020-11-17 Valeo Comfort And Driving Assistance Mixed reality view for enhancing pedestrian safety
KR102699145B1 (en) * 2018-12-10 2024-09-30 현대자동차주식회사 Apparatus and method for identifying a short cut-in vehicle and vehicle including the same
KR102555916B1 (en) * 2018-12-12 2023-07-17 현대자동차주식회사 Apparatus and method for identifying ODM data reliability and vehicle including the same
JP7155991B2 (en) * 2018-12-17 2022-10-19 トヨタ自動車株式会社 Notification device
DE102018222378A1 (en) 2018-12-20 2020-06-25 Robert Bosch Gmbh Device and method for controlling the output of driver information and for maintaining the attention of a driver of an automated vehicle
US10967873B2 (en) 2019-01-30 2021-04-06 Cobalt Industries Inc. Systems and methods for verifying and monitoring driver physical attention
US11186241B2 (en) 2019-01-30 2021-11-30 Cobalt Industries Inc. Automated emotion detection and environmental response
GB2588983B (en) * 2019-04-25 2022-05-25 Motional Ad Llc Graphical user interface for display of autonomous vehicle behaviors
RU2732340C1 (en) 2019-05-17 2020-09-15 Общество с ограниченной ответственностью "Научно-технический центр "Биолюмен" (ООО "НТЦ "Биолюмен") Automotive display on windshield
CN110213730B (en) * 2019-05-22 2022-07-15 未来(北京)黑科技有限公司 Method and device for establishing call connection, storage medium and electronic device
CN110223515B (en) * 2019-06-17 2021-01-01 北京航空航天大学 Vehicle track generation method
US11153010B2 (en) 2019-07-02 2021-10-19 Waymo Llc Lidar based communication
US11614739B2 (en) 2019-09-24 2023-03-28 Apple Inc. Systems and methods for hedging for different gaps in an interaction zone
US11148663B2 (en) 2019-10-02 2021-10-19 Ford Global Technologies, Llc Enhanced collision mitigation
EP3809359A1 (en) * 2019-10-14 2021-04-21 Ningbo Geely Automobile Research & Development Co. Ltd. Vehicle driving challenge system and corresponding method
US11454813B2 (en) 2019-11-07 2022-09-27 GM Global Technology Operations LLC Holographic display systems with polarization correction and distortion reduction providing enhanced image quality
CN111044073B (en) * 2019-11-26 2022-07-05 北京卫星制造厂有限公司 High-precision AGV position sensing method based on binocular laser
CN111044045B (en) * 2019-12-09 2022-05-27 中国科学院深圳先进技术研究院 Navigation method and device based on neural network and terminal equipment
US11383733B2 (en) * 2020-01-31 2022-07-12 Mitac Digital Technology Corporation Method and system for detecting a dangerous driving condition for a vehicle, and non-transitory computer readable medium storing program for implementing the method
WO2021190812A1 (en) * 2020-03-27 2021-09-30 Daimler Ag Method for supporting an automatically driving vehicle
US11506892B1 (en) 2021-05-03 2022-11-22 GM Global Technology Operations LLC Holographic display system for a motor vehicle
US11762195B2 (en) 2021-05-06 2023-09-19 GM Global Technology Operations LLC Holographic display system with conjugate image removal for a motor vehicle
US11880036B2 (en) 2021-07-19 2024-01-23 GM Global Technology Operations LLC Control of ambient light reflected from pupil replicator
USD1002647S1 (en) * 2021-10-13 2023-10-24 Waymo Llc Display screen or portion thereof with graphical user interface
USD1002648S1 (en) * 2021-10-13 2023-10-24 Waymo Llc Display screen or portion thereof with graphical user interface
USD1002649S1 (en) * 2021-10-13 2023-10-24 Waymo Llc Display screen or portion thereof with graphical user interface
EP4389550A1 (en) * 2021-10-20 2024-06-26 Samsung Electronics Co., Ltd. Electronic device mounted on vehicle and operation method therefor
US11978265B2 (en) * 2022-03-11 2024-05-07 GM Global Technology Operations LLC System and method for providing lane identification on an augmented reality display

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7382274B1 (en) 2000-01-21 2008-06-03 Agere Systems Inc. Vehicle interaction communication system
US7090355B2 (en) * 2003-05-19 2006-08-15 Superimaging, Inc. System and method for a transparent color image display utilizing fluorescence conversion of nano particles and molecules
US6986581B2 (en) * 2003-11-03 2006-01-17 Superimaging, Inc. Light emitting material integrated into a substantially transparent substrate
DE10350529A1 (en) 2003-09-08 2005-03-31 Nowak Consulting Head-up display system e.g. for aircraft and road vehicles, has flat, mainly transparent anti-Stokes and/or UV element integrated into windshield panel
CN1875317A (en) * 2003-11-03 2006-12-06 超级影像股份有限公司 Light emitting material integrated into a substantially transparent substrate
DE102004001113A1 (en) 2004-01-07 2005-07-28 Robert Bosch Gmbh Information system for means of transportation
DE102004008895A1 (en) 2004-02-24 2005-09-08 Robert Bosch Gmbh System for controlling and / or regulating driver assistance systems and method related thereto
US7689230B2 (en) * 2004-04-01 2010-03-30 Bosch Rexroth Corporation Intelligent transportation system
US7213923B2 (en) * 2004-04-19 2007-05-08 Superimaging, Inc. Emission of visible light in response to absorption of excitation light
US7460951B2 (en) * 2005-09-26 2008-12-02 Gm Global Technology Operations, Inc. System and method of target tracking using sensor fusion
DE102005052424A1 (en) 2005-11-03 2007-05-10 Robert Bosch Gmbh projection display
DE102008011656A1 (en) 2008-02-28 2008-09-11 Daimler Ag Collision avoidance arrangement for motor vehicle indicates braking intensity of leading vehicle in following vehicle by varying bar indicator and/or frequency of audible warning signal
US8917904B2 (en) * 2008-04-24 2014-12-23 GM Global Technology Operations LLC Vehicle clear path detection
US8330673B2 (en) * 2009-04-02 2012-12-11 GM Global Technology Operations LLC Scan loop optimization of vector projection display
US8629903B2 (en) * 2009-04-02 2014-01-14 GM Global Technology Operations LLC Enhanced vision system full-windshield HUD
US8317329B2 (en) * 2009-04-02 2012-11-27 GM Global Technology Operations LLC Infotainment display on full-windshield head-up display
US20100253595A1 (en) * 2009-04-02 2010-10-07 Gm Global Technology Operations, Inc. Virtual controls and displays by laser projection
US8564502B2 (en) * 2009-04-02 2013-10-22 GM Global Technology Operations LLC Distortion and perspective correction of vector projection display
US8482486B2 (en) * 2009-04-02 2013-07-09 GM Global Technology Operations LLC Rear view mirror on full-windshield head-up display
US8912978B2 (en) * 2009-04-02 2014-12-16 GM Global Technology Operations LLC Dynamic vehicle system information on full windshield head-up display
US8384532B2 (en) * 2009-04-02 2013-02-26 GM Global Technology Operations LLC Lane of travel on windshield head-up display
US8395529B2 (en) * 2009-04-02 2013-03-12 GM Global Technology Operations LLC Traffic infrastructure indicator on head-up display
US8344894B2 (en) * 2009-04-02 2013-01-01 GM Global Technology Operations LLC Driver drowsy alert on full-windshield head-up display
US8384531B2 (en) * 2009-04-02 2013-02-26 GM Global Technology Operations LLC Recommended following distance on full-windshield head-up display
US8547298B2 (en) * 2009-04-02 2013-10-01 GM Global Technology Operations LLC Continuation of exterior view on interior pillars and surfaces
US7924146B2 (en) * 2009-04-02 2011-04-12 GM Global Technology Operations LLC Daytime pedestrian detection on full-windscreen head-up display
US8072686B2 (en) * 2009-04-02 2011-12-06 GM Global Technology Operations LLC UV laser beamlett on full-windshield head-up display
US8427395B2 (en) * 2009-04-02 2013-04-23 GM Global Technology Operations LLC Full-windshield hud enhancement: pixelated field of view limited architecture
US8350724B2 (en) * 2009-04-02 2013-01-08 GM Global Technology Operations LLC Rear parking assist on full rear-window head-up display
US8817090B2 (en) * 2009-04-02 2014-08-26 GM Global Technology Operations LLC Luminance uniformity compensation of vector projection display
US8358224B2 (en) * 2009-04-02 2013-01-22 GM Global Technology Operations LLC Point of interest location marking on full windshield head-up display
US8830141B2 (en) * 2009-04-02 2014-09-09 GM Global Technology Operations LLC Full-windshield head-up display enhancement: anti-reflective glass hard coat
US8629784B2 (en) * 2009-04-02 2014-01-14 GM Global Technology Operations LLC Peripheral salient feature enhancement on full-windshield head-up display
US8704653B2 (en) 2009-04-02 2014-04-22 GM Global Technology Operations LLC Enhanced road vision on full windshield head-up display

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2491228A1 (en) * 1980-10-01 1982-04-02 Heuliez Bus Route monitoring system for public service vehicle - contains microprocessor to monitor actual route from detectors and pre-programmed data to control passenger information display
NL1015681C1 (en) * 2000-07-11 2001-08-14 Schoeren Hubertus Johannes B Road traffic light system with repeater display inside road vehicle, uses short range inductive loop to communicate between system and visual indicator on driver's dashboard
EP1204084A1 (en) * 2000-11-01 2002-05-08 Nissan Motor Co., Ltd. Information providing system and method for a vehicle
CN1397450A (en) * 2001-07-16 2003-02-19 骆文基 Speed-reducing signalling device for vehicle
CN1417755A (en) * 2002-11-18 2003-05-14 冯鲁民 Intelligent traffic system with perfect function and simple architecture
US20040225434A1 (en) * 2003-05-07 2004-11-11 Gotfried Bradley L. Vehicle navigation and safety systems
US20050259033A1 (en) * 2004-05-20 2005-11-24 Levine Alfred B Multiplex-selective heads-up displays for cars
US20070010944A1 (en) * 2005-07-09 2007-01-11 Ferrebee James H Jr Driver-adjustable sensor apparatus, system, & method for improving traffic safety
CN101236301A (en) * 2007-01-02 2008-08-06 通用汽车环球科技运作公司 Apparatus and method for displaying information within a vehicle interior

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9222636B2 (en) 2010-12-08 2015-12-29 Toyota Jidosha Kabushiki Kaisha Vehicle information transmission device
CN103237674A (en) * 2010-12-08 2013-08-07 丰田自动车株式会社 Vehicle information transmission device
CN102568231A (en) * 2010-12-29 2012-07-11 通用汽车环球科技运作有限责任公司 Roadway condition warning on full windshield head-up display
CN102568231B (en) * 2010-12-29 2015-05-06 通用汽车环球科技运作有限责任公司 Method for dynamically registering a graphic onto a driving scene using a substantially transparent windshield head-up display
CN103024214A (en) * 2011-09-23 2013-04-03 福特全球技术公司 Method for incoming call filtration
CN103217165B (en) * 2012-01-19 2018-05-29 沃尔沃汽车公司 Driver assistance system
US9619432B2 (en) 2012-01-19 2017-04-11 Volvo Car Corporation Driver assisting system and method with windscreen heads-up display of vehicle environment
CN103217165A (en) * 2012-01-19 2013-07-24 沃尔沃汽车公司 Driver assisting system
CN105806358A (en) * 2014-12-30 2016-07-27 中国移动通信集团公司 Driving prompting method and apparatus
CN105806358B (en) * 2014-12-30 2019-02-05 中国移动通信集团公司 Driving prompting method and apparatus
CN104639627B (en) * 2015-01-29 2018-11-06 中国科学院计算技术研究所 Information transmission method for Internet of Vehicles and corresponding vehicle-mounted device and vehicle
CN104639627A (en) * 2015-01-29 2015-05-20 中国科学院计算技术研究所 Information transmission method for Internet of Vehicles and corresponding vehicle-mounted device and vehicle
CN107179767B (en) * 2016-03-10 2021-10-08 松下电器(美国)知识产权公司 Driving control device, driving control method, and non-transitory recording medium
CN107179767A (en) * 2016-03-10 2017-09-19 松下电器(美国)知识产权公司 Driving control device, driving control method, and non-transitory recording medium
CN106503676A (en) * 2016-11-04 2017-03-15 大连文森特软件科技有限公司 Driving assistance system based on AR augmented reality and driving detail collection
CN106671984A (en) * 2016-11-04 2017-05-17 大连文森特软件科技有限公司 Driving assistance system based on AR augmented reality
WO2019169782A1 (en) * 2018-03-05 2019-09-12 南方科技大学 Image display thin film for in-vehicle display and manufacturing method therefor
CN110570665A (en) * 2018-06-06 2019-12-13 德尔福技术有限公司 Vehicle intention communication system
CN108961377A (en) * 2018-06-28 2018-12-07 西安电子科技大学 Design method for virtual safety surface of airborne enhanced synthetic vision system
CN108961377B (en) * 2018-06-28 2020-05-05 西安电子科技大学 Design method for virtual safety surface of airborne enhanced synthetic vision system
US11097735B1 (en) 2020-03-19 2021-08-24 Toyota Motor North America, Inc. Transport lane usage
US11488424B2 (en) 2020-03-19 2022-11-01 Toyota Motor North America, Inc. Motion-based transport assessment
US11720114B2 (en) 2020-03-19 2023-08-08 Toyota Motor North America, Inc. Safety of transport maneuvering
US11958487B2 (en) 2020-03-19 2024-04-16 Toyota Motor North America, Inc. Transport lane usage
US11577749B2 (en) 2020-04-23 2023-02-14 Toyota Motor North America, Inc. Driving mode assessment

Also Published As

Publication number Publication date
CN101876751B (en) 2012-10-03
US20100253539A1 (en) 2010-10-07
DE102010013402A1 (en) 2010-11-18
US8269652B2 (en) 2012-09-18

Similar Documents

Publication Publication Date Title
CN101876751B (en) Vehicle-to-vehicle communicator on full-windshield head-up display
CN101876750B (en) Dynamic vehicle system information on full-windscreen head-up display
CN101872068B (en) Daytime pedestrian detection on full-windscreen head-up display
CN101882405B (en) Scan loop method and system of vector projection display
CN101881885B (en) Peripheral salient feature enhancement on full-windshield head-up display
CN101882407B (en) Virtual controls and displays by laser projection
CN101876752B (en) Distortion and perspective correction of vector projection display
US8704653B2 (en) Enhanced road vision on full windshield head-up display
US8547298B2 (en) Continuation of exterior view on interior pillars and surfaces
US8384532B2 (en) Lane of travel on windshield head-up display
US8830141B2 (en) Full-windshield head-up display enhancement: anti-reflective glass hard coat
US8482486B2 (en) Rear view mirror on full-windshield head-up display
US8384531B2 (en) Recommended following distance on full-windshield head-up display
US8164543B2 (en) Night vision on full windshield head-up display
CN101915991A (en) Rear parking assist on full rear-window head-up display
CN101872069A (en) Enhanced vision system full-windshield HUD
CN101866050A (en) Luminance uniformity compensation of vector projection display
CN101866051A (en) Infotainment display on full-windscreen head-up display
CN101866097A (en) UV laser beamlet on full-windscreen head-up display
CN101913357A (en) Point of interest location marking on full-windscreen head-up display
CN101872067A (en) Full-windshield HUD enhancement: pixelated field of view limited architecture
CN101860702A (en) Driver drowsy alert on full-windscreen head-up display
CN101872070A (en) Traffic infrastructure indicator on head-up display

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant