WO2021139812A1 - Head-up display system and control method thereof, and vehicle - Google Patents

Head-up display system and control method thereof, and vehicle

Info

Publication number
WO2021139812A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
information
light
target
display
Prior art date
Application number
PCT/CN2021/071090
Other languages
English (en)
French (fr)
Inventor
方涛
徐俊峰
吴慧军
Original Assignee
未来(北京)黑科技有限公司
Priority date
Filing date
Publication date
Application filed by 未来(北京)黑科技有限公司
Priority to EP21739002.0A (EP4089467A4)
Priority to US17/792,014 (US20230046484A1)
Publication of WO2021139812A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 Arrangement or adaptations of instruments
    • B60K35/23
    • B60K35/50
    • B60K2360/178
    • B60K2360/179
    • B60K2360/191
    • B60K2360/23
    • B60K35/28
    • B60K35/29
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0112 Head-up displays characterised by optical features comprising device for generating colour display
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/013 Head-up displays characterised by optical features comprising a combiner of particular shape, e.g. curvature
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0141 Head-up displays characterised by optical features characterised by the informative content of the display

Definitions

  • the embodiment of the present disclosure relates to a head-up display system, a control method thereof, and a vehicle.
  • the head-up display (HUD) application in the vehicle can project important driving information such as speed and navigation onto the windshield glass in front of the driver, so that the driver can see important information such as the speed and navigation without looking down or turning his head.
  • this can improve driving safety and at the same time bring a better driving experience; therefore, HUDs that use car windshields for imaging are receiving more and more attention.
  • an augmented reality HUD (AR-HUD) can reasonably and vividly superimpose information in the driver's sight area, and can also combine the display with the actual traffic conditions to further enhance the driver's perception of the actual driving environment.
  • the rise of the AR-HUD has put forward higher technical requirements for the HUD industry.
  • At least one embodiment of the present disclosure provides a head-up display system.
  • the head-up display system includes a display device group, a concave reflective element, a directional display device, and a processor.
  • the display device group includes at least one single-layer display device.
  • the directional display device includes a light source device, a direction control element, a dispersion element, and a light modulation layer; the processor is respectively connected to the directional display device and the at least one single-layer imaging device in the display device group; the direction control element is configured to converge the light emitted by the light source device; the dispersion element is configured to diffuse the converged light emitted from the direction control element so that the converged light forms a light beam for forming a light spot; the light modulation layer is configured to modulate the light beam for forming a light spot so that the light beam forms directional imaging light; the at least one single-layer imaging device is configured to emit imaging light incident on the concave reflective element; the concave reflective element is configured to reflect the imaging light; the processor is configured to: determine the target information to be displayed and the target display device, and control the target display device to display the target information to be displayed, wherein the target display device is one or more display devices selected from the directional display device and the at least one single-layer display device in the display device group.
  • At least one embodiment of the present disclosure further provides a head-up display system, which includes a display device group and a concave reflective element; the display device group includes a plurality of single-layer display devices, each of which is configured to emit imaging light incident on the concave reflective element, and the object distances corresponding to at least some of the single-layer imaging devices are the same, or the object distances corresponding to at least some of the single-layer imaging devices are different; the object distance includes the propagation path length of the imaging light from the corresponding single-layer imaging device to the concave reflective element; the concave reflective element is configured to reflect the imaging light.
  • At least one embodiment of the present disclosure also provides a method for controlling a head-up display system.
  • the head-up display system includes an imaging device group and a directional imaging device.
  • the imaging device group includes at least one single-layer imaging device, and the imaging range of the directional imaging device is larger than the imaging range of the at least one single-layer imaging device.
  • the processor is configured to: determine the target information to be displayed and the target imaging device, and control the target imaging device to display the target information; wherein the target display device is a display device selected from the directional display device and the at least one single-layer display device in the display device group.
  • the control method further includes: determining a target position, where the target position is the position where the target information is displayed on the reflective imaging part;
  • determining the imaging area containing the target position, using the imaging device corresponding to the imaging area as the target imaging device, and controlling the target imaging device to display the target information at the target position;
  • the imaging area is an area on the surface of the reflective imaging part on which imaging light can be incident.
  • the imaging area is a part of an enlarged imaging area or a directional imaging area.
  • the control method further includes: when the projection position of the external object projected onto the reflective imaging part is within the magnified imaging area, using the external object as the target object, and determining the target distance between the reference object and the target object;
  • the magnified imaging area is the area where the imaging light emitted by the imaging device group can be incident on the surface of the reflective imaging part;
  • the projection position located in the magnified imaging area, or the edge of the projection position, is taken as the target position;
  • the image distance of each single-layer imaging device in the imaging device group is determined respectively, the image distance matching the size of the target distance is taken as the target image distance, and
  • the single-layer imaging device corresponding to the target image distance is used as the target imaging device; wherein the image distance is the distance between the virtual image of the single-layer imaging device formed by the concave reflective element and the concave reflective element; or, when the projection position of the external object projected (or mapped) onto the reflective imaging part is within the directional imaging area, the external object is taken as the target object;
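  • To illustrate the image-distance matching just described, the following minimal Python sketch (the device names, data structure, and numeric image distances are hypothetical, not part of the disclosure) picks the single-layer imaging device whose image distance is closest to the measured target distance:

```python
from dataclasses import dataclass

@dataclass
class SingleLayerDevice:
    name: str
    image_distance_m: float  # distance of the device's virtual image from the concave reflective element

def select_by_image_distance(devices, target_distance_m):
    """Return the device whose image distance best matches the target distance."""
    return min(devices, key=lambda d: abs(d.image_distance_m - target_distance_m))

# Hypothetical example with three single-layer imaging devices.
devices = [
    SingleLayerDevice("first_single_layer_device", 2.5),
    SingleLayerDevice("second_single_layer_device", 7.0),
    SingleLayerDevice("third_single_layer_device", 15.0),
]
print(select_by_image_distance(devices, target_distance_m=8.3).name)
# -> second_single_layer_device
```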
  • the head-up display system further includes an information collection device configured to collect local information and surrounding information
  • the control method further includes: obtaining the local information and the surrounding information, and generating the target information according to the local information and the surrounding information.
  • the local information includes local speed information
  • the surrounding information includes the distance between a reference object and an external object
  • the control method further includes: determining a safe distance according to the local speed information, and judging whether the distance between the reference object and the external object is greater than the safe distance; when the distance between the reference object and the external object is greater than the safe distance, it is determined that the reference object is currently in a normal state, and the corresponding first prompt information is used as the target information;
  • the first prompt information includes one or more of an empty set, a first prompt image, and a first prompt video; when the distance between the reference object and the external object is not greater than the safe distance, it is determined that the reference object is currently in an early warning state, and the corresponding first early warning information is used as the target information;
  • the first early warning information includes one or more of a first early warning image and a first early warning video.
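  • A minimal sketch of the state decision above, assuming a simple speed-dependent safe-distance rule (the two-second rule used here is an illustrative assumption, not the formula of the disclosure):

```python
def safe_distance(speed_mps: float, reaction_time_s: float = 2.0) -> float:
    # Illustrative rule: distance travelled during the reaction time.
    return speed_mps * reaction_time_s

def reference_object_state(speed_mps: float, distance_to_external_object_m: float) -> str:
    """'normal' -> first prompt information; 'warning' -> first early warning information."""
    if distance_to_external_object_m > safe_distance(speed_mps):
        return "normal"
    return "warning"

print(reference_object_state(20.0, 50.0))  # -> normal
print(reference_object_state(20.0, 30.0))  # -> warning
```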
  • the surrounding information includes the distance between the reference object and the external object and the position of the external object; generating the target information according to the local information and the surrounding information includes: using the distance between the vehicle and the external object and the position of the external object as the target information; or, determining the projection position of the external object projected onto the reflective imaging part, using the projection position or the edge of the projection position as the target position, and controlling the target display device to display preset target information at the target position.
  • the control method further includes: when the external object is a key object, determining that the reference object is currently in an early warning state, and using the corresponding second early warning information as the target information, where the second early warning information includes one or more of a second early warning image and a second early warning video; wherein, when the distance between the reference object and the external object is less than a preset distance value and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as the key object; or, when the external object is moving toward the current driving route and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as the key object; or, when the external object is located in the current driving route and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as the key object; or, when the vehicle is currently located in an object-dense area and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as the key object.
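  • A hedged Python sketch of the key-object test above (the object classes, field names, and thresholds are assumptions for illustration):

```python
VULNERABLE_CLASSES = {"pedestrian", "animal", "non_motor_vehicle"}

def is_key_object(obj_class: str,
                  distance_m: float,
                  preset_distance_m: float,
                  moving_toward_route: bool,
                  in_current_route: bool,
                  in_dense_area: bool) -> bool:
    """An external object is a key object when it is a pedestrian, animal, or
    non-motor vehicle and at least one of the listed conditions holds."""
    if obj_class not in VULNERABLE_CLASSES:
        return False
    return (distance_m < preset_distance_m
            or moving_toward_route
            or in_current_route
            or in_dense_area)
```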
  • the safety distance includes a front safety distance
  • the control method further includes: when the external object is located in front of the reference object, the current distance is not greater than the front safety distance, and
  • the difference between the front safety distance and the current distance is greater than a preset distance difference and/or the time in the early warning state exceeds a preset time threshold, a braking signal or a deceleration signal is generated, and the braking signal or deceleration signal is output.
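  • A sketch of the braking/deceleration trigger above (the threshold values and the returned signal names are hypothetical):

```python
from typing import Optional

def brake_or_decelerate(current_distance_m: float,
                        front_safe_distance_m: float,
                        preset_distance_diff_m: float,
                        time_in_warning_s: float,
                        preset_time_threshold_s: float) -> Optional[str]:
    """Return a control signal when the gap to the object ahead is critically short."""
    if current_distance_m > front_safe_distance_m:
        return None
    shortfall = front_safe_distance_m - current_distance_m
    if shortfall > preset_distance_diff_m or time_in_warning_s > preset_time_threshold_s:
        return "brake"  # a "decelerate" signal could be returned instead, depending on policy
    return None
```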
  • the surrounding information includes lane position information
  • the local information includes vehicle position information
  • generating the target information according to the local information and the current surrounding information includes: determining, according to the vehicle position information and the lane position information, the offset parameter of the vehicle deviating from the current driving lane, and determining whether the offset parameter is greater than the corresponding offset threshold; the offset parameter includes the offset angle and/or the offset distance; when the offset parameter is greater than the corresponding offset threshold, it is determined that the reference object is currently in an early warning state, and the corresponding third early warning information is used as the target information.
  • the third early warning information includes one or more of a third early warning image, a third early warning video, and a priority driving lane.
  • the third prompt information includes one or more of an empty set, a third prompt image, a third prompt video, and a priority driving lane.
  • the local information further includes vehicle status information
  • the vehicle status information includes one or more of the vehicle speed, the vehicle acceleration, and the turn signal status; generating the target information according to the local information and the surrounding information further includes: when the offset parameter is greater than the corresponding offset threshold and an early warning condition is met, determining that the reference object is currently in an early warning state; wherein the early warning condition includes one or more of the following: the vehicle speed is greater than a first preset speed value, the vehicle acceleration is not greater than zero, the turn signal on the side opposite to the direction corresponding to the offset angle of the vehicle is not on, the current lane is in an unchangeable state, and the lane departure time is greater than a preset first departure duration threshold.
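  • A sketch of the lane-departure early-warning decision above (all thresholds, field names, and default values are illustrative assumptions; note that the disclosure lets any one or more of the sub-conditions form the early-warning condition):

```python
from dataclasses import dataclass

@dataclass
class LaneState:
    offset_angle_deg: float
    offset_distance_m: float
    speed_mps: float
    acceleration_mps2: float
    opposite_turn_signal_on: bool  # turn signal on the side opposite the offset direction
    lane_changeable: bool
    departure_time_s: float

def lane_departure_warning(s: LaneState,
                           angle_threshold_deg: float = 3.0,
                           distance_threshold_m: float = 0.3,
                           min_speed_mps: float = 8.0,
                           departure_time_threshold_s: float = 1.0) -> bool:
    offset_exceeded = (s.offset_angle_deg > angle_threshold_deg
                       or s.offset_distance_m > distance_threshold_m)
    # Any one or more of these sub-conditions may be used as the early-warning condition.
    warning_condition = any([
        s.speed_mps > min_speed_mps,
        s.acceleration_mps2 <= 0,
        not s.opposite_turn_signal_on,
        not s.lane_changeable,
        s.departure_time_s > departure_time_threshold_s,
    ])
    return offset_exceeded and warning_condition
```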
  • the control method further includes: determining the priority driving lane of the vehicle according to the vehicle position information and the lane position information, and using the priority driving lane as the target object; determining the projection position of the target object projected onto the reflective imaging part, using the projection position or the edge of the projection position as the target position, and controlling the target display device to display preset target information at the target position; or, determining the priority driving lane of the vehicle according to the vehicle position information and the lane position information, using the boundary lines on both sides of the priority driving lane as target objects, and using a shape matching the boundary lines as the target information; determining the projection position of the target object projected onto the reflective imaging part, using the projection position as the target position, and controlling the target display device to display the target information in a preset color at the target position.
  • the control method further includes: when the offset parameter is greater than the corresponding offset threshold, and the difference between the offset parameter and the offset threshold is greater than a preset offset difference and/or the time in the offset state exceeds a preset safe offset time, generating a braking signal or a deceleration signal, and outputting the braking signal or deceleration signal.
  • the surrounding information includes forward information
  • the forward information includes the speed of the preceding vehicle and/or the current distance from the preceding vehicle
  • the control method includes: using the projection position of the target area of the preceding vehicle projected onto the reflective imaging part as the target position, using the forward information as the target information, and controlling the target display device to display the target information at the target position.
  • the control method further includes: when the reference object is currently in an early warning state, controlling the target display device to display the target information in a normal manner or a first highlighting manner, where the first highlighting manner includes one or more of dynamic display (such as scrolling display, bouncing display, and flashing display), highlighted display, and display in a first color; and when the reference object is currently in a normal state, controlling the target display device to display the target information in a normal manner or a second highlighting manner, where the second highlighting manner includes display in a second color.
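  • A small sketch of the mode selection above, mapping the current state to a display style (the style strings are placeholders, not values defined by the disclosure):

```python
def pick_display_mode(in_warning_state: bool, highlight: bool = True) -> str:
    if in_warning_state:
        # First highlighting manner: dynamic (scrolling / bouncing / flashing),
        # highlighted, or rendered in a first (warning) colour.
        return "flashing_first_color" if highlight else "normal"
    # Second highlighting manner: rendered in a second (neutral) colour.
    return "second_color" if highlight else "normal"
```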
  • the head-up display system further includes a sounding device and/or a vibration device
  • the control method further includes: when the reference object is currently in an early warning state, sending a reminder voice to the sounding device and controlling the sounding device to play the reminder voice; and/or, sending a vibration signal to the vibration device to control the vibration device to vibrate.
  • the surrounding information includes road abnormality information
  • the road abnormality information includes one of obstacle location, uneven road location, dangerous road location, maintenance road location, accident road location, and temporary inspection road location.
  • the control method further includes: using the road abnormality information as the target information; or, determining an abnormal position according to the road abnormality information, determining the projection position of the abnormal position projected onto the reflective imaging part, using the projection position or the edge of the projection position as the target position, and controlling the target display device to display target information corresponding to the road abnormality information at the target position.
  • the surrounding information includes current visibility
  • the control method further includes: when the current visibility is less than a preset visibility threshold, acquiring the position information of the external object collected by the information collection device; using the position information of the external object as the target information; or, determining the projection position of the external object projected onto the reflective imaging part, using the projection position or the edge of the projection position as the target position, and controlling the target display device to display preset target information at the target position.
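  • A sketch of the low-visibility branch above (the shape of the collected-object records and the threshold are assumptions):

```python
def low_visibility_target_info(current_visibility_m: float,
                               visibility_threshold_m: float,
                               collected_objects: list) -> list:
    """Below the visibility threshold, turn collected external-object positions into
    target information entries (position text or a marker at the projection position)."""
    if current_visibility_m >= visibility_threshold_m:
        return []
    return [{"object_id": o["id"], "target_info": o["position"]} for o in collected_objects]
```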
  • the control method further includes: generating push information according to the target information, and sending the push information to a server or to other devices within a preset distance, such as vehicles, mobile phones, etc.
  • At least one embodiment of the present disclosure also provides a vehicle including any of the above-mentioned head-up display systems.
  • FIG. 1A shows a schematic diagram of a head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 1B shows a first structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 2A shows a schematic diagram of the imaging principle of the directional display device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 2B shows a schematic diagram of the composition of the imaging device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 3 shows a first structural schematic diagram of imaging of a directional display device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of imaging outside the reflective imaging part of the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 5 shows a first schematic diagram of an imaging area on a reflective imaging portion provided by at least one embodiment of the present disclosure
  • FIG. 6 shows a second schematic diagram of an imaging area on a reflective imaging portion provided by at least one embodiment of the present disclosure
  • FIG. 7 shows a second schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 8 shows a third structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 9 shows a fourth structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure.
  • FIG. 10 shows a fifth structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 11 shows a sixth structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 12 shows a seventh structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 13 shows an eighth schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 14 shows a ninth structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 15 shows a tenth structural schematic diagram of the image forming device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 16 shows a second schematic structural diagram of imaging of the directional display device in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 17 shows a first structural schematic diagram of a solid lamp cup in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 18 shows a second structural schematic diagram of a solid lamp cup in the head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 19A shows a structural block diagram of a head-up display system provided by at least one embodiment of the present disclosure
  • FIG. 19B shows another structural block diagram of the head-up display system provided by at least one embodiment of the present disclosure.
  • FIG. 20 shows a flow chart of assisting driving by a processor based on a safe distance provided by at least one embodiment of the present disclosure
  • FIG. 21 shows a schematic diagram of a display image of the reflective imaging part when the vehicle distance is relatively close in at least one embodiment of the present disclosure
  • FIG. 22 shows a schematic diagram showing a bird's-eye view of the reflective imaging part in at least one embodiment of the present disclosure
  • FIG. 23 shows a schematic diagram of a display image of the reflective imaging part when a pedestrian approaches in at least one embodiment of the present disclosure
  • FIG. 24 shows a flowchart of a processor provided by at least one embodiment of the present disclosure to determine whether there is an offset in a lane
  • FIG. 25 shows a schematic diagram of the display image of the reflective imaging part when the lane is shifted in at least one embodiment of the present disclosure.
  • FIG. 26 shows another schematic diagram of the display image of the reflective imaging part when the lane is shifted in at least one embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a head-up display system capable of layered imaging.
  • the head-up display system can form multiple layers of images at different distances from the vehicle, thereby realizing multi-level imaging; the head-up display system can also achieve large-scale imaging, such as full-window imaging, so that information can be displayed over a large area of the vehicle and more content can be shown.
  • the head-up display system includes: a display device group, a concave reflective element 20, a directional display device 30, and a processor 100; the display device group includes at least one single-layer imaging device.
  • the imaging device group includes a first single-layer imaging device 11 and a second single-layer imaging device 12, and may also include more single-layer imaging devices, such as a third single-layer imaging device 13, and so on.
  • Each single-layer imaging device is configured to emit imaging light incident on the concave reflective element 20 for forming a single-layer image at a predetermined position.
  • the imaging device group may include a plurality of single-layer imaging devices (that is, at least two single-layer imaging devices), and the object distances corresponding to at least some of the single-layer imaging devices are the same, or the object distances corresponding to at least some of the single-layer imaging devices are different; the object distance is the propagation path length of the imaging light from the corresponding single-layer imaging device to the concave reflective element, or the distance from the single-layer imaging device to the concave reflective element.
  • the processor is respectively connected to the directional display device 30 and the single-layer imaging devices in the display device group, for example, in a wired or wireless manner, so as to control the directional display device 30 and the single-layer imaging devices in the display device group, as shown in FIG. 19A.
  • the above-mentioned "respectively" includes both the case where each device is connected individually and the case where the devices are connected in series.
  • the processor may be used in an autonomously driven vehicle and/or a manually driven vehicle.
  • the processor may include a plurality of sub-processors, and the plurality of sub-processors are respectively connected to the directional display device 30 and the single-layer display devices in the display device group.
  • the head-up display system may also include other processors in addition to the above-mentioned processor, or the processor may also include, in addition to the sub-processors connected to the directional display device 30 and the single-layer display devices in the display device group, other sub-processors.
  • the directional display device 30 can be used to achieve large-scale imaging; a single-layer display device together with the directional display device 30 can be used to achieve multi-layer (for example, two-layer) imaging; when the display device group includes multiple single-layer display devices, at least two-layer imaging can be realized by using the multiple single-layer imaging devices.
  • the concave reflective element 20 and the directional display device 30 can be arranged on the same side of the reflective imaging part 50.
  • the reflective imaging part 50 can be a windshield of a vehicle (for example, a glass windshield), or a reflective film inside the windshield, etc.; the reflective film can reflect the imaging light emitted by the head-up display system without preventing the driver from observing things or scenes outside the vehicle through the reflective film; correspondingly,
  • the concave reflective element 20 and the directional display device 30 may be located in the vehicle, for example, located inside the reflective imaging part 50.
  • the concave reflective element 20 and the directional display device 30 may be arranged below the reflective imaging part 50, for example, on the instrument panel (IP) of an automobile.
  • the single-layer imaging device is configured to emit imaging light incident on the concave reflective element 20, and the object distances corresponding to different single-layer imaging devices are different from each other.
  • the object distance may be the propagation path length of the imaging light from the corresponding single-layer imaging device to the concave reflective element 20, or the distance from the single-layer imaging device to the concave reflective element 20; the concave reflective element 20 is configured to reflect the imaging light to the reflective imaging part 50, so that the reflective imaging part 50 can reflect the imaging light to a predetermined range, such as the eye box range, and the driver can see the virtual image of the single-layer imaging device formed by the reflective imaging part 50 within the eye box range, as shown in FIG. 2B.
  • the "eye box range" refers to the range within which the user's eyes can see the image formed by the head-up display system; the "eye box range" has a certain size, and the user's eyes can move within this range and still view the image.
  • the first single-layer imaging device 11 can emit the first imaging light incident on the concave reflective element 20, and the second single-layer imaging device 12 can emit the second imaging light incident on the concave reflective element 20, And the first object distance corresponding to the first single-layer imaging device 11 is different from the second object distance corresponding to the second single-layer imaging device 12.
  • the first object distance is the propagation path length of the first imaging light from the first single-layer imaging device 11 to the concave reflective element 20 or the distance from the first single-layer imaging device 11 to the concave reflective element 20, and the second object distance It is the propagation path length of the second imaging light from the second single-layer imaging device 12 to the concave reflective element 20 or the distance from the second single-layer imaging device 12 to the concave reflective element 20.
  • the first single-layer imaging device 11 and the second single-layer imaging device 12 can realize two-layer images.
  • the single-layer imaging device in the imaging device group may be a display, such as an organic light-emitting diode display or a liquid crystal display.
  • the single-layer imaging device in the imaging device group may also be a projector.
  • the embodiment of the present disclosure does not specifically limit the form of the single-layer imaging device in the imaging device group.
  • the “single layer” in a single-layer imaging device means that the single-layer imaging device is used in a head-up display device and can form an image with a single-layer image distance.
  • the directional imaging device 30 is configured to emit directional imaging light incident to the reflective imaging portion 50, so that the reflective imaging portion 50 can reflect the directional imaging light to a predetermined range, for example, within the eye box range 62 .
  • the directional display device 30 includes a light source device, a direction control element 32, a dispersion element 33, and a light modulation layer 34.
  • the light source device may be an electroluminescent element such as LED (Light Emitting Diode), and the number may be one or more.
  • the light modulation layer 34 includes a liquid crystal layer (for example, a liquid crystal display panel) or an electrowetting display layer, and the light modulation layer can convert light passing through it into light including image information (for example, imaging light).
  • the direction control element 32 is configured to converge the light emitted by the light source device, for example, to a preset area 61, which is a preset range, such as a position or range within the eye box range 62; the dispersion element 33 and the light modulation layer 34 can be arranged on the same side of the direction control element 32, and the dispersion element 33 and the light modulation layer 34 are located on the light emitting side of the direction control element 32, so that the light emitted by the direction control element 32 can be processed accordingly.
  • the dispersion element 33 is configured to diffuse the condensed light emitted from the direction control element 32 so that the condensed light forms a light beam for forming a light spot; the light modulation layer 34 is configured to modulate the light beam for forming a light spot to The light beam is formed into a directional imaging light.
  • the light modulation layer 34 can block or transmit light, and emit directional imaging light toward the reflective imaging part 50 during transmission, so that the reflective imaging part 50 can reflect the directional imaging light to a preset range, such as the eye box range 62 Inside.
  • the dispersion element may also be referred to as a diffusing element, which can diffuse the light beam passing through it without changing the optical axis of the light beam (for example, the main propagation direction of the light beam).
  • the light modulation layer 34 in the embodiment of the present disclosure can be made of liquid crystal.
  • the liquid crystal layer has a light modulation function, that is, imaging can be achieved.
  • the dispersion element 33 and the light modulation layer 34 may be located on the light exit side of the direction control element 32, and the dispersion element 33 may be arranged on the light entrance side of the light modulation layer 34 or on the light exit side of the light modulation layer 34.
  • when the dispersive element 33 is arranged on the light-exit side of the light modulation layer 34, the dispersive element 33 needs to be closely attached to the light modulation layer 34 (for example, directly attached or adhered closely through optical glue, etc.), so that the image formed by the light modulation layer 34 is displayed more clearly.
  • the dispersive element 33 is provided on the lower side of the light modulation layer 34 as an example for illustration.
  • the projection picture formed by the directional display device 30 is larger than the imaging picture formed by each single-layer display device.
  • the directional display device 30 can realize full-window imaging, that is, the projection picture formed by the directional display device 30 can cover the entire car window.
  • the field of view (Field of View, FOV): the range of the horizontal field of view of the image viewed from the driver's eye position is greater than or equal to 15 degrees, for example not less than 20 degrees, not less than 25 degrees, or not less than 30 degrees, for example between 15 degrees and 100 degrees; the range of the vertical field of view angle is greater than or equal to 5 degrees, for example not less than 15 degrees, not less than 20 degrees, or not less than 25 degrees, for example between 5 degrees and 30 degrees.
  • the field of view of the head-up display system can be increased, and ultra-large field of view imaging under low power consumption can be realized.
  • the above "horizontal" and "vertical" are two mutually perpendicular directions; taking the car body coordinate system as an example, the above "horizontal" can refer to the width direction of the car in the car body coordinate system, and the above "vertical" can refer to the height direction of the car in the car body coordinate system.
  • the processor is configured to determine the target information to be displayed and the target display device, and control the target display device to display the target information; for example, the target display device is one selected from the directional display device 30 and the plurality of single-layer display devices in the display device group (for example, the first single-layer display device 11, the second single-layer display device 12, the third single-layer display device 13, etc.).
  • the processor may also output the target information to be displayed to the target display device, or output the target information to be displayed to other devices connected to the target display device, so that the other devices can transmit the target information to be displayed to the target display device.
  • the first single-layer imaging device 11 and the second single-layer imaging device 12 in the imaging device group are respectively used for imaging, and the first object distance of the first single-layer imaging device 11 is different from the second object distance of the second single-layer imaging device 12.
  • the closer the propagation path length of the imaging light (such as the first imaging light, the second imaging light, or the subsequent third imaging light) from the single-layer imaging device (for example, the first single-layer imaging device 11, the second single-layer imaging device 12, or the subsequent third single-layer imaging device 13) to the concave reflective element 20 is to the focal length of the concave reflective element 20, the farther the position at which the concave reflective element 20 forms its image, that is, the larger the image distance of the image formed by the concave reflective element 20; correspondingly, under the action of the reflective imaging part 50, a corresponding virtual image is formed on the other side of the reflective imaging part 50, and the larger the image distance of the virtual image formed by the concave reflective element 20, the farther the virtual image formed by the reflective imaging part 50 is from the reflective imaging part 50.
  • the imaging device group in FIG. 4 includes three single-layer imaging devices (that is, the first single-layer imaging device 11, the second single-layer imaging device 12, and the third single-layer imaging device 13) as an example, and the three single-layer imaging devices respectively form images at the first magnified imaging position 111, the second magnified imaging position 121, and the third magnified imaging position 131, so that the head-up display system can form images at positions at different distances from the reflective imaging part 50, that is, form a multi-level image.
  • the first object distance and the second object distance may both be smaller than the focal length of the concave reflective element 20, so that the concave reflective element 20 can form magnified virtual images of the first single-layer imaging device 11 and the second single-layer imaging device 12.
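  • As an aside (not text from the disclosure), the behaviour described above follows from the standard mirror equation for the concave reflective element 20:

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f}, \qquad v = \frac{uf}{u - f}, \qquad m = -\frac{v}{u}$$

where $u$ is the object distance (the propagation path length from the single-layer imaging device to the concave reflective element), $v$ is the image distance, and $f$ is the focal length. For $u < f$, $v$ is negative, i.e. the image is a magnified virtual image, and as $u$ approaches $f$ the magnitude $|v| = uf/(f-u)$ grows without bound, which is why an object distance closer to the focal length places the magnified virtual image farther away.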
  • the head-up display system may further include a reflective imaging part 50.
  • the imaging light emitted by the single-layer imaging devices in the imaging device group can be incident on the reflective imaging part 50 after being reflected by the concave reflective element 20, and be reflected by the reflective imaging part 50 to a preset range, such as the eye box range 62.
  • the eye box range 62 in the embodiments of the present disclosure refers to the range where the driver can view the image on the reflective imaging part 50, which roughly corresponds to the position of the driver's head; the size of the eye box range 62 may be determined based on actual conditions.
  • the direction control element 32 can realize the convergence of light.
  • the light source device includes a plurality of light sources 31, and light sources 31 are arranged at different positions.
  • the light source device includes 7 light sources 31 as an example; correspondingly, 7 direction control elements 32 may be provided.
  • the direction control element 32 converges the light emitted by the multiple light sources 31 to the preset area 61.
  • the preset area 61 is taken as an example for illustration.
  • the preset area 61 may also be a small area, that is, the light emitted by the light source device only needs to be condensed within this small area.
  • the direction of light emitted by the light source 31 can be adjusted by setting the orientation of the direction control elements 32 located at different positions, so as to achieve light convergence.
  • the light is diffused by the diffusing element 33 to form a light spot with a preset shape and a larger imaging range (for example, the eye box range 62), thereby making it convenient for the driver to view, within a large range, the image formed by the directional imaging device 30.
  • take the leftmost direction control element 32 in FIG. 3 as an example.
  • the light A emitted by the leftmost light source 31 can be directed along the optical path a toward the preset area 61;
  • the dispersing element 33 disperses the light A into multiple light rays (including light A1, light A2, etc.) and spreads them over a range, namely the eye box range 62, so that it is convenient for the user to view the image formed by the directional display device 30 within the eye box range 62.
  • the dispersion element 33 may be a diffractive optical element (DOE), such as a beam shaper (BeamShaper); the size and shape of the light spot are determined by the microstructure of the beam shaper, and the shape of the light spot includes but is not limited to a circle, Oval, square, rectangular, bat wing shape.
  • the dispersion angle of the diffused light spot in the side view direction may be 2 degrees to 10 degrees, for example 10 degrees or 5 degrees; in the front view direction (for example, along the direction in which the main driver seat and the co-pilot seat of the vehicle are arranged), the dispersion angle can be 5 to 50 degrees, for example 10 degrees, 20 degrees, or 30 degrees.
  • the number of direction control elements 32 may be multiple, and different direction control elements 32 are arranged at different positions and configured to adjust the direction of the light emitted by the light sources 31 at different positions, so that the exit directions of the light emitted by the light sources 31 at different positions all point to the same preset area 61; as shown in FIG. 3, the number of direction control elements 32 in FIG. 3 is seven.
  • one direction control element 32 can adjust the light emitted by one light source 31, and can also adjust the light emitted by multiple light sources 31, which is not limited in the embodiment of the present disclosure.
  • the dispersing element 33 can diffuse light into the eye box range 62, and does not completely limit the light emitted by the light source device to within the eye box range 62; that is, the light A may form a larger range of light spots after passing through the dispersing element 33, and the light emitted by other light sources 31 may form other light spots through the dispersing element 33, but the light emitted by all the light sources 31 can almost entirely reach the eye box range 62.
  • the light emitted by the light source device is incident on the light modulation layer 34 after being acted on by the direction control element 32, so that the light modulation layer 34 can emit directional imaging light toward the eye box range 62 during operation; the directional imaging light is condensed into the eye box, which can increase the brightness of the directional imaging light, and, under the action of the dispersing element 33, makes it convenient for the driver to view the image formed by the directional imaging device 30 within the eye box range 62, which improves the brightness of the light while also expanding the visual range.
  • since the directional imaging light can be condensed, the directional imaging device 30 does not need to have particularly high brightness to enable the driver to observe the virtual image formed by the reflective imaging part 50; and the directional imaging device 30 may have a larger area, so that a user (for example, a driver) can view a larger range of images formed by reflection at the reflective imaging part 50.
  • the directional display device 30 may be laid on the surface of the instrument panel of the vehicle.
  • the directional imaging light may be incident on an area of the surface of the reflective imaging part 50, and this area may be the directional imaging area 52, through which the driver can view the virtual image formed at the directional imaging position 301, that is, the image of the directional imaging device 30 formed by reflection at the reflective imaging part 50.
  • the area where the imaging light emitted by the imaging device group is incident on the surface of the reflective imaging part 50 is the enlarged imaging area 51, and through the enlarged imaging area the driver can view the virtual image at the corresponding enlarged imaging position (for example, the first enlarged imaging position 111, the second enlarged imaging position 121, the third enlarged imaging position 131, etc.), that is, the image formed by the reflective imaging part 50 and corresponding to a single-layer imaging device in the imaging device group.
  • different single-layer imaging devices in the imaging device group can correspond to different magnified imaging areas. In FIGS. 5 and 6, three magnified imaging areas are included as an example.
  • the first single-layer imaging device 11, the second single-layer imaging device 12, and the third single-layer imaging device 13 respectively correspond to different magnified imaging areas.
  • the first imaging light emitted by the first single-layer imaging device 11 may be incident on a magnified imaging area on the surface of the reflective imaging part 50, and the reflective imaging part 50 forms a virtual image corresponding to the first single-layer imaging device 11 at the first magnified imaging position 111;
  • the driver can view the virtual image at the first magnified imaging position 111 through the magnified imaging area.
  • the directional imaging device 30 can form a larger range of images, and the area of the directional imaging area 52 on the reflective imaging part 50 can be larger than the area of a single magnified imaging area 51.
  • the enlarged imaging area 51 may be located in the directional imaging area 52, as shown in FIG. 5; or the enlarged imaging area 51 and the directional imaging area 52 are two different areas, as shown in FIG. 6; or, the enlarged imaging area 51 and the directional imaging area 52 may also be two areas that partially overlap.
  • the light emitted by the imaging device group and the directional imaging device 30 may cover the entire reflective imaging part 50, and the light will be seen by the driver after being reflected by the reflective imaging part 50 and reaching the eye box range 62.
  • the "imaging light" in the embodiments of the present disclosure refers to light that is emitted by an imaging device and capable of forming an image within the eye box range 62; correspondingly, the "directional imaging light" refers to light that is emitted by the directional imaging device and can form an image within the eye box range 62, and the "imaging light" refers to light that is emitted by a single-layer imaging device and can form an image within the eye box range 62; that is, on the surface of the reflective imaging part 50, only the area on which imaging light capable of forming an image within the eye box range 62 is incident is regarded as an enlarged imaging area or a directional imaging area.
  • the head-up display system further includes a processor configured to determine the target information to be displayed and determine which display device needs to display the target information.
  • the processor may be configured to select a target display device from the directional display device 30 and a plurality of single-layer display devices in the display device group, and control the target display device to display target information, so that the reflective imaging unit 50 A virtual image is formed at the corresponding imaging position, so that the target information can be displayed in the imaging area for the driver to watch.
  • the vehicle speed can be used as the target information; and since the directional display device 30 allows the user to view the image in the directional imaging area, the directional display device 30 can be used as the target imaging device.
  • the "display of target information in the imaging area" in the embodiments of the present disclosure means that the driver can view the target information through the imaging area, so that from the driver's perspective it appears that the target information is displayed in the imaging area; however, the virtual image corresponding to the target information is actually located outside the reflective imaging part 50, such as at the imaging positions in FIG. 4 (including the first magnified imaging position 111, the second magnified imaging position 121, and the third magnified imaging position 131).
  • descriptions that are the same as or similar to "display target information in the imaging area" in the embodiments of the present disclosure are only used for convenience of description, and are not intended to mean that the imaging area of the reflective imaging part 50 itself can display target information.
  • the imaging area is a part of a directional imaging area or an enlarged imaging area.
  • the head-up display system utilizes multiple single-layer imaging devices with different object distances in the imaging device group, so that it can form images at multiple imaging positions at different distances from the reflective imaging part, and it selects the imaging device whose image distance is closest to the object to be fitted, so as to reduce parallax.
  • the direction control element 32 can converge the directional imaging light with different incident angles into a preset area and diffuse into the eye box, which can increase the brightness of the directional imaging light and can also have a larger visual range;
  • the directional imaging device 30 can be arranged over a wide range, thereby forming a larger directional imaging area on the surface of the reflective imaging part and realizing a wide range of imaging.
  • the processor selects a suitable display device as the target display device and controls the target display device to display target information on the surface of the reflective imaging part, so that the reflective imaging part can display a larger range and/or a multi-level image, which can improve the display effect of the head-up display system.
  • the processor may be further configured to determine a target position, the target position being a position of the reflective imaging part for displaying the target information to be displayed; determining the target display device according to the target position; and The target display device is controlled to display the target information to be displayed at the target position.
  • the processor may determine the target information to be displayed, and it may also determine the target position, that is, determine the position where the target information is displayed on the reflective imaging part 50; for example, the processor may also determine the imaging area containing the target position, use the imaging device corresponding to the imaging area as the target imaging device, and control the target imaging device to display target information at the target position; the imaging area is an area where the imaging light can be incident on the surface of the reflective imaging part 50.
  • the target imaging device can be further determined by the image distance.
  • each target information has a corresponding target position
  • the target position may be preset, or may also be a position determined based on the current actual scene.
  • for example, the target information is the vehicle speed; if it is preset that the vehicle speed is displayed at the lower left of the reflective imaging part 50, the corresponding position at the lower left of the reflective imaging part 50 can be directly used as the target position.
  • for example, a graphic displayed at the position corresponding to a pedestrian reminds the driver; the graphic can be the target information, and the position on the reflective imaging part 50 where the target information needs to be displayed is the target position; for example, the position where the pedestrian is projected onto the reflective imaging part 50 can be used as the target position.
  • the target location may be a location point or a location range, which may be determined based on actual conditions.
  • different imaging devices correspond to different imaging areas on the reflective imaging part 50; if the target position is located in a certain imaging area, the imaging device corresponding to that imaging area can be used as the target imaging device, and based on the target imaging device the corresponding target information can be displayed at the target position. For example, if the target position is located in the enlarged imaging area corresponding to the first single-layer imaging device 11, the first single-layer imaging device 11 may be used as the target imaging device. For example, if different imaging areas have an intersection and the target position is located in multiple imaging areas, one imaging area can be selected from them; for example, the imaging area can be selected randomly or based on a preset selection rule.
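  • A sketch of the area-based selection above: given the target position on the reflective imaging part and (hypothetical) rectangular imaging areas for the candidate devices, pick the device whose area contains the position, with a fallback rule when several areas overlap:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImagingArea:
    device: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def pick_target_device(areas: list, x: float, y: float) -> Optional[str]:
    candidates = [a.device for a in areas if a.contains(x, y)]
    if not candidates:
        return None
    # When the target position lies in several overlapping areas, apply a selection
    # rule (here: first match; the disclosure also allows random selection).
    return candidates[0]
```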
  • the head-up display system can display in a fit manner based on the principle of Augmented Reality (AR).
  • the head-up display system may also be based on mixed reality (Mixed Reality, MR), which is an implementation of augmented reality technology.
  • “fit” can mean that when the user observes the image formed by the head-up display system, the image can be displayed in cooperation with external objects.
  • for example, when the projection position of the external object projected (or mapped) onto the reflective imaging part 50 is located in the directional imaging area, the external object is taken as the target object; the directional imaging area is the area on the surface of the reflective imaging part 50 on which the directional imaging light emitted by the directional imaging device 30 can be incident; at the same time, the projection position or the edge of the projection position in the directional imaging area is taken as the target position.
  • the edge of the projection position includes the contour of the projection position and/or the periphery of the projection position.
  • the periphery may be a position close to the outer side of the contour.
  • external objects are things located outside the reflective imaging part 50, including stationary objects such as road surfaces, indicators, etc., and may also include movable objects such as motor vehicles, pedestrians, animals, and non-motor vehicles.
  • the external object can be projected and mapped onto the reflective imaging part 50.
  • the external object can be projected and mapped to a certain position on the reflective imaging part 50 along the direction toward the eye box range 62. This position is the projection position; that is, the external object, the projection position, and the eye box range are collinear, so that the driver at the eye box range can see the external object through the projection position.
  • the processor may also be configured to: when the projection position is located in the directional imaging area, use the directional display device 30 as the target display device, and use the external object as the target object for which AR display can be performed; for example, the projection position, or the edge of the projection position, located in the directional imaging area is taken as the target position, and the target display device (i.e., the directional display device 30) can be controlled to display target information at the target position.
  • in this way, the external object, the target information displayed on the reflective imaging part 50, and the eye box range can be made collinear, so the driver at the eye box range can view target information that fits the external object (for example, a frame around the external object), which can remind the driver more effectively.
  • the processor may also be configured to: when the projection position of the external object projected (or mapped) onto the reflective imaging part 50 is within the magnified imaging area, take the external object as the target object and determine the target distance between a reference object (such as the vehicle, the HUD device, or the acquisition device) and the target object; the magnified imaging area is the area on the surface of the reflective imaging part 50 on which the imaging light emitted by the imaging device group can be incident. The projection position, or the edge of the projection position, located in the magnified imaging area is used as the target position, and the target display device is selected from the display device group according to the target distance; for example, the image distance corresponding to the target distance is used as the target image distance, and the single-layer imaging device corresponding to the target image distance is used as the target imaging device. Here, the image distance is the distance between the virtual image of the single-layer imaging device formed by the concave reflective element 20 and the concave reflective element 20, or the distance between the virtual image and the reflective imaging part 50.
  • the projection position of the external object projected (or mapped) onto the reflective imaging part 50 can be determined based on the observation area (for example, the eye box area) used when the user uses the head-up display system and/or the vehicle.
  • that is, the part of the line connecting the external object and the eye box range that intersects the reflective imaging part 50 can be regarded as the projection position of the external object on the reflective imaging part 50.
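  • As a purely illustrative aid, this projection position can be computed as a line-plane intersection. The sketch below is an assumption-laden example: the windshield (reflective imaging part) is approximated as a plane given by a point and a normal, and the coordinates and function name are made up for illustration.

```python
import numpy as np

def projection_position(eye_box, obj, plane_point, plane_normal):
    """Intersect the line from the eye box toward the external object with the
    plane approximating the reflective imaging part; returns the 3D projection
    position, or None if the line is parallel to the plane."""
    eye_box, obj = np.asarray(eye_box, float), np.asarray(obj, float)
    plane_point = np.asarray(plane_point, float)
    plane_normal = np.asarray(plane_normal, float)
    direction = obj - eye_box
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                      # line parallel to the plane
    t = np.dot(plane_normal, plane_point - eye_box) / denom
    return eye_box + t * direction

# Example with made-up coordinates (meters): windshield plane at x = 1.0
print(projection_position(eye_box=[0, 0, 1.2], obj=[20, 1, 1.0],
                          plane_point=[1.0, 0, 0], plane_normal=[1, 0, 0]))
```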
  • the processor may also be configured to: when the projection position of the external object is located in the magnified imaging area, use the external object as the target object for AR display, and use the corresponding single-layer imaging device in the imaging device group as the target imaging device. Since the imaging device group contains multiple single-layer imaging devices, in the embodiments of the present disclosure the single-layer imaging device used as the target imaging device is determined based on the distance between the external object and the head-up display system (i.e., the target distance); for example, the target distance can be simplified as the distance between the external object and the reflective imaging part 50.
  • because the single-layer imaging devices in the imaging device group have different object distances, they also have different image distances; that is, the distances between their virtual images and the concave reflective element 20 differ, and these image distances correspond to different magnified imaging positions outside the reflective imaging part 50, such as the first magnified imaging position 111, the second magnified imaging position 121, and the third magnified imaging position 131; the larger the image distance, the farther the corresponding magnified imaging position.
  • the magnified imaging position closest to the target object can be determined, and the single-layer imaging device corresponding to that magnified imaging position can be used as the target imaging device; for example, if the closest magnified imaging position is the second magnified imaging position 121, the second single-layer imaging device 12 can be used as the target imaging device.
  • alternatively, a distance range can be allocated according to the image distance of each single-layer imaging device; according to which distance range the target distance falls into, it is determined which image distance the target distance matches, and accordingly which single-layer imaging device is used as the target imaging device.
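  • A minimal sketch of this matching, assuming the "nearest image distance" rule described above; the device identifiers and example image distances are illustrative assumptions rather than values from the disclosure.

```python
def select_by_image_distance(target_distance, image_distances):
    """Pick the single-layer imaging device whose image distance (i.e. its
    magnified imaging position) is closest to the target distance."""
    return min(image_distances,
               key=lambda dev: abs(image_distances[dev] - target_distance))

# Illustrative image distances (meters) for three single-layer imaging devices
image_distances = {"first_single_layer_11": 3.0,
                   "second_single_layer_12": 7.0,
                   "third_single_layer_13": 15.0}

print(select_by_image_distance(8.5, image_distances))   # -> second_single_layer_12
print(select_by_image_distance(20.0, image_distances))  # -> third_single_layer_13
```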
  • the processor may also be configured to: when the projection position of the external object projected (or mapped) onto the reflective imaging part 50 is within both the magnified imaging area and the directional imaging area, take the external object as the target object and determine the target distance between the reference object and the target object; the projection position or the edge of the projection position is taken as the target position; the image distance corresponding to the target distance, among the display devices (including the directional display device 30, the first single-layer imaging device 11, the second single-layer imaging device 12, the third single-layer imaging device 13, etc.), is taken as the target image distance, and the imaging device corresponding to the target image distance is used as the target imaging device.
  • the image distance of the directional display device 30 can be simplified as the distance between the directional imaging position 301 and the reflective imaging part 50.
  • the directional imaging device 30 forms a virtual image outside the reflective imaging part 50.
  • the distance between this virtual image and the reflective imaging part 50 is the image distance. Because the distance from the human eye to the reflective imaging part 50 is basically fixed, the image distance can also be regarded as equivalent to the distance from the virtual image to the eye box range 62.
  • the most suitable display device is determined as the target display device based on the target distance of the target object, so that the distance difference between the virtual image formed outside the reflective imaging part 50 by the target display device and the target object is the smallest; the virtual image and the target object thus fit better, parallax is effectively reduced, and the augmented reality display effect is improved.
  • the processor is further configured to: for the same external object, control the directional display device to display the first information of the external object, and control the display device group to display the second information of the external object.
  • the content included in the first information and the second information is at least partially different; or, for multiple external objects, the directional display device is used to display the external objects that meet a first selection condition, and the display device group is used to display the external objects that meet a second selection condition.
  • the head-up display system further includes a first transflective element 21; the first transflective element 21 can transmit light with a first characteristic and reflect light with a second characteristic.
  • the first single-layer imaging device 11 is arranged on one side of the first transflective element 21, and the second single-layer imaging device 12 and the concave reflective element 20 are arranged on the other side of the first transflective element 21, as described above. "One side" and “the other side” are opposite sides; the first imaging light has a first characteristic, the second imaging light has a second characteristic, and the first transflective element is configured to transmit the first imaging light and reflect the second imaging light.
  • the first transflective element 21 is used to adjust the position of a single-layer imaging device in the imaging device group, so that the imaging light emitted by that single-layer imaging device will not be blocked by other single-layer imaging devices; FIG. 7 takes changing the position of the second single-layer imaging device 12 as an example.
  • the first transflective element 21 can transmit light having the first characteristic, so that the first imaging light can be normally transmitted and incident to the concave reflective element 20, and the first single-layer imaging device 11 can normally image; and, the first transflective element 21 can also reflect light with the second characteristic, so that the second imaging light can be reflected by the first transflective element 21, and then enter the concave reflective element 20 to achieve imaging.
  • the first characteristic and the second characteristic may be two different characteristics.
  • the "characteristic" in the embodiments of the present disclosure refers to the properties of light, such as polarization characteristics and wavelength characteristics.
  • the first transflective element can transmit polarized light in a first polarization direction and reflect polarized light in a second polarization direction, the first polarization direction and the second polarization direction being perpendicular to each other; and the first single-layer imaging device 11 can emit the first imaging light in the first polarization direction while the second single-layer imaging device 12 emits the second imaging light in the second polarization direction, so that the first single-layer imaging device 11 and the second single-layer imaging device 12 can image without affecting each other.
  • the first transflective element in the embodiment of the present disclosure may specifically be a reflective polarizer (RPM) film or a dual brightness enhancement film (DBEF).
  • the first characteristic and the second characteristic may also be the same characteristic
  • the first transflective element is a semi-transmissive, semi-reflective medium.
  • the light transmittance of the first transflective element can be 5%-95%
  • the reflectivity of the first transflective element can be 5%-95%; for example, when the first transflective element is a semi-transmissive, semi-reflective medium, its transmittance and reflectance are both 50%.
  • in that case, when the first imaging light emitted by the first single-layer imaging device 11 reaches the first transflective element 21, half is transmitted and the other half is reflected, so that half of the first imaging light can be transmitted to the concave reflective element 20; correspondingly, when the second imaging light emitted by the second single-layer imaging device 12 reaches the first transflective element 21, half of the second imaging light can be reflected to the concave reflective element 20, so that imaging of the first single-layer imaging device 11 and the second single-layer imaging device 12 can also be realized.
  • for example, if the light transmittance of the first transflective element is about 30% and its reflectivity is about 70%, then when the first imaging light emitted by the first single-layer imaging device 11 reaches the first transflective element 21, about 30% is transmitted and about 70% is reflected, so that about 30% of the first imaging light can be transmitted to the concave reflective element 20; correspondingly, when the second imaging light emitted by the second single-layer imaging device 12 reaches the first transflective element 21, about 70% of the second imaging light can be reflected to the concave reflective element 20, and imaging of the first single-layer imaging device 11 and the second single-layer imaging device 12 can be realized.
  • the first imaging light having the first characteristic means that the first imaging light has only the first characteristic, or that part of the first imaging light has the first characteristic while it may also have other characteristics, even the second characteristic.
  • the first single-layer imaging device 11 can emit the first imaging light that is natural light, the first imaging light can be decomposed into polarized light with the first polarization characteristic and polarized light with the second polarization characteristic, The first imaging light may have both the first characteristic and the second characteristic.
  • in that case, the part of the first imaging light that has the first characteristic can still pass through the first transflective element 21 and be incident on the concave reflective element 20, without affecting the imaging of the first single-layer imaging device 11. Since light can be decomposed, a transflective element (such as the first transflective element 21 and the subsequent second transflective element 22) being able to transmit light of a certain characteristic means that the element can transmit only light of that characteristic, or can transmit the component of the light that has that characteristic; correspondingly, the ability of a transflective element to reflect light of a certain characteristic has a similar meaning.
  • the first transflective element 21 can transmit horizontally polarized light and reflect vertically polarized light.
  • for example, if the first imaging light is light whose polarization direction is at a 45-degree angle to the horizontal direction, the first imaging light can be decomposed into horizontally polarized light and vertically polarized light; the horizontally polarized component of the first imaging light can transmit through the first transflective element 21, which can also be regarded as "the first transflective element 21 can transmit light with the first characteristic".
  • the first characteristic and the second characteristic in the embodiments of the present disclosure may be the same kind of characteristic, for example both polarization characteristics, or may be different kinds of characteristics, for example the first characteristic being a polarization characteristic and the second characteristic being a wavelength characteristic, which can be determined based on the selected transflective element.
  • the head-up display system further includes a second transflective element 22, and the imaging device group further includes a third single-layer imaging device 13; the third single-layer imaging device 13 is configured to emit third imaging light with a third characteristic that is incident to the concave reflective element 20; the third object distance corresponding to the third single-layer imaging device 13 is different from the first object distance and the second object distance, and the third object distance is the propagation path length of the third imaging light from the third single-layer imaging device 13 to the concave reflective element 20, or the distance from the third single-layer imaging device 13 to the concave reflective element 20.
  • the second transflective element 22 can transmit light with the first characteristic and reflect light with the third characteristic; the first transflective element 21 can also transmit the light with the third characteristic.
  • the second transflective element 22 is arranged between the first single-layer imaging device 11 and the first transflective element 21, and the third single-layer imaging device 13 and the first transflective element 21 are arranged on the second transflective element. The same side of element 22; see Figure 8 for details.
  • alternatively, the second transflective element 22 can transmit light with the second characteristic and reflect light with the third characteristic, and the first transflective element 21 can also reflect the light with the third characteristic; for example,
  • the second transflective element 22 is arranged between the second single-layer imaging device 12 and the first transflective element 21, and the third single-layer imaging device 13 and the first transflective element 21 are arranged on the second transflective element The same side as 22; see Figure 9 for details.
  • the object distance (for example, the third object distance) of the third single-layer imaging device 13 and the object distances of the first single-layer imaging device 11 and the second single-layer imaging device 12 are also different.
  • the three single-layer imaging devices can image at different positions outside the reflective imaging part 50, for example, can respectively image at the three magnified imaging positions 111, 121, 131 shown in FIG. 4, thereby realizing multi-level imaging.
  • the third characteristic of the third imaging light may be other characteristics that are different from the first characteristic and the second characteristic.
  • the second transflective element 22 can transmit light in the first polarization direction and reflect light in the third polarization direction
  • the first transflective element 21 can transmit light in the fourth polarization direction and reflect light in the second polarization direction; for example, neither the first polarization direction nor the third polarization direction is perpendicular to the fourth polarization direction.
  • the first imaging light emitted by the first single-layer imaging device 11 has a first polarization direction.
  • the first imaging light can transmit through the second transflective element 22 and reach the first transflective element 21; because the first polarization direction and the fourth polarization direction are not perpendicular, the first imaging light contains a component in the fourth polarization direction, so part of the first imaging light can transmit through the first transflective element 21. The first transflective element 21, being able to transmit light in the fourth polarization direction, can therefore also be regarded as being able to transmit light in the first polarization direction (for example, able to transmit light of the first characteristic).
  • similarly, the third imaging light emitted by the third single-layer imaging device 13 has the third polarization direction; when the third imaging light reaches the first transflective element 21, the component in the fourth polarization direction can be transmitted, so the first transflective element 21 can also be regarded as transmitting light with the third characteristic.
  • the first characteristic, the second characteristic, and the third characteristic are three different bands.
  • the second transflective element 22 can transmit light in the first waveband and reflect light in the third waveband, and the first transflective element 21 can reflect light in the second waveband and transmit light in other wavebands (including the first waveband and the third waveband); based on these two transflective elements, the imaging light emitted by the three single-layer imaging devices can all be incident on the concave reflective element 20, and imaging can be realized respectively.
  • the imaging principle shown in Fig. 9 is basically similar to the imaging principle of Fig. 8. In Fig. 9, transflective elements of different properties are selected.
  • the first transflective element 21 can transmit light of the first characteristic and reflect the second characteristic and the third characteristic.
  • the second transflective element 22 can transmit the second characteristic light and reflect the third characteristic light.
  • the solution shown in FIG. 9 is not described in detail here.
  • the three single-layer imaging devices in the embodiments of the present disclosure have different object distances, which may be that the length of the propagation path of the imaging light to the concave reflecting element 20 is different, and the “propagation path length” is the light The path length from the start point to the end point. If the light is directly incident from the start point to the end point, the propagation path length can be the distance between the start point and the end point; if the light is incident to the end point after one or more reflections, the propagation path length It can be the sum of the lengths of light reaching each reflection point in turn. As shown in FIG.
  • the first imaging light emitted by the first single-layer imaging device 11 can be directly incident on the concave reflective element 20, so the first object distance can be the distance between the first single-layer imaging device 11 and the concave reflective element 20; the second imaging light emitted by the second single-layer imaging device 12 first reaches the first transflective element 21 and then enters the concave reflective element 20 after being reflected by it, so the second object distance of the second single-layer imaging device 12 may be the distance between the second single-layer imaging device 12 and the first transflective element 21 plus the distance between the first transflective element 21 and the concave reflective element 20.
  • the third object distance of the third single-layer imaging device 13 may be the sum of the distance between the third single-layer imaging device 13 and the second transflective element 22, the distance between the second transflective element 22 and the first transflective element 21, and the distance between the first transflective element 21 and the concave reflective element 20.
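  • A small illustrative computation of the three object distances as sums of path segments; the segment lengths below are assumptions chosen only to show the arithmetic.

```python
def object_distance(segment_lengths):
    """Object distance as the total propagation path length of the imaging light,
    summing the segments between successive reflection points."""
    return sum(segment_lengths)

# Illustrative segment lengths in meters
first_object_distance  = object_distance([0.30])               # direct incidence on element 20
second_object_distance = object_distance([0.12, 0.30])         # via the first transflective element 21
third_object_distance  = object_distance([0.10, 0.12, 0.30])   # via elements 22 and 21
print(first_object_distance, second_object_distance, third_object_distance)
```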
  • the distance between the single-layer imaging device and the concave reflecting element 20 in the imaging device group can be changed through the plane reflecting element 23 in the reflecting mirror group.
  • the head-up display system may further include a mirror group, which includes one or more planar reflective elements 23; the planar reflective element 23 is configured to reflect the imaging light emitted by the imaging device group to the concave reflective element 20.
  • the planar reflective element 23 is arranged on the propagation path of the imaging light to change the propagation path, so that the imaging light can be transmitted to the concave reflective element 20 in a manner of reflecting the imaging light.
  • the mirror group may include a flat reflective element 23 configured to reflect the imaging light emitted by at least one single-layer imaging device (for example, each single-layer imaging device) in the imaging device group to Concave reflective element 20.
  • a plurality of single-layer imaging devices may share a plane reflective element 23.
  • the first single-layer imaging device 11 and the second single-layer imaging device 12 share a plane reflective element 23.
  • the mirror group may include a plurality of planar reflective elements 23 (for example, at least two planar reflective elements 23), and the imaging device group includes a plurality of single-layer imaging devices, and a plurality of planar reflective elements. It is configured to reflect the imaging light emitted by a plurality of single-layer imaging devices to the concave reflective element.
  • a plurality of plane reflective elements 23 correspond to a plurality of single-layer imaging devices in the imaging device group one-to-one; each plane reflective element 23 is configured to reflect the imaging light emitted by the corresponding single-layer imaging device to the concave surface Reflective element 20.
  • two flat mirrors 23 may correspondingly reflect the imaging light emitted by three single-layer imaging devices, etc.
  • the embodiments of the present disclosure do not limit the correspondence between multiple flat mirrors and multiple single-layer imaging devices.
  • different single-layer imaging devices may use different planar reflective elements 23; as shown in FIGS. 11 and 12, the first single-layer imaging device 11 and the second single-layer imaging device 12 each use a corresponding planar reflective element 23. Alternatively, an overall single-layer imaging device can be provided, and the image distances corresponding to different areas of the overall single-layer imaging device can be changed by arranging planar reflective elements 23 at different positions, so that the overall single-layer imaging device can be divided into multiple single-layer imaging devices; as shown in FIG. 11, the overall single-layer imaging device is divided into a first single-layer imaging device 11 and a second single-layer imaging device 12, and the planar reflective elements 23 corresponding to the first single-layer imaging device 11 and the second single-layer imaging device 12 are located at different positions, so that the first object distance of the first single-layer imaging device 11 and the second object distance of the second single-layer imaging device 12 are different.
  • the optical path can also be changed based on the planar reflective element 23.
  • if the planar reflective element 23 is added to the embodiments shown in FIG. 7 to FIG. 9, the structures can correspondingly be as shown in FIG. 13 to FIG. 15.
  • the direction control element 32 in the embodiment of the present disclosure may be a light concentrating element disposed toward the preset area 61, as shown in FIG. 3.
  • the direction control element 32 includes at least one light concentrating element 321; the at least one light concentrating element 321 is disposed between the light source device and the dispersion element 33, and is configured to transmit light from different light sources included in the light source device.
  • the light condensing element 321 is configured to converge the light emitted by different light source devices to the same preset area 61.
  • the direction control element 32 includes a collimating element 322; the collimating element 322 is configured to adjust the exit direction of the light emitted by its corresponding light source within a preset angle range, and pass the adjusted light through at least one light collecting element Launch to the dispersive element 33.
  • the preset angle range may be 0 degrees to 35 degrees; for example, the collimated rays may be parallel or substantially parallel rays.
  • parallel or collimated light is easy to control; for example, parallel light can be focused by a convex lens, whereas a light concentrating element has a relatively poor control effect on divergent, disordered light. Therefore, in the embodiments of the present disclosure, collimating elements are added to adjust the light to be as nearly parallel as possible, which ensures that the light is easy to control.
  • the direction control element 32 when the direction control element 32 includes a collimating element 322, the light concentrating element 321 can be arranged between the collimating element 322 and the dispersing element 33; the light concentrating element 321 is configured to converge different light rays to the same preset Location 61.
  • the orientation of the direction control element 32 may not be specially set, and different light rays can also be condensed to a preset position 61 through the light collecting element 321.
  • the light collecting element 321 may be provided with multiple collimating elements 322 correspondingly.
  • the collimating element 322 is a collimating lens
  • the collimating lens includes a convex lens, a concave lens, a Fresnel lens, or one or more of the above lens combinations.
  • the lens combination may specifically be a combination of a convex lens and a concave lens.
  • alternatively, the collimating element 322 is a collimating film configured to adjust the exit direction of the light within a preset angle range; one collimating element 322 can collimate the light emitted by multiple, or even all, of the light sources 31.
  • the distance between the collimating element 322 and the position of the light source is the focal length of the collimating element 322, and the light source may be set at the focal point of the collimating element 322.
  • the direction control element 32 further includes a reflection element 35; the reflection element 35 is configured to reflect incident light emitted by the light source to the dispersion element 33.
  • the reflective element 35 includes a lamp cup.
  • the structure of the direction control element 32 may be that the light source device, the lamp cup, and the light concentrating element are arranged in sequence. It can be considered that the light emitted by the light source device first passes through the lamp cup and then is transmitted to the light concentrating element.
  • the collimating element may be arranged between the light gathering element and the light source device, may be arranged in the lamp cup, or may be arranged outside the lamp cup, and the collimating element may perform at least part of the light emitted by the light source device. Collimation, the collimated light is transmitted to the light collecting element.
  • the lamp cup is a shell surrounded by a reflective surface, such as a hollow shell.
  • the opening direction of the lamp cup faces the dispersion element 33; the bottom of the lamp cup away from the opening is configured to provide a light source corresponding to the lamp cup.
  • the inner wall of the lamp cup (for example, the inner wall of the groove of the reflecting element 35) is the reflective surface of the lamp cup.
  • the reflective surface can reflect at least part of the light with a large angle emitted by the light source, and the reflected light will be gathered, which can improve the utilization of the light emitted by the light source.
  • the shape of the reflective surface of the hollow shell includes at least one of a curved surface shape (such as a parabolic shape), a free-form surface shape, and a quadrangular pyramid shape.
  • the shapes of the opening and the end of the lamp cup include circular, elliptical, quadrilateral, etc., and the shapes of the opening and the end may be the same or different.
  • the direction control element 32 may also include a collimating element 322; in some examples, the collimating element 322 may be disposed inside the lamp cup, and the size of the collimating element 322 is less than or equal to that of the lamp cup. The size of the opening; the collimating element 322 is configured to collimate part of the light emitted by the light source in the lamp cup and then emit it to the dispersing element 33.
  • the lamp cup is a solid lamp cup, and the solid lamp cup includes a solid transparent component, for example, the lamp cup is a solid transparent component with a reflective surface 351, and the refractive index of the solid transparent component is greater than 1;
  • the light exit of the solid lamp cup faces the dispersing element 33; "faces" here includes facing directly (no other elements arranged in between) and facing indirectly (other elements arranged in between).
  • the end of the solid lamp cup away from the opening is configured to provide a light source corresponding to the solid lamp cup, so that at least part of the light emitted by the light source is totally reflected when it hits the surface of the solid lamp cup.
  • the specific structure of the solid lamp cup can be seen in Figure 17 and Figure 18.
  • the light outlet of the solid lamp cup refers to the opening direction of the reflective surface 351 of the solid lamp cup.
  • the surface of the solid lamp cup may be the interface between the solid transparent part and the outside world (for example, air). When the light is directed to the outside, at least part of the light that satisfies the total reflection angle may be totally reflected at this interface.
  • the surface shape of the solid lamp cup includes at least one of a curved shape (such as a parabolic shape) or a free-form curved shape.
  • the collimating element 322 can be integrated on the solid lamp cup.
  • the solid lamp cup is provided with a cavity at the end, and the side of the cavity close to the light outlet of the solid lamp cup is a convex surface protruding to the end.
  • the solid transparent part is provided with a cavity 352 at the end away from the opening of the solid lamp cup, and the side of the cavity 352 close to the opening of the solid lamp cup is a convex surface 353.
  • the solid lamp cup is provided with an opening at the light outlet, and the bottom surface of the opening is a convex surface protruding toward the light outlet, as shown in Figure 18, the solid transparent part is provided with a slot in the middle of the end near the opening of the solid lamp cup 354 (an example of the above-mentioned opening), the bottom surface of the slot 354 is a convex surface 355.
  • the light emitted by the light source is directed toward the reflective surface (for example, a hollow shell) or surface (for example, a solid transparent part) of the lamp cup, and at least one of reflection or total reflection occurs;
  • when the reflective surface or surface is, for example, parabolic, the reflected light will also be adjusted into collimated or nearly collimated light, which improves the collimation effect.
  • the convex surface 353 of the cavity 352 or the convex surface 355 of the slot 354 can be configured to collimate the light emitted by the light source.
  • the convex surface 353 or the convex surface 355 is equivalent to the collimating element 322.
  • the convex surface 353 or the convex surface 355 are both arranged in the middle position of the solid transparent part, and the size of the convex surface 353 or the convex surface 355 is smaller than the opening size of the solid lamp cup; the convex surface 353 or the convex surface 355 is configured to carry out part of the light emitted by the light source in the solid lamp cup. After collimation, it is transmitted to the dispersion element 33.
  • the convex surface 353 is arranged in the cavity at the end of the solid lamp cup, and the convex surface 353 can form a convex lens to collimate the light directed to the convex surface 353.
  • the middle position of the solid transparent part is provided with a slot 354, and the bottom surface of the slot 354 is a convex surface 355.
  • the convex surface 355 of the solid lamp cup is configured to collimate the light that cannot be reflected by the reflective surface 351 of the solid lamp cup; other light rays with larger exit angles undergo at least one total reflection in the solid lamp cup and then exit the solid lamp cup in a collimated manner.
  • the material of the solid lamp cup is a transparent material with a refractive index greater than 1, such as a polymer transparent material, glass, etc., to achieve total reflection.
  • the head-up display system further includes an information collection device 200, and the information collection device 200 is communicatively connected with the processor 100, for example, connected in a wired or wireless manner;
  • the information collection device 200 is configured to collect local information and surrounding information, and send the collected local information and surrounding information to the processor 100.
  • the processor 100 may be configured to obtain local information and surrounding information, and generate target information according to the local information and current surrounding information.
  • the information collection device 200 can collect local information related to the current driving state of the vehicle or the driver, and can also collect current surrounding information around the vehicle, so that the processor 100 can be based on the local information. Generate corresponding target information with current surrounding information.
  • the information collection device may specifically include one or more of image collection equipment, radar (such as vehicle radar), infrared sensors, laser sensors, ultrasonic sensors, rotational speed sensors, angular velocity sensors, GPS (Global Positioning System), a V2X (Vehicle to X, information exchange between the vehicle and the outside world) system, and ADAS (Advanced Driving Assistant System).
  • the head-up display system provided by the embodiment of the present disclosure may be installed on a vehicle, and the target information to be displayed is determined based on the speed of the vehicle.
  • the local information collected by the information collection device includes local speed information, which can indicate the speed of the vehicle; the information collection device can also monitor objects outside the vehicle, such as external objects, and determine the distance between the reference object (for example, the vehicle) and an external object.
  • the information collection device may include a speed sensor, or a rotation speed sensor arranged on a wheel, so as to determine the corresponding local speed information; alternatively, when the vehicle is a motor vehicle, the vehicle speed may also be measured through a data transmission system of the vehicle, such as the on-board diagnostics (OBD) system and its vehicle speed measurement function, to determine the local speed information of the vehicle.
  • the information collection device may also include image collection equipment, vehicle-mounted radar, or distance sensors (such as infrared distance sensors, laser distance sensors, ultrasonic distance sensors, etc.), etc., so that the current distance between the external object and the vehicle can be determined .
  • after the processor 100 obtains the local speed information and the distance between the reference object and the external object, referring to FIG. 20, the processor 100 generating target information according to the local information and surrounding information includes:
  • Step S101 Determine the safe distance according to the local speed information, and judge whether the distance between the reference object and the external object is greater than the safe distance.
  • the safe distance is the critical value of a safe following distance when the vehicle is running, and the correspondence between vehicle speed and safe distance can be preset; based on this correspondence, the current local speed information can be mapped to the corresponding safe distance. For example, when the vehicle speed v is not less than 60 km/h, the safe distance S is numerically equal to the vehicle speed v.
  • Other corresponding relationships may also be used, which are not limited in the embodiments of the present disclosure.
  • the external objects in the embodiments of the present disclosure may include other vehicles, pedestrians, animals, non-motor vehicles, etc., outside the vehicle, and may also include stationary objects such as roads and indicators. For different external objects, different corresponding relationships can be used to determine the safety distance.
  • Step S102 When the distance between the reference object and the external object is greater than the safety distance, it is determined that the reference object is currently in a normal state, and the corresponding first prompt information is used as the target information.
  • the first prompt information includes one or more of an empty set, first prompt text, a first prompt image, and a first prompt video.
  • when the distance between the reference object (for example, a vehicle) and the external object is greater than the safe distance, the external object is far from the vehicle and the situation is relatively safe; the vehicle can be regarded as being in the normal state, and the first prompt information, which mainly plays a prompting role, can be used as the target information.
  • the first prompt information may be an empty set, for example, if the target information is empty, the head-up display system may not display any information; or, the first prompt information may be the first prompt text, such as "safe distance, please continue to maintain"
  • the first prompt information can also be a first prompt image, such as a light-colored image, etc.; the first prompt information can also be a first prompt video, such as an applause animation.
  • the content in the prompt image or the prompt video may be at least one of text, graphics, symbols, animations, pictures, and the like.
  • Step S103 When the distance between the reference object and the external object is not greater than the safe distance, it is determined that the reference object is currently in an early warning state, and the corresponding first early warning information is used as the target information.
  • the first early warning information includes one or more of first early warning text, a first early warning image, and a first early warning video.
  • when the distance between the external object and the vehicle is not greater than the safe distance, the external object is closer to the vehicle and the risk of a traffic accident is greater, so the vehicle can be regarded as being in the early warning state; the first warning information that needs to be displayed in the warning state can then be used as the target information.
  • the first warning information may include first warning text, such as "too close to the vehicle ahead, please slow down"; it may also include a first warning image, for example a graphic showing a red exclamation mark, or a graphic matching the external object highlighted at the position corresponding to the external object (for example, the target position); it may also include a first warning video, for example an animation showing two cars colliding.
  • the content in the warning image or the warning video may be at least one of text, graphics, symbols, animations, pictures, and the like. "Not greater than" can mean less than or equal to, or simply less than.
  • the safe distance in the embodiments of the present disclosure may include one or more of a front safe distance, a rear safe distance, and a side safe distance. If the external object is located in front of the reference object, when the distance between the reference object and the external object is not greater than the front safe distance, it can be determined that the reference object is currently in the early warning state; if the external object is located to the side of the reference object, when the distance between them is not greater than the side safe distance, it is determined that the reference object is currently in the early warning state; if the external object is located behind the reference object, when the distance between them is not greater than the rear safe distance, it can be determined that the reference object is currently in the early warning state.
  • the appropriate first warning information can be used as the target information in the corresponding situation. For example, if the external object on the right side is closer to the vehicle, "please keep a distance with the vehicle on the right" can be used as the target information.
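  • A minimal sketch of steps S101-S103: the local speed is mapped to a safe distance and the state plus target information are chosen accordingly. The speed-to-distance rule below 60 km/h, the 30 m floor, and the message strings are illustrative assumptions, not values mandated by the disclosure.

```python
def safe_distance(speed_kmh):
    """Map the local speed to a safe distance; at or above 60 km/h the distance is
    numerically equal to the speed, below that a fixed floor is assumed."""
    return float(speed_kmh) if speed_kmh >= 60 else 30.0   # 30 m floor is an assumption

def generate_target_info(speed_kmh, distance_m):
    """Steps S101-S103: compare the measured distance with the safe distance and
    return the state plus illustrative target information."""
    if distance_m > safe_distance(speed_kmh):
        return "normal", "safe distance, please continue to maintain"     # step S102
    return "warning", "too close to the vehicle ahead, please slow down"  # step S103

print(generate_target_info(80, 95))   # ('normal', ...)
print(generate_target_info(80, 50))   # ('warning', ...)
```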
  • the processor 100 can also determine the corresponding target position and the display device that needs to display the target information, that is, the target display device; the target display device can then be used to display the target information at the target position on the reflective imaging part 50.
  • the target position may be set in advance, or may be determined based on the projection position of the external object on the reflective imaging part 50, so as to achieve a fit display.
  • the processor 100 may control the target display device to display the target information in a normal manner or a first highlighting manner, and the first highlighting manner includes dynamic display (such as scrolling display, bounce display, One or more of flashing display), highlight display, and display in the first color.
  • the processor 100 may control the target display device to display the target information in a normal manner or a second highlighting manner, and the second highlighting manner includes displaying in a second color.
  • when the reference object is in the early warning state or the normal state, the target information can be displayed in the same manner (for example, the normal manner), although the target information displayed in the two states differs; the normal manner includes one or more of static display, scrolling display, bounce display, flashing display, highlight display, etc. Alternatively, the display modes in the two states can also be different.
  • for example, in the early warning state the target information can be displayed in the first color (for example, red), while in the normal state the head-up display system can display, in the second color (for example, green), information such as "currently safe, please keep it up".
  • the same target information can also be displayed in different display modes.
  • for example, if the external object is a pedestrian and the head-up display system currently needs to identify the pedestrian in an AR manner, such as marking the pedestrian's location with a rectangular frame: if the current state is the early warning state, the rectangular frame can be displayed in the first color (for example, a red rectangular frame); if the current state is the normal state, the rectangular frame may be displayed in the second color (for example, a green rectangular frame).
  • the state of the vehicle can be determined in real time, so that the target information can be displayed in different display modes in real time or intermittently (that is, at intervals of a predetermined time, such as 10 ms or 20 ms). For example, if the reference object is currently in the early warning state, the target information "please slow down" is displayed in red; after the driver adjusts the distance to the external object by decelerating so that the external object is beyond the safe distance, the vehicle returns to the normal state, and target information such as "current driving safety" can then be displayed in green.
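  • A minimal sketch of choosing a display mode from the state; the mapping of the warning state to red dynamic effects and the normal state to green is an illustrative assumption of the first/second highlighting manners described above.

```python
def display_mode(state):
    """Return an illustrative display mode: the first highlighting manner
    (red, dynamic effects) for the warning state, the second (green) otherwise."""
    if state == "warning":
        return {"color": "red", "effects": ["flashing", "highlight"]}
    return {"color": "green", "effects": []}

print(display_mode("warning"))
print(display_mode("normal"))
```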
  • FIG. 21 takes the external object as a vehicle as an example, and schematically shows a display mode when the distance to the local vehicle is too close.
  • the head-up display system detects that the current distance between the front vehicle 71 and the local vehicle is 50 m, while the current safe distance is 60 m, so the vehicle is in the early warning state; the head-up display system can display target information on the reflective imaging part 50 (for example, the windshield of the local vehicle), including the warning text 501, for example "please slow down".
  • the target information also includes a rectangular box 502 that frames the front vehicle 71; the rectangular box 502 may be displayed in red, highlighted, etc., to enhance the reminder effect.
  • the distance to the preceding vehicle 71 (for example, the currently detected target distance), "50.0 m", is displayed below the rectangular frame 502.
  • the information collection device may include a speed sensor, a distance sensor, etc., based on which the information collection device can obtain current surrounding information including forward information, which specifically includes the speed of the preceding vehicle and/or the current distance to the preceding vehicle.
  • the processor 100 may be configured to project the target area of the front vehicle onto the projection position on the reflective imaging unit as the target position, and use the front information as the target information, and control the target imaging device to display the target information at the target position; for example,
  • the target area of the vehicle in front is the blank area between the rear wheels of the vehicle in front or the area where the trunk is located.
  • when the external object is the front vehicle, the area where the trunk of the front vehicle is located or the blank area between its two rear wheels can be used as the target area, and the head-up display system then displays, on the reflective imaging part 50, information that fits the target area. For example, the speed of the front vehicle or its distance to the local vehicle can be displayed in the blank area between the two rear wheels of the front vehicle, so that the driver can intuitively and accurately obtain the relevant information of the front vehicle in real time and respond accordingly.
  • the head-up display system may also include a sound device or a vibration device, and the processor 100 may also be configured to: send an early warning voice to the sound device and control the sound device to play the warning voice; and/or send a vibration signal to the vibration device to control the vibration device to vibrate; for example, when in use, the vibration device can be set in a position where it can touch the user.
  • a speaker can be added to the head-up display system, or a voice reminder can be given by means of a speaker on the vehicle.
  • the warning voice may be a warning bell with no specific meaning, or it may be a specific voice, such as "Attention! Keep the distance between cars!" and so on.
  • a mechanical vibration device can be installed on the steering wheel or seat of the vehicle where the driver will directly contact, so that the driver can be vibrated to remind the driver in the warning state.
  • the information collection device 200 may include image collection equipment, vehicle-mounted radars, or distance sensors (such as infrared distance sensors, laser distance sensors, ultrasonic distance sensors, etc.), etc., to determine the distance between a reference object and an external object (for example, target At the same time as the distance), the location of the external object is also determined.
  • the surrounding information may include the location of the external object and the distance between the reference object and the external object.
  • the processor 100 can be configured to use the position of the external object and the distance between the reference object and the external object as target information, so that the target information can be displayed on the reflective imaging part 50 in real time, and the driver can be reminded of the external object in real time. Location, distance, etc.
  • the location of the external object can also be visually identified in the AR display mode.
  • the processor 100 may also be configured to determine the projection position of the external object projected (or mapped) onto the reflective imaging part, use the projection position or the edge of the projection position as the target position, and instruct the target display device to display preset target information at the target position. In the embodiments of the present disclosure, by taking the projection position of the external object as the target position, target information consistent with the external object can be displayed at the corresponding position of the reflective imaging part 50, thereby visually marking the external object for the driver. For example, if the external object is a vehicle, a box may be displayed at the corresponding position of the windshield so that the box frames the vehicle.
  • the head-up display system takes certain information that can be displayed all the time as target information in real time, and displays it at the preset position of the reflective imaging part 50.
  • the location and distance of all external objects around the vehicle can be monitored in real time, and a bird's-eye view of the vehicle can be generated; the bird's-eye view can schematically show the location of external objects in each direction around the vehicle, so that the driver can quickly check the surrounding environment; surrounding objects can also be displayed in different colors to indicate different levels of danger.
  • the situation of other vehicles around the local vehicle 73 can be displayed on the reflective imaging part 50 in the form of a bird's-eye view.
  • the bird's-eye view 503 shows that the rear vehicle 72 behind the local vehicle 73 is about to overtake, and the warning text 501 can be displayed, for example, "overtaking behind".
  • the processor 100 may also be configured to: when the external object is a key object, determine that the reference object is currently in an early warning state, and use the corresponding second early warning information as target information.
  • the second early warning information may include one or more of a second warning image and a second warning video.
  • the external object when the distance between the reference object and the external object is less than the preset distance value, and the external object is a pedestrian, an animal or a non-motor vehicle, the external object is taken as the key object. Or, when the external object moves toward the current driving route, and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as the key object. Or, when the external object is located in the current driving route and the external object is a pedestrian, an animal or a non-motor vehicle, the external object is taken as the key object.
  • alternatively, when the reference object is currently located in an object-dense area and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as the key object; object-dense areas include one or more of schools, hospitals, parking lots, and urban areas.
• Or, the local information includes the driver's line-of-sight position information; when the line-of-sight position information does not match the current position of the external object and the external object is a pedestrian, an animal, or a non-motor vehicle, the external object is taken as a key object.
• That is, when the external object is a pedestrian, an animal, or a non-motor vehicle that requires special attention, it can be determined whether the external object should be a key object. For example, if the distance between the reference object and the external object is less than the preset distance value, it means that the external object is close to the vehicle, and this can be treated as an early warning state; the preset distance value can be a preconfigured fixed value or, for example, the "safe distance" determined based on the vehicle speed in the above embodiment. If the external object is moving toward the current driving route, or the external object is located in the current route of the vehicle, it indicates that the vehicle is more likely to collide with the external object, which can also be treated as an early warning state.
• For example, the information collection device may also include image collection equipment, infrared sensors, etc., based on which the driver's line-of-sight information, such as the driver's eye position and gaze position, can be determined; if the line-of-sight information does not match the current position of the external object, it indicates that the driver is most likely not aware of the external object at present, and an early warning state can be set to remind the driver.
• For example, the information collection device may determine the gaze direction information based on eye-tracking technology, or other technologies may be used, which is not limited here.
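The key-object rules above can be summarised in a small predicate; the following sketch is only an illustration of that logic, with hypothetical field names and flags (for example `moving_toward_route` or `in_dense_area`) that are not part of the disclosure.

```python
def is_key_object(obj, distance_m, preset_distance_m, in_dense_area, gaze_matches_object):
    """Decide whether an external object should be treated as a key object.

    The object must be a pedestrian, animal or non-motor vehicle, combined with
    any one of the triggers described in the text above.
    """
    vulnerable = obj["type"] in {"pedestrian", "animal", "non_motor_vehicle"}
    if not vulnerable:
        return False
    triggers = (
        distance_m < preset_distance_m,            # closer than the preset distance value
        obj.get("moving_toward_route", False),     # moving toward the current driving route
        obj.get("in_current_route", False),        # already located in the driving route
        in_dense_area,                             # school / hospital / parking lot / urban area
        not gaze_matches_object,                   # driver's gaze does not match the object
    )
    return any(triggers)
```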
  • the second warning information used to remind the driver can be generated, such as "there are pedestrians ahead, pay attention to avoid", "school ahead, pay attention to pedestrians", etc.
  • the second warning information is used as target information.
  • the head-up display system can display an early warning text 501 on the reflective imaging part 50, for example, "Pay attention to pedestrians", and can also highlight the pedestrian 74 through a rectangular frame 502.
  • an arrow 504 that can indicate the movement trend of the pedestrian 74 is used to remind the driver that there is a pedestrian currently moving toward the current driving lane 75.
  • the target information may be displayed in the normal manner or the first highlighting manner, and auxiliary reminders may also be provided by means of voice reminding, etc.
  • the reminding manner is basically similar to the above-mentioned embodiment, and will not be repeated here.
  • the "safe distance" in the foregoing embodiment may also include a front safety distance, which refers to a reference object, such as a safety distance between a vehicle and an external object located in front of it.
• For example, the processor 100 may also be configured to: when the external object is located in front of the reference object, the distance between the reference object and the external object is not greater than the front safety distance, and the difference between the front safety distance and that distance is greater than a preset distance difference and/or the time spent in the early warning state exceeds a preset duration, generate a braking signal or a deceleration signal and output it, for example, to the driving system of the vehicle.
• For example, when the external object is located in front of the reference object and the distance between them is not greater than the front safety distance, the reference object may currently be in an early warning state; further, if the difference between the front safety distance and the distance between the reference object and the external object is greater than the preset distance difference, or the reference object has been in the early warning state for longer than the preset duration, it indicates that the external object is too close to the vehicle, or that the distance between the two has been in the dangerous range for a long time. In this case, the processor 100 can generate a braking signal or a deceleration signal and send it to the driving system of the vehicle, so that the vehicle can be decelerated or braked and a safe distance between the vehicle and the external object can be maintained.
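One possible way to express the intervention rule above in code is sketched below; the split between a braking signal and a deceleration signal, and the "warn only" fallback, are assumptions made for the illustration rather than behaviour stated in the disclosure.

```python
def longitudinal_intervention(distance_m, front_safety_gap_m,
                              preset_gap_difference_m,
                              warning_duration_s, preset_duration_s):
    """Decide whether to output a deceleration or braking signal to the driving system.

    Within the front safety distance the vehicle is in an early-warning state;
    if the shortfall or the time spent in that state grows too large, an
    intervention signal is generated.
    """
    if distance_m > front_safety_gap_m:
        return None                              # normal state, no signal
    shortfall = front_safety_gap_m - distance_m
    if shortfall > preset_gap_difference_m:
        return "brake"                           # distance far inside the safety gap
    if warning_duration_s > preset_duration_s:
        return "decelerate"                      # lingering in the warning range
    return "warn_only"                           # early-warning display only
```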
• For example, the head-up display system can also monitor whether the vehicle deviates from its lane, determine that a lane-departure problem exists when it does, and give an early warning.
• For example, the information collection device may include image collection equipment, radar (for example, vehicle-mounted radar), GPS, etc.; based on the image collection equipment and the like, the lane conditions in front of the vehicle can be determined, for example, lane position information, which may specifically include the lane the vehicle is currently in, the lanes adjacent to the vehicle, etc.; based on the information collection device, the location of the vehicle, for example, vehicle position information, can also be determined.
  • the processor 100 may obtain lane position information and vehicle position information. As shown in FIG. 24, the processor 100 generating target information based on local information and surrounding information may include:
• Step S201: Determine the offset parameter of the vehicle deviating from the current driving lane according to the lane position information and the vehicle position information, and determine whether the offset parameter is greater than a corresponding offset threshold; the offset parameter includes an offset angle and/or an offset distance.
• For example, if the vehicle is driving normally within the lane, the offset parameter can be zero; the offset distance and the offset angle can both be zero, or one of them can be zero. If the driving direction of the vehicle is inconsistent with the lane direction, the corresponding offset angle, for example the angle by which the vehicle deviates from the lane, needs to be determined; if the vehicle's position deviates, for example the vehicle presses on the lane line, the corresponding offset distance needs to be determined.
• By comparing the magnitude of the offset parameter with the preset offset threshold, it can be determined whether the vehicle is currently offset from its lane.
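Step S201 can be illustrated with a simple calculation; the road-frame representation of the lane (a centre-line heading and a lateral position) and the parameter names are assumptions of this sketch.

```python
def lane_offset_parameters(lane_center_heading_rad, lane_center_y_m,
                           vehicle_heading_rad, vehicle_y_m):
    """Compute the offset angle and offset distance of step S201.

    The lane is described by its centre-line heading and lateral position in a
    common road coordinate frame shared with the vehicle position.
    """
    offset_angle = abs(vehicle_heading_rad - lane_center_heading_rad)
    offset_distance = abs(vehicle_y_m - lane_center_y_m)
    return offset_angle, offset_distance


def is_offset(offset_angle, offset_distance, angle_threshold, distance_threshold):
    """Step S201 threshold check: either parameter exceeding its threshold
    counts as a potential lane departure."""
    return offset_angle > angle_threshold or offset_distance > distance_threshold
```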
• Step S202: When the offset parameter is greater than the corresponding offset threshold, it is determined that the reference object is currently in an early warning state, and the corresponding third early warning information is used as target information.
• For example, the third early warning information includes one or more of a third early warning image, a third early warning video, and the priority driving lane.
• If the current offset parameter is greater than the offset threshold, it indicates that the offset angle and/or the offset distance is too large, i.e., the vehicle is at risk of deviating from its lane.
• In this case, the vehicle can be regarded as being in an early warning state, and the corresponding third early warning information is used as target information to remind the driver.
  • the third warning information includes a third warning text, a third warning image, or a third warning video related to lane deviation, and the current priority driving lane may also be marked, for example, the priority driving lane is used as the target information.
• For example, the priority driving lane can be used as an external object, and the corresponding target position can be determined by determining the projection position of the priority driving lane mapped onto the reflective imaging unit 50, for example by using the projection position or the edge of the projection position as the target position; the priority driving lane is then displayed at the target position on the reflective imaging part 50.
• For example, the reflective imaging unit 50 may display graphics matching the priority driving lane, such as a trapezoid (corresponding to a straight priority driving lane), an annular sector with gradually decreasing width (corresponding to a priority driving lane that involves a turn), and the like.
  • the shape of the graphic displayed on the reflective imaging part 50 may be specifically determined based on the actual shape of the priority driving lane mapped to the reflective imaging part 50.
• For example, when the offset parameter is greater than the corresponding offset threshold, it can be directly determined that the vehicle is in an early warning state; or, when the offset parameter is greater than the corresponding offset threshold, whether the vehicle is currently deviating from its lane, i.e., whether this should be regarded as an early warning state, can be determined comprehensively based on other local information.
• For example, the information collection device includes a speed sensor, an acceleration sensor, an angular velocity sensor, etc., which can be used to collect the vehicle speed, vehicle acceleration, vehicle steering angle, etc.; the state of the turn signal, for example whether the turn signal is on, can be determined based on the vehicle's own system. For example, vehicle state information is generated based on information such as the vehicle speed, vehicle acceleration, and turn-signal status, and the vehicle state information is sent to the processor 100 as a kind of local information; the processor 100 then determines whether an early warning state exists based on the current offset parameters and the vehicle state information.
• For example, the pre-warning conditions include one or more of the following: the vehicle speed is greater than a first preset speed value; the vehicle acceleration is not greater than zero; the turn signal on the same side as the direction corresponding to the vehicle's offset angle is not turned on; the current state does not allow a lane change; and the duration of the lane departure is greater than a preset first departure duration threshold.
• If the offset parameter is greater than the corresponding offset threshold, it indicates that there is a risk of deviation; it is then determined, based on the vehicle state information, whether the deviation is normal, and if it is abnormal, it can be treated as an early warning state. For example, if the vehicle speed is greater than the first preset speed value or the vehicle acceleration is not greater than zero, it means that the vehicle is traveling too fast, or that the vehicle does not decelerate even while deviating; in this case the vehicle can be considered to be at risk and regarded as being in an early warning state.
• For example, if the turn signal on the same side as the vehicle's offset angle is not turned on, for example the vehicle is drifting to the left while the left turn signal is off, it can be indirectly inferred that the driver is not making a regulated lane change; drifting to the left in this case carries a greater risk and is treated as an early warning state. Or, if the current state does not allow a lane change, for example when there are other vehicles in the lane corresponding to the offset direction, the vehicle may not be allowed to move into that lane; if the driver continues to change lanes in the offset direction, a traffic accident is likely, and this can be regarded as an early warning state. Or, if the duration of the lane departure is greater than the preset first departure duration threshold, the vehicle may have deviated from the lane for a long time, and the driver should be reminded.
• For example, the local information collected by the information collection device also includes vehicle state information, which includes one or more of the vehicle speed, vehicle acceleration, turn-signal status, hazard-light (double-flash) status, and yaw rate.
  • the processor 100 may specifically make the following judgments based on the vehicle state information:
• For example, the normal conditions include one or more of the following: the vehicle speed is less than a second preset speed value; the vehicle acceleration is less than zero; the turn signal on the same side as the vehicle's offset angle is turned on; the hazard (double-flash) lights are turned on; the yaw rate is greater than a preset angular velocity threshold; the duration of the lane departure is less than a preset second departure duration threshold; and the direction of the driver's line of sight is the same as the direction corresponding to the offset angle.
• If the offset parameter is greater than the corresponding offset threshold, it indicates that there is a risk of deviation; however, if it is determined based on the vehicle state information that the current deviation is normal (for example, a normal lane change), an early warning may not be given, and the situation is treated as a normal state. For example, if the vehicle speed is less than the second preset speed value or the vehicle acceleration is less than zero, it indicates that the vehicle is not traveling fast or is decelerating; the risk is small and this can be regarded as a normal state.
• If the turn signal on the same side as the direction corresponding to the vehicle's deviation angle is turned on, it means that although the vehicle is currently deviating from the lane, the driver is signaling in the same direction as the deviation; the driver may be changing lanes or turning normally, and this can be regarded as a normal state.
• If the hazard (double-flash) lights are on, or the yaw rate is greater than the preset angular velocity threshold, it means that the vehicle needs to pull over or change lanes due to a failure, or that the vehicle has encountered an emergency requiring sudden steering or avoidance; this may not be regarded as a lane departure that requires a warning and can instead be treated as a normal state.
• For example, if the direction of the driver's line of sight is the same as the direction corresponding to the offset angle, it means that although the vehicle has deviated from the lane, the driver has noticed the deviation; this can also be regarded as a normal state, and no additional warning to the driver is required.
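The pre-warning conditions and the normal conditions described above can be combined into a single classifier once the offset threshold has been exceeded; the sketch below is one hedged interpretation, and all field names in the `state` dictionary are assumptions rather than identifiers from the disclosure.

```python
def classify_lane_departure(state):
    """Combine the pre-warning and normal conditions described above.

    `state` is a dict of the local information mentioned in the text (speeds,
    signal lights, gaze direction, durations). Returns "warning" or "normal"
    for a vehicle whose offset parameter already exceeds its threshold.
    """
    normal_conditions = (
        state["speed"] < state["second_preset_speed"],
        state["acceleration"] < 0.0,
        state["turn_signal_on_offset_side"],        # driver is signalling the drift direction
        state["hazard_lights_on"],
        state["yaw_rate"] > state["yaw_rate_threshold"],
        state["departure_duration"] < state["second_departure_threshold"],
        state["gaze_matches_offset_direction"],     # driver has noticed the deviation
    )
    if any(normal_conditions):
        return "normal"

    warning_conditions = (
        state["speed"] > state["first_preset_speed"],
        state["acceleration"] >= 0.0,
        not state["turn_signal_on_offset_side"],
        state["lane_change_forbidden"],
        state["departure_duration"] > state["first_departure_threshold"],
    )
    return "warning" if any(warning_conditions) else "normal"
```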
• Step S203: When the offset parameter is not greater than the corresponding offset threshold, it is determined that the reference object is currently in a normal state, and the corresponding third prompt information is used as the target information.
• For example, the third prompt information includes one or more of an empty set, a third prompt image, a third prompt video, and the priority driving lane.
• If the current offset parameter is not greater than the offset threshold, it indicates that the offset distance and/or the offset angle is small, i.e., the vehicle is driving normally, and the vehicle can be regarded as being in a normal state.
  • the corresponding third prompt information can be used as the target information.
• For example, when the reference object is currently in an early warning state, the processor 100 can be configured to control the target display device to display the target information in a normal manner or in a first highlighting manner; the first highlighting manner includes one or more of dynamic display (such as scrolling display, jumping display, or flashing display), highlighted display, and display in a first color.
• When the reference object is currently in a normal state, the processor 100 may be configured to control the target display device to display the target information in a normal manner or in a second highlighting manner, and the second highlighting manner includes display in a second color.
  • the display mode in this embodiment is basically similar to the foregoing embodiment, and will not be repeated here.
• For example, when the vehicle is in a normal state: if the offset parameter is not greater than the corresponding offset threshold, the vehicle is driving normally without offset, and simple target information can be determined, such as displaying the text "Lane Keeping"; if the offset parameter is greater than the corresponding offset threshold but the situation belongs to the above-mentioned normal state (for example, the vehicle deviates from the lane because it is turning normally), the corresponding target information can be displayed as a prompt. For example, the head-up display system can AR-display images corresponding to the current lane and the turning lane, such as projecting a blue direction arrow pointing to the turning lane, or projecting a blue virtual road that fits the current road and the turning lane.
  • the head-up display system can determine that the current driving lane 75 is a right-turning lane based on the lane position information corresponding to the current driving lane 75.
• For example, the warning text 501, for example "please turn right", can be displayed on the reflective imaging part 50, and an arrow 504 that fits the current driving lane 75 can be displayed at the same time, so that the driver is intuitively reminded to turn right. Or, as shown in Figure 26, if the driver is currently changing lanes to the left, the direction corresponding to the vehicle's deviation angle is left; if the driver has not turned on the left turn signal, the driver may currently be changing lanes in violation of the rules.
  • the warning text 501 "Please turn on the left turn light” can be displayed on the reflector 50 to remind the driver to turn on the left turn light; and the arrow 504 can also be used to indicate the current driving direction of the vehicle to remind the driver that the driver is currently heading Offset left.
  • the priority driving lane can be displayed in real time.
• For example, the processor 100 is configured to determine the priority driving lane of the vehicle according to the lane position information and the vehicle position information, take the priority driving lane as the target object, determine the projection position of the target object projected onto the reflective imaging unit 50, use the projection position or the edge of the projection position as the target position, and control the target display device to display the preset target information at the target position.
• For example, the boundary lines of the lane may also be displayed in real time; for example, the processor 100 is configured to determine the priority driving lane of the vehicle according to the vehicle position information and the lane position information, take the boundary lines on both sides of the priority driving lane as the target object, and use graphics whose shapes match the boundary lines as the target information; the projection position of the target object projected onto the reflective imaging part is determined, the projection position is taken as the target position, and the target display device is controlled to display the target information in a preset color at the target position.
• For example, the priority driving lane of the vehicle can be determined in real time, the projection position on the reflective imaging unit 50 is determined based on the position of the priority driving lane or of the boundary lines on both sides of the priority driving lane, and the target position is then determined. Since the distance between different parts of the lane and the vehicle increases gradually, the priority driving lane cannot be treated as a single point; multiple points can be selected on the priority driving lane as sampling points, so that the head-up display system can determine more accurately at which positions of the reflective imaging unit 50 to display content that fits the priority driving lane.
• For example, multiple display devices can be used to each display part of the priority driving lane, for example the first single-layer display device 11 and the second single-layer display device 12 respectively display parts of the priority driving lane; or, a point on the priority driving lane (for example its middle point) is taken as a reference point, the current distance between the reference point and the vehicle is used as the target distance, and a single target imaging device is then determined accordingly.
• For example, the priority driving lane can be displayed in different display modes. For example, in the normal state, the priority driving lane is displayed in green; when the vehicle deviates, the priority driving lane can be displayed in red. For example, graphics or arrows that visually fit the priority driving lane can be displayed to guide the driver. Or, in the normal state, the reflective imaging part 50 displays shapes that fit the boundary lines in green; when the vehicle deviates, the boundary lines on the left and right sides of the priority driving lane are displayed in red to alert the driver to the lane change.
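A sketch of how the priority driving lane might be sampled and displayed follows; it reuses the hypothetical projection helper from the earlier sketch (passed in as a parameter), and the sampling density and colour convention are assumptions made for the illustration.

```python
def priority_lane_overlay(lane_polyline_3d, eye_pos, plane_point, plane_normal,
                          vehicle_state, project_to_windshield):
    """Sample the priority driving lane and build the overlay to display.

    Several points along the lane are projected so the displayed graphic can fit
    a lane whose distance to the vehicle grows gradually. Colours follow the
    convention above: green in the normal state, red when the vehicle deviates.
    """
    step = max(1, len(lane_polyline_3d) // 10)       # roughly ten sampling points
    sampled = lane_polyline_3d[::step]
    projected = [project_to_windshield(p, eye_pos, plane_point, plane_normal)
                 for p in sampled]
    projected = [p for p in projected if p is not None]
    color = "red" if vehicle_state == "warning" else "green"
    return {"type": "lane_ribbon", "points": projected, "color": color}
```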
• For example, the processor may also be configured to generate a braking signal or a deceleration signal and output it, for example, to the driving system of the vehicle, when the offset parameter is greater than the corresponding offset threshold and the difference between the offset parameter and the offset threshold is greater than a preset offset difference and/or the duration of the offset state exceeds a preset safe-offset duration.
• In such cases, the processor 100 may generate a braking signal or a deceleration signal and send it to the driving system of the vehicle, so that the vehicle can be decelerated or braked, avoiding a traffic accident caused by a serious deviation of the vehicle.
  • the head-up display system can also prompt the driver of abnormal roads.
  • the information collection device 200 may include image collection equipment, radar (for example, vehicle-mounted radar), etc., to collect road abnormal information; or, it may also obtain road abnormal information based on other external systems (such as real-time traffic systems, etc.).
• For example, the road abnormality information includes one or more of the position of an obstacle, the position of a section under repair, the position of a dangerous section, the position of an uneven section, the position of an accident section, and the position of a temporary inspection section.
• After the processor 100 obtains the road abnormality information, it may use the road abnormality information directly as target information.
• Alternatively, the processor 100 is configured to determine, according to the road abnormality information, the projection position of the abnormal position projected onto the reflective imaging part 50, use the projection position or the edge of the projection position as the target position, and instruct the target imaging device to display, at the target position, target information corresponding to the road abnormality information.
• For example, the corresponding road abnormality information can be acquired based on the information collection device 200 or other systems, and the processor 100 can directly display the road abnormality information as target information on the reflective imaging part 50, for example, target information such as "a traffic accident one hundred meters ahead"; or, the position of the abnormal road section may also be marked on the reflective imaging part 50 in the manner of AR display.
• For example, if the information collection device detects an obstacle (such as stones, ice, or potholes) on the road surface, the position of the obstacle is taken as the abnormal position, the projection position of the abnormal position on the reflective imaging unit 50 is determined and used as the target position, and the corresponding target information (such as a graphic matching the shape of the obstacle) can be displayed at the target position; in this way, the position of the obstacle can be shown to the driver intuitively, reminding the driver more effectively.
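The handling of road abnormality information can be illustrated as below; the abnormality record layout and the reuse of the hypothetical projection helper from the earlier sketch are assumptions of this illustration.

```python
def road_abnormality_overlays(abnormalities, eye_pos, plane_point, plane_normal,
                              project_to_windshield):
    """Turn road abnormality information into display items.

    Each abnormality carries a type (obstacle, section under repair, accident
    section, ...) and a 3-D position; the position is projected onto the
    reflective imaging part and used as the target position, together with an
    optional plain-text notice.
    """
    overlays = []
    for item in abnormalities:
        target = project_to_windshield(item["position"], eye_pos,
                                       plane_point, plane_normal)
        if target is None:
            continue
        overlays.append({
            "target_position": target,
            "shape": item.get("shape", "marker"),   # e.g. graphic matching an obstacle outline
            "text": item.get("notice", ""),         # e.g. "traffic accident ahead"
        })
    return overlays
```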
• For example, in a low-visibility environment, the head-up display system can also remind the driver.
  • the surrounding information also includes the current visibility.
• If the current visibility is less than the preset visibility threshold, it indicates that the current visibility is low and the driving environment is poor.
• For example, the information collection device 200 includes components such as vehicle-mounted radar and distance sensors that can still detect external objects normally in a low-visibility environment, so the location information of external objects can be collected based on the information collection device 200. After that, the processor 100 is configured to use the location information of the external objects as target information; or, the processor 100 is configured to determine the projection position of the external object projected onto the reflective imaging part 50, take the projection position or the edge of the projection position as the target position, and control the target display device to display the preset target information at the target position.
• For example, the location information of the external object includes the position of the external object and the distance between the external object and the reference object (for example, a vehicle).
• After the head-up display system detects an external object, it can display the location information of the external object, or use AR display to mark the location of the external object more intuitively, so as to inform the driver of the object's location and avoid a collision.
• For example, the road can also be regarded as an external object, and the position of the road can be determined based on real-time road conditions and networked road information, so that the driving route can be displayed on the reflective imaging part 50, for example by marking auxiliary lines, turn-direction signs, etc., on the correct driving road.
• For abnormal roads and for external objects in low-visibility environments, all of them can be regarded as key marked objects and treated as early warning states; or, they can be further subdivided into normal states and early warning states, for example based on the distance from the vehicle: if the abnormal road or external object is far from the vehicle, it can correspond to a normal state; if the distance is close, it can correspond to an early warning state.
  • the target information in different states can be displayed in a corresponding display mode.
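A minimal sketch of the distance-based subdivision into normal and early-warning states, and of the corresponding display modes, is given below; the thresholds and style fields are illustrative assumptions, not values from the disclosure.

```python
def low_visibility_display_mode(distance_m, near_threshold_m):
    """Subdivide a detected object into a normal or early-warning state by
    distance, and pick the matching display mode."""
    if distance_m < near_threshold_m:
        return {"state": "warning", "style": {"color": "red", "blink": True}}
    return {"state": "normal", "style": {"color": "green", "blink": False}}


def low_visibility_overlays(visibility_m, visibility_threshold_m, objects, near_threshold_m):
    """When visibility drops below the threshold, mark every detected object;
    nearer objects are shown in the warning style."""
    if visibility_m >= visibility_threshold_m:
        return []
    overlays = []
    for obj in objects:
        mode = low_visibility_display_mode(obj["distance"], near_threshold_m)
        overlays.append({"object": obj["id"], **mode})
    return overlays
```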
• The target information in the above multiple embodiments refers to one piece of content that can currently be displayed on the reflective imaging part 50; at the same time point, the reflective imaging part 50 can display multiple pieces of target information.
• The warning state and the normal state determined in the foregoing embodiments may also each correspond to the state of a single piece of target information; that is, at the same time, different pieces of target information may correspond to different states. For example, suppose the reference object is a vehicle and there are two pedestrians A and B in front of it, where pedestrian A is closer to the vehicle and pedestrian B is farther away.
• For pedestrian A, an early warning state can be determined, and pedestrian A can be marked with a red frame on the reflective imaging part 50 (for example, the windshield of the vehicle); for pedestrian B, a normal state can be determined, and pedestrian B can be marked with a green frame on the reflective imaging part 50.
• That is, a red frame and a green frame may be used at the same time to mark pedestrian A and pedestrian B on the reflective imaging part 50, without the two affecting each other.
• The processor 100 is further configured to: generate push information according to the target information, and send the push information to the server or to other devices within a preset distance, such as vehicles, mobile phones, laptops, etc.
• For example, a vehicle equipped with the head-up display system can share the collected information with other vehicles; the information can be sent directly to other nearby devices, or uploaded to the server, which then forwards it to other vehicles or information devices.
• For example, when the head-up display system uses the location information of external objects, the location information of key objects, road abnormality information, etc. as target information, that target information can be shared with other devices; or, when the local vehicle deviates from its lane, it can also notify other nearby vehicles and remind them to take avoiding action, etc.
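A possible shape for such push information is sketched below; the JSON payload, the HTTP POST to a server, and the `send()` call on nearby devices are transport assumptions made for the illustration, not mechanisms specified in the disclosure.

```python
import json
import urllib.request

def push_target_information(target_info, server_url=None, nearby_devices=None):
    """Share target information (key-object positions, road abnormalities,
    lane-departure notices, ...) with a server or with nearby devices."""
    payload = json.dumps({"type": "hud_push", "data": target_info}).encode("utf-8")
    if server_url is not None:
        request = urllib.request.Request(
            server_url, data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request, timeout=2.0)   # server forwards to other vehicles
    for device in nearby_devices or []:
        device.send(payload)                           # direct short-range delivery
```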
• At least one embodiment of the present disclosure further provides a head-up display system that includes a display device group and a concave reflective element 20; the display device group includes a plurality of single-layer display devices, such as the first single-layer imaging device 11 and the second single-layer imaging device 12 shown in the figure.
  • Each single-layer imaging device is configured to emit imaging light incident on the concave reflective element 20, and the object distances of at least part of the single-layer imaging devices are the same or the object distances corresponding to at least some of the single-layer imaging devices are different.
• The object distance includes the propagation path length of the imaging light from the corresponding single-layer imaging device to the concave reflective element.
  • the object distances corresponding to different single-layer imaging devices are different from each other.
• For example, the object distance is the propagation path length of the imaging light from the corresponding single-layer imaging device to the concave reflective element, or the distance from the single-layer imaging device to the concave reflective element; the concave reflective element 20 is configured to reflect the imaging light, for example to the reflective imaging part, so that the reflective imaging part can reflect the imaging light to a preset range, such as the eye box range.
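The dependence of the image layer on the object distance can be made explicit with the standard concave-mirror relation (a general optics fact, not a formula quoted from the disclosure):

```latex
% Object distance u, image distance v, focal length f of the concave reflective element:
\frac{1}{v} + \frac{1}{u} = \frac{1}{f}
\quad\Rightarrow\quad
v = \frac{u f}{u - f}.
% For u < f the image distance v is negative, i.e. the element forms a magnified
% virtual image; the closer u is to f, the farther that virtual image lies, which
% is how different object distances of the single-layer imaging devices yield
% image layers at different depths.
```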
  • the reflection imaging part is, for example, a windshield of a car.
  • the head-up display system further includes a directional display device 30.
  • the directional display device 30 includes a light source device, a direction control element 32, a dispersion element 33, and a light modulation layer 34.
  • the light source device includes at least one light source 31.
  • the direction control element 32 is configured to converge the light emitted by the light source device, for example, to a preset area, which is a position or range within a preset range (for example, an eye box range).
  • the dispersion element 33 and the light modulation layer 34 may be arranged on the same side of the direction control element 32.
• The dispersion element 33 is configured to disperse the converged light emitted from the direction control element 32 so that the converged light forms a light beam for forming a light spot.
  • the light modulation layer 34 is configured to modulate the light beam used to form the light spot so that the light beam forms a directional imaging light.
  • the light modulation layer 34 is configured to block or transmit light, and emit directional imaging light toward the reflective imaging part during transmission, so that the reflective imaging part can reflect the directional imaging light within a preset range.
  • the light modulation layer 34 includes a liquid crystal layer (for example, a liquid crystal display panel) or an electrowetting display layer, and the light modulation layer can convert light passing through it into light including image information (for example, imaging light).
  • the head-up display system may further include a first transflective element 21; the first transflective element 21 can transmit light with a first characteristic and reflect light with a second characteristic;
• For example, the imaging device group includes a first single-layer imaging device 11 that can emit first imaging light and a second single-layer imaging device 12 that can emit second imaging light; for example, the first single-layer imaging device 11 may be arranged on one side of the first transflective element 21, and the second single-layer imaging device 12 and the concave reflective element 20 can be arranged on the other side of the first transflective element 21; the above-mentioned "one side" and "the other side" are opposite sides.
• For example, the first imaging light has the first characteristic and the second imaging light has the second characteristic; therefore, the first transflective element can transmit the first imaging light and reflect the second imaging light.
  • the head-up display system further includes a second transflective element 22, and the imaging device group also includes a third single-layer imaging device 13 capable of emitting third imaging light.
  • the third imaging light has a third characteristic.
  • the third object distance corresponding to the third single-layer imaging device 13 is different from the first object distance and the second object distance.
• The third object distance is the propagation path length of the third imaging light from the third single-layer imaging device 13 to the concave reflective element, or the distance from the third single-layer imaging device 13 to the concave reflective element.
• For example, the second transflective element 22 can transmit light with the first characteristic and reflect light with the third characteristic; the first transflective element 21 can also transmit light with the third characteristic; referring to FIG. 8, the second transflective element 22 can be arranged between the first single-layer imaging device 11 and the first transflective element 21, and the third single-layer imaging device 13 and the first transflective element 21 can be arranged on the same side of the second transflective element 22.
• Alternatively, the second transflective element 22 can transmit light with the second characteristic and reflect light with the third characteristic; the first transflective element can also reflect light with the third characteristic; referring to FIG. 9, the second transflective element 22 can be arranged between the second single-layer imaging device 12 and the first transflective element 21, and the third single-layer imaging device 13 and the first transflective element 21 can be arranged on the same side of the second transflective element 22.
• For example, the head-up display system further includes one or more planar reflective elements; referring to FIG. 10, when the head-up display system includes one planar reflective element 23, the planar reflective element 23 is configured to reflect the imaging light emitted by at least one single-layer imaging device to the concave reflective element 20; or, when the head-up display system includes multiple planar reflective elements 23 and the imaging device group includes multiple single-layer imaging devices, the multiple planar reflective elements 23 are configured to reflect the imaging light emitted by the multiple single-layer imaging devices to the concave reflective element.
• For example, a plurality of planar reflective elements 23 may correspond one-to-one to a plurality of single-layer imaging devices, each planar reflective element 23 reflecting the imaging light emitted by its corresponding single-layer imaging device to the curved mirror 20; or, two planar reflective elements 23 may reflect the imaging light emitted by three single-layer imaging devices, and so on; the embodiments of the present disclosure do not limit the correspondence between the multiple planar reflective elements and the multiple single-layer imaging devices.
• For example, the area of the directional imaging area on the reflective imaging part is larger than that of the magnified imaging area; the directional imaging area is the area of the surface of the reflective imaging part on which the directional imaging light can be incident, and the magnified imaging area is the area of the surface of the reflective imaging part on which the imaging light emitted by the imaging device group can be incident.
  • At least one embodiment of the present disclosure also provides a method for controlling a head-up display system.
• For example, the head-up display system includes a display device group and a directional display device; the display device group includes at least one single-layer display device, and the imaging range of the directional display device is greater than the imaging range of the at least one single-layer display device.
• The control method includes: determining the target information to be displayed and the target display device, and controlling the target display device to display the target information; for example, the target display device is a display device selected from the directional display device and the at least one single-layer display device in the display device group.
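One hedged way to implement the selection of the target display device in this control method is sketched below; the region objects with a `contains()` test, the image-distance matching rule, and the fallback order are assumptions of the sketch rather than steps stated in the disclosure.

```python
def select_target_display_device(target_position, target_distance_m,
                                 directional_region, layer_regions):
    """Pick the target display device for a piece of target information.

    `directional_region` is the directional imaging area and `layer_regions`
    maps each single-layer display device to its magnified imaging area and
    image distance.
    """
    # Prefer a single-layer device whose imaging area contains the target and
    # whose image distance best matches the distance to the external object.
    candidates = [(abs(info["image_distance"] - target_distance_m), name)
                  for name, info in layer_regions.items()
                  if info["region"].contains(target_position)]
    if candidates:
        return min(candidates)[1]
    # Otherwise fall back to the directional display device, whose imaging
    # range is larger than that of the single-layer devices.
    if directional_region.contains(target_position):
        return "directional_display_device"
    return None
```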
• For example, the head-up display system can be any head-up display system provided in the embodiments of the present disclosure, and the control method can also implement the various functions and operations performed by the processor in the above-mentioned head-up display systems, which will not be repeated here.
• At least one embodiment of the present disclosure also provides a vehicle, such as an automobile, a ship, or an airplane.
• The vehicle includes any head-up display system provided by the embodiments of the present disclosure, so that a user (such as a driver) can use the head-up display system to control the vehicle more safely and conveniently.

Abstract

A head-up display system, a control method thereof, and a vehicle. The head-up display system includes: an imaging device group, a concave reflective element (20), a directional imaging device (30), and a processor (100). The imaging device group includes at least one single-layer imaging device; the directional imaging device includes a light source device, a direction control element (32), a dispersion element (33), and a light modulation layer (34); the processor (100) is respectively connected to the directional imaging device (30) and the single-layer imaging devices in the imaging device group. The head-up display system can achieve layered imaging, and the directional imaging device can achieve large-range imaging, thereby forming a relatively large directional imaging area on the surface of the reflective imaging part and improving the display effect of the head-up display system.

Description

抬头显示系统及其控制方法、交通工具
本申请要求于2020年1月10日递交的中国专利申请第202010026623.1号的优先权,出于所有目的,在此全文引用上述中国专利申请公开的内容以作为本申请的一部分。
技术领域
本公开的实施例涉及一种抬头显示系统及其控制方法、交通工具。
背景技术
抬头显示器(Head Up Display,HUD)应用在车辆上可以把时速、导航等重要的行车信息投影到驾驶员前面的风挡玻璃上,让驾驶员尽量做到不低头、不转头就能看到时速、导航等重要的本地信息,能够提高驾驶安全系数,同时也能带来更好的驾驶体验。因此,使用汽车挡风玻璃进行成像的HUD正受到越来越多的关注。
增强现实抬头显示器(Augmented RealityHUD,AR-HUD)能够在驾驶员的视线区域内合理、生动地叠加显示一些本地信息,还可结合于实际交通路况当中,从而进一步增强驾驶者对于实际驾驶环境的感知。AR-HUD的兴起,对HUD行业提出了更高的技术要求。
发明内容
本公开至少一实施例提供一种抬头显示系统,该抬头显示系统包括显像装置组、凹面反射元件、定向显像装置和处理器,其中,所述显像装置组包括至少一个单层显像装置;所述定向显像装置包括光源装置、方向控制元件、弥散元件和光调制层;所述处理器分别与所述定向显像装置、所述显像装置组内的所述至少一个单层显像装置连接;所述方向控制元件配置为将所述光源装置发出的光线会聚;所述弥散元件配置为将从所述方向控制元件出射的经过会聚的光弥散,以使所述经过会聚的光形成用于形成光斑的光束;所述光调制层配置为对所述用于形成光斑的光束进行调制,以使所述光束形成定向成像光线;所述至少一个单层显像装置配置为发出入射至所述凹面反射元件的成像光线;所述凹面反射元件配置为反射所述成像光线;所述处理器配置为:确定待显示的目标信息和目标显像装置,控制所述目标显像装置显示所述待显示的目标信息,其中,所述目标显像装置为从所述定向显像装置、所述显像装置组内的所述至少一个单层显像装置中选取的一个或多个显像装置。
本公开至少一实施例还提供一种抬头显示系统,该抬头显示系统包括显像装置组和凹面反射元件;所述显像装置组包括多个单层显像装置;所述多个单层显像装置中的每个配置为发出入射至所述凹面反射元件的成像光线,至少部分单层显像装置的物距相 同或者至少部分单层显像装置所对应的物距不相同,所述物距包括所述成像光线从相对应的所述单层显像装置到所述凹面反射元件的传播路径长度;所述凹面反射元件配置为反射所述成像光线。
本公开至少一实施例还提供一种抬头显示系统的控制方法,所述抬头显示系统包括显像装置组和定向显像装置;所述显像装置组包括至少一个单层显像装置,所述定向显像装置的成像范围大于所述至少一个单层显像装置的成像范围;所述处理器配置为:确定待显示的目标信息和目标显像装置,控制所述目标显像装置显示所述目标信息;其中,所述目标显像装置为从所述定向显像装置、所述显像装置组内的所述至少一个单层显像装置中选取的一个显像装置。
例如,在一些实施例中,控制方法还包括:确定目标位置,所述目标位置为所述反射成像部上显示所述目标信息的位置;以及确定包含所述目标位置的成像区域,将与所述成像区域相对应的显像装置作为目标显像装置,并控制所述目标显像装置在所述目标位置处显示所述目标信息;所述成像区域为成像光线能够入射至所述反射成像部表面的区域。例如,所述成像区域为放大成像区域或者定向成像区域的一部分。
例如,在一些实施例中,控制方法还包括:在外界对象投影至所述反射成像部上的投影位置位于放大成像区域内时,将所述外界对象作为目标对象,并确定参照物与所述目标对象之间的目标距离;所述放大成像区域为所述显像装置组发出的成像光线能够入射至所述反射成像部表面的区域;将位于所述放大成像区域内的所述投影位置或者所述投影位置的边缘作为目标位置;以及分别确定所述显像装置组内每个单层显像装置的像距,将与所述目标距离大小相匹配的像距作为目标像距,并将与所述目标像距相对应的单层显像装置作为目标显像装置;其中,所述像距为所述凹面反射元件所成的、所述单层显像装置的虚像与所述凹面反射元件之间的距离;或者在外界对象投影(或映射)至所述反射成像部上的投影位置位于定向成像区域内时,将所述外界对象作为目标对象;所述定向成像区域为所述定向显像装置发出的定向成像光线能够入射至所述反射成像部表面的区域;将位于所述定向成像区域内的所述投影位置或者所述投影位置的边缘作为目标位置。
例如,在一些实施例中,抬头显示系统还包括信息采集装置,所述信息采集装置配置为采集本地信息和周围信息,控制方法还包括:获取所述本地信息和所述周围信息,根据所述本地信息和所述周围信息生成所述目标信息。
例如,在一些实施例中,所述本地信息包括本地速度信息,所述周围信息包括参照物与外界对象之间的距离,控制方法还包括:根据所述本地速度信息确定安全间距,判断所述参照物与外界对象之间的距离是否大于所述安全间距;在所述参照物与外界对象之间的距离大于所述安全间距时,确定参照物当前处于正常状态,并将相应的第一提示信息作为所述目标信息,所述第一提示信息包括空集、第一提示图像、第一提示视频中的一项或多项;以及在所述参照物与外界对象之间的距离不大于所述安全间距时,确定参照物当前处于预警状态,并将相应的第一预警信息作为所述目标信息,所述第一预警 信息包括第一预警图像、第一预警视频中的一项或多项。
例如,在一些实施例中,所述周围信息包括参照物与所述外界对象之间的距离以及外界对象的位置;根据所述本地信息和周围信息生成所述目标信息包括:将车辆与所述外界对象之间的距离以及所述外界对象的位置作为目标信息;或者,确定所述外界对象投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示预先设置的目标信息。
例如,在一些实施例中,控制方法还包括:在所述外界对象为重点对象时,确定参照物当前处于预警状态,并将相应的第二预警信息作为所述目标信息,所述第二预警信息包括第二预警图像、第二预警视频中的一项或多项;其中,在参照物与所述外界对象之间的距离小于预设距离值,且所述外界对象为行人、动物或非机动车辆时,将所述外界对象作为重点对象;或者,在所述外界对象朝向当前行驶线路移动,且所述外界对象为行人、动物或非机动车辆时,将所述外界对象作为重点对象;或者,在所述外界对象位于当前行驶线路中,且所述外界对象为行人、动物或非机动车辆时,将所述外界对象作为重点对象;或者,在当前位于对象密集区域内,且所述外界对象为行人、动物或非机动车辆时,将所述外界对象作为重点对象;所述对象密集区域包括学校、医院、停车场、市区中的一种或多种;或者,所述本地信息包括驾驶员的视线方位信息;在所述视线方位信息与所述外界对象的当前位置不匹配,且所述外界对象为行人、动物或非机动车辆时,将所述外界对象作为重点对象。
例如,在一些实施例中,所述安全间距包括前安全间距;控制方法还包括:在所述外界对象位于参照物前方,在所述当前距离不大于所述前安全间距,且所述安全间距与所述当前距离之差大于预设距离差值和/或处于预警状态的时长超过预设时长阈值时,生成制动信号或减速信号,并输出所述制动信号或减速信号。
例如,在一些实施例中,所述周围信息包括车道位置信息,所述本地信息包括车辆位置信息;根据所述本地信息和当前周围信息生成所述目标信息包括:根据所述车道位置信息和所述车辆位置信息确定车辆偏离当前行驶车道的偏移参数,并判断所述偏移参数是否大于相应的偏移阈值;所述偏移参数包括偏移角度和/或偏移距离;在所述偏移参数大于相应的偏移阈值时,确定参照物当前处于预警状态,并将相应的第三预警信息作为所述目标信息,所述第三预警信息包括第三预警图像、第三预警视频、优先行驶车道中的一项或多项;以及在所述偏移参数不大于相应的偏移阈值时,确定参照物当前处于正常状态,并将相应的第三提示信息作为所述目标信息,所述第三提示信息包括空集、第三提示图像、第三提示视频、优先行驶车道中的一项或多项。
例如,在一些实施例中,所述本地信息还包括车辆状态信息,所述车辆状态信息包括车辆速度、车辆加速度、转向灯状态中的一项或多项;根据所述本地信息和周围信息生成所述目标信息还包括:在所述偏移参数大于相应的偏移阈值,且满足预警条件时,确定参照物当前处于预警状态;其中,所述预警条件包括所述车辆速度大于第一预设速度值、所述车辆加速度不大于零、车辆的所述偏移角度所对应的方向相反一侧的转向灯 未处于开启状态、当前为不可变道状态、偏离车道的时长大于预设的第一偏离时长阈值中的一种或多种。
例如,在一些实施例中,控制方法还包括:根据所述车辆位置信息和所述车道位置信息确定车辆的优先行驶车道,并将所述优先行驶车道作为目标对象;确定所述目标对象投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示预先设置的目标信息;或者,根据所述车辆位置信息和所述车道位置信息确定车辆的优先行驶车道,将所述优先行驶车道两侧的边界线作为目标对象,并将形状与所述边界线相匹配的图形作为目标信息;确定所述目标对象投影至所述反射成像部上的投影位置,将所述投影位置作为目标位置,并控制所述目标显像装置在所述目标位置处以预设颜色显示所述目标信息。
例如,在一些实施例中,控制方法还包括:在所述偏移参数大于相应的偏移阈值,且所述偏移参数与所述偏移阈值之差大于预设的偏移差值和/或处于偏移状态的时长超过预设的安全偏移时长时,生成制动信号或减速信号,并输出所述制动信号或减速信号。
例如,在一些实施例中,所述周围信息包括前方信息,所述前方信息包括前方车辆的车速和/或与前方车辆的当前距离;控制方法包括:将所述前方车辆的目标区域投影至所述反射成像部上的投影位置作为目标位置,并将所述前方信息作为目标信息,控制目标显像装置在所述目标位置处显示所述目标信息;其中,所述前方车辆的目标区域为所述前方车辆的后轮之间的空白区域或后备箱所在区域。
例如,在一些实施例中,控制方法还包括:在参照物当前处于预警状态时,控制所述目标显像装置以正常方式或第一突出显示方式显示所述目标信息,所述第一突出显示方式包括动态显示(例如滚动显示、跳动显示、闪烁显示)、高亮显示、以第一颜色显示中的一种或多种;以及在参照物当前处于正常状态时,控制所述目标显像装置以正常方式或第二突出显示方式显示所述目标信息,所述第二突出显示方式包括以第二颜色显示。
例如,在一些实施例中,抬头显示系统还包括发声装置和/或振动装置,控制方法还包括:在参照物当前处于预警状态时,向所述发声装置发送提醒语音,并控制所述发声装置播放所述提醒语音;和/或者,向所述振动装置发送振动信号,控制所述振动装置振动。
例如,在一些实施例中,所述周围信息包括道路异常信息,所述道路异常信息包括障碍物位置、不平路段位置、危险路段位置、维修路段位置、事故路段位置、临时检查路段位置中的一项或多项;控制方法还包括:将所述道路异常信息作为目标信息;或者,根据所述道路异常信息确定异常位置,并确定所述异常位置投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示与所述道路异常信息相对应的目标信息。
例如,在一些实施例中,所述周围信息包括当前能见度;控制方法还包括:在所述当前能见度小于预设的能见度阈值时,获取信息采集装置采集的外界对象的位置信息;将所述外界对象的位置信息作为目标信息;或者,确定所述外界对象投影至所述反射成 像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示预先设置的目标信息。
例如,在一些实施例中,控制方法还包括:根据所述目标信息生成推送信息,并将所述推送信息发送至服务器或发送至预设距离内的其他设备,例如交通工具、手机等。
本公开至少一实施例还提供一种交通工具,包括上述任一抬头显示系统。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例的附图作简单地介绍,显而易见地,下面描述中的附图仅仅涉及本公开的一些实施例,而非对本公开的限制。
图1A示出了本公开至少一实施例所提供的抬头显示系统的示意图;
图1B示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第一结构示意图;
图2A示出了本公开至少一实施例所提供的抬头显示系统中,定向显像装置成像的原理示意图;
图2B示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的原理示意图;
图3示出了本公开至少一实施例所提供的抬头显示系统中,定向显像装置成像的第一结构示意图;
图4示出了本公开至少一实施例所提供的抬头显示系统在反射成像部外成像的示意图;
图5示出了本公开至少一实施例所提供的反射成像部上成像区域的第一示意图;
图6示出了本公开至少一实施例所提供的反射成像部上成像区域的第二示意图;
图7示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第二结构示意图;
图8示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第三结构示意图;
图9示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第四结构示意图;
图10示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第五结构示意图;
图11示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第六结构示意图;
图12示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第七结构示意图;
图13示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第八结构示意图;
图14示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第九结构示意图;
图15示出了本公开至少一实施例所提供的抬头显示系统中,显像装置组成像的第十结构示意图;
图16示出了本公开至少一实施例所提供的抬头显示系统中,定向显像装置成像的第二结构示意图;
图17示出了本公开至少一实施例所提供的抬头显示系统中,实心灯杯的第一结构示意图;
图18示出了本公开至少一实施例所提供的抬头显示系统中,实心灯杯的第二结构示意图;
图19A示出了本公开至少一实施例所提供的抬头显示系统的结构框图;
图19B示出了本公开至少一实施例所提供的抬头显示系统的另一结构框图;
图20示出了本公开至少一实施例所提供的处理器基于安全间距进行辅助驾驶的流程图;
图21示出了本公开至少一实施例中,在车距较近时反射成像部显示画面的一种示意图;
图22示出了本公开至少一实施例中,反射成像部显示鸟瞰图的一种示意图;
图23示出了本公开至少一实施例中,在有行人靠近时反射成像部显示画面的一种示意图;
图24示出了本公开至少一实施例所提供的处理器确定车道是否存在偏移的流程图;
图25示出了本公开至少一实施例中,在车道偏移时反射成像部显示画面的一种示意图;以及
图26示出了本公开至少一实施例中,在车道偏移时反射成像部显示画面的另一种示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例的附图,对本公开实施例的技术方案进行清楚、完整地描述。显然,所描述的实施例是本公开的一部分实施例,而不是全部的实施例。基于所描述的本公开的实施例,本领域普通技术人员在无需创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
除非另外定义,本公开使用的技术术语或者科学术语应当为本公开所属领域内具有一般技能的人士所理解的通常意义。本公开中使用的“第一”、“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。“包括”或者“包含”等类似的词语意指出现该词前面的元件或者物件涵盖出现在该词后面列举的元件或者物件及其等同,而不排除其他元件或者物件。“连接”或者“相连”等类似的词语并非限定于 物理的或者机械的连接,而是可以包括电性的连接,不管是直接的还是间接的。“上”、“下”、“左”、“右”等仅用于表示相对位置关系,当被描述对象的绝对位置改变后,则该相对位置关系也可能相应地改变。
本公开的实施例提供的一种可以分层成像的抬头显示系统,该抬头显示系统可以在与本地不同距离的位置处成多层的像,从而实现多层次成像;且该抬头显示系统还能够实现较大范围成像,例如全车窗成像,使得在交通工具上能够大范围显示信息,从而能够显示更多内容。
在本公开的一些实施例中,如图1A-图2B所示,该抬头显示系统包括:显像装置组、凹面反射元件20、定向显像装置30和处理器100;该显像装置组包括至少一个单层显像装置。在一些实施例中,如图1A所示,该显像装置组包括第一单层显像装置11和第二单层显像装置12,其还可以包括更多的单层显像装置,例如包含第三单层显像装置13等。每个单层显像装置配置为发出入射至凹面反射元件20的成像光线,以用于形成位于预定位置的单层图像。
例如,显像装置组可以包括多个单层显像装置(即至少两个单层显像装置),至少部分单层显像装置的物距相同或者至少部分单层显像装置所对应的物距不相同,物距为成像光线从相对应的单层显像装置到凹面反射元件的传播路径长度,或者为单层显像装置到凹面反射元件的距离。由此可以实现多层次成像。
处理器分别与定向显像装置30、显像装置组内的单层显像装置相连,例如以有线或者无线的方式进行通讯连接,进而实现对定向显像装置30以及显像装置组内的单层显像装置的控制功能,如图19A所示。上述“分别”包括各自单独连接以及串联的方式连接。
例如,处理器可用于自动驾驶车辆和/或手动驾驶车辆。例如,在一些实例中,处理器可以包括多个子处理器,该多个处理器与定向显像装置30和显像装置组内的单层显像装置分别连接。例如,平视显示系统也可以包括除上述处理器外的其他处理器,或者,处理器也可以包括除与定向显像装置30和显像装置组内的单层显像装置连接的子控制器外的其他子处理器。
本公开的一些实施例中,利用定向显像装置30可以实现大范围成像;利用一个单层显像装置和该定向显像装置30可以实现多层(例如,两层)成像;当该显示元件部包括多个单层显示元件时,利用多个单层显像装置可以实现至少两层成像。例如,在使用过程中,凹面反射元件20和定向显像装置30可以设置在反射成像部50的同侧,例如,该反射成像部50可以为交通工具的挡风窗(例如,包括玻璃材质的挡风窗),或者是挡风窗内侧的反射膜等,该反射膜能够反射抬头显示系统发出的成像光线,且不影响驾驶员透射该反射膜观察交通工具外部的事物或场景;相应的,凹面反射元件20和定向显像装置30可以位于交通工具之内,例如,位于反射成像部50的内侧。例如,凹面反射元件20、定向显像装置30可以设置在反射成像部50的下方,例如汽车的仪表板(Instrument Panel,IP)台处等。
本公开的一些实施例中,该单层显像装置配置为发出入射至凹面反射元件20的成像 光线,不同的单层显像装置所对应的物距互不相同,例如,物距可以是成像光线从相对应的单层显像装置到凹面反射元件20的传播路径长度,或者单层显像装置到凹面反射元件20的距离;该凹面反射元件20配置为将成像光线反射至反射成像部50,以使得反射成像部50能够将成像光线反射至预定范围,例如眼盒范围内,使得驾驶员在该眼盒范围内能够看到反射成像部50所成的单层显像装置的虚像,如图2B所示。
本公开的一些实施例中,“眼盒(eyebox)范围”是指用户双眼在此范围内可以看到抬头显示系统所成的图像;并且,“眼盒范围”具有一定的大小,用户的双眼可以移动,在此范围内,都可以观看到成像。
参见图1A所示,第一单层显像装置11可以发出入射至凹面反射元件20的第一成像光线,第二单层显像装置12可以发出入射至凹面反射元件20的第二成像光线,且第一单层显像装置11对应的第一物距与第二单层显像装置12对应的第二物距不同。其中,第一物距为第一成像光线从第一单层显像装置11到凹面反射元件20的传播路径长度或者第一单层显像装置11到凹面反射元件20的距离,第二物距为第二成像光线从第二单层显像装置12到凹面反射元件20的传播路径长度或者第二单层显像装置12到凹面反射元件20的距离。例如,第一单层显像装置11和第二单层显像装置12可以实现两层图像。
例如,显像装置组内的单层显像装置可以为显示器,例如有机发光二极管显示器或者液晶显示器等,在另一些实施例中,显像装置组内的单层显像装置也可以为投影仪。本公开的实施例对显像装置组内的单层显像装置的形式不做具体限定。
本公开的一些实施例中,单层显像装置中的“单层”指该单层显像装置用在抬头显示装置中,可以形成具有单层像距的图像。
参见图2A和图3所示,定向显像装置30配置为发出入射至反射成像部50的定向成像光线,以使得反射成像部50能够将定向成像光线反射至预定范围,例如眼盒范围62内。在一些实施例中,该定向显像装置30包括光源装置、方向控制元件32、弥散元件33和光调制层34。例如,光源装置可以为LED(Light Emitting Diode)等电致发光元件,数量可以为一个或多个。例如,光调制层34包括液晶层(例如,液晶显示面板)或电润湿显示层,光调制层可以将经过其的光线转化为包括图像信息的光线(例如,成像光线)。
方向控制元件32配置为将光源装置发出的光线会聚,例如会聚至预设区域61,该预设区域61为预设范围,例如眼盒范围62内的一个位置或范围;弥散元件33和光调制层34可以设置在方向控制元件32的同一侧,且弥散元件33和光调制层34位于方向控制元件32的发光侧,使得可以对方向控制元件32发出的光线进行相应处理。弥散元件33配置为将从方向控制元件32出射的经过会聚的光弥散,以使经过会聚的光形成用于形成光斑的光束;光调制层34配置为对用于形成光斑的光束进行调制,以使所述光束形成定向成像光线。例如,光调制层34可以遮挡或透射光线,并在透射时发出朝向反射成像部50的定向成像光线,以使得反射成像部50能够将该定向成像光线反射至预设范围,例如眼盒范围62内。例如,弥散元件也可以称为扩散元件,该扩散元件可以扩散经过该扩散元件的光束但不改变该光束的光轴(例如,光束传播的主要方向)。
例如,本公开实施例中的光调制层34可采用液晶制成,通过控制液晶排序,从而遮挡或透射光线,并能够调整透射光线的比例,从而可以调整光线亮度,和/或在光调制层34处形成图像。由此,该液晶层具有光调制功能,也即可以成像。本公开的一些实施例中,弥散元件33和光调制层34可以位于方向控制元件32的出光侧,弥散元件33可以设置在光调制层34的进光侧,也可以设置在光调制层34的出光侧;例如,当弥散元件33设置在光调制层34出光侧时,弥散元件33需要与光调制层34贴紧设置(例如,直接贴紧或通过光学胶等粘接贴紧),使得光调制层34显示更加清楚。图3中以弥散元件33设置在光调制层34的下侧为例说明。
例如,定向显像装置30形成的投影画面大于每个单层显像装置形成的成像画面,例如,定向显像装置30可以实现全车窗成像,即定向显像装置30形成的投影画面可以覆盖整个车窗。通常,可以用视场角(Field of View,FOV)来衡量画面大小,视场角。例如,对全车窗画面而言,驾驶员眼睛位置观看图像的水平视场角的范围为大于或等于15度,例如不小于20度,例如不小于25度,例如不小于30度,例如在15度-100度之间,垂直视场角的范围大于或等于5度,例如不小于15度,例如不小于20度,例如不小于25度,例如在5度-30度之间。由此可以增大平视显示系统视场角,实现了低功耗下的超大视场角成像。上述的“水平”和“垂直”是两个互相垂直的方向,以车体坐标系为例,上述“水平”可以指车体坐标系中车宽度方向,上述“垂直”可以指车体坐标系中车高度方向。
处理器配置为确定待显示的目标信息和目标显像装置,并控制目标显像装置显示目标信息;例如,该目标显像装置为从定向显像装置30、显像装置组内的多个单层显像装置(例如第一单层显像装置11、第二单层显像装置12、第三单层显像装置13等)中选取的一个显像装置。
例如,处理器在控制目标显像装置显示待显示的目标信息之前,还可以向目标显像装置输出待显示的目标信息,或者向与目标显像装置连接的其他装置输出待显示的目标信息,进而该其他装置可以将待显示的目标信息传输给目标显像装置。
例如,本公开的一些实施例中,显像装置组内的第一单层显像装置11和第二单层显像装置12分别用于成像,且第一单层显像装置11的第一物距与第二单层显像装置12的第二物距不同。基于凹面反射元件20的成像原理可知,单层显像装置(例如第一单层显像装置11、或第二单层显像装置12、或者后续的第三单层显像装置13等)发出的成像光线(例如第一成像光线、第二成像光线、或者后续的第三成像光线等)传播至该凹面反射元件20的传播路径长度越接近该凹面反射元件20的焦距,该凹面反射元件20可以在更远的位置成像,即凹面反射元件20所成的像具有更大的像距;相应的,凹面反射元件20所成的像在反射成像部50的作用下,可以在反射成像部50的另一侧形成相应的虚像,且凹面反射元件20所成的虚像像距越大,反射成像部50所成的虚像距离该反射成像部50越远。如图4所示,图4中以该显像装置组包含三个单层显像装置(即及第一单层显像装置11、第二单层显像装置12和第三单层显像装置13)为例示出,且三个单层 显像装置分别在第一放大成像位置111、第二放大成像位置121和第三放大成像位置131处分别成像,使得该抬头显示系统可以在距离反射成像部50不同位置处成像,即形成多层次像。例如,该第一物距和第二物距可以均小于凹面反射元件20的焦距,使得凹面反射元件20可以形成第一单层显像装置11和第二单层显像装置12放大的虚像。
例如,抬头显示系统还可以包括反射成像部50,显像装置组内单层显像装置发出的成像光线经凹面反射元件20反射后可以入射至反射成像部50处,并被反射成像部50反射至预设范围,例如眼盒范围62内,使得驾驶员在眼盒范围62内可以观察到单层显像装置所成的像。本公开实施例中的眼盒范围62指的是供驾驶员可以在反射成像部50上观看到像的范围,大约对应驾驶员头部所在的位置;该眼盒范围62的大小具体可基于实际情况而定。
例如,通过方向控制元件32可以实现对光线的会聚。例如,参见图3所示,光源装置包括多个光源31,不同位置设置有光源31,图3中以光源装置包括7个光源31为例说明;相应的,可以设置7个方向控制元件32,以分别控制7个光源31发出光线的方向。如图3所示,在不存在弥散元件33时,方向控制元件32将多个光源31发出的光线会聚至预设区域61处。图3中以预设区域61为一个点位置为例说明,在本公开的一些实施例中,预设区域61也可以为一个很小的区域,即只需要将光源装置发出的光线会聚至该区域内即可。例如,可以通过设置位于不同位置的方向控制元件32的朝向来调整光源31发出光线的方向,从而实现光线会聚。
例如,本公开的一些实施例中,通过弥散元件33将光弥散开,并形成预设形状的、成像范围更大的光斑(例如,眼盒范围62),从而方便驾驶员在大范围内观看定向显像装置30成像。
例如,以图3中最左侧的方向控制元件32为例说明,如图3所示,在不存在弥散元件33时,最左侧的光源31发出的光线A可以沿着光路a射向预设区域61;当在方向控制元件32外部设置弥散元件33后,弥散元件33将光线A分散成多个光线(包括光线A1、光线A2等)并分散至一个范围内,即眼盒范围62,方便用户在眼盒范围62的范围内可以查看定向显像装置30所成的像。
例如,弥散元件33可以为衍射光学元件(Diffractive Optical Elements,DOE),例如光束整形片(BeamShaper);光斑的大小和形状由光束整形片的微观结构所决定,光斑形状包括但不限于圆形、椭圆形、正方形、长方形、蝙蝠翼形状。例如,弥散后的光斑在侧视方向(例如,垂直于交通工具所在路面的方向)的弥散角可以为2度-10度,例如,可以为10度或者5度;在正视方向(例如,沿交通工具主驾驶和副驾驶延伸的方向)的弥散角可以为5度-50度,例如,可以为10度、20度或者30度。
例如,方向控制元件32的数量可以为多个,不同的方向控制元件32设置在不同的位置,配置为调整不同位置的光源31所发出光线的出射方向,且不同位置的光源31发出的光线的出射方向指向同一个预设区域61。如图3所示,图3中的方向控制元件32的数量为7个。例如,一个方向控制元件32可以调整一个光源31发出的光线,也可以 调整多个光源31发出的光线,本公开的实施例对此不做限定。
本领域技术人员可以理解,图3中对弥散元件33的弥散作用为示意性说明,弥散元件33可以将光线弥散至眼盒范围62内,并不是将光源装置发出的光线完全限制在眼盒范围62内。即光线A经弥散元件33后可能可以形成更大范围的光斑,其他光源31发出的光线经弥散元件33可形成其他光斑,但是所有光源31发出的光线几乎都可以到达眼盒范围62内。
本公开的实施例中,光源装置发出的光线经过方向控制元件32的作用后,入射至光调制层34,使得光调制层34在工作时可以发出朝向眼盒范围62的定向成像光线,使得定向成像光线会聚至眼盒范围内,从而可以提高定向成像光线的亮度,且在弥散元件33的作用下方便驾驶员在眼盒范围62内观看到定向显像装置30所成的像,在提高光线亮度的同时,还可以扩大可视范围。
本公开的一些实施例中,由于可以对定向成像光线进行会聚,不需要定向显像装置30具有特别高的亮度即可使得驾驶员观察到反射成像部50所成的虚像;且定向显像装置30可以具有较大的面积,使得用户(例如,驾驶员)可以观看到反射成像部50反射所成的较大范围的像。例如,定向显像装置30可以铺设在车辆的IP台表面。
例如,参见图5和图6所示,定向成像光线可以入射到反射成像部50的表面区域,该区域可以为定向成像区域52,驾驶员通过该定向成像区域52可以观看到在定向成像位置301处所成的虚像,该虚像为定向显像装置30经反射成像部50反射所成的像。例如,显像装置组发出的成像光线入射到反射成像部50表面的区域为放大成像区域51,驾驶员通过该放大成像区域51可以观看到相应放大成像位置(例如第一放大成像位置111、第二放大成像位置121、第三放大成像位置131等)处的虚像,该虚像为反射成像部50所成的、与显像装置组内单层显像装置相对应的像。例如,显像装置组内不同的单层显像装置可以对应不同的放大成像区域,图5和图6中以包含三个放大成像区域为例说明,第一单层显像装置11、第二单层显像装置12和第三单层显像装置13分别对应不同的放大成像区域。例如,第一单层显像装置11发出的第一成像光线可以入射至反射成像部50表面的一个放大成像区域,且反射成像部50在第一放大成像位置111处形成与该第一单层显像装置11相对应的虚像,驾驶员即可通过该放大成像区域查看到第一放大成像位置111处的虚像。
例如,在一些实施例中,定向显像装置30可以形成较大范围的像,反射成像部50上的定向成像区域52的面积可以大于单个放大成像区域51的面积。例如,该放大成像区域51可以位于定向成像区域52内,如图5所示;或者放大成像区域51与定向成像区域52为两个不同的区域,如图6所示;或者,放大成像区域51与定向成像区域52也可以是部分重叠的两个区域。
在一些实施例中,显像装置组和定向显像装置30发出的光线可能覆盖全部的反射成像部50,光线经反射成像部50反射后到达眼盒范围62内才会被驾驶员观看到,本公开实施例中的“成像光线”指的是显像装置发出的、并能在眼盒范围62内成像的光线;相 应的,“定向成像光线”指的是定向显像装置发出的、并能在眼盒范围62内成像的光线,“成像光线”指的是单层显像装置发出的、并能在眼盒范围62内成像的光线。即,在反射成像部50表面,只有被能够在眼盒范围62内成像的成像光线入射的区域才会作为放大成像区域或定向成像区域。
本公开的实施例中,该抬头显示系统还包括处理器,配置为确定需要显示的目标信息,以及确定需要由哪个显像装置来显示该目标信息。例如,处理器可以配置为从定向显像装置30、显像装置组内的多个单层显像装置中选取目标显像装置,并控制目标显像装置显示目标信息,从而使得反射成像部50在相应的成像位置处形成虚像,从而可以在成像区域内显示目标信息,供驾驶员观看。例如,当前需要在定向成像区域内显示车速,即可以将车速作为目标信息;而由于定向显像装置30可以使得用户在该定向成像区域内观看到像,故可以将定向显像装置30作为目标显像装置。
需要说明的是,本公开实施例中的“在成像区域内显示目标信息”指的是驾驶员可以通过该成像区域观看到目标信息,使得从驾驶员的角度看起来是在成像区域内显示了该目标信息,但该目标信息所对应的虚像实质上位于反射成像部50的外部,例如图4中的成像位置(包括第一放大成像位置111、第二放大成像位置121、第三放大成像位置131、定向成像位置301)处。本公开实施例中与“在成像区域内显示目标信息”相同或相似的描述(例如后续的“在目标位置显示目标信息”等)均只为了方便描述,并不用于限定反射成像部50的成像区域等本身可以显示目标信息。例如,成像区域是定向成像区域或者放大成像区域中的一部分。
本公开实施例提供的一种抬头显示系统,利用显像装置组内具有不同物距的多个单层显像装置,能够在与反射成像部不同距离的多个成像位置处成像,选取像距与物体最近的显像装置进行贴合,减小视差。并且,方向控制元件32可以将不同入射角度的定向成像光线会聚至预设区域内,并弥散至眼盒范围内,能够提高定向成像光线的亮度,还可以具有较大的可视范围;该定向显像装置30可以大范围设置,从而在反射成像部表面形成较大面积的定向成像区域,实现大范围成像。处理器选取合适的显像装置作为目标显像装置,并控制目标显像装置在反射成像部表面显示目标信息,使得反射成像部可以显示较大范围和/或或多层次的像,能够提高抬头显示系统的显示效果。
在上述实施例的基础上,该处理器还可以配置为确定目标位置,目标位置为反射成像部具有的用于显示所述待显示的目标信息的位置;根据目标位置确定目标显像装置;以及控制目标显像装置在目标位置处显示待显示的目标信息。例如,处理器可以确定需要显示的目标信息,其还可以确定目标位置,即确定在反射成像部50上显示目标信息的位置;例如,该处理器还可以确定包含目标位置的成像区域,并将与成像区域相对应的显像装置作为目标显像装置,并控制目标显像装置在目标位置显示目标信息;该成像区域为成像光线能够入射至反射成像部50表面的区域。
例如,若深度不同的成像层在反射成像部50上的成像区域一致时,可以进一步通过像距来判断目标显像装置。
本公开的一些实施例中,每个目标信息均具有相对应的目标位置,该目标位置可以为预先设置的,或者,也可以是基于当前的实际场景而确定的位置。例如,若该目标信息为车速,且预先设置在反射成像部50的左下方显示车速,可以直接将反射成像部50左下方的相应位置作为目标位置;或者,当前外界存在行人,需要形成与该行人位置相对应的图形来提醒驾驶员,该图形可以为目标信息,反射成像部50上需要显示该目标信息的位置即为目标位置;例如,可以将行人投影至反射成像部50上的位置作为目标位置。例如,该目标位置可以为一个位置点,也可以为一个位置范围,具体可基于实际情况而定。
本公开的一些实施例中,不同的显像装置对应反射成像部50上不同的成像区域,若该目标位置位于某个成像区域内,可以将与该成像区域相对应的显像装置作为目标显像装置,基于该目标显像装置即可在目标位置处显示相应的目标信息。例如,该目标位置位于与第一单层显像装置11相对应的放大成像区域内,可以将第一单层显像装置11作为目标显像装置。例如,若不同的成像区域有交集,且该目标位置位于多个成像区域中时,可以从中选择一个成像区域;例如,可以随机选择成像区域,也可以基于预先设置的选择规则进行选择。
例如,该抬头显示系统基于增强现实(Augmented Reality,AR)原理能够以贴合的方式进行显示。例如,该平视显示系统还可以基于混合增强现实(Mixed Reality,MR),混合增强现实是增强现实技术的一种实施方式。例如,“贴合”可以是,用户在观察抬头显示系统形成的图像时,该图像可以与外界物体配合显示。
例如,在外界对象投影(或映射)至反射成像部50上的投影位置位于定向成像区域内时,将外界对象作为目标对象;该定向成像区域为定向显像装置30发出的定向成像光线能够入射至反射成像部50表面的区域。同时,将位于定向成像区域内的投影位置或者投影位置的边缘作为目标位置。
例如,投影位置的边缘包括投影位置的轮廓和/或投影位置的周边,例如,周边可以是靠近轮廓的外侧位置。
本公开的实施例中,外界对象为位于反射成像部50外侧的事物,包括道路路面、指示标等静止的物体,也可以包括机动车、行人、动物、非机动车等可移动的物体。外界对象可以投影映射至反射成像部50上,例如,外界对象沿朝着眼盒范围62的方向可以投影映射至反射成像部50的某个位置,该位置即为投影位置,即外界对象、投影位置、眼盒范围三者共线,使得驾驶员在眼盒范围处可以透射该投影位置观看到外界对象。例如,该处理器还可以配置为:在该投影位置位于定向成像区域内,将定向显像装置30作为目标显像装置;并且,也可以将该外界对象作为可以进行AR显示的目标对象;例如,将位于放大成像区域内的投影位置或者投影位置的边缘作为目标位置,进而可以控制目标显像装置(即定向显像装置30)在该目标位置处显示目标信息,由于该目标位置与外界对象的投影位置相一致,可以使得外界对象、反射成像部50上显示的目标信息、眼盒范围三点共线,故眼盒范围处的驾驶员可以观看到与外界对象相贴合的目标信息(例如 将外界对象框出来等),可以更有效地提醒驾驶员。
例如,该处理器还可以配置为:在外界对象投影(或映射)至反射成像部50上的投影位置位于放大成像区域内时,将外界对象作为目标对象,并确定参照物(例如交通工具,例如车辆,例如HUD设备或采集设备等)与目标对象之间的目标距离;该放大成像区域为显像装置组发出的成像光线能够入射至反射成像部50表面的区域;并且,将位于放大成像区域内的投影位置或者投影位置的边缘作为目标位置;根据目标距离从显像装置组内选取目标显像装置,例如将与目标距离大小相对应的像距作为目标像距,并将与目标像距相对应的单层显像装置作为目标显像装置;例如,像距为凹面反射元件20所成的、单层显像装置的虚像与凹面反射元件20之间的距离;或者该虚像与反射成像部50之间的距离,也可以等效成虚像到眼盒范围62的距离。
需要注意的是,本公开一些实施例中,外界对象投影(或映射)至反射装置的投影位置,可以是用户在使用平视显示系统和/或交通工具时,由观察区域(例如,眼盒范围)内观察外界对象时,外界对象落在反射装置50上的投影区域。例如,外界对象和眼盒范围连线,其与反射装置50相交的部分,就可以认为是外界物体在反射装置上的投影位置。
本公开的一些实施例中,与外界对象的投影位置位于定向成像区域内相似,该处理器还可以配置为:在外界对象的投影位置位于放大成像区域内时,将该外界对象作为进行AR显示的目标对象,并可以将显像装置组内相应的单层显像装置作为目标显像装置。由于显像装置组内包含多个单层显像装置,本公开实施例中基于外界对象与该抬头显示系统之间的距离(即目标距离)来确定哪个单层显像装置为目标显像装置,例如,该目标距离具体可以简化为外界对象与反射成像部50之间的距离。
例如,由于显像装置组内不同的单层显像装置具有不同的物距,基于成像规律可知,不同的单层显像装置也对应有不同的像距,即单层显像装置所成的虚像与凹面反射元件20之间的距离不同,该像距可以一一映射至反射成像部50外侧不同的放大成像位置,例如图4中的第一放大成像位置111、第二放大成像位置121、第三放大成像位置131等;且像距越大,相应的放大成像位置也越远。在确定目标对象的目标距离之后,即可确定与该目标对象最近的放大成像位置,进而将与最近的放大成像位置所对应的单层显像装置作为目标显像装置。例如,若外界对象在第二放大成像位置121附近,可以将第二单层显像装置作为目标显像装置。例如,可以根据每个单层显像装置的像距分配一个距离范围,根据目标距离落入哪个距离范围来确定目标距离与哪个像距相匹配,进而确定将哪个单层显像装置作为目标显像装置。
例如,处理器还可以配置为:在外界对象投影(或映射)至反射成像部50上的投影位置位于放大成像区域内且位于定向成像区域内时,将外界对象作为目标对象,并确定参照物与目标对象之间的目标距离;并且,将投影位置或者投影位置的边缘作为目标位置;将与目标距离大小相对应的显像装置(包括定向显像装置30、第一单层显像装置11、第二单层显像装置12、第三单层显像装置13等)作为目标像距,并将与目标像距相对应的显像装置作为目标显像装置。例如,定向显像装置30的像距可以简化为定向成像位置 301与反射成像部50之间的距离。例如,定向显像装置30在反射成像部50外成虚像,这个虚像与反射成像部50之间的距离为像距,因为人眼到反射成像部50的距离是基本固定的,所以像距也可以等效成虚像到眼盒范围62的距离。
本公开的一些实施例中,基于目标对象的目标距离来确定最合适的显像装置作为目标显像装置,可以使得目标显像装置在反射成像部50外所成的虚像与目标对象之间的距离差最小,使得虚像与目标对象能够更好地贴合,能有效减小视差,并能提高增强现实显示的效果。
例如,本公开的一些实施例中,处理器还配置为:对于同一外界对象,控制定向显像装置显示外界对象的第一信息,并且控制显像装置组显示外界对象的第二信息,第一信息和第二信息包括的内容至少部分不同;或者,对于多个外界对象,利用定向显像装置显示满足第一选取条件的外界对象,并且利用显像装置组显示满足第二选取条件的外界目标。
例如,在上述实施例的基础上,参见图7所示,该抬头显示系统还包括第一透反元件21;第一透反元件21能够透射具有第一特性的光线,并反射具有第二特性的光线。例如,第一单层显像装置11设置在第一透反元件21的一侧,第二单层显像装置12和凹面反射元件20设置在第一透反元件21的另一侧,上述“一侧”和“另一侧”为相反侧;第一成像光线具有第一特性,第二成像光线具有第二特性,第一透反元件配置为透射第一成像光线并且反射第二成像光线。
本公开的实施例中,利用第一透反元件21调整显像装置组内某个单层显像装置的位置,可以使得单层显像装置发出的成像光线不会被其他单层显像装置遮挡;图7中以改变第二单层显像装置12的位置为例示出。第一透反元件21能够透射具有第一特性的光线,使得第一成像光线可以正常透射并入射至凹面反射元件20,第一单层显像装置11可以正常成像;并且,第一透反元件21还可以反射具有第二特性的光线,使得第二成像光线可以被该第一透反元件21反射,进而入射至凹面反射元件20实现成像。例如,第一特性和第二特性可以是两种不同的特性,本公开实施例中的“特性”指的是光线所具有的性质,如偏振特性、波长特性等。例如,第一透反元件能够透射第一偏振方向的偏振光线,并能反射第二偏振方向的偏振光线,且第一偏振方向与第二偏振方向互相垂直;并且,第一单层显像装置11可以发出第一偏振方向的第一成像光线,第二单层显像装置12可以发出第二偏振方向的第二成像光线,可以实现第一单层显像装置11和第二单层显像装置12无影响地成像。本公开实施例中的第一透反元件具体可以为反射式偏振镜(Reflective Polarizer Mirror,RPM)膜或双层增亮薄膜(Dual Brightness Enhancement Film,DBEF)。
例如,在另一些实施例中,第一特性和第二特性也可以为相同的特性,而第一透反元件为可透可反的介质。例如,第一透反元件对光线的透射率可以为5%~95%,第一透反元件对光线的反射率可以为5%~95%;例如,第一透反元件为半透半反介质,第一透反元件的透光率和反光率均为50%,例如,第一单层显像装置11发出的第一成像光线在 经过第一透反元件21时,一半被透射、另一半被反射,使得其中的一半第一成像光线可以透射至凹面反射元件20处;相应的,第二单层显像装置12发出的第二成像光线在到达第一透反元件21时,一半的第二成像光线可以被反射至凹面反射元件20处,从而也能够实现第一单层显像装置11和第二单层显像装置12成像。例如,第一透反元件对光线的透射率约为30%,反射率约为70%,第一像源11发出的第一成像光线在经过第一透反元件21时,约30%被透射、约70%被反射,使得约30%的第一成像光线可以透射至曲面镜20处;相应的,第二像源12发出的第二成像光线在到达第一透反元件21时,约70%的第二成像光线可以被反射至曲面镜20处,可实现第一像源11和第二像源12成像。
例如,本领域技术人员可以理解,第一成像光线具有第一特性指的可以是该第一成像光线只具有第一特性;或者,该第一成像光线的部分特性为第一特性,其也可以具有其他特性,甚至也可以具有第二特性。如上段所述的例子,若第一单层显像装置11可以发出是自然光的第一成像光线,该第一成像光线可以分解为第一偏振特性的偏振光线和第二偏振特性的偏振光线,第一成像光线可以同时具有第一特性和第二特性,例如,第一成像光线中的第一特性部分的光线仍然可以透射第一透反元件21,第一成像光线的一部分仍然可以入射至凹面反射元件20,不会影响第一单层显像装置11成像。由于光线可以被分解,本公开实施例中的透反元件(如第一透反元件21,以及后续的第二透反元件22等)可以透射某特性的光线指的是该透反元件可以只能透射该特性的光线、或者能够透射该特性的部分分量的光线;相应的,透反元件能够反射某特性的光线也具有类似的含义。例如,第一透反元件21可以透射水平偏振光并反射垂直偏振光,若第一成像光线是偏振方向与水平方向呈45度角的光线,该第一成像光线可以分解为水平偏振光和垂直偏振光,第一成像光线中的水平偏振光可以透射该第一透反元件21,也可认为是“第一透反元件21能够透射具有第一特性的光线”。例如,本公开实施例中的第一特性和第二特性可以为同类的特性,例如都是偏振特性,也可以为不同类的特性,例如第一特性为一种偏振特性,而第二特性为一种波长特性,具体可基于所选用的透反元件确定。
例如,在一些实施例中,该抬头显示系统还包括第二透反元件22,且显像装置组还包括第三单层显像装置13;第三单层显像装置13配置为发出入射至凹面反射元件20的、具有第三特性的第三成像光线;第三单层显像装置13对应的第三物距与第一物距和第二物距均不同,第三物距为第三成像光线从第三单层显像装置13到凹面反射元件20的传播路径长度或者第三单层显像装置13到凹面反射元件20的距离。
例如,第二透反元件22能够透射具有第一特性的光线,并反射具有第三特性的光线;第一透反元件21还能够透射具有第三特性的光线。例如,第二透反元件22设置在第一单层显像装置11与第一透反元件21之间,且第三单层显像装置13与第一透反元件21设置在第二透反元件22的同一侧;具体可参见图8所示。
或者,在另一些实施例中,第二透反元件22能够透射具有第二特性的光线,并反射具有第三特性的光线;第一透反元件21还能够反射具有第三特性的光线;例如,第二透反元件22设置在第二单层显像装置12与第一透反元件21之间,且第三单层显像装置13 与第一透反元件21设置在第二透反元件22的同一侧;具体可参见图9所示。
本公开的实施例中,第三单层显像装置13的物距(例如,第三物距)与第一单层显像装置11和第二单层显像装置12的物距也均不相同,从而使得三个单层显像装置可以在反射成像部50外侧不同位置成像,例如可以在图4所示的三个放大成像位置111、121、131处分别成像,从而实现多层次成像。例如,第三成像光线所具有的第三特性可以是与第一特性和第二特性均不相同的其他特性。
如图8所示,假设第二透反元件22可以透射第一偏振方向的光线并反射第三偏振方向的光线,并且,第一透反元件21可以透射第四偏振方向的光线并反射第二偏振方向的光线;例如,第一偏振方向、第三偏振方向均不与该第四偏振方向垂直。第一单层显像装置11发出的第一成像光线具有第一偏振方向,该第一成像光线可以透射第二透反元件22并入射至第一透反元件21;由于第一偏振方向与第四偏振方向不垂直,故该第一成像光线可以分解出一部分第四偏振方向的光线,使得该部分光线能够透射第一透反元件21,第一成像光线中的一部分能够透射该第一透反元件21,能够透射第四偏振方向光线的第一透反元件21也可以看做是能够透射第一偏振方向的光线(例如,能够透射第一特性的光线),第一透反元件21透射其中的一部分;同理,第三单层显像装置13发出的第三成像光线具有第三偏振方向,该第三成像光线到达第一透反元件21时也可以透射一部分,能够透射第四偏振方向的分量,故该第一透反元件21也可以透射第三特性的光线。并且,该第二单层显像装置12发出的第二成像光线具有第二偏振方向,其可以被第一透反元件21反射,进而可以实现三个单层显像装置分别成像。
例如，在一些实施例中，第一特性、第二特性和第三特性为三个不同波段。例如，图8中，第二透反元件22可以透射第一波段的光线并反射第三波段的光线，第一透反元件21可以反射第二波段的光线并透射其他波段（包括第一波段和第三波段）的光线，基于该两个透反元件也可以将三个单层显像装置发出的成像光线入射至凹面反射元件20，进而分别实现成像。图9所示的成像原理与图8的成像原理基本类似，图9中选用不同性质的透反元件，第一透反元件21可以透射第一特性的光线，并能反射第二特性和第三特性的光线，而第二透反元件22可以透射第二特性的光线，并反射第三特性的光线。此处不对图9所示的方案进行详述。
例如,需要说明的是,本公开实施例中的三个单层显像装置具有不同的物距,可以是成像光线传播至凹面反射元件20的传播路径长度不同,该“传播路径长度”为光线从起点传播到终点的路径长度,若光线直接从起点入射至终点,该传播路径长度可以为起点与终点之间的距离;若光线经过一次或多次反射后才入射至终点,该传播路径长度可以为光线依次到达每个反射点的长度之和。如图9中,第一单层显像装置11发出的第一成像光线可以直接入射至凹面反射元件20,故第一物距可以为第一单层显像装置11与凹面反射元件20之间的距离;而第二单层显像装置12发出的第二成像光线首先到达第一透反元件21,经第一透反元件21反射后才入射至凹面反射元件20,故第二单层显像装置12的第二物距可以是第二单层显像装置12与第一透反元件21之间的距离、再加上第 一透反元件21与凹面反射元件20之间的距离。相应的,第三单层显像装置13的第三物距可以是第三单层显像装置与第二透反元件22之间的距离、第二透反元件22与第一透反元件21之间的距离、第一透反元件21与凹面反射元件之间的距离三者之和。
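按照上述传播路径长度的定义，物距可按成像光线依次经过的各反射点分段求和得到；下面给出一段仅作示意的Python代码，其中的坐标均为假设值。

```python
# 仅作示意：物距按成像光线从单层显像装置出发、依次经过各反射点到达凹面反射元件的
# 各段路径长度之和计算。坐标均为假设值（单位：米）。
import math

def propagation_path_length(points):
    """points: 成像光线依次经过的点序列（起点为单层显像装置，终点为凹面反射元件）。"""
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

# 例如第三成像光线：第三单层显像装置 -> 第二透反元件22 -> 第一透反元件21 -> 凹面反射元件20
third_object_distance = propagation_path_length([(0.0, 0.0), (0.10, 0.0),
                                                 (0.10, 0.08), (0.25, 0.08)])
```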
例如,在一些实施例中,为了减小该抬头显示系统的体积,可以通过反射镜组中的平面反射元件23改变显像装置组内单层显像装置与凹面反射元件20之间的距离,从而缩小抬头显示系统的体积。如图10所示,该抬头显示系统还可以包括反射镜组,反射镜组包括一个或多个平面反射元件23;平面反射元件23配置为将显像装置组发出的成像光线反射至凹面反射元件20。
本公开实施例中,平面反射元件23设置在成像光线的传播路径上,用来改变该传播路径,从而能够以反射成像光线的方式将成像光线传输至凹面反射元件20。
例如,反射镜组可以包括一个平面反射元件23,该平面反射元件23配置为将显像装置组内的至少一个单层显像装置(例如每个单层显像装置)发出的成像光线反射至凹面反射元件20。例如,多个单层显像装置可以共用一个平面反射元件23,如图10所示,第一单层显像装置11和第二单层显像装置12共用一个平面反射元件23。
或者,在另一些实施例中,反射镜组可以包括多个平面反射元件23(例如,至少两个平面反射元件23),显像装置组包括多个单层显像装置,多个平面反射元件配置为将多个单层显像装置发出的成像光线反射至凹面反射元件。例如,多个平面反射元件23与显像装置组内的多个单层显像装置一一对应;每个平面反射元件23配置为将相对应的单层显像装置发出的成像光线反射至凹面反射元件20。或者,也可以两个平面反射镜23对应反射三个单层显像装置发出的成像光线等,本公开的实施例对多个平面反射镜与多个单层显像装置的对应关系不做限定。
本公开的一些实施例中,不同的单层显像装置可以分别使用不同的平面反射元件23;如图11和图12所示,第一单层显像装置11和第二单层显像装置12分别使用与其相对应的平面反射元件23;例如,可以设置一个总单层显像装置,通过在不同的位置设置平面反射元件23来改变该总单层显像装置不同区域所对应的像距,从而可以将该总单层显像装置分为多个单层显像装置,如图11中,总单层显像装置分为第一单层显像装置11和第二单层显像装置12,且第一单层显像装置11和第二单层显像装置12所对应的平面反射元件23位于不同的位置,使得第一单层显像装置11的第一物距和第二单层显像装置12的第二物距不同。
例如,在抬头显示系统包含透反元件时,也可以基于平面反射元件23来改变光路。具体的,图7至图9所示的实施例在增加平面反射元件23时,其结构可相应参见图13至图15所示。
例如，在上述任一实施例的基础上，本公开实施例中的方向控制元件32可以为朝向预设区域61设置的光线聚集元件，如图3所示。或者，如图16所示，方向控制元件32包括至少一个光线聚集元件321；至少一个光线聚集元件321设置在光源装置与弥散元件33之间，配置为将光源装置包括的不同的光源发出的光线会聚，例如，光线聚集元件321配置为将不同的光源发出的光线会聚至同一个预设区域61。
例如,该方向控制元件32包括准直元件322;准直元件322配置为将其对应的光源发出的光线的出射方向调整至预设角度范围内,并将调整后的光线通过至少一个光线聚集元件发射至弥散元件33。例如,该预设角度范围可以为0度~35度;例如,准直光线可以是平行或基本平行的光线。
由于对于光线聚集元件而言,平行/准直的光线是易于调控的,例如平行光经过凸透镜就会实现聚焦;而对于杂乱无章的光线,光线聚集元件对它的调控作用比较差。因此本公开的实施例通过增加准直元件,将光线尽量调整为基本平行,可保证光线易于调控。
例如，当方向控制元件32包含准直元件322时，该光线聚集元件321可以设置在准直元件322与弥散元件33之间；该光线聚集元件321配置为将不同的光线会聚至同一个预设区域61。例如，可以不特殊设置方向控制元件32的朝向，通过光线聚集元件321也可以将不同的光线会聚至一个预设区域61。
例如,如图16所示,光线聚集元件321可以对应设置多个准直元件322。例如,准直元件322为准直透镜,该准直透镜包括凸透镜、凹透镜、菲涅尔透镜、或以上几种透镜组合中的一种或多种,该透镜组合具体可以是凸透镜与凹透镜的组合,菲涅尔透镜与凹透镜的组合等;或者,所述准直元件322为准直膜,配置为将光线的出射方向调整至预设角度范围内,一个准直元件322就可以将多个甚至所有光源31发出的光线准直。例如,准直元件322与光源位置之间的距离为所述准直元件322的焦距,可以是将光源设置在准直元件322的焦点处。
例如,参见图2和图16所示,方向控制元件32还包括反射元件35;反射元件35配置为将光源发出的入射光反射至弥散元件33。
例如，在一些实施例中，该反射元件35包括灯杯。方向控制元件32的结构，可以是按光源装置、灯杯、光线聚集元件的顺序依次设置，可以认为光源装置发出的光线首先经过灯杯，再传输至光线聚集元件。
例如,在一些实施例中,准直元件可以设置在光线聚集元件与光源装置之间,可以设置在灯杯内,也可以设置在灯杯外,准直元件将光源装置发出的至少部分光线进行准直,准直后的光线传输至光线聚集元件。
例如,该灯杯为由反光面围成的壳体,例如中空壳体,例如,灯杯的开口方向朝向弥散元件33;灯杯远离该开口的底部配置为设置灯杯对应的光源。例如,灯杯的内壁(例如,反射元件35的凹槽内壁)即为灯杯的反光面。反光面可以将光源发出的至少部分角度较大的光线反射,反射后的光线会聚拢,可以提高光源发出的光线的利用率。
例如,中空壳体的反光面的形状,包括曲面形状(如抛物面形状)或自由曲面形状、四棱台形状等中的至少之一。例如,灯杯的开口和端部的形状包括圆形、椭圆形、四边形等形状,开口和端部的形状可以相同也可以不同。
例如,方向控制元件32也可以包括准直元件322;在一些示例中,准直元件322可以设置在所述灯杯的内部,且所述准直元件322的尺寸小于或等于所述灯杯的开口大小; 准直元件322配置为将所述灯杯内的光源发出的部分光线进行准直后发射至弥散元件33。
或者,在另一些实施例中,灯杯是实心灯杯,实心灯杯包括实心透明部件,例如灯杯为具有反光面351的实心透明部件,实心透明部件的折射率大于1;实心灯杯的出光口朝向弥散元件33;这里的“朝向”包括直接朝向(中间未设置其他元件)和间接朝向(中间设置有其他元件)。实心灯杯远离开口的端部配置为设置实心灯杯对应的光源,以使得光源发出的至少部分光线射向所述实心灯杯的表面时发生全反射。例如,实心灯杯的具体结构可参见图17和图18所示。例如,实心灯杯的出光口指的是实心灯杯反光面351的开口方向。例如,实心灯杯的表面可以是实心透明部件与外界(例如,空气)之间的界面,光线射向外界时,至少部分满足全反射角度的光线可以在此界面发生全反射。
例如,实心灯杯的表面形状,包括曲面形状(如抛物面形状)或自由曲面形状等至少之一。
例如,可以将准直元件322集成在实心灯杯上。实心灯杯在端部设有空腔,空腔靠近实心灯杯的出光口的一面为凸向端部的凸面。参见图17所示,实心透明部件在远离实心灯杯开口的端部设有空腔352,该空腔352靠近实心灯杯开口的一面为凸面353。或者,实心灯杯在出光口设有开孔,开孔的底面为凸向出光口的凸面,如图18所示,实心透明部件在靠近实心灯杯开口的端部的中间位置设有开槽354(上述开孔的示例),开槽354的底面为凸面355。
例如,本公开的一些实施例中,光源发出的光线射向灯杯的反光面(例如,中空壳体)或表面(例如,实心透明部件),发生反射或全反射中的至少一种;通过控制反光面或表面的形状(例如,抛物面形状),反射后的光线也会调整为准直或接近准直的光线,提升了准直效果。
本公开的一些实施例中,空腔352的凸面353或开槽354的凸面355均可配置为对光源发出的光线进行准直,例如,凸面353或凸面355相当于准直元件322。凸面353或凸面355均设置在实心透明部件的中间位置,且凸面353或凸面355的尺寸小于实心灯杯的开口大小;凸面353或凸面355配置为将实心灯杯内的光源发出的部分光线进行准直后发射至弥散元件33。如图17所示,将凸面353设置在实心灯杯尾端的空腔内,该凸面353可以形成一个凸透镜,对射向该凸面353的光线进行准直。或者,参见图18所示,实心透明部件的中间位置设有开槽354,且开槽354的底面为凸面355,实心灯杯的凸面355配置为将实心灯杯反光面351不能反射的光线进行准直,其他出射角度较大的光线在实心灯杯内至少发生全反射后再准直射出实心灯杯。例如,实心灯杯的材质为折射率大于1的透明材质,比如高分子透明材质、玻璃等,以实现全反射。
例如,在上述任一实施例的基础上,参见图19B所示,该抬头显示系统还包括信息采集装置200,该信息采集装置200与处理器100通信连接,例如以有线或者无线的方式相连;信息采集装置200配置为采集本地信息和周围信息,并将采集到的本地信息和周围信息发送至处理器100。例如,该处理器100可以配置为:获取本地信息和周围信息, 根据本地信息和当前周围信息生成目标信息。
本公开的实施例中,信息采集装置200可以采集与交通工具当前驾驶状态相关或与驾驶员相关的本地信息,也可以采集交通工具外界周围的当前周围信息,使得处理器100可以基于该本地信息和当前周围信息生成相应的目标信息。例如,该信息采集装置具体可以包括图像采集设备、雷达(例如车载雷达)、红外传感器、激光传感器、超声波传感器、转速传感器、角速度传感器、GPS(Global Positioning System,全球定位系统)、V2X(Vehicle to X,表示车对外界的信息交换)系统、ADAS(Advanced Driving Assistant System,高级驾驶辅助系统)中的一种或多种。例如,不同的信息采集装置基于其需求可以安装在不同的位置,此处不做赘述。
例如,在上述实施例的基础上,本公开实施例提供的抬头显示系统可以设置在交通工具上,基于该交通工具的速度来确定需要显示的目标信息。例如,信息采集装置所采集的本地信息包括本地速度信息,该本地速度信息可以表示交通工具的速度;例如,信息采集装置还可以监测交通工具外部的对象,例如,外界对象,并确定参照物(例如交通工具,例如车辆)与外界对象之间的距离。例如,该信息采集装置可以包括速度传感器、或者设置在车轮上的转速传感器,进而可以确定相应的本地速度信息;或者,在该交通工具为车辆时,也可以通过车辆的数据传输系统,如车载自动诊断系统OBD(On-Board Diagnostics)来读取车辆的车速信息,进而可以确定本地速度信息;或者,通过设置在交通工具内部的辅助装置,如行车记录仪、电子狗、智能手机等设备自带的车速测量功能来测量车速,进而确定该交通工具的本地速度信息。例如,该信息采集装置还可以包括图像采集设备、车载雷达、或距离传感器(如红外距离传感器、激光距离传感器、超声波距离传感器等)等,从而可以确定外界对象与该交通工具之间的当前距离。
例如,在一些实施例中,在处理器100获取到本地速度信息和参照物与外界对象之间的距离之后,参见图20所示,处理器100根据本地信息和周围信息生成目标信息包括:
步骤S101:根据本地速度信息确定安全间距,判断参照物与外界对象之间的距离是否大于安全间距。
本公开的实施例中,安全间距为交通工具行驶时的安全间距的临界值,可以预先设置车速与安全间距之间的对应关系,基于该对应关系,可以将当前的本地速度信息映射为相应的安全间距。例如:当车速v≥60km/h时,安全间距S在数字上等于车速v,如车速为110km/h,安全间距S可以为110米;当40km/h≤车速v≤60km/h时,安全间距S=50m;当20km/h≤车速v≤40km/h,安全间距S=30m;当车速v≤20km/h,安全间距S=15m等。也可以采用其他的对应关系,本公开的实施例对此不做限定。另外,本公开实施例中的外界对象可以包括交通工具外部的其他车辆、行人、动物、非机动车等,也可以包括道路、指示标等静止的物体。对于不同的外界对象,可以采用不同的对应关系确定安全间距。
步骤S102：在参照物与外界对象之间的距离大于安全间距时，确定参照物当前处于正常状态，并将相应的第一提示信息作为目标信息，第一提示信息包括空集、第一提示图像和第一提示视频中的一项或多项。
本公开的实施例中,若参照物(例如交通工具)与外界对象之间的距离大于安全间距,说明外界对象距离交通工具较远,此时比较安全,例如,交通工具可以看作是处于正常状态,此时可以将主要起到提示作用的第一提示信息作为目标信息。该第一提示信息可以为空集,例如,目标信息为空,该抬头显示系统可以不显示任何信息;或者,该第一提示信息可以为第一提示文字,例如“安全间距,请继续保持”等;该第一提示信息还可以为第一提示图像,例如浅颜色的图像等;该第一提示信息也可以为第一提示视频,例如鼓掌动画等。
本公开的实施例中,提示图像或提示视频中的内容可以是文字、图形、符号、动画、画面等中的至少之一。
步骤S103:在参照物与外界对象之间的距离不大于安全间距时,确定参照物当前处于预警状态,并将相应的第一预警信息作为目标信息,第一预警信息包括第一预警图像和第一预警视频中的一项或多项。
本公开的实施例中,若外界对象与交通工具之间的距离不大于安全间距时,说明该外界对象距离交通工具较近,此时存在交通事故的风险较大,故此时可以作为一种预警状态,进而可以将在预警状态需要显示的第一预警信息作为目标信息予以显示。例如,该第一预警信息可以包括第一预警文字,例如“与前方车辆距离太近,请减速”;第一预警信息也可以包括第一预警图像,例如,显示红色叹号的图形,或者在与该外界对象相对应的位置处(例如,目标位置)突出显示与外界对象相匹配的图形;第一预警信息也可以包括第一预警视频,例如显示两车相撞的动画等。
本公开的实施例中,预警图像或预警视频中的内容可以是文字、图形、符号、动画、画面等中的至少之一。“不大于”可以是小于或等于,或者可以是小于。
例如,本公开实施例中的安全间距可以包括前安全间距、后安全间距、侧安全间距中的一项或多项。若外界对象位于参照物前方,在参照物与外界对象之间的距离不大于前安全间距时,可以确定参照物当前处于预警状态;若外界对象位于参照物侧方,在参照物与外界对象之间的距离不大于侧安全间距时,确定参照物当前处于预警状态;若外界对象位于参照物后方,在参照物与外界对象之间的距离不大于后安全间距时,可以确定参照物当前处于预警状态。例如,在相应的情况可以将合适的第一预警信息作为目标信息,例如,若右侧方的外界对象距离交通工具较近,可以将“请与右侧车辆保持距离”等作为目标信息。
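以上述步骤S101至S103为例，下面给出一段仅作示意的Python代码：按说明中示例的车速与安全间距的对应关系计算安全间距，并据此判断参照物处于正常状态还是预警状态；其中的对应关系和提示/预警文字均取自上文的举例，属于示例假设而非限定。

```python
# 仅作示意：按上文示例的车速-安全间距对应关系计算安全间距，并判断参照物状态。
# 对应关系与提示/预警文字均为示例假设。
def safe_gap(speed_kmh):
    if speed_kmh >= 60:
        return float(speed_kmh)      # 安全间距在数值上等于车速，如110km/h对应110米
    if speed_kmh >= 40:
        return 50.0
    if speed_kmh >= 20:
        return 30.0
    return 15.0

def classify_state(distance_m, speed_kmh):
    gap = safe_gap(speed_kmh)
    if distance_m > gap:
        return "正常状态", "安全间距，请继续保持"        # 第一提示信息
    return "预警状态", "与前方车辆距离太近，请减速"      # 第一预警信息

state, target_info = classify_state(distance_m=50.0, speed_kmh=110)   # -> 预警状态
```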
例如,在确定目标信息的同时,该处理器100还可以确定相应的目标位置,并确定需要显示该目标信息的显像装置,例如,目标显像装置,进而通过目标显像装置可以在反射成像部50上的目标位置处显示出该目标信息。如上述其他实施例所述,该目标位置可以预先设置,也可以基于外界对象在该反射成像部50上的投影位置来确定,以实现贴合显示。
例如,当参照物处于不同的状态时,例如,处于预警状态或正常状态时,可以采用不同的显示方式来显示目标信息。例如,在参照物当前处于预警状态时,处理器100可以控制目标显像装置以正常方式或第一突出显示方式显示目标信息,该第一突出显示方式包括动态显示(例如滚动显示、跳动显示、闪烁显示)、高亮显示、以第一颜色显示中的一种或多种。在参照物当前处于正常状态时,处理器100可以控制目标显像装置以正常方式或第二突出显示方式显示目标信息,该第二突出显示方式包括以第二颜色显示。
本公开的实施例中,在参照物为预警状态或正常状态时,均可以以相同的方式(例如,正常方式)显示该目标信息,在不同的状态下所显示的目标信息不同,该正常方式包括静止显示、滚动显示、跳动显示、闪烁显示、高亮显示等中的一种或多种。
或者,在不同的状态下,不仅显示的目标信息不同,显示方式也可以不同。例如,在预警状态下,可以以第一颜色(例如红色)显示“与前方车辆距离太近,请减速”;在正常状态下,抬头显示系统可以以第二颜色(例如绿色)显示“当前安全,请继续保持”。或者,在不同的状态下,也可以以不同的显示方式显示相同的目标信息。例如,外界对象为行人,且抬头显示系统当前需要以AR方式标识出该行人,例如以矩形框来标出行人所在位置;若当前为预警状态,可以以第一颜色(例如红色)显示该矩形框,例如显示红色的矩形框;若当前为正常状态,可以以第二颜色(例如绿色)显示该矩形框,例如显示绿色的矩形框。
例如,本公开实施例中可以实时确定交通工具所处的状态,从而可以实时或间歇(即间隔预定时间,例如间隔10ms或者20ms等)以不同的显示方式来显示目标信息。例如,若参照物当前为预警状态,并以红色显示“请减速”的目标信息;之后驾驶员通过减速等方式调整了参照物与外界对象之间的距离使得外界对象位于安全间距之外,例如,之后为正常状态,这时可以再以绿色显示“当前行车安全”等目标信息。
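对于上述按状态选择显示方式的处理，下面给出一段仅作示意的Python代码；其中第一颜色、第二颜色以及是否闪烁等取值均为示例假设。

```python
# 仅作示意：按参照物当前状态选择目标信息的显示方式，颜色与是否闪烁等均为示例假设。
def display_style(state):
    if state == "预警状态":
        # 第一突出显示方式：动态/闪烁显示、高亮显示、以第一颜色（如红色）显示
        return {"color": "red", "blink": True, "highlight": True}
    # 第二突出显示方式：以第二颜色（如绿色）显示
    return {"color": "green", "blink": False, "highlight": False}
```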
图21以外界对象为车辆为例，示意性示出了距离本地车辆过近时的一种显示方式。如图21所示，抬头显示系统检测到前方车辆71距离本地车辆的当前距离为50m，而当前的安全间距为60m，例如，此时为预警状态，抬头显示系统可以在反射成像部50（例如，本地车辆的挡风玻璃）上显示的目标信息包括预警文字501，例如，"请减速"，该目标信息还包括框选出前方车辆71的矩形框502，该矩形框502具体可以是红色显示或高亮显示等，以加强提醒效果。例如，也可以同时显示与前方车辆71之间的距离（例如，当前检测到的目标距离），图21中将该距离"50.0m"显示在了矩形框502下方。
例如,在驾驶过程中,若外界对象为前方车辆(例如,外界对象为车辆且位于本地车辆前方),可以贴合显示该前方车辆的车速、间距等。例如,信息采集装置可以包括速度传感器、距离传感器等,基于该信息采集装置可以获取到包含前方信息的当前周围信息,该前方信息具体包括前方车辆的车速和/或与前方车辆的当前距离。例如,处理器100可以配置为将前方车辆的目标区域投影至反射成像部上的投影位置作为目标位置,并将前方信息作为目标信息,控制目标显像装置在目标位置处显示目标信息;例如,前方车辆的目标区域为前方车辆的后轮之间的空白区域或后备箱所在区域。
本公开的实施例中,在外界对象为前方车辆时,可以将前方车辆的后备箱所在区域或两个后轮之间的空白区域作为目标区域,进而基于该抬头显示系统可以在反射成像部50上显示与该目标区域贴合的信息。例如,可以在前方车辆两个后轮中间的空白区域处显示该前方车辆的车速、或与本地车辆之间的距离,从而实时方便驾驶员直观准确地确定前方车辆的相关信息,进而能够基于前方车辆的前方信息做出相应的操作反应。
本公开的实施例中,在参照物当前处于预警状态时,还可以采用其他提醒方式进行辅助提醒。例如,抬头显示系统还可以包括发声装置或者振动装置,处理器100还可以配置为:向发声装置发送预警语音,并控制发声装置播放预警语音;和/或者,向振动装置发送振动信号,控制振动装置振动;例如,在使用时,振动装置可以设置在能够接触到用户的位置。本公开的实施例中,可以在抬头显示系统中加装扬声器、或者借助交通工具上的扬声器进行语音提醒,该预警语音可以为没有具体含义的预警铃声,或者,也可以是具体的语音,如“注意!保持车距!”等。例如,可以在交通工具的方向盘或者座椅等驾驶员会直接接触的位置设置机械式振动装置,从而在预警状态下能够以振动的方式提醒驾驶员。
例如,在一种可能的实现方式中,无论参照物当前处于何种状态,也可以实时(或间歇)显示外界对象的相关信息。例如,信息采集装置200可以包括图像采集设备、车载雷达、或距离传感器(如红外距离传感器、激光距离传感器、超声波距离传感器等)等,在确定参照物与外界对象之间的距离(例如,目标距离)的同时,还确定外界对象的位置,该周围信息可以包括外界对象的位置和参照物与外界对象之间的距离。例如,处理器100可以配置为将外界对象的位置和参照物与外界对象之间的距离作为目标信息,从而可以在反射成像部50上实时显示该目标信息,进而可以实时提醒驾驶员外界对象的位置、距离等。
或者,也可以以AR显示方式直观地标识出外界对象的位置。例如,处理器100也可以配置为确定外界对象投影(或映射)至反射成像部上的投影位置,将投影位置或者投影位置的边缘作为目标位置,并指示目标显像装置在目标位置处显示预先设置的目标信息。本公开的实施例中,通过将外界对象的投影位置设为目标位置,可以在反射成像部50的相应位置处显示与外界对象一致的目标信息,从而可以直观地向驾驶员标出外界对象。例如,若外界对象为车辆,例如,可以在挡风玻璃相应位置处显示一个方框,该方框可以框出该车辆。
例如,该抬头显示系统实时将某些能够一直显示的信息作为目标信息,并显示在反射成像部50的预设位置。例如,可以实时监测交通工具四周所有外界对象的位置和距离,并生成该交通工具的鸟瞰图,该鸟瞰图中可以示意性表示交通工具前后左右每个方向的外界对象的位置,方便驾驶员可以快速查看四周的环境;例如,还可以以不同的颜色显示四周的外界对象,以表示不同的危险等级。参见图22所示,可以在反射成像部50上以鸟瞰图的形式显示本地车辆73周围其他车辆的情况,例如鸟瞰图503中显示本地车辆73左后方的后方车辆72即将超车,可以显示预警文字501,例如,“后方超车”。
在上述实施例的基础上,若外界对象为行人、非机动车等,其一般具有更高的安全优先级,例如,交通工具在行驶过程中需要优先考虑行人等的位置,避免相撞;故在外界对象为行人、非机动车等时优先进行提醒。本公开的实施例中,处理器100还可以配置为:在外界对象为重点对象时,确定参照物当前处于预警状态,并将相应的第二预警信息作为目标信息,该第二预警信息可以包括第二预警图像和第二预警视频中的一项或多项。
例如,在参照物与外界对象之间的距离小于预设距离值,且外界对象为行人、动物或非机动车辆时,将外界对象作为重点对象。或者,在外界对象朝向当前行驶线路移动,且外界对象为行人、动物或非机动车辆时,将外界对象作为重点对象。或者,在外界对象位于当前行驶线路中,且外界对象为行人、动物或非机动车辆时,将外界对象作为重点对象。或者,在当前位于对象密集区域内,且外界对象为行人、动物或非机动车辆时,将外界对象作为重点对象;对象密集区域包括学校、医院、停车场、市区中的一种或多种。或者,本地信息包括驾驶员的视线方位信息;在视线方位信息与外界对象的当前位置不匹配,且外界对象为行人、动物或非机动车辆时,将外界对象作为重点对象。
本公开的实施例中,外界对象为行人、动物或非机动车辆等需要特别注意的事物时,可以判断该外界对象是否能够作为重点对象。例如,若参照物与外界对象之间的距离小于预设距离值时,说明外界对象距离交通工具较近,此时也可以作为预警状态;例如,该预设距离值可以为预先设置的距离值,例如,其可以为上述实施例中基于车速所确定的“安全间距”。若外界对象正在向着当前行驶线路移动时,或者外界对象位于交通工具当前行驶的线路中时,说明交通工具与该外界对象相撞的可能性较大,可以作为预警状态。或者,基于GPS等可以确定该外界对象位于学校、医院等人员密集区域时,此时一般会存在数量较多的行人,故可以设为预警状态以提醒驾驶员。或者,信息采集装置还可以包括图像采集设备、红外传感器等,基于该信息采集装置确定驾驶员的视线方位信息,例如驾驶员的双眼位置、视线位置等;若视线方位信息与外界对象的当前位置不匹配,说明驾驶员当前极有可能没有注意到外界对象,可以设为预警状态以提醒驾驶员。例如,信息采集装置具体可基于眼球追踪技术来确定视线方位信息,也可采用其他技术,此处不做限定。
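按照上述判断重点对象的几类条件，下面给出一段仅作示意的Python代码；其中的字段名与条件组合均为便于说明而假设的。

```python
# 仅作示意：按上文列举的几类条件判断外界对象是否作为重点对象。
# 字段名与条件组合均为示例假设。
def is_key_object(obj_type, distance_m, preset_distance_m, moving_into_lane,
                  in_current_lane, in_dense_area, gaze_matches_object):
    if obj_type not in ("行人", "动物", "非机动车"):
        return False
    return (distance_m < preset_distance_m      # 距离小于预设距离值
            or moving_into_lane                 # 正朝向当前行驶线路移动
            or in_current_lane                  # 位于当前行驶线路中
            or in_dense_area                    # 当前位于对象密集区域（学校、医院等）
            or not gaze_matches_object)         # 驾驶员视线方位与外界对象当前位置不匹配
```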
本公开的实施例中,在确定当前为预警状态时,可以生成用于提醒驾驶员的第二预警信息,例如“前方有行人,注意避让”、“前方学校,注意行人”等,并将该第二预警信息作为目标信息。如图23所示,当检测到前方有行人74时,抬头显示系统可以在反射成像部50上显示预警文字501,例如,“注意行人”,还可以通过矩形框502将行人74突出框选出来,并通过能够表示该行人74的运动趋势的箭头504提醒驾驶员当前有行人正朝向当前行驶车道75移动。例如,可以以正常方式或第一突出显示方式显示目标信息,也可以采用语音提醒等方式进行辅助提醒,该提醒方式与上述实施例的基本相似,此处不做赘述。
例如，上述实施例中的"安全间距"还可以包括前安全间距，该前安全间距指的是参照物（例如交通工具）与位于前方的外界对象之间的安全间距。处理器100还可以配置为：在外界对象位于参照物前方、参照物与外界对象之间的距离不大于前安全间距，且前安全间距与参照物与外界对象之间的距离之差大于预设距离差值和/或处于预警状态的时长超过预设时长时，生成制动信号或减速信号，并输出制动信号或减速信号，例如输出至交通工具的驾驶系统。
本公开的实施例中,若参照物与外界对象之间的距离不大于该前安全间距,参照物当前可以为预警状态;例如,若前安全间距与参照物与外界对象之间的距离之差大于预设距离差值,或者参照物处于预警状态的时长超过预设时长,说明外界对象距离交通工具过近,或者二者之间的距离长时间处于危险范围内,处理器100可以生成制动信号或减速信号,并将制动信号或减速信号发送至交通工具的驾驶系统,从而可以对交通工具进行减速或制动,使得交通工具与外界对象之间可以保持安全间距。
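对于上述生成制动信号或减速信号的处理，下面给出一段仅作示意的Python代码；其中的阈值以及"间距差超过两倍阈值时输出制动信号"等细化规则均为示例假设，并非本公开实施例限定的实现。

```python
# 仅作示意：外界对象位于前方且距离不大于前安全间距时，若间距差过大或预警持续时间过长，
# 生成减速或制动信号。阈值以及制动/减速的细分规则均为示例假设。
def longitudinal_signal(distance_m, front_gap_m, warning_seconds,
                        gap_diff_threshold_m=10.0, warning_time_threshold_s=3.0):
    if distance_m > front_gap_m:
        return None                                        # 未处于预警状态，不输出信号
    gap_diff = front_gap_m - distance_m
    if gap_diff > gap_diff_threshold_m or warning_seconds > warning_time_threshold_s:
        return "制动信号" if gap_diff > 2 * gap_diff_threshold_m else "减速信号"
    return "仅显示第一预警信息"
```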
在上述实施例的基础上,在交通工具为车辆时,该抬头显示系统还可以监测是否车道偏移,并在偏离车道时确定存在车道偏移的问题,可以进行预警。例如,信息采集装置可以包括图像采集设备、雷达(例如车载雷达)、GPS等,基于图像采集设备等可以确定交通工具前方的车道情况,例如,车道位置信息,该车道位置信息具体可以包括交通工具当前所在车道、交通工具相邻的车道等;基于该信息采集装置可以确定交通工具所在的位置,例如,车辆位置信息。例如,处理器100可以获取到车道位置信息和车辆位置信息,参见图24所示,处理器100根据本地信息和周围信息生成目标信息可以包括:
步骤S201:根据车道位置信息和车辆位置信息确定车辆偏离当前行驶车道的偏移参数,并判断偏移参数是否大于相应的偏移阈值;偏移参数包括偏移角度和/或偏移距离。
本公开的实施例中,基于车辆位置和车道位置可以确定该车辆是否位于合适的车道内,例如,能够判断是否存在偏离。若车辆位于相应的车道内,偏移参数可以为零,偏移距离和偏移角度可以均为零或者其中一者为零;若车辆的行驶方向与车道方向不一致,需要确定相应的偏移角度,例如,车辆偏离车道的角度;若车辆可能存在偏移时,例如车辆压线,需要确定相应的偏移距离。通过比较偏移参数与预设的偏移阈值的大小可以确定当前是否偏移。
步骤S202:在偏移参数大于相应的偏移阈值时,确定参照物当前处于预警状态,并将相应的第三预警信息作为目标信息,第三预警信息包括第三预警图像、第三预警视频、优先行驶车道中的一项或多项。
本公开的实施例中,若当前的偏移参数大于偏移阈值,说明偏移角度过大和/或偏移距离过大,说明车辆存在偏移风险,例如,车辆可以看作是处于预警状态,并将相应的第三预警信息作为目标信息以提醒驾驶员。例如,该第三预警信息包括与车道偏移相关的第三预警文字、第三预警图像或第三预警视频,也可以将当前的优先行驶车道标注出来,例如,将优先行驶车道作为目标信息。例如,该优先行驶车道可以作为一个外界对象,通过确定该优先行驶车道映射到反射成像部50上的投影位置可以确定相应的目标位置,例如将该投影位置或投影位置的边缘作为目标位置,进而在反射成像部50上的目标 位置显示优先行驶车道。例如,可以在反射成像部50上显示与优先行驶车道相匹配的箭头、梯形(对应直行的优先行驶车道)、宽度逐渐变小的扇环(对应需要拐弯的优先行驶车道)等图形。例如,在反射成像部50上所显示的图形形状具体可基于优先行驶车道的映射到反射成像部50上的实际形状而定。
例如,在偏移参数大于相应的偏移阈值时,可以直接确定车辆处于预警状态;或者,在偏移参数大于相应的偏移阈值时,基于其他的本地信息来综合判断车辆当前是否车道偏移,例如,是否可以当作是预警状态。例如,信息采集装置包括速度传感器、加速度传感器、角速度传感器等,可以分别用于采集车辆速度、车辆加速度、车辆转向角度等;且基于车辆本身的系统可以确定转向灯状态,例如,可以确定转向灯是否为开启状态;例如,基于车辆速度、车辆加速度、转向灯状态等信息生成车辆状态信息,并将该车辆状态信息作为一种本地信息发送至处理器100,处理器100基于当前的偏移参数以及车辆状态信息来确定是否为预警状态。
例如,在偏移参数大于相应的偏移阈值,且满足预警条件时,确定参照物当前处于预警状态。例如,预警条件包括车辆速度大于第一预设速度值、车辆加速度不大于零、车辆的偏移角度所对应的方向相同一侧的转向灯未处于开启状态、当前为不可变道状态、偏离车道的时长大于预设的第一偏离时长阈值中的一种或多种。
本公开的实施例中,若偏移参数大于相应的偏移阈值,说明存在偏移风险,之后基于车辆状态信息判断该偏移状态是否正常,若不正常可以作为预警状态。例如,若车辆速度大于第一预设速度值或车辆加速度不大于零,说明车辆速度过快、或者车辆在偏移的情况下仍然不减速,此时可认为车辆比较危险,可以认为处于预警状态。或者,若当车辆的偏移角度所对应的方向相同一侧的转向灯未处于开启状态,例如车辆向左偏移,而左侧的转向灯未开启,也可以间接认为驾驶员当前未规范地向左侧转向,也存在较大风险,为预警状态。或者,若当前为不可变道状态,例如偏移方向所对应的车道存在其他车辆时,可以不允许变更到该车道,若驾驶员继续沿偏移方向进行变道,容易引起交通事故,故也可以当作是预警状态。或者,若偏离车道的时长大于预设的第一偏离时长阈值,可以说明该车辆长时间偏离了车道,应当提醒驾驶员。
例如,在偏移参数大于相应的偏移阈值时,某些情况为正常偏移,可以不特殊提醒驾驶员,例如,可以为正常状态,或者说此时不属于车辆偏移的情况。例如,信息采集装置所采集的本地信息还包括车辆状态信息,该车辆状态信息包括车辆速度、车辆加速度、转向灯状态、双闪信号灯状态、横摆角速度中的一项或多项。处理器100基于该车辆状态信息具体可进行如下判断:
在偏移参数大于相应的偏移阈值，且满足正常条件时，确定参照物当前处于正常状态。正常条件包括车辆速度小于第二预设速度值、车辆加速度小于零、车辆的偏移角度所对应的方向相同一侧的转向灯处于开启状态、双闪信号灯为开启状态、横摆角速度大于预设角速度阈值、偏离车道的时长小于预设的第二偏离时长阈值、驾驶员的视线方位信息与偏移角度所对应的方向相同中的一种或多种。
本公开的实施例中，若偏移参数大于相应的偏移阈值，说明存在偏移风险，但是若基于车辆状态信息确定当前是正常偏移（例如正常变道）等，可以不进行预警，可以当作正常状态。例如，若车辆速度小于第二预设速度值或车辆加速度小于零，说明车辆速度不快、或正在减速，此时风险较小，可以作为正常状态。若车辆的偏移角度所对应的方向相同一侧的转向灯处于开启状态，说明车辆当前虽然偏离了车道，但驾驶员正在向偏移方向的相同方向转向，可以是驾驶员正在正常变道、或拐弯，可以认为是正常状态。若双闪信号灯为开启状态，或者横摆角速度大于预设角速度阈值，说明该车辆因故障而需要偏离或变道，或者车辆遇到紧急情况导致紧急转向、避让等，可以不当作是车道偏移需要预警的情况，例如，对于车道偏移来说，其也可以作为一种不属于车道偏移情况的正常状态。例如，若驾驶员的视线方位信息与偏移角度所对应的方向相同，说明当前虽然车辆偏移了车道，但驾驶员注意到了偏移情况，也可以作为一种正常状态，不需要额外预警提醒驾驶员。预警状态与正常状态的判断逻辑可参见下文的示意代码。
步骤S203:在偏移参数不大于相应的偏移阈值时,确定参照物当前处于正常状态,并将相应的第三提示信息作为目标信息,第三提示信息包括空集、第三提示图像、第三提示视频、优先行驶车道中的一项或多项。
本公开的实施例中,若当前的偏移参数不大于偏移阈值,说明偏移距离不大和/或偏移角度不大,例如,说明车辆在正常行驶,车辆可以看作是处于正常状态,可以将相应的第三提示信息作为目标信息。
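针对上述步骤S201至S203以及预警条件、正常条件的判断，下面给出一段仅作示意的Python代码；其中的阈值与字段名均为示例假设。

```python
# 仅作示意：先比较偏移参数与偏移阈值，再结合车辆状态信息区分预警状态与正常状态。
# 阈值与字段名均为示例假设。
def lane_departure_state(offset_angle, offset_distance, angle_threshold,
                         distance_threshold, vehicle_state):
    if offset_angle <= angle_threshold and offset_distance <= distance_threshold:
        return "正常状态"                                   # 偏移参数不大于偏移阈值
    vs = vehicle_state
    normal = (vs["speed_kmh"] < vs["second_preset_speed"]  # 车辆速度小于第二预设速度值
              or vs["accel"] < 0                            # 车辆加速度小于零
              or vs["turn_signal_same_side"]                # 偏移方向相同一侧转向灯开启
              or vs["hazard_lights_on"]                     # 双闪信号灯开启
              or vs["yaw_rate"] > vs["yaw_rate_threshold"]  # 横摆角速度大于预设角速度阈值
              or vs["gaze_same_side"])                      # 驾驶员视线方位与偏移方向相同
    return "正常状态" if normal else "预警状态"
```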
例如,与上述确定安全间距的实施例类似,当车辆处于预警状态或正常状态等不同的状态时,可以采用不同的显示方式来显示目标信息,比如在当前处于预警状态时,处理器100可以配置为控制目标显像装置以正常方式或第一突出显示方式显示目标信息,该第一突出显示方式包括动态显示(例如滚动显示、跳动显示、闪烁显示)、高亮显示、以第一颜色显示中的一种或多种。在参照物当前处于正常状态时,处理器100可以配置为控制目标显像装置以正常方式或第二突出显示方式显示目标信息,该第二突出显示方式包括以第二颜色显示。本实施例中的显示方式与上述实施例基本类似,此处不做赘述。
例如,车辆处于正常状态具有两种情况,若偏移参数不大于相应的偏移阈值,说明车辆正常行驶且无偏移,可以确定简单的目标信息,例如显示文字“车道保持中”等。若偏移参数大于相应的偏移阈值,但属于上述正常状态的情况时,说明车辆当前虽然偏离了车道,但在正常转向等,可以以提示的方式显示相应的目标信息。例如,该抬头显示系统AR显示对应于当前车道和转向车道的图像,如投射出指向转向车道的蓝色的方向箭头,投射出与当前道路贴合的蓝色虚拟道路且投射与转向车道贴合的绿色车道;或者,可以投射出道路的简略地图,包括当前车道与转向车道,二者可以用特定的颜色、形状区分表示。例如,当前即将驶出高速路,驾驶员向右侧匝道转向,可以在反射成像部50上投射出主车道与匝道的图像并配有指向匝道的箭头;驾驶员变道超车时,反射成像部50上投射本车道与超车道的图像,并可以闪烁提醒车辆的变道轨迹。如图25所示,抬头显示系统根据该当前行驶车道75对应的车道位置信息可以确定该当前行驶车道75 为右转弯的车道,若车辆继续直行可能会导致车辆的偏移角度加大,此时可以在反射成像部50上显示预警文字501,例如,“请右转”,同时可以显示与该当前行驶车道75贴合的箭头504,从而可以直观地提醒驾驶员进行右转。或者,如图26所示,若驾驶员当前在向左变道,该车辆的偏移角度所对应的方向即为左向;若驾驶员当前未开启左转向灯,该驾驶员当前可能在违规变道,可以在反射装置50上显示告警文字501“请开左转灯”,以提醒驾驶员开启左转向灯;并且,还可以用箭头504表示车辆当前的行驶方向,提醒驾驶员当前正在向左偏移。
例如,在车辆行驶过程中,无论车辆处于预警状态还是正常状态,可以实时显示优先行驶车道。例如,处理器100配置为根据车道位置信息和车辆位置信息确定车辆的优先行驶车道,并将优先行驶车道作为目标对象;确定目标对象投影至反射成像部50上的投影位置,将投影位置或者投影位置的边缘作为目标位置,并控制目标显像装置在目标位置处显示预先设置的目标信息。
或者,也可实时显示车道的边界线;例如,处理器100配置为根据车辆位置信息和车道位置信息确定车辆的优先行驶车道,将优先行驶车道两侧的边界线作为目标对象,并将形状与边界线相匹配的图形作为目标信息;确定目标对象投影至反射成像部上的投影位置,将投影位置作为目标位置,并控制目标显像装置在目标位置处以预设颜色显示目标信息。
本公开的实施例中,可以实时确定车辆的优先行驶车道,基于该优先行驶车道的位置或优先行驶车道两侧边界线的位置确定其投影至反射成像部50上的投影位置,进而确定目标位置。由于整个车道与车辆之间的距离是逐渐变大的,该优先行驶车道不能当作一个点进行处理;可以从优先行驶车道中选取多个点作为采样点,使得抬头显示系统可以更准确地确定在反射成像部50的哪些位置来显示与优先行驶车道贴合的内容。例如,由于优先行驶车道上不同点与车辆之间的距离不同,故可以利用多个显像装置分别显示该优先行驶车道的一部分,例如用第一单层显像装置11和第二单层显像装置12分别显示优先行驶车道的一部分;或者,将优先行驶车道上的一个点作为参考点(例如将中间点作为参考点),把该参考点与车辆之间的当前距离作为目标距离,进而确定一个目标显像装置。
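对于上述在优先行驶车道上取采样点、并按各采样点的距离分别选取显像装置的处理，下面给出一段仅作示意的Python代码；其中的采样点距离与像距均为假设示例。

```python
# 仅作示意：在优先行驶车道上取若干采样点，按各采样点与车辆之间的距离分段选取
# 像距最接近的单层显像装置，实现分层贴合显示。采样点距离与像距均为假设示例。
def assign_lane_segments(sample_distances, image_distances):
    """sample_distances: 各采样点到车辆的距离；返回 {采样点距离: 选取的显像装置}。"""
    return {d: min(image_distances, key=lambda name: abs(image_distances[name] - d))
            for d in sample_distances}

segments = assign_lane_segments(
    sample_distances=[5.0, 12.0, 25.0],
    image_distances={"第一单层显像装置": 3.0, "第二单层显像装置": 8.0, "第三单层显像装置": 20.0})
```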
例如，在不同的状态下，可以以不同的显示方式来显示该优先行驶车道。例如，在正常状态下，以绿色显示该优先行驶车道；当车辆偏移时，可以以红色显示该优先行驶车道。例如，可以显示与该优先行驶车道在视觉上贴合的图形或箭头等，以引导驾驶员进行行驶。或者，在正常状态下，在反射成像部50上以绿色显示与边界线贴合的形状；当车辆偏移时，以红色显示优先行驶车道左右两侧的边界线，从而提醒驾驶员变道。
在上述实施例的基础上,处理器还可以配置为,在偏移参数大于相应的偏移阈值,且偏移参数与偏移阈值之差大于预设的偏移差值和/或处于偏移状态的时长超过预设的安全偏移时长,生成制动信号或减速信号,并将该制动信号或减速信号输出,例如输出至交通工具的驾驶系统。
本公开的实施例中,若车辆的偏移参数大于相应的偏移阈值,当前可能存在偏移风险;并且,若偏移参数与偏移阈值之差大于预设的偏移差值,或者处于偏移状态的时长超过预设的安全偏移时长,可以说明车辆当前偏移程度过大,或者车辆长时间处于偏移的情况下行驶,危险系数较高,处理器100可以生成制动信号或减速信号,并将制动信号或减速信号发送至交通工具的驾驶系统,从而可以对车辆进行减速或制动,避免车辆因偏移问题严重而引发交通事故。
在上述实施例的基础上,该抬头显示系统还可以向驾驶员提示异常道路。例如,信息采集装置200可以包括图像采集设备、雷达(例如车载雷达)等,用于采集道路异常信息;或者,也可以基于外部的其他系统(如实时交通系统等)获取道路异常信息,该道路异常信息包括障碍物位置、维修路段位置、危险路段位置、不平路段位置、事故路段位置、临时检查路段位置中的一项或多项。处理器100获取到道路异常信息时,可以将道路异常信息作为目标信息。或者,处理器100配置为根据道路异常信息确定异常位置投影至反射成像部50上的投影位置,将投影位置或者投影位置的边缘作为目标位置,并指示目标显像装置在目标位置处显示与道路异常信息相对应的目标信息。
本公开的实施例中,若交通工具附近存在道路异常的情况,基于信息采集装置200或其他系统可以获取到相应的道路异常信息,且处理器100可以直接将该道路异常信息作为目标信息显示在反射成像部50上,例如目标信息为“前方一百米有交通事故”等;或者,也可以以AR显示的方式在反射成像部50上标识出异常道路的位置。例如,若信息采集装置检测到道路路面有障碍物(例如石子、冰面、坑洞等),可以确定障碍物的位置,例如,异常位置,并将该异常位置投影到反射成像部50上的投影位置作为目标位置,进而可以在该目标位置处显示相应的目标信息(例如与障碍物形状相匹配的图形等),从而可以直观地向驾驶员显示障碍物的位置,能够更有效地提醒驾驶员。
例如,在雨天、雾天、或者夜间等能见度较低的环境中,该抬头显示系统也可以对驾驶员进行提醒。例如,该周围信息还包括当前能见度,在该当前能见度小于预设的能见度阈值时,说明当前能见度较低,驾驶环境较恶劣,信息采集装置200包括车载雷达、距离传感器等在能见度低的环境下也可以正常检测外界对象的部件,基于该信息采集装置200可以采集到外界对象的位置信息;之后,处理器100配置为将外界对象的位置信息作为目标信息;或者,处理器100配置为确定外界对象投影至反射成像部50上的投影位置,将投影位置或者投影位置的边缘作为目标位置,并控制目标显像装置在目标位置处显示预先设置的目标信息。
本公开的实施例中,外界对象的位置信息包括外界对象的位置以及与外界对象与交通工具(例如车辆)之间的距离,该抬头显示系统检测到外界对象时,可以显示外界对象的位置信息;或者,以AR方式更直观地标出外界对象的位置,从而通知驾驶员外界对象的方位,避免相撞。例如,也可以将道路作为外界对象,根据实时道路状况和联网的道路信息确定道路的位置,从而可以在反射成像部50上对行驶路线进行辅助显示,例如在正确行驶道路上标示辅助线和转向标志等。
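对于上述低能见度场景的处理，下面给出一段仅作示意的Python代码：当前能见度低于预设的能见度阈值时，将检测到的外界对象的位置信息整理为目标信息；其中的阈值与数据结构均为示例假设。

```python
# 仅作示意：当前能见度低于预设的能见度阈值时，将车载雷达/距离传感器检测到的外界对象
# 的位置信息整理为目标信息。阈值与数据结构均为示例假设。
def low_visibility_targets(visibility_m, detected_objects, visibility_threshold_m=100.0):
    if visibility_m >= visibility_threshold_m:
        return []                                    # 能见度正常，不额外生成目标信息
    return [f"{obj['type']}：方位{obj['bearing_deg']}度，距离{obj['distance_m']}米"
            for obj in detected_objects]

targets = low_visibility_targets(
    visibility_m=60.0,
    detected_objects=[{"type": "前方车辆", "bearing_deg": 0, "distance_m": 35.0}])
```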
例如,对于异常道路和能见度较低环境下的外界对象,可以全部作为重点标注的对象,可以确定为预警状态;或者,也可以进行细分,并分为正常状态和预警状态等。例如,可以基于与交通工具之间的距离进行划分,若异常道路或外界对象距离该交通工具较远,可以为正常状态;若距离较近,可以为预警状态。对于不同状态下的目标信息,可以以相应的显示方式进行显示。
本领域技术人员可以理解，上述多个实施例中的目标信息指的是当前可以在反射成像部50上显示的一个内容；在同一时间点，反射成像部50上可以显示多个目标信息。例如，若当前存在外界对象，一个外界对象可以对应至少一个目标信息。或者，上述实施例所确定的预警状态和正常状态所对应的也可以是一个目标信息的状态，即在同一时刻，不同的目标信息所对应的状态可以不同。例如，交通工具为车辆，且车辆前方有两个行人A和B，行人A距离车辆较近，而行人B距离车辆较远，对于行人A来说，可以确定为预警状态，比如可以在反射成像部50（例如车辆的挡风玻璃）上用红色框标识出该行人A；而对于行人B来说，可以确定为正常状态，可以在反射成像部50上用绿色框标识该行人B，即可以在反射成像部50上用红色框和绿色框分别标出行人A和行人B，二者可以互不影响。
例如，在一些实施例中，处理器100还配置为：根据目标信息生成推送信息，并将推送信息发送至服务器或发送至预设距离内的其他设备，例如交通工具、手机、笔记本等。本公开的实施例中，安装有该抬头显示系统的交通工具可以将采集到的信息共享至其他交通工具，其可以直接发送至附近的其他设备，也可以上传至服务器，由服务器转发至需要该信息的设备。例如，该抬头显示系统将外界对象的位置信息、重点对象的位置信息、道路异常信息等作为目标信息时，可将该目标信息共享给其他设备；或者，当本地车辆发生车道偏移时，也可以通知附近的其他车辆，提醒其他车辆避让等。
本公开至少一实施例还提供一种抬头显示系统，参照图1，该抬头显示系统包括显像装置组和凹面反射元件20；显像装置组包括多个单层显像装置，图1中示出为第一单层显像装置11以及第二单层显像装置12。
每个单层显像装置配置为发出入射至凹面反射元件20的成像光线,至少部分单层显像装置的物距相同或者至少部分单层显像装置所对应的物距不相同,所述物距包括所述成像光线从相对应的所述单层显像装置到所述凹面反射元件的传播路径长度。例如,不同的单层显像装置所对应的物距互不相同,物距为成像光线从相对应的单层显像装置到凹面反射元件的传播路径长度或者单层显像装置到凹面反射元件的距离;凹面反射元件20配置为将成像光线反射,例如反射至反射成像部,以使得反射成像部能够将成像光线反射至预设范围内,例如眼盒范围内。反射成像部例如为汽车的挡风玻璃等。
例如,在一些实施例中,参照图2,抬头显示系统还包括定向显像装置30,如图3所示,定向显像装置30包括光源装置、方向控制元件32、弥散元件33和光调制层34。光源装置包括至少一个光源31。
方向控制元件32配置为将光源装置发出的光线会聚,例如会聚至预设区域,预设区 域为预设范围(例如眼盒范围)内的一个位置或范围。弥散元件33和光调制层34可以设置在方向控制元件32的同一侧。弥散元件配置33为将从方向控制元件32出射的经过会聚的光弥散,以使经过会聚的光形成用于形成光斑的光束。光调制层34配置为对用于形成光斑的光束进行调制,以使光束形成定向成像光线。例如,光调制层34配置为遮挡或透射光线,并在透射时发出朝向反射成像部的定向成像光线,以使得反射成像部能够将定向成像光线反射至预设范围内。例如,光调制层34包括液晶层(例如,液晶显示面板)或电润湿显示层,光调制层可以将经过其的光线转化为包括图像信息的光线(例如,成像光线)。
例如,在一些实施例中,参考图7,抬头显示系统还可以包括第一透反元件21;第一透反元件21能够透射具有第一特性的光线,并反射具有第二特性的光线;显像装置组包括能够发出第一成像光线的第一单层显像装置11和能够发出第二成像光线的第二单层显像装置12;例如,第一单层显像装置11可以设置在第一透反元件21的一侧,第二单层显像装置12和凹面反射元件20可以设置在第一透反元件21的另一侧;上述“一侧”与“另一侧”为相反侧。第一成像光线具有第一特性,第二成像光线具有第二特性。由此第一透反元件可透射第一成像光线并且反射第二成像光线。
例如,在一些实施例中,参考图8和图9,抬头显示系统还包括第二透反元件22,且显像装置组还包括能够发出第三成像光线的第三单层显像装置13,第三成像光线具有第三特性。例如,第三单层显像装置13对应的第三物距与第一物距和第二物距均不同,第三物距为第三成像光线从第三单层显像装置13到凹面反射元件20的传播路径长度或者第三单层显像装置13到凹面反射元件20的距离。
例如,第二透反元件22能够透射具有第一特性的光线,并反射具有第三特性的光线;第一透反元件21还能够透射具有第三特性的光线;参照图8,第二透反元件可以设置22在第一单层显像装置11与第一透反元件21之间,且第三单层显像装置13与第一透反元件21可以设置在第二透反元件22的同一侧;或者,第二透反元件22能够透射具有第二特性的光线,并反射具有第三特性的光线;第一透反元件还能够反射具有第三特性的光线;参照图9,第二透反元件22可以设置在第二单层显像装置12与第一透反元件21之间,且第三单层显像装置13与第一透反元件21可以设置在第二透反元件22的同一侧。
例如，在一些实施例中，抬头显示系统还包括一个或多个平面反射元件；参照图10，当抬头显示系统包括一个平面反射元件23时，平面反射元件23配置为将显像装置组内至少一个单层显像装置发出的成像光线反射至凹面反射元件20；或者，当抬头显示系统包括多个平面反射元件23且至少一个单层显像装置包括多个单层显像装置时，多个平面反射元件23配置为将多个单层显像装置发出的成像光线反射至所述凹面反射元件。例如，在一些实施例中，多个平面反射元件23与多个单层显像装置一一对应；每个平面反射元件23用于将相对应的单层显像装置发出的成像光线反射至凹面反射元件20；或者也可以两个平面反射元件23对应反射三个单层显像装置发出的成像光线等，本公开的实施例对多个平面反射元件与多个单层显像装置的对应关系不做限定。
例如,反射成像部上的定向成像区域的面积大于放大成像区域的面积;定向成像区域为定向成像光线能够入射至反射成像部表面的区域,放大成像区域为显像装置组发出的成像光线能够入射至反射成像部表面的区域。
本公开至少一实施例还提供一种抬头显示系统的控制方法,该抬头显示系统包括显像装置组和定向显像装置;显像装置组包括至少一个单层显像装置,定向显像装置的成像范围大于至少一个单层显像装置的成像范围;该控制方法包括:确定待显示的目标信息和目标显像装置,并控制目标显像装置显示所述目标信息;例如,目标显像装置为从定向显像装置、显像装置组内的至少一个单层显像装置中选取的一个显像装置。
例如,该抬头显示系统可以为本公开实施例提供的任意一种抬头显示系统,该控制方法还可以实现上述抬头显示系统中处理器的各项功能以及执行的各种操作,具体可以参见上述实施例,在此不再赘述。
本公开至少一实施例还提供一种交通工具,例如车辆、船舶或飞机,该交通工具包括本公开实施例提供的任一抬头显示系统,从而用户(例如驾驶员)可以利用该平视显示系统更安全、便捷地操控交通工具。
还有以下几点需要说明:
(1)本公开实施例的附图只涉及本公开实施例所涉及的结构，其他结构可参考通常设计。
(2)为了清晰起见,在用于描述本公开的实施例的附图中,层或区域的厚度被放大或缩小,即这些附图并非按照实际的比例绘制。
(3)在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合以得到新的实施例。
以上,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,本公开的保护范围应以权利要求的保护范围为准。

Claims (38)

  1. 一种抬头显示系统,包括:显像装置组、凹面反射元件、定向显像装置和处理器,其中,所述显像装置组包括至少一个单层显像装置;所述定向显像装置包括光源装置、方向控制元件、弥散元件和光调制层;所述处理器分别与所述定向显像装置、所述显像装置组内的所述至少一个单层显像装置连接;
    所述方向控制元件配置为将所述光源装置发出的光线会聚;所述弥散元件配置为将从所述方向控制元件出射的经过会聚的光弥散,以使所述经过会聚的光形成用于形成光斑的光束;所述光调制层配置为对所述用于形成光斑的光束进行调制,以使所述光束形成定向成像光线;
    所述至少一个单层显像装置配置为发出入射至所述凹面反射元件的成像光线;
    所述凹面反射元件配置为反射所述成像光线;
    所述处理器配置为:确定待显示的目标信息和目标显像装置,控制所述目标显像装置显示所述待显示的目标信息,其中,所述目标显像装置为从所述定向显像装置、所述显像装置组内的所述至少一个单层显像装置中选取的一个或多个显像装置。
  2. 根据权利要求1所述的抬头显示系统,还包括反射成像部,其中,所述反射成像部配置为将所述定向成像光线和所述成像光线反射至所述抬头显示系统的眼盒范围;
    所述定向成像光线被所述反射成像部反射至所述眼盒范围内的预设范围;
    所述成像光线被所述反射成像部反射至所述预设范围或所述眼盒范围内的所述预设范围之外的范围。
  3. 根据权利要求1或2所述的抬头显示系统,其中,所述至少一个单层显像装置包括多个单层显像装置;并且
    至少部分单层显像装置所对应的物距相同或者至少部分单层显像装置所对应的物距不相同,所述物距包括所述成像光线从相对应的所述单层显像装置到所述凹面反射元件的传播路径长度。
  4. 根据权利要求1-3任一所述的抬头显示系统,还包括第一透反元件,其中,所述第一透反元件能够透射具有第一特性的光线,并反射具有第二特性的光线;
    所述显像装置组包括能够发出第一成像光线的第一单层显像装置和能够发出第二成像光线的第二单层显像装置;所述第一成像光线具有所述第一特性,所述第二成像光线具有所述第二特性;
    所述第一透反元件配置为透射所述第一成像光线并且反射所述第二成像光线。
  5. 根据权利要求4所述的抬头显示系统,还包括第二透反元件,其中,所述显像装置组还包括能够发出第三成像光线的第三单层显像装置,所述第三成像光线具有第三特性;
    其中,
    所述第二透反元件配置为透射具有第一特性的第一成像光线,并反射具有第三特性 的第三成像光线;所述第一透反元件还配置为透射具有第三特性的第三成像光线;或者,
    所述第二透反元件配置为透射具有第二特性的第二成像光线,并反射具有第三特性的第三成像光线;所述第一透反元件还配置为反射具有第三特性的第三成像光线。
  6. 根据权利要求1-5任一所述的抬头显示系统,还包括一个或多个平面反射元件;
    其中,当所述抬头显示系统包括一个平面反射元件时,所述平面反射元件配置为将所述显像装置组内所述至少一个单层显像装置发出的成像光线反射至所述凹面反射元件;
    或者,当所述抬头显示系统包括多个平面反射元件且所述至少一个单层显像装置包括多个单层显像装置时,所述多个平面反射元件配置为将所述多个单层显像装置发出的成像光线反射至所述凹面反射元件。
  7. 根据权利要求2所述的抬头显示系统,其中,所述反射成像部包括定向成像区域和放大成像区域,所述定向成像区域的面积大于所述放大成像区域的面积;
    所述定向成像区域为所述定向成像光线能够入射至所述反射成像部表面的区域,所述放大成像区域为所述显像装置组中之一的单个单层显像装置发出的成像光线能够入射至所述反射成像部表面的区域。
  8. 根据权利要求1-7任一所述的抬头显示系统,其中,
    所述方向控制元件包括至少一个光线聚集元件;所述至少一个光线聚集元件设置在所述光源装置与所述弥散元件之间,所述至少一个光线聚集元件配置为将所述光源装置包括的不同的光源发出的光线会聚。
  9. 根据权利要求8所述的抬头显示系统,其中,所述方向控制元件包括至少一个准直元件;
    所述准直元件配置为将其对应的光源发出的光线的出射方向调整至预设角度范围内,并将调整后的光线通过所述至少一个光线聚集元件发射至所述弥散元件。
  10. 根据权利要求9所述的抬头显示系统,其中,所述方向控制元件还包括反射元件;
    所述反射元件包括灯杯;
    所述灯杯包括具有内反光面的壳体;所述灯杯的远离开口的尾端配置为设置所述灯杯对应的光源;
    或者,所述灯杯包括实心灯杯,所述实心灯杯包括实心透明部件,所述实心灯杯的远离所述出光口的端部配置为设置所述实心灯杯对应的光源,以使得所述光源发出的至少部分光线射向所述实心灯杯的表面时发生全反射。
  11. 根据权利要求10所述的抬头显示系统,其中,
    所述实心灯杯在所述端部设有空腔,所述空腔靠近所述实心灯杯的出光口的一面为凸向所述端部的凸面;
    或者,所述实心灯杯在所述出光口设有开孔,所述开孔的底面为凸向所述出光口的凸面。
  12. 根据权利要求11所述的抬头显示系统,其中,所述至少一个准直元件包括多个准直元件,所述准直元件设置在所述灯杯的内部,且所述准直元件的尺寸小于或等于所述灯杯的开口大小;所述准直元件配置为将所述灯杯内的光源发出的部分光线进行准直。
  13. 根据权利要求2所述的抬头显示系统,其中,所述处理器还配置为:
    确定目标位置,所述目标位置为所述反射成像部具有的用于显示所述待显示的目标信息的位置;
    根据所述目标位置确定目标显像装置;以及
    控制所述目标显像装置在所述目标位置处显示所述待显示的目标信息。
  14. 根据权利要求13所述的抬头显示系统,其中,所述处理器还配置为:
    在外界对象投影至所述反射成像部上的投影位置位于放大成像区域内时,将所述外界对象作为目标对象,并确定参照物与所述目标对象之间的目标距离,其中,所述放大成像区域为所述显像装置组中一个单个显像装置发出的成像光线能够入射至所述反射成像部表面的区域;
    将位于所述放大成像区域内的所述投影位置或者所述投影位置的边缘作为目标位置;以及
    根据所述目标距离从所述显像装置组内选取目标显像装置;或者
    所述处理器还配置为:在外界对象投影至所述反射成像部上的投影位置位于定向成像区域内时,将所述外界对象作为目标对象,其中,所述定向成像区域为所述定向显像装置发出的定向成像光线能够入射至所述反射成像部表面的区域;以及
    将位于所述定向成像区域内的所述投影位置或者所述投影位置的边缘作为目标位置。
  15. 根据权利要求14所述的抬头显示系统,其中,所述处理器还配置为:
    对于同一外界对象,控制定向显像装置显示所述外界对象的第一信息,并且控制显像装置组显示所述外界对象的第二信息,所述第一信息和所述第二信息包括的内容至少部分不同;或者,
    对于多个外界对象,利用定向显像装置显示满足第一选取条件的外界对象,并且利用显像装置组显示满足第二选取条件的外界目标。
  16. 根据权利要求1-15任一所述的抬头显示系统,还包括信息采集装置,其中,所述信息采集装置与所述处理器连接;
    所述信息采集装置配置为采集本地信息和周围信息,并将采集到的所述本地信息和所述周围信息发送至所述处理器;
    所述处理器还配置为:获取所述本地信息和所述周围信息,根据所述本地信息和所述周围信息生成所述目标信息。
  17. 根据权利要求16所述的抬头显示系统,其中,所述本地信息包括本地速度信息,所述周围信息包括参照物与外界对象之间的距离,所述处理器根据所述本地信息和周围信息生成所述目标信息包括:
    根据所述本地速度信息确定安全间距,判断所述参照物与外界对象之间的距离是否大于所述安全间距;
    在所述参照物与外界对象之间的距离不大于所述安全间距时,确定参照物当前处于预警状态,并将相应的第一预警信息作为所述目标信息,所述第一预警信息包括第一预警图像和第一预警视频中的一项或多项。
  18. 根据权利要求16所述的抬头显示系统,其中,所述周围信息包括参照物与所述外界对象之间的距离以及外界对象的位置;所述处理器根据所述本地信息和周围信息生成所述目标信息包括:
    将参照物与所述外界对象之间的距离以及所述外界对象的位置作为目标信息;
    或者,确定所述外界对象投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示预先设置的目标信息。
  19. 根据权利要求18所述的抬头显示系统,其中,所述处理器还配置为:
    在所述外界对象为重点对象时,确定当前处于预警状态,并将相应的第二预警信息作为所述目标信息,所述第二预警信息包括第二预警图像和第二预警视频中的一项或多项。
  20. 根据权利要求17所述的抬头显示系统,其中,所述安全间距包括前安全间距;所述处理器还配置为:
    在所述外界对象位于参照物的前方,在所述参照物与外界对象之间的距离不大于所述前安全间距,且所述安全间距与所述参照物与外界对象之间的距离之差大于预设距离差值和/或处于预警状态的时长超过预设时长阈值时,生成制动信号或减速信号,并输出所述制动信号或减速信号。
  21. 根据权利要求16所述的抬头显示系统，其中，所述周围信息包括车道位置信息，所述本地信息包括车辆位置信息；所述处理器根据所述本地信息和周围信息生成所述目标信息包括：
    根据所述车道位置信息和所述车辆位置信息确定车辆偏离当前行驶车道的偏移参数,并判断所述偏移参数是否大于相应的偏移阈值;所述偏移参数包括偏移角度和/或偏移距离;
    在所述偏移参数大于相应的偏移阈值时,确定当前处于预警状态,并将相应的第三预警信息作为所述目标信息,所述第三预警信息包括第三预警图像、第三预警视频和优先行驶车道中的一项或多项。
  22. 根据权利要求21所述的抬头显示系统,其中,所述本地信息还包括车辆状态信息,所述车辆状态信息包括车辆速度、车辆加速度、转向灯状态中的一项或多项;所述处理器根据所述本地信息和周围信息生成所述目标信息还包括:
    在所述偏移参数大于相应的偏移阈值,且满足预警条件时,确定当前处于预警状态;
    其中，所述预警条件包括所述车辆速度大于第一预设速度值、所述车辆加速度不大于零、车辆的所述偏移角度所对应的方向相同一侧的转向灯未处于开启状态、当前为不可变道状态、偏离车道的时长大于预设的第一偏离时长阈值中的一种或多种。
  23. 根据权利要求21所述的抬头显示系统,其中,所述处理器还配置为:
    根据所述车辆位置信息和所述车道位置信息确定车辆的优先行驶车道,并将所述优先行驶车道作为目标对象;确定所述目标对象投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示预先设置的目标信息;
    或者,根据所述车辆位置信息和所述车道位置信息确定车辆的优先行驶车道,将所述优先行驶车道两侧的边界线作为目标对象,并将形状与所述边界线相匹配的图形作为目标信息;确定所述目标对象投影至所述反射成像部上的投影位置,将所述投影位置作为目标位置,并控制所述目标显像装置在所述目标位置处以预设颜色显示所述目标信息。
  24. 根据权利要求21所述的抬头显示系统,其中,所述处理器还配置为:
    在所述偏移参数大于相应的偏移阈值,且所述偏移参数与所述偏移阈值之差大于预设的偏移差值和/或处于偏移状态的时长超过预设的安全偏移时长时,生成制动信号或减速信号,并输出所述制动信号或减速信号。
  25. 根据权利要求16所述的抬头显示系统,其中,所述周围信息包括前方信息,所述前方信息包括前方车辆的车速和/或与前方车辆的当前距离;
    所述处理器还配置为将所述前方车辆的目标区域投影至所述反射成像部上的投影位置作为目标位置,并将所述前方信息作为目标信息,控制目标显像装置在所述目标位置处显示所述目标信息;其中,所述前方车辆的目标区域为所述前方车辆的后轮之间的空白区域或后备箱所在区域。
  26. 根据权利要求17-25任一所述的抬头显示系统,其中,所述处理器还配置为:
    在参照物的当前状态处于预警状态时,控制所述目标显像装置以正常方式或第一突出显示方式显示所述目标信息,所述第一突出显示方式包括动态显示、高亮显示、以第一颜色显示中的一种或多种;以及
    在参照物的当前状态处于正常状态时,控制所述目标显像装置以正常方式或第二突出显示方式显示所述目标信息,所述第二突出显示方式包括以第二颜色显示。
  27. 根据权利要求17-25任一所述的抬头显示系统,还包括发声装置和/或振动装置,其中,所述处理器还配置为:
    在当前处于预警状态时,
    向所述发声装置发送提醒语音,并控制所述发声装置播放所述提醒语音;
    和/或者,向所述振动装置发送振动信号,控制所述振动装置振动。
  28. 根据权利要求16所述的抬头显示系统,其中,所述周围信息包括道路异常信息,所述道路异常信息包括障碍物位置、不平路段位置、危险路段位置、维修路段位置、事故路段位置、临时检查路段位置中的一项或多项;所述处理器还配置为:
    将所述道路异常信息作为目标信息;
    或者,根据所述道路异常信息确定异常位置,并确定所述异常位置投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示与所述道路异常信息相对应的目标信息。
  29. 根据权利要求16所述的抬头显示系统,其中,所述周围信息包括当前能见度;所述处理器还配置为:
    在所述当前能见度小于预设的能见度阈值时,获取信息采集装置采集的外界对象的位置信息;
    将所述外界对象的位置信息作为目标信息;
    或者,确定所述外界对象投影至所述反射成像部上的投影位置,将所述投影位置或者所述投影位置的边缘作为目标位置,并控制所述目标显像装置在所述目标位置处显示预先设置的目标信息。
  30. 根据权利要求16所述的抬头显示系统,其中,所述处理器还配置为:
    根据所述目标信息生成推送信息,并将所述推送信息发送至服务器或发送至预设距离内的其他设备。
  31. 根据权利要求16所述的抬头显示系统,其中,所述信息采集装置包括图像采集设备、车载雷达、红外传感器、激光传感器、超声波传感器、速度传感器、转速传感器、角速度传感器、全球定位系统、汽车信息交换系统、高级驾驶辅助系统中的一种或多种。
  32. 一种抬头显示系统,包括:显像装置组和凹面反射元件;所述显像装置组包括多个单层显像装置;
    其中,所述多个单层显像装置中的每个配置为发出入射至所述凹面反射元件的成像光线,至少部分单层显像装置的物距相同或者至少部分单层显像装置所对应的物距不相同,所述物距包括所述成像光线从相对应的所述单层显像装置到所述凹面反射元件的传播路径长度;
    所述凹面反射元件配置为反射所述成像光线。
  33. 根据权利要求32所述的抬头显示系统,还包括定向显像装置,其中,所述定向显像装置包括光源装置、方向控制元件、弥散元件和光调制层;
    所述方向控制元件配置为将所述光源装置发出的光线会聚;所述弥散元件配置为将从所述方向控制元件出射的经过会聚的光弥散,以使所述经过会聚的光形成用于形成光斑的光束;所述光调制层配置为对所述用于形成光斑的光束进行调制,以使所述光束形成定向成像光线。
  34. 根据权利要求32或33所述的抬头显示系统,还包括第一透反元件;其中,所述第一透反元件能够透射具有第一特性的光线,并反射具有第二特性的光线;
    所述显像装置组包括能够发出第一成像光线的第一单层显像装置和能够发出第二成像光线的第二单层显像装置;所述第一成像光线具有所述第一特性,所述第二成像光线具有所述第二特性;
    所述第一透反元件配置为透射所述第一成像光线并且反射所述第二成像光线。
  35. 根据权利要求34所述的抬头显示系统,还包括第二透反元件,其中,所述显像装置组还包括能够发出第三成像光线的第三单层显像装置,所述第三成像光线具有第三特性;
    其中,
    所述第二透反元件配置为透射具有第一特性的第一成像光线,并反射具有第三特性的第三成像光线;所述第一透反元件还配置为透射具有第三特性的第三成像光线;或者,
    所述第二透反元件配置为透射具有第二特性的第二成像光线,并反射具有第三特性的第三成像光线;所述第一透反元件还配置为反射具有第三特性的第三成像光线。
  36. 根据权利要求32-35任一所述的抬头显示系统,还包括一个或多个平面反射元件;
    其中,当所述抬头显示系统包括一个平面反射元件时,所述平面反射元件配置为将所述显像装置组内所述至少一个单层显像装置发出的成像光线反射至所述凹面反射元件;
    或者,当所述抬头显示系统包括多个平面反射元件且所述至少一个单层显像装置包括多个单层显像装置时,所述多个平面反射元件配置为将所述多个单层显像装置发出的成像光线反射至所述凹面反射元件。
  37. 一种抬头显示系统的控制方法,其中,所述抬头显示系统包括显像装置组和定向显像装置;所述显像装置组包括至少一个单层显像装置,所述定向显像装置的成像范围大于所述至少一个单层显像装置的成像范围;
    所述控制方法包括:确定待显示的目标信息和目标显像装置,以及控制所述目标显像装置显示所述目标信息;其中,所述目标显像装置为从所述定向显像装置、所述显像装置组内的所述至少一个单层显像装置中选取的一个显像装置。
  38. 一种交通工具,包括权利要求1-36任一所述的抬头显示系统。
PCT/CN2021/071090 2020-01-10 2021-01-11 抬头显示系统及其控制方法、交通工具 WO2021139812A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21739002.0A EP4089467A4 (en) 2020-01-10 2021-01-11 HEAD-UP DISPLAY SYSTEM AND CONTROL METHOD THEREOF AND VEHICLE
US17/792,014 US20230046484A1 (en) 2020-01-10 2021-01-11 Head up display system and control method thereof, and vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010026623.1A CN113109941B (zh) 2020-01-10 2020-01-10 一种分层成像的抬头显示系统
CN202010026623.1 2020-01-10

Publications (1)

Publication Number Publication Date
WO2021139812A1 true WO2021139812A1 (zh) 2021-07-15

Family

ID=76709855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/071090 WO2021139812A1 (zh) 2020-01-10 2021-01-11 抬头显示系统及其控制方法、交通工具

Country Status (4)

Country Link
US (1) US20230046484A1 (zh)
EP (1) EP4089467A4 (zh)
CN (1) CN113109941B (zh)
WO (1) WO2021139812A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022152607A (ja) * 2021-03-29 2022-10-12 本田技研工業株式会社 運転支援装置、運転支援方法、及びプログラム
CN113135193B (zh) * 2021-04-16 2024-02-13 阿波罗智联(北京)科技有限公司 输出预警信息的方法、设备、存储介质及程序产品

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07225363A (ja) * 1993-12-17 1995-08-22 Matsushita Electric Ind Co Ltd 液晶投写装置および液晶表示装置
JPH08225030A (ja) * 1995-02-20 1996-09-03 Nippondenso Co Ltd ヘッドアップディスプレイ
JPH10115871A (ja) * 1996-10-14 1998-05-06 Canon Inc 画像投射装置
US6974234B2 (en) * 2001-12-10 2005-12-13 Galli Robert D LED lighting assembly
US7083313B2 (en) * 2004-06-28 2006-08-01 Whelen Engineering Company, Inc. Side-emitting collimator
CN100535842C (zh) * 2004-10-27 2009-09-02 富士通天株式会社 显示装置
US8061857B2 (en) * 2008-11-21 2011-11-22 Hong Kong Applied Science And Technology Research Institute Co. Ltd. LED light shaping device and illumination system
US8395529B2 (en) * 2009-04-02 2013-03-12 GM Global Technology Operations LLC Traffic infrastructure indicator on head-up display
WO2011078963A1 (en) * 2009-12-22 2011-06-30 Alcon Research, Ltd. Light collector for a white light led illuminator
TW201209347A (en) * 2010-08-25 2012-03-01 Neo Optic Tek Co Ltd Light guide device of light emission system and light guiding method
EP2960095B1 (en) * 2013-02-22 2019-06-26 Clarion Co., Ltd. Head-up display apparatus for vehicle
CN204141382U (zh) * 2014-08-22 2015-02-04 惠阳帝宇工业有限公司 Led应急灯
CN204761594U (zh) * 2015-06-17 2015-11-11 广州鹰瞰信息科技有限公司 基于多屏的车载抬头显示器
JP6451523B2 (ja) * 2015-06-17 2019-01-16 株式会社デンソー ヘッドアップディスプレイ装置
JP6455339B2 (ja) * 2015-06-26 2019-01-23 株式会社デンソー ヘッドアップディスプレイ装置
CN108473054B (zh) * 2016-02-05 2021-05-28 麦克赛尔株式会社 平视显示器装置
WO2017146452A1 (ko) * 2016-02-26 2017-08-31 엘지전자 주식회사 차량용 헤드 업 디스플레이 장치
WO2017195026A2 (en) * 2016-05-11 2017-11-16 WayRay SA Heads-up display with variable focal plane
CN106125305A (zh) * 2016-06-28 2016-11-16 科世达(上海)管理有限公司 一种抬头显示系统、车辆控制系统及车辆
CN106125306A (zh) * 2016-06-28 2016-11-16 科世达(上海)管理有限公司 一种抬头显示系统、车辆控制系统及车辆
JP6834537B2 (ja) * 2017-01-30 2021-02-24 株式会社リコー 表示装置、移動体装置、表示装置の製造方法及び表示方法。
JP6721453B2 (ja) * 2016-08-08 2020-07-15 マクセル株式会社 ヘッドアップディスプレイ装置およびその映像表示装置
CN108116312A (zh) * 2016-11-28 2018-06-05 宁波舜宇车载光学技术有限公司 影像系统及其成像方法
KR101899981B1 (ko) * 2016-12-02 2018-09-19 엘지전자 주식회사 차량용 헤드 업 디스플레이
CN106740116B (zh) * 2017-02-14 2023-12-12 深圳前海智云谷科技有限公司 一种一体式的抬头显示装置
JP6760188B2 (ja) * 2017-04-05 2020-09-23 株式会社デンソー ヘッドアップディスプレイ装置
CN107479196A (zh) * 2017-07-13 2017-12-15 江苏泽景汽车电子股份有限公司 一种ar‑hud双屏显示光学系统
CN107472244B (zh) * 2017-07-31 2020-02-14 江苏理工学院 一种基于vlc的车辆智能防撞预警系统
JP6887732B2 (ja) * 2017-08-04 2021-06-16 アルパイン株式会社 ヘッドアップディスプレイ装置
EP3729169A4 (en) * 2017-12-18 2021-10-13 LEIA Inc. MULTI-BEAM HEAD-UP DISPLAY, SYSTEM AND PROCEDURE
JP7117578B2 (ja) * 2018-03-20 2022-08-15 パナソニックIpマネジメント株式会社 ヘッドアップディスプレイ及び移動体
CN108490616B (zh) * 2018-04-03 2020-04-10 京东方科技集团股份有限公司 抬头显示器及显示控制方法
KR20210006892A (ko) * 2018-05-04 2021-01-19 하만인터내셔날인더스트리스인코포레이티드 조절 가능한 3차원 증강 현실 헤드업 디스플레이
WO2019224922A1 (ja) * 2018-05-22 2019-11-28 三菱電機株式会社 ヘッドアップディスプレイ制御装置、ヘッドアップディスプレイシステム、及びヘッドアップディスプレイ制御方法
JP7310817B2 (ja) * 2018-07-05 2023-07-19 日本精機株式会社 ヘッドアップディスプレイ装置
CN114252994A (zh) * 2020-09-23 2022-03-29 未来(北京)黑科技有限公司 抬头显示装置及车辆

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015019567A1 (ja) * 2013-08-09 2015-02-12 株式会社デンソー 情報表示装置
CN107045255A (zh) * 2017-02-03 2017-08-15 中国电子科技集团公司第五十五研究所 一种薄型液晶投影显示led偏光光源
CN108422933A (zh) * 2017-02-14 2018-08-21 现代摩比斯株式会社 用于实现可单独控制的多显示场的抬头显示器装置及其显示控制方法
JP2018136420A (ja) * 2017-02-21 2018-08-30 株式会社デンソー ヘッドアップディスプレイ装置
CN209381917U (zh) * 2018-11-30 2019-09-13 深圳点石创新科技有限公司 一种抬头显示器及汽车
CN110031977A (zh) * 2019-03-18 2019-07-19 惠州市华阳多媒体电子有限公司 一种基于偏振分光的双屏显示系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4089467A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115616778A (zh) * 2022-01-21 2023-01-17 华为技术有限公司 一种显示装置和交通工具
CN115616778B (zh) * 2022-01-21 2024-03-01 华为技术有限公司 一种显示装置和交通工具

Also Published As

Publication number Publication date
EP4089467A4 (en) 2024-01-24
US20230046484A1 (en) 2023-02-16
CN113109941A (zh) 2021-07-13
CN113109941B (zh) 2023-02-10
EP4089467A1 (en) 2022-11-16

Similar Documents

Publication Publication Date Title
WO2021139812A1 (zh) 抬头显示系统及其控制方法、交通工具
US10232713B2 (en) Lamp for a vehicle
JP7113259B2 (ja) 表示システム、表示システムを備える情報提示システム、表示システムの制御方法、プログラム、及び表示システムを備える移動体
US20170329143A1 (en) Heads-up display with variable focal plane
US20130076787A1 (en) Dynamic information presentation on full windshield head-up display
TWI728117B (zh) 動態信息系統及動態信息系統的操作方法
CN113165513A (zh) 平视显示器、车辆用显示系统以及车辆用显示方法
WO2019003929A1 (ja) 表示システム、情報提示システム、表示システムの制御方法、プログラムと記録媒体、及び移動体装置
CN113109939B (zh) 一种多层次成像系统
WO2019004244A1 (ja) 表示システム、情報提示システム、表示システムの制御方法、プログラム、及び移動体
CN113126295A (zh) 一种基于环境显示的抬头显示设备
CN114466761A (zh) 平视显示器及图像显示系统
JP2018081276A (ja) 虚像表示装置
CN115891644A (zh) 显示方法、设备、交通工具及存储介质
WO2021139792A1 (zh) 平视显示系统及其控制方法、交通工具
WO2021132259A1 (en) Display apparatus, display method, and program
CN113219657B (zh) 一种车辆平视显示系统
JP2021117089A (ja) 表示装置、及び表示方法
JP2021117704A (ja) 表示装置、及び表示方法
CN113103955A (zh) 一种多层次成像系统
WO2021147972A1 (zh) 交通工具平视显示系统、显示方法及交通工具
WO2022102374A1 (ja) 車両用表示システム
JP2019174349A (ja) 移動ルート案内装置、移動体及び移動ルート案内方法
WO2021139818A1 (zh) 多层次成像系统、抬头显示器、交通工具以及多层次成像方法
WO2021132250A1 (en) In-vehicle display device and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21739002

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021739002

Country of ref document: EP

Effective date: 20220810