CN113126295A - Head-up display device based on environment display - Google Patents

Head-up display device based on environment display

Info

Publication number
CN113126295A
CN113126295A
Authority
CN
China
Prior art keywords
information, key, vehicle, component, display
Prior art date
Legal status
Granted
Application number
CN202010039956.8A
Other languages
Chinese (zh)
Other versions
CN113126295B (en)
Inventor
方涛
吴慧军
徐俊峰
Current Assignee
Future Beijing Black Technology Co ltd
Original Assignee
Future Beijing Black Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Future Beijing Black Technology Co ltd filed Critical Future Beijing Black Technology Co ltd
Priority to CN202010039956.8A priority Critical patent/CN113126295B/en
Publication of CN113126295A publication Critical patent/CN113126295A/en
Application granted granted Critical
Publication of CN113126295B publication Critical patent/CN113126295B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Instrument Panels (AREA)

Abstract

The invention provides a head-up display device based on environment display, comprising an imaging device and a processor. The imaging device is arranged on one side of a reflective imaging layer and comprises a backlight component, a main optical axis adjusting component, a light beam expanding component and a liquid crystal panel. The processor acquires external environment information, determines the key information that currently needs to be displayed according to that information, and controls the imaging device to display the key information. In the head-up display device based on environment display provided by the embodiment of the invention, the main optical axis adjusting component converges imaging light rays with different incidence angles into the same preset area, from which they are dispersed over the eye box range, so that imaging brightness is improved while the imaging range is preserved. The imaging device can also be laid out over a large area, forming a large directional imaging area on the surface of the reflective imaging layer; this enables large-scale imaging and improves the display effect of the reflective imaging layer.

Description

Head-up display device based on environment display
Technical Field
The invention relates to the technical field of safe driving, in particular to a head-up display device based on environment display.
Background
Head-up display (HUD) technology keeps the driver from having to look down at the instrument panel while driving, avoiding the distraction this causes; it therefore improves driving safety and also provides a better driving experience. As a result, HUDs that use the automobile windshield for imaging are receiving increasing attention.
However, the field of view (FOV) of a conventional HUD based on a free-form-surface reflector is usually very small, generally within 10 degrees. The displayed HUD image is therefore small and can generally show only vehicle speed or direction information; richer content such as a navigation map or complex safety information cannot be displayed, and the dimension and density of the presented information are low, making it difficult for the driver to receive and keep track of the various kinds of information arising while the vehicle is driving. Increasing the display size would require a much larger HUD whose power consumption would also increase greatly. The display effect of such a HUD is therefore limited, more applications cannot be built on it, and its further popularization and application are restricted.
Disclosure of Invention
To solve the above problems, an embodiment of the present invention provides a head-up display device based on an environment display.
The embodiment of the invention provides a head-up display device based on environment display, which comprises: an imaging device and a processor; the imaging device is arranged on one side of the reflective imaging layer and comprises a backlight source component, a main optical axis adjusting component, a light beam expanding component and a liquid crystal panel;
the main optical axis adjusting component is used for converging light rays emitted by the backlight source component to the same preset area, and the preset area is a position or a range in the range of the eye box;
the light beam expanding component and the liquid crystal panel are arranged on one side of the main optical axis adjusting component close to the reflection imaging layer; the light beam expanding component is used for diffusing emergent light of the main optical axis adjusting component and forming light spots covering the range of the eye box; the liquid crystal panel is used for shielding or transmitting light rays, emitting imaging light rays facing the reflective imaging layer when the imaging light rays are transmitted, and reflecting the imaging light rays to the range of the eye box through the reflective imaging layer;
the processor is connected with the imaging device; the processor is used for acquiring external environment information, determining key information which needs to be displayed currently according to the external environment information, and controlling the imaging device to display the key information.
In the scheme provided by the embodiment of the invention, the main optical axis adjusting component converges imaging light rays with different incidence angles into the same preset area, from which they are dispersed over the eye box range, so that imaging brightness is improved while the imaging range is preserved. The imaging device can be laid out over a large area, forming a large directional imaging area on the surface of the reflective imaging layer; this enables large-scale imaging and improves the display effect of the reflective imaging layer.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention or of the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 illustrates an imaging principle schematic diagram of a head-up display device provided by an embodiment of the invention;
fig. 2 is a schematic diagram illustrating a first structure of a head-up display device according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a second structure of a head-up display device according to an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating a first configuration of a solid lamp cup according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a second configuration of a solid lamp cup according to an embodiment of the present invention;
fig. 6 is an electrical schematic diagram of a head-up display apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a reflective imaging layer displaying a picture when a pedestrian approaches in accordance with an embodiment of the present invention;
FIG. 8 is a flow chart illustrating a process for determining whether there is an offset in a lane by a processor provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of a reflective imaging layer display during lane departure in an embodiment of the present invention;
FIG. 10 is another schematic diagram of a reflective imaging layer display during lane departure in an embodiment of the invention;
FIG. 11 is a schematic diagram of a reflective imaging layer displaying a bird's-eye view in an embodiment of the invention.
Icon:
10-imaging device, 101-backlight component, 102-main optical axis adjustment component, 1021-light direction component, 1022-collimation component, 103-light beam expansion component, 104-liquid crystal panel, 105-reflection component, 1051-reflection surface, 1052-cavity, 1053-convex surface, 1054-notch, 1055-convex surface, 20-reflection imaging layer, 31-preset area, 32-eye box range, 501-text, 502-rectangular frame, 503-bird's eye view, 504-arrow, 72-rear vehicle, 73-local vehicle, 74-pedestrian, 75-current driving lane.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, or it may be internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The head-up display device based on environment display provided by the embodiment of the invention can display content based on the environment of the local vehicle and assist the driver in driving. The head-up display device includes: an imaging device 10 and a processor. Referring to fig. 1, the imaging device 10 is disposed on one side of the reflective imaging layer 20 and emits imaging light toward the reflective imaging layer 20; the imaging light can be reflected by the reflective imaging layer 20 to the eye box range 32, so that a user (such as a driver) located at the eye box range 32 can view an image of the imaging device 10. In this embodiment, the reflective imaging layer 20 may be a windshield of a vehicle (such as an automobile), or a reflective film inside the windshield that reflects the imaging light emitted from the imaging device 10 without preventing the driver from observing objects or scenes outside the vehicle through it. The imaging device 10 can be configured as a large-size imaging device, so that the imaging light it emits is incident on most of the surface of the reflective imaging layer 20; the driver can then observe the formed image, namely the virtual image in fig. 1, through most of the reflective imaging layer 20. Information can thus be displayed over a large area of the windshield of a vehicle such as an automobile, and richer content can be shown. The imaging device 10 may specifically be disposed below the reflective imaging layer 20, for example on the IP (Instrument Panel) of an automobile.
Referring to fig. 2, the image forming apparatus 10 includes a backlight unit 101, a main optical axis adjusting unit 102, a light beam expanding unit 103, and a liquid crystal panel 104. The main optical axis adjusting component 102 is used for converging the light emitted from the backlight component 101 to the same preset area 31, and the preset area 31 is a position or a range in the eye box range 32. The light beam expanding component 103 and the liquid crystal panel 104 are arranged on one side of the main optical axis adjusting component 102 close to the reflective imaging layer 20; the light beam expanding member 103 is used for diffusing the outgoing light of the main optical axis adjusting member 102 and forming a light spot covering the eye box range 32; the liquid crystal panel 104 is used for blocking or transmitting light, and emits imaging light toward the reflective imaging layer 20 when transmitting, and the imaging light is reflected to the eye box range 32 through the reflective imaging layer 20. The eye box (eyebox) range 32 refers to an area range where the driver can observe an image presented by the imaging light, that is, when the eyes of the driver are located in the eye box range 32, the driver can view an image formed by the imaging device 10, and the image is specifically a virtual image in fig. 1.
The liquid crystal panel 104 in this embodiment may use an existing liquid crystal; by controlling the electrical state of the liquid crystal it can block or transmit light, and the proportion of transmitted light can be adjusted, so that the brightness of the light is adjusted and an image is formed. In this embodiment, the light beam expanding component 103 and the liquid crystal panel 104 are located on the light-emitting side of the main optical axis adjusting component 102, and the light beam expanding component 103 may be disposed on either the lower side or the upper side of the liquid crystal panel 104; in fig. 2, the light beam expanding component 103 is illustrated as being disposed below the liquid crystal panel 104.
The processor is connected with the imaging device 10; the processor is configured to obtain external environment information, determine key information that needs to be displayed currently according to the external environment information, and control the imaging device 10 to display the key information. In this embodiment, the external environment information is specifically surrounding environment information that can be collected by the driver during the driving process, for example, surrounding vehicle or road related information, and a prompt content can be generated based on the current external environment information and displayed on the reflective imaging layer 20, so that the driver can be provided with required information, and the driver can be reminded.
In the embodiment of the present invention, referring to fig. 2, the light rays are converged by the main optical axis adjusting member 102. Specifically, referring to fig. 2, backlight components 101 are disposed at different positions, and fig. 2 illustrates that 7 backlight components 101 are disposed; accordingly, 7 main optical axis adjusting members 102 may be provided to control the direction of light emitted from the backlight unit 101. As shown in fig. 2, in the absence of the light beam expanding member 103, the main optical axis adjusting member 102 converges the light emitted from the plurality of backlight units 101 to the predetermined area 31. In fig. 2, 31 is taken as an example of a point, and the preset area 31 in this embodiment may also be a small area, that is, only the light emitted from the backlight component 101 needs to be converged into the area. Specifically, the direction of light emitted from the backlight unit 101 is adjusted by setting the orientation of the main optical axis adjusting unit 102 at different positions, so that light convergence is achieved.
Meanwhile, if only the light rays at different positions are converged to the preset area 31 in a small range, the light rays emitted from the backlight component 101 can only form an image in a small range after passing through the liquid crystal panel 104, which is inconvenient for the driver to view the image formed by the imaging device 10. In the embodiment, the light is diffused by the light beam expanding component 103, and a light spot (i.e. the eye box range 32) with a preset shape and a larger imaging range is formed, so that a driver can conveniently view the imaging device 10 for imaging in a large range. Specifically, taking the leftmost main optical axis adjustment component 102 in fig. 2 as an example, as shown in fig. 2, when the light beam expansion component 103 is not present, the light ray a emitted by the leftmost backlight component 101 can be emitted to the preset area 31 along the optical path a; after the light beam expanding component 103 is disposed outside the main optical axis adjusting component 102, the light beam expanding component 103 disperses the light ray a into a plurality of light rays (including the light ray a1, the light ray a2, etc.) and disperses the light rays into a range, namely, the eye box range 32, so that an observer can view an image formed by the imaging device 10 in the range of the eye box range 32. Optionally, the light Beam expanding component 103 may be a Diffractive Optical Element (DOE), such as a Beam Shaper (Beam Shaper); the size and shape of the spot is determined by the microstructure of the beam shaper, including but not limited to circular, elliptical, square, rectangular, batwing shapes. For example, the diffusion angle of the diffused light spot in the side view direction is 10 degrees, preferably 5 degrees; the dispersion angle in the front view direction is 50 degrees, preferably 30 degrees.
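The diffusion angles quoted above translate into a light-spot size at the eye box roughly as follows. The sketch below is a minimal geometric illustration only: the 800 mm optical path length from the light beam expanding component to the eye box and the function name are assumptions, not values given in the embodiment.

import math

def spot_extent_mm(path_length_mm, full_angle_deg):
    """Extent of the diffused light spot in one direction.

    Treats the beam shaper as the apex of a cone with the given full
    diffusion angle, so the extent is 2 * L * tan(angle / 2).
    """
    return 2 * path_length_mm * math.tan(math.radians(full_angle_deg) / 2)

# Illustrative 800 mm optical path from the light beam expanding component to the eye box.
side_view = spot_extent_mm(800, 10)    # 10-degree diffusion in the side-view direction
front_view = spot_extent_mm(800, 50)   # 50-degree diffusion in the front-view direction
print(f"spot extent: {side_view:.0f} mm (side view) x {front_view:.0f} mm (front view)")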
The number of the main optical axis adjusting components 102 may be multiple, and different main optical axis adjusting components 102 are disposed at different positions and are used for adjusting the emitting directions of the light rays emitted by the backlight components 101 at different positions, and the emitting directions of the light rays emitted by the backlight components 101 at different positions all point to the same preset area 31. As shown in fig. 2, the number of the main optical axis adjusting parts 102 in fig. 2 is 7. The main optical axis adjusting component 102 may adjust light emitted by one backlight component 101, and may also adjust light emitted by a plurality of backlight components 101, which is not limited in this embodiment.
Those skilled in the art will appreciate that the diffusion effect of the light beam expanding component 103 in fig. 2 is only schematically illustrated, and the light beam expanding component 103 can diffuse the light into the eye box area 32, and does not completely restrict the light emitted from the backlight component 101 to the eye box area 32. That is, the light ray a may form a wider range of light spots after passing through the light beam expanding component 103, and the light rays emitted by other backlight components 101 may form other light spots after passing through the light beam expanding component 103, but all the light rays emitted by the backlight components 101 may reach the inside of the eye box range 32.
In this embodiment, after the light emitted by the backlight component 101 is acted on by the main optical axis adjusting component 102, it is incident on the liquid crystal panel 104 within a constrained range, so that in operation the liquid crystal panel 104 emits imaging light toward the eye box range 32 and the imaging light converges into the eye box range 32. This improves the brightness of the imaging light, and under the action of the light beam expanding component 103 it is also convenient for the driver to see the image formed by the imaging device 10 within the eye box range 32; the imaging range is thus enlarged while the light brightness is improved. In this embodiment, because the imaging light can be converged, the driver can observe the virtual image formed via the reflective imaging layer 20 without the imaging device 10 needing particularly high brightness; and the imaging device 10 may have a large area, so that the imaging light can be reflected to a large portion of the surface of the reflective imaging layer 20 and the driver can view the image formed on the reflective imaging layer 20. The imaging device 10 may be laid on the surface of the IP table of a vehicle.
It should be noted that the light emitted from the imaging device 10 may cover the whole reflective imaging layer 20 due to scattering or the like, but since the light is only reflected by the reflective imaging layer 20 and reaches the eye box area 32 to be viewed by the driver, the "imaging light" in the present embodiment refers to the light emitted from the imaging device 10 and can be imaged in the eye box area 32. That is, on the surface of the reflective imaging layer 20, only the area on which imaging light capable of imaging in the eye box area 32 is incident is taken as an imaging area.
In the embodiment of the invention, the head-up display device further comprises a processor for determining the key information to be displayed. In this embodiment, the processor may output the key information to the imaging device 10, so that the imaging device 10 displays the key information, so that the reflective imaging layer 20 forms a virtual image at a corresponding imaging position (e.g., a position where the virtual image is located in fig. 1), and thus the key information may be displayed in the imaging area for the driver to watch. For example, it is currently necessary to display the vehicle speed in the imaging area, i.e., the vehicle speed may be used as key information.
It should be noted that "displaying the key information in the imaging region" in this embodiment means that the driver can view the key information through the imaging region so that the key information appears from the driver's perspective to be displayed in the imaging region, but a virtual image corresponding to the key information is substantially located outside the reflective imaging layer 20, for example, where the virtual image is located in fig. 1. The same or similar descriptions as "displaying key information in the imaging region" in this embodiment (e.g., "displaying key information on the reflective imaging layer", etc.) are for convenience of description only, and are not intended to limit the imaging region of the reflective imaging layer 20, etc. to display key information by itself.
According to the head-up display device provided by the embodiment of the invention, the main optical axis adjusting component 102 converges imaging light rays with different incidence angles into the same preset area, from which they are dispersed over the eye box range, so that imaging brightness is improved while the imaging range is preserved. The imaging device 10 can be laid out over a large area, forming a large directional imaging area on the surface of the reflective imaging layer; this enables large-scale imaging and improves the display effect of the reflective imaging layer.
On the basis of the above-mentioned embodiment, the main optical axis adjusting component 102 in the embodiment of the present invention may be a light focusing component disposed toward the preset region 31, as shown in fig. 2. Alternatively, as shown in FIG. 3, the main optical axis adjustment component 102 includes a light ray direction component 1021; the light directing member 1021 is disposed between the backlight unit 101 and the light beam expanding member 103, and the light directing member 1021 is used for converging the light emitted from different backlight units 101 to the same preset area 31. Optionally, the main optical axis adjusting component 102 may further include a collimating component 1022; the collimating component 1022 is configured to adjust an emitting direction of the light emitted from the backlight component 101 to be within a preset angle range, and emit the adjusted light to the light beam expanding component 103. Wherein,
when the main optical axis adjustment member 102 includes the collimating component 1022, the light directing component 1021 is disposed between the collimating component 1022 and the light expanding component 103; the light directing unit 1021 is used to converge different light into a same predetermined area 31. That is, even if the orientation of the main optical axis adjusting member 102 is not particularly set, different light beams can be converged to one predetermined region 31 by the light beam directing member 1021. As shown in fig. 3, a plurality of collimating elements 1022 may be disposed on the light directing unit 1021. Specifically, the collimating component 1022 is a collimating lens, which includes one or more of a convex lens, a concave lens, a fresnel lens, or a combination of the above lenses, and the lens combination may be a combination of a convex lens and a concave lens, a combination of a fresnel lens and a concave lens, and the like; alternatively, the collimating component 1022 is a collimating film, and is configured to adjust the emitting direction of the light to be within a preset angle range. At this time, the distance between the collimating component 1022 and the position of the backlight component 101 is the focal length of the collimating component 1022, i.e., the backlight component 101 is disposed at the focal point of the collimating component 1022.
Optionally, referring to fig. 2 and 3, the main optical axis adjustment component 102 further includes a light reflection component 105; the light reflection member 105 is used to reflect incident light emitted from the backlight unit 101 to the light beam expansion unit 103.
Specifically, the light reflecting member 105 includes a lamp cup; the lamp cup is a hollow shell surrounded by a reflecting surface, and the opening direction of the lamp cup faces the light beam expanding component 103; the bottom of the lamp cup away from the opening is used to dispose a backlight unit 101. Wherein, the inner wall of the lamp cup (i.e. the inner wall of the groove of the light reflecting part 105) is the light reflecting surface of the lamp cup.
In addition, the main optical axis adjustment unit 102 may include a collimation unit 1022; the collimating component 1022 is disposed inside the lamp cup, and the size of the collimating component 1022 is smaller than the opening size of the lamp cup; the collimating component 1022 is configured to collimate a part of light emitted by the backlight component 101 in the lamp cup and emit the collimated light to the light beam expanding component 103.
Or the lamp cup is a solid lamp cup, that is, the lamp cup is a solid transparent component with a reflecting surface 1051, and the refractive index of the solid transparent component is greater than 1; the opening direction of the solid lamp cup faces the light beam expanding component 103; the tail end of the solid lamp cup away from the opening is used for arranging a backlight component 101. The specific structure of the solid lamp cup can be seen in fig. 4 and 5. The opening direction of the solid lamp cup refers to the opening direction of the reflecting surface 1051 of the solid lamp cup.
Also, the collimating component 1022 may be integrated on a solid lamp cup. Referring to fig. 4, the solid transparent member is provided with a cavity 1052 at the end remote from the opening of the solid lamp cup, and the surface of the cavity 1052 near the opening of the solid lamp cup is convex 1053. Alternatively, as shown in fig. 5, the solid transparent member is provided with a slot 1054 at the middle position near the end of the solid lamp cup opening, and the bottom surface of the slot 1054 is a convex surface 1055.
In this embodiment, the convex surface 1053 of the cavity 1052 or the convex surface 1055 of the slot 1054 are used for collimating the light emitted from the backlight unit 101, i.e. the convex surface 1053 or the convex surface 1055 is equivalent to the collimating unit 1022. The convex surface 1053 or the convex surface 1055 is arranged in the middle of the solid transparent part, and the size of the convex surface 1053 or the convex surface 1055 is smaller than the size of the opening of the solid lamp cup; the convex surface 1053 or the convex surface 1055 is used to collimate part of the light emitted from the backlight unit 101 in the solid cup and emit the collimated light to the light beam expanding unit 103. As shown in fig. 4, by disposing the convex surface 1053 in the cavity at the end of the solid cup, the convex surface 1053 forms a convex lens to collimate the light directed to the convex surface 1053. Or, referring to fig. 5, a slot 1054 is disposed in the middle of the solid transparent member, the bottom surface of the slot 1054 is a convex surface 1055, the convex surface 1055 of the solid lamp cup is used for collimating the light that cannot be reflected by the reflecting surface 1051 of the solid lamp cup, and the other light with a larger exit angle is totally reflected in the solid lamp cup and then collimated out of the solid lamp cup. The solid lamp cup is made of a transparent material with the refractive index larger than 1, such as a polymer transparent material and glass, so as to realize total reflection.
On the basis of the above embodiment, referring to fig. 6, the head-up display device further includes an information acquisition apparatus 200, wherein the information acquisition apparatus 200 is communicatively connected to the processor 100; the information collecting device 200 is configured to collect current external environment information and send the collected external environment information to the processor 100, so that the processor 100 can determine key information that needs to be displayed currently according to the external environment information.
In the embodiment of the present invention, the information acquisition device 200 may specifically include one or more of an image acquisition device, a Vehicle-mounted radar, an infrared sensor, a laser sensor, an ultrasonic sensor, a rotation speed sensor, an angular velocity sensor, a GPS (Global Positioning System), a V2X (Vehicle to X, which represents information exchange between a Vehicle and the outside), and an ADAS (Advanced Driving assistance System). Different information acquisition devices can be installed at different positions based on the functional requirements, and the detailed description is omitted here.
In this embodiment, the processor 100 determines the key information that needs to be displayed currently according to the external environment information, and controls the imaging device 10 to display the key information, including:
the processor determines key information needing to be displayed currently according to the external environment information and determines a display position; controlling the liquid crystal unit corresponding to the display position in the imaging device 10 to work to display the key information; the display position is a preset position for displaying key information; alternatively, the display position is an intersection position between the key object projected to the eye box range 32 and a virtual image of the imaging device 10 formed by the reflective imaging layer 20; the key object is an object selected from external objects.
In the embodiment of the present invention, the processor 100 determines the key information to be displayed and the corresponding display position, i.e., where to display the key information.
The display position may be preset, for example, if the target information is a vehicle speed and the display vehicle speed is preset at the lower left of the reflective imaging layer 20, the corresponding position at the lower left of the reflective imaging layer 20 may be directly used as the display position. Alternatively, the display position may be determined based on the current actual scene; in this embodiment, when some objects (i.e., external objects) exist outside the vehicle, one or more key objects that need to be displayed with emphasis may be selected from the external objects, so that the driver can observe the existence of the key objects more prominently. In this embodiment, the reflective imaging layer 20 may form a virtual image of the imaging device 10, i.e., the virtual image shown in fig. 1; along the observation direction of the driver, the key object may be projected to the eye box range 32, that is, there may be an intersection position between the virtual image and a connection line between the key object and the eye box range 32, and the intersection position is taken as a display position in this embodiment; when the liquid crystal cell corresponding to the display position displays the key information, the virtual image can display the corresponding key information at the display position so that the driver can see the key information and it appears that the reflective imaging layer 20 is displaying the key information. For example, when there is a pedestrian outside the vehicle, the pedestrian may be used as a key object, and at this time, a graph corresponding to the position of the pedestrian needs to be formed to remind the driver, and the graph is the key information, and the position on the reflective imaging layer 20 where the key information needs to be displayed corresponds to the display position. In this embodiment, the display position may be a position point or a position range, which may be determined based on actual conditions.
In the embodiment of the present invention, the external object is an object located outside the reflective imaging layer 20, and includes stationary objects such as the road surface and road signs, and may also include movable objects such as motor vehicles, pedestrians, animals and non-motor vehicles. An external object can be projected and mapped onto the reflective imaging layer 20; specifically, along the direction toward the eye box range 32, the external object is projected onto a certain projection position of the reflective imaging layer 20, and this projection position corresponds to the display position on the virtual image. In other words, the external object, the display position, the projection position and the eye box range are collinear, so that a driver at the eye box range sees the external object through the reflective imaging layer 20 at that projection position. Meanwhile, the imaging device 10 is controlled to display the key information at the display position, and since the display position coincides with the projection position of the external object, a driver within the eye box range sees the key information attached to the external object (for example, the external object being framed out), which effectively reminds the driver. When a plurality of external objects exist outside, all of them can be used as key objects, or the external objects about which the driver particularly needs to be reminded can be selected as the key objects; for example, a vehicle ahead, a pedestrian crossing the road, or a traffic light can be used as a key object.
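As a simplified sketch of how such a display position could be computed, the following code intersects the driver's line of sight toward a key object with the plane of the virtual image; the coordinate frame, the plane parameters and the function name are illustrative assumptions rather than the embodiment's actual calibration.

import numpy as np

def display_position(eye_box_center, key_object_pos, plane_point, plane_normal):
    """Intersect the line of sight (eye box -> key object) with the virtual image plane.

    Returns the point on the virtual image where key information should be shown
    so that, seen from the eye box, it overlays the key object; returns None when
    the line of sight is parallel to the plane.
    """
    eye = np.asarray(eye_box_center, dtype=float)
    obj = np.asarray(key_object_pos, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)

    direction = obj - eye
    denom = direction.dot(n)
    if abs(denom) < 1e-9:
        return None
    t = (p0 - eye).dot(n) / denom
    return eye + t * direction

# Eye box at the origin, virtual image plane 2.5 m ahead, pedestrian 20 m ahead
# and 1.5 m to the left of the direction of travel (all values illustrative).
print(display_position((0, 0, 0), (-1.5, 0.2, 20.0), (0, 0, 2.5), (0, 0, 1)))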
On the basis of the above embodiments, the external environment information includes position information of the external object, which may include the current position of the external object and/or the current distance between the local vehicle and the external object. If the external object is a pedestrian, a non-motor vehicle or the like, it generally has a higher safety priority, that is, the position of the pedestrian or the like needs to be considered preferentially while the vehicle is driving so as to avoid a collision; warnings are therefore given preferentially when the external object is a pedestrian, a non-motor vehicle or the like. In this embodiment, when the external object is a pedestrian, an animal or a non-motor vehicle, the determining, by the processor 100, of the key information that currently needs to be displayed according to the external environment information may include:
step A1: and if the external object meets the key warning condition, determining that the external object is in the key state at present, and taking corresponding key warning information as key information, wherein the key warning information comprises one or more of key warning characters, key warning images and key warning videos.
Step A2: and if the external object does not meet the key warning condition, determining that the external object is in a normal state at present, and taking display information for normal display as key information, wherein the display information comprises one or more of an empty set, non-warning characters, non-warning images and non-warning videos.
The key warning conditions comprise one or more of: an empty set (i.e., the condition is always met); the current distance between the local vehicle and the external object being smaller than a preset distance threshold; the external object moving toward the current driving lane; the external object being located in the current driving lane; the external object being located in an object-dense area; and the sight line direction information in the local driving data not matching the current position of the external object.
In the embodiment of the invention, when the external object is a pedestrian, an animal or a non-motor vehicle and the like needing special attention, whether the external object is an object needing important warning or not can be judged. Specifically, whether the external object is an object needing key warning is judged based on key warning conditions; if the external object is an object needing important warning, the external object can be used as a key object, and key information corresponding to the position of the key object is displayed. In this embodiment, if the external object is a pedestrian, an animal or a non-motor vehicle, the external object may be directly used as a key object, that is, the key warning condition at this time is an empty set. Or, if the current distance between the vehicle and the external object is smaller than the preset distance value, the external object is closer to the vehicle, and the current distance can be used as a key state; the preset distance value may be a preset distance value, for example, a safety distance determined based on the vehicle speed. If the external object is moving toward the current driving lane or the external object is located in the current driving lane of the vehicle, it indicates that the vehicle has a high possibility of colliding with the external object, and this may be regarded as a critical state. Alternatively, when it can be determined that the external object is located in a person-intensive area such as a school or a hospital based on GPS, a large number of pedestrians are generally present, and thus the external object can be set to a key state to remind the driver. Alternatively, the information acquisition device 200 may further include an image acquisition device, an infrared sensor, and the like, and determine the sight line orientation information of the driver based on the information acquisition device, such as the positions of both eyes and the sight line of the driver; if the sight line direction information does not match the current position of the external object, it indicates that the driver is most likely not to notice the external object, and at this time, the driver may be set to a key state to remind the driver. The information collecting device 200 may specifically determine the gaze direction information based on an eyeball tracking technology, and may also adopt other technologies, which are not limited herein.
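The decision of steps A1 and A2 can be sketched as follows, assuming a simple record for an external object and boolean flags for each key warning condition; the field names, the threshold and the helper function are hypothetical stand-ins for whatever the information acquisition devices actually report (an empty-set condition would simply make every pedestrian, animal or non-motor vehicle a key object).

from dataclasses import dataclass

@dataclass
class ExternalObject:
    kind: str                   # "pedestrian", "animal", "non_motor_vehicle", ...
    distance_m: float           # current distance from the local vehicle
    moving_toward_lane: bool    # moving toward the current driving lane
    in_current_lane: bool       # located in the current driving lane
    in_dense_area: bool         # near a school, hospital or other dense area (per GPS)
    noticed_by_driver: bool     # driver's sight line matches the object's position

def in_key_state(obj: ExternalObject, distance_threshold_m: float) -> bool:
    """Step A1/A2: true when any key warning condition is met."""
    return (
        obj.distance_m < distance_threshold_m
        or obj.moving_toward_lane
        or obj.in_current_lane
        or obj.in_dense_area
        or not obj.noticed_by_driver
    )

pedestrian = ExternalObject("pedestrian", 12.0, True, False, False, False)
if in_key_state(pedestrian, distance_threshold_m=30.0):
    key_information = "Pedestrian ahead, please yield"   # key warning text (step A1)
else:
    key_information = ""                                  # normal display (step A2)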
In this embodiment, when it is determined that the current state is the key state, important warning information for reminding the driver, such as "pedestrian is present in front, attention is paid to avoidance", "school in front, and pedestrian is paid to" may be generated, and the important warning information is used as the key information. As shown in fig. 7, when a pedestrian 74 is detected in front of the reflective imaging layer 20, the heads-up display device may display an early warning text 501, i.e., "notice pedestrian", on the reflective imaging layer 20, and may also select the pedestrian 74 through a rectangular frame 502 and remind the driver that the pedestrian is currently moving toward the current driving lane 75 through an arrow 504 capable of indicating the movement trend of the pedestrian 74. Meanwhile, the key information may be displayed in a normal manner or a first highlight manner, or auxiliary reminding may be performed in a manner of voice reminding, which is basically similar to that of the above embodiment and is not described herein again.
Optionally, when the position information includes a current distance between the local and the external object, if the external object is located in front, and the current distance is not greater than the safety distance, and a difference between the safety distance and the current distance is greater than a preset distance difference value and/or a duration in a key state exceeds a preset duration threshold, the processor generates a braking signal or a deceleration signal, and sends the braking signal or the deceleration signal to an external driving system.
In the embodiment of the invention, if the current distance between the local vehicle and the external object is not greater than the safe distance, the current state can be a key state; meanwhile, if the difference between the safety distance and the current distance is greater than the preset distance difference value, or the time length in the key state exceeds the preset time length, it indicates that the external object is too close to the vehicle, or the distance between the external object and the vehicle is within a dangerous range for a long time, at this time, the processor 100 may generate a braking signal or a deceleration signal, and send the braking signal or the deceleration signal to an external driving system, so that the vehicle may be decelerated or braked, and the safety distance between the vehicle and the external object may be maintained. The driving system is a system capable of actively controlling the vehicle to run, may be a system embedded in the vehicle, and may also be other driving assistance systems, which is not limited in this embodiment.
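A minimal sketch of this braking/deceleration decision is given below; which condition triggers braking and which triggers deceleration is not specified in the embodiment, so the mapping chosen here, like the parameter names, is only an assumption.

from typing import Optional

def driving_signal(current_distance_m: float, safety_distance_m: float,
                   distance_margin_m: float, key_state_duration_s: float,
                   duration_threshold_s: float) -> Optional[str]:
    """Decide whether to send the external driving system a brake or deceleration request."""
    if current_distance_m > safety_distance_m:
        return None                       # safe distance kept, no intervention
    shortfall_m = safety_distance_m - current_distance_m
    if shortfall_m > distance_margin_m:
        return "brake"                    # object far too close: request braking
    if key_state_duration_s > duration_threshold_s:
        return "decelerate"               # key state has lasted too long: request deceleration
    return None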
On the basis of the embodiment, when the vehicle is a vehicle, the head-up display device can also monitor whether the lane is deviated or not, and determine that the lane is deviated when the vehicle deviates from the lane, and at the moment, early warning can be carried out. Specifically, the information acquisition device may include an image acquisition device, a vehicle-mounted radar, a GPS, and the like, and may determine a lane condition in front of the vehicle, that is, lane position information, based on the image acquisition device and the like, where the lane position information may specifically include a lane where the vehicle is currently located, a lane adjacent to the vehicle, and the like; the position of the vehicle, namely vehicle position information, such as the current driving lane of the vehicle, the orientation of the vehicle and the like, can be determined based on the information acquisition device; if the processor 100 can obtain the lane position information and the vehicle position information, referring to fig. 8, the determining, by the processor 100, the key information that needs to be displayed currently according to the external environment information may include:
step S101: the processor determines vehicle position information of a local vehicle, determines an offset parameter of the local vehicle deviating from a current driving lane according to the lane position information and the vehicle position information, and judges whether the offset parameter is greater than a corresponding offset threshold value; the offset parameter includes an offset angle and/or an offset distance.
In the embodiment of the invention, the vehicle position information can represent the position of the local vehicle or the position of the lane in which the local vehicle is located; whether the vehicle is in the proper lane, that is, whether the vehicle deviates, can then be judged based on the vehicle position information and the lane position information. The vehicle position information may be determined by a positioning device such as a GPS, or by an image acquisition device outside the local vehicle. If the local vehicle is located within the corresponding lane, the offset parameter may be zero, that is, both the offset distance and the offset angle are zero. If the vehicle is offset, for example crossing a lane line, the corresponding offset distance needs to be determined; and if the driving direction of the local vehicle is not consistent with the lane direction, the corresponding offset angle, that is, the angle by which the vehicle deviates from the lane, needs to be determined. Whether an offset currently exists can be determined by comparing the offset parameters with the preset offset thresholds.
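Step S101 can be sketched as follows, taking the offset distance as the lateral displacement of the vehicle from the lane center line and the offset angle as the heading difference from the lane direction; the flat x-y representation of positions and the threshold values are simplifying assumptions.

import math

def offset_parameters(vehicle_xy, vehicle_heading_deg, lane_point_xy, lane_heading_deg):
    """Offset distance from the lane center line and offset angle from the lane direction."""
    lane_dir = math.radians(lane_heading_deg)
    dx = vehicle_xy[0] - lane_point_xy[0]
    dy = vehicle_xy[1] - lane_point_xy[1]
    # Perpendicular distance from the lane center line through lane_point_xy.
    offset_distance = abs(-math.sin(lane_dir) * dx + math.cos(lane_dir) * dy)
    # Smallest absolute difference between the vehicle heading and the lane heading.
    offset_angle = abs((vehicle_heading_deg - lane_heading_deg + 180.0) % 360.0 - 180.0)
    return offset_distance, offset_angle

def exceeds_threshold(offset_distance, offset_angle,
                      distance_threshold_m=0.5, angle_threshold_deg=5.0):
    """Step S101: compare the offset parameters with their thresholds."""
    return offset_distance > distance_threshold_m or offset_angle > angle_threshold_deg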
Step S102: and when the offset parameter is larger than the corresponding offset threshold value, determining that the current vehicle is in a key state, and taking the corresponding offset early warning information as key information, wherein the offset early warning information comprises one or more of offset early warning characters, offset early warning images, offset early warning videos and priority driving lanes.
In the embodiment of the invention, if the current offset parameter is greater than the offset threshold value, the offset angle is too large and/or the offset distance is too large, the vehicle is indicated to have an offset risk at this time, namely the vehicle can be regarded as being in a key state, and corresponding offset early warning information is taken as key information to remind a driver. The deviation early warning information comprises deviation early warning words, deviation early warning images or deviation early warning videos related to lane deviation, and the current priority driving lane can be marked, namely the priority driving lane is used as key information. Specifically, the priority driving lane may be used as an external object, and the projection position of the priority driving lane mapped onto the reflective imaging layer 20 may be determined to determine a corresponding target position, for example, the projection position or an edge of the projection position is used as the target position, and the priority driving lane is displayed at the target position on the reflective imaging layer 20. Specifically, an arrow matching the priority traveling lane, a fan ring with a gradually decreasing width (corresponding to the priority traveling lane requiring turning), a trapezoid (corresponding to the priority traveling lane going straight), and the like may be displayed on the reflective imaging layer 20. The graphic shape displayed on the reflective imaging layer 20 may be specifically determined based on the actual shape of the priority driving lane mapped onto the reflective imaging layer 20.
Optionally, when the offset parameter is greater than the corresponding offset threshold, it may be directly determined that the vehicle is in the key state; or, further, when the offset parameter is greater than the corresponding offset threshold, whether the vehicle currently deviates from the lane, that is, whether the situation can be regarded as a key state, may be comprehensively judged based on the local information of the vehicle. Specifically, the information collecting device 200 includes a speed sensor, an acceleration sensor, an angular velocity sensor and the like, which can be used to collect the vehicle speed, the vehicle acceleration, the vehicle steering angle and the like respectively; the local vehicle's own systems can determine the state of the turn signal lamps, namely whether a turn signal is on. The present embodiment generates vehicle state information based on the vehicle speed, vehicle acceleration, steering angle, turn-signal state and the like, and transmits the vehicle state information to the processor 100 as a kind of local information; the processor 100 then determines whether the vehicle is in a key state based on the current offset parameter and the vehicle state information.
Specifically, when the offset parameter is greater than the corresponding offset threshold and meets the early warning condition, it is determined that the current state is in a key state. The early warning condition comprises one or more of the condition that the vehicle speed is greater than a first preset speed value, the vehicle acceleration is not greater than zero, a steering lamp on the same side of the direction corresponding to the offset angle of the vehicle is not in an on state, the current state is a lane-unchangeable state, and the time length of lane departure is greater than a preset first departure time length threshold value.
In the embodiment of the invention, if the offset parameter is greater than the corresponding offset threshold, the offset risk is indicated to exist, then whether the offset state is normal or not is judged based on the vehicle state information, and if the offset parameter is abnormal, the offset state can be used as a key state. Specifically, if the vehicle speed is greater than the first preset speed value or the vehicle acceleration is not greater than zero, it indicates that the vehicle speed is too high, or the vehicle still does not decelerate under the condition of deviation, and at this time, the vehicle may be considered to be dangerous, that is, the vehicle may be considered to be in a critical state. Or, if the turn signal lamp on the same side of the direction corresponding to the offset angle of the vehicle is not in the on state, for example, the vehicle is offset to the left, and the turn signal lamp on the left side is not on, that is, it can also be indirectly considered that the driver is not turning to the left side in a normative manner, there is a great risk at this time, and the state is a critical state. Alternatively, if the lane change is currently not possible, for example, if there is another vehicle in the lane corresponding to the offset direction, the lane change is not allowed, and if the driver continues to change the lane in the offset direction, the traffic accident is likely to occur, and therefore, the lane change may be regarded as a critical state. Or if the time length of the vehicle deviating from the lane is greater than a preset first deviation time length threshold value, the vehicle is indicated to deviate from the lane for a long time, and the driver should be reminded.
Accordingly, when the deviation parameter is greater than the corresponding deviation threshold value, some situations are normal deviation, and the driver may not be specially reminded at the moment, namely, the situation is normal at the moment, or the situation of vehicle deviation is not considered at the moment. Specifically, the vehicle state information collected by the information collection device 200 includes one or more of a vehicle speed, a vehicle acceleration, a turn signal light state, a double blinker state, and a yaw rate. The processor 100 may specifically make the following determination based on the vehicle state information:
and when the offset parameter is larger than the corresponding offset threshold value and meets a normal condition, determining that the current state is in a key state. The normal conditions comprise one or more of the condition that the vehicle speed is less than a second preset speed value, the vehicle acceleration is less than zero, a steering lamp on the same side of the direction corresponding to the offset angle of the vehicle is in an on state, a double-flashing signal lamp is in an on state, the yaw rate is greater than a preset angular speed threshold value, the time length of lane departure is less than a preset second departure time length threshold value, and the direction corresponding to the offset angle of the sight line direction information of the driver is the same.
In the embodiment of the present invention, if the offset parameter is greater than the corresponding offset threshold, there is an offset risk; but if it is determined based on the vehicle state information that the current offset is a normal one (e.g., a normal lane change), it may be regarded as a normal state and no early warning is given. Specifically, if the vehicle speed is less than the second preset speed value or the vehicle acceleration is less than zero, the vehicle is not fast or is decelerating, the risk is small, and the situation can be treated as a normal state. If the turn signal lamp on the same side as the direction corresponding to the offset angle of the vehicle is on, the vehicle is deviating from the lane but the driver is steering in the direction of the offset, that is, the driver is changing lanes or turning normally, and the situation can be regarded as a normal state. If the hazard lights are on, or the yaw rate is greater than the preset angular speed threshold, the vehicle may need to deviate or change lanes because of a fault, or may be steering or swerving urgently to avoid an emergency; such cases need not be treated as lane departures requiring an early warning, that is, with respect to lane departure they can also be regarded as normal states. In addition, if the direction of the driver's sight line is the same as the direction corresponding to the offset angle, then although the vehicle is currently deviating from the lane, the driver has noticed the deviation; this can also be treated as a normal state, with no extra warning needed to remind the driver.
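Combining the early warning conditions and the normal conditions, a hedged sketch of the state classification might look like the following; the vehicle-state fields, the threshold values and the rule that a normal condition takes precedence when both lists are satisfied are assumptions, not details fixed by the embodiment.

from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float
    acceleration_mps2: float
    turn_signal_on_offset_side: bool     # turn signal on the same side as the offset direction
    hazard_lights_on: bool               # double-flash signal lamps
    yaw_rate_deg_s: float
    offset_duration_s: float             # how long the vehicle has been off the lane
    lane_change_allowed: bool
    driver_looking_toward_offset: bool   # sight line direction matches the offset direction

def classify_offset(s: VehicleState, speed_hi_kmh=80.0, speed_lo_kmh=30.0,
                    yaw_threshold_deg_s=15.0, short_s=1.0, long_s=3.0) -> str:
    """Classify an offset that already exceeds its threshold as "normal" or "key"."""
    normal = (
        s.speed_kmh < speed_lo_kmh
        or s.acceleration_mps2 < 0
        or s.turn_signal_on_offset_side
        or s.hazard_lights_on
        or s.yaw_rate_deg_s > yaw_threshold_deg_s
        or s.offset_duration_s < short_s
        or s.driver_looking_toward_offset
    )
    if normal:
        return "normal"
    warning = (
        s.speed_kmh > speed_hi_kmh
        or s.acceleration_mps2 >= 0
        or not s.turn_signal_on_offset_side
        or not s.lane_change_allowed
        or s.offset_duration_s > long_s
    )
    return "key" if warning else "normal"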
Step S103: and when the offset parameter is not greater than the corresponding offset threshold value, determining that the vehicle is in a normal state currently, and taking corresponding normal prompt information as key information, wherein the normal prompt information comprises one or more of an empty set, normal prompt characters, normal prompt images, normal prompt videos and a preferential driving lane.
In the embodiment of the invention, if the current offset parameter is not greater than the offset threshold, it indicates that the offset distance is not large and/or the offset angle is not large, and at this time, it indicates that the vehicle is running normally, that is, the vehicle can be regarded as being in a normal state, and at this time, the corresponding normal prompt information can be used as the key information.
Alternatively, when the vehicle is in different states, such as the key state or the normal state, the key information may be displayed in different display manners. For example, when the vehicle is currently in the key state, the processor 100 may instruct the imaging device 10 to display the key information in a normal manner or in a first highlighting manner, where the first highlighting manner includes one or more of scrolling, jumping, flashing, highlighting, and displaying in a first color (e.g., red). When the vehicle is currently in the normal state, the processor 100 may instruct the imaging device 10 to display the key information in the normal manner or in a second highlighting manner, where the second highlighting manner includes displaying in a second color (e.g., green).
In the embodiment of the present invention, in the key state or the normal state the target information may be displayed in the same manner (i.e., in the normal manner), but the target information displayed in different states is different; the normal manner includes one or more of static display, scrolling display, jumping display, flashing display, highlighted display, and the like. The scrolling display in the normal manner and the scrolling display in the first highlighting manner are the same kind of display mode but are not exactly the same; for example, scrolling in the normal manner is performed at a first speed and scrolling in the first highlighting manner at a second speed, so both are scrolling displays but at different scrolling speeds. The jumping display, flashing display, highlighted display, and the like are similar, for example differing in jumping frequency, and are not described in detail here.
Or, in different states, not only is the displayed key information different, but the display manner is also different. For example, in the key state, text or images may be displayed in a first color (e.g., red), such as the text "vehicle offset, please turn right"; in the normal state, the head-up display device may display in a second color (e.g., green), for example the text "lane keeping in progress". Alternatively, the same key information may be displayed in different display manners in different states. For example, if the local vehicle is currently offset from the lane, a red travel arrow may be displayed in an AR (Augmented Reality) form on the current travel lane; if the local vehicle is currently driving normally, that is, there is no lane departure, a green travel arrow may be displayed on the current travel lane in AR form. Or, if the external object is a pedestrian and the head-up display device currently needs to mark the pedestrian in an AR manner, for example by marking the position of the pedestrian with a rectangular frame, then in the key state the rectangular frame may be displayed in the first color (e.g., red), i.e., a red rectangular frame is displayed, and in the normal state the rectangular frame may be displayed in the second color (e.g., green), i.e., a green rectangular frame is displayed. As shown in fig. 7, if the pedestrian 74 is not moving toward the current lane of travel, it may be marked with a green rectangular box 502; if the pedestrian 74 moves toward the current lane of travel, it may be marked with a red rectangular box 502.
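A minimal sketch of the state-dependent display styles discussed above might look as follows; the style table and field names are purely illustrative assumptions, not the patent's actual rendering interface.

    # Hypothetical style table: the same key information is rendered differently
    # in the key state and the normal state.
    DISPLAY_STYLES = {
        "key":    {"mode": "scroll", "scroll_speed": 2.0, "color": "red",   "blink_hz": 2.0},
        "normal": {"mode": "scroll", "scroll_speed": 1.0, "color": "green", "blink_hz": 0.0},
    }

    def build_draw_command(state, text):
        # Combine the text to show with the style chosen for the current state.
        style = DISPLAY_STYLES[state]
        return {"text": text, **style}

    print(build_draw_command("key", "vehicle offset, please turn right"))
    print(build_draw_command("normal", "lane keeping in progress"))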
Specifically, in this embodiment, there are two cases when the vehicle is in the normal state. If the offset parameter is not greater than the corresponding offset threshold, the vehicle is running normally without offset, and simple key information may be determined, such as displaying the text "lane keeping in progress". If the offset parameter is greater than the corresponding offset threshold but the situation belongs to the normal state, the vehicle is currently deviating from the lane but is changing lanes normally, turning, or the like, and the corresponding key information can be displayed in a prompting manner. For example, the head-up display device may display in AR form images corresponding to the current lane and the turning lane, such as projecting a blue direction arrow pointing to the turning lane, projecting a blue virtual road attached to the current road, and projecting a green lane attached to the turning lane; alternatively, an abridged map of the road may be projected, including the current lane and the turning lane, which may be distinguished by particular colors and shapes. For example, when the driver is about to exit the highway and turns onto the right ramp, images of the main lane and the ramp are projected on the reflective imaging layer 20 together with an arrow pointing to the ramp. As shown in fig. 9, the head-up display device may determine that the current driving lane 75 is a right-turn lane according to the lane position information corresponding to the current driving lane 75; if the vehicle continues straight, the offset angle of the vehicle will increase, so the early warning text 501, that is, "please turn right", may be displayed on the reflective imaging layer 20, and at the same time an arrow 504 attached to the current driving lane 75 may be displayed to intuitively remind the driver to turn right. Or, when the driver changes lanes or overtakes, the reflective imaging layer 20 may also project images of the current lane and the overtaking lane, and may show an arrow or a flashing indication to remind the driver of the lane-change trajectory. As shown in fig. 10, if the driver is currently changing lanes to the left, the direction corresponding to the offset angle of the vehicle is the left; if the driver has not turned on the left turn signal, the lane change is currently in violation, and a warning text 501 "please turn on the left turn light" may be displayed on the reflective imaging layer 20 to remind the driver to turn on the left turn signal; at the same time, the current direction of travel of the vehicle may also be indicated by the arrow 504, alerting the driver that the vehicle is currently drifting to the left.
Optionally, during driving, the preferential driving lane can be displayed in real time regardless of whether the vehicle is in the key state or the normal state. Specifically, the processor 100 determines the priority driving lane of the vehicle according to the lane position information and the vehicle position information, takes the priority driving lane as a key object, determines the projection position of the key object onto the reflective imaging layer 20, and instructs the imaging device 10 to display preset key information at the projection position or at the edge of the projection position.
In the embodiment of the invention, the priority driving lane of the vehicle can be determined in real time, the projection position at which the priority driving lane is projected onto the reflective imaging layer 20 is determined based on the position of the priority driving lane, and the key information is then displayed. Since the lane extends away from the vehicle (its distance from the vehicle gradually increases along its length), the priority driving lane cannot be treated as a single point; instead, a plurality of points may be selected from the priority driving lane as sampling points, so that the head-up display apparatus can more accurately determine at which positions on the reflective imaging layer 20 to display the content attached to the priority driving lane.
Further, the priority driving lane may be continuously displayed, with different display manners in different states. For example, in the normal state the priority driving lane is displayed in green, and when the vehicle is offset it may be displayed in red. Specifically, a figure, an arrow, or the like visually attached to the priority driving lane may be displayed to guide the driver.
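The sampling-and-projection idea can be sketched as below, under the simplifying assumption that the reflective imaging layer is approximated by a vertical plane a fixed distance ahead of the eye box; the coordinate convention and function names are assumptions for illustration only.

    import numpy as np

    def sample_lane_points(lane_centerline, n=8):
        # lane_centerline: (N, 3) array of points along the priority driving lane,
        # in vehicle coordinates (x forward, y left, z up). Pick n evenly spaced samples.
        idx = np.linspace(0, len(lane_centerline) - 1, n).astype(int)
        return lane_centerline[idx]

    def project_to_layer(points, eye_pos, layer_x):
        # Intersect each eye->point ray with the plane x = layer_x, a stand-in for
        # the reflective imaging layer; returns (n, 2) coordinates (y, z) on the layer.
        hits = []
        for p in points:
            d = p - eye_pos
            t = (layer_x - eye_pos[0]) / d[0]
            hits.append((eye_pos + t * d)[1:])
        return np.array(hits)

    lane = np.array([[float(x), 1.8, 0.0] for x in range(5, 60)])  # toy lane centerline
    marks = project_to_layer(sample_lane_points(lane), np.array([0.0, 0.0, 1.2]), 2.0)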
On the basis of the above embodiment, if the offset parameter is greater than the corresponding offset threshold, and the difference between the offset parameter and the offset threshold is greater than the preset offset difference value and/or the time length in the offset state exceeds the preset safe offset time length, a braking signal or a deceleration signal may be generated, and the braking signal or the deceleration signal may be sent to an external driving system.
In this embodiment, when the current state is the key state, other reminding manners may also be adopted as auxiliary reminders. Specifically, the processor 100 may be further configured to: send an early warning voice to a sound generating device and instruct the sound generating device to play the early warning voice; or send a vibration signal to a vibration device to instruct the vibration device to vibrate, the vibration device being a device that can be brought into contact with the user. In this embodiment, a speaker may be added to the head-up display device, or a speaker on the vehicle may be used, to perform a voice reminder; the early warning voice may be a warning ring tone without specific meaning, or a specific voice such as "Caution! Lane departure!". In addition, a mechanical vibration device may be provided at a position the driver directly contacts, such as the steering wheel or the seat of the vehicle, so that the driver can be alerted by vibration in the key state.
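A small sketch of such an auxiliary reminder, where play_voice and vibrate stand in for whatever sound-generating and vibration device interfaces the vehicle actually exposes (both are assumptions):

    def auxiliary_remind(play_voice, vibrate, message="Caution! Lane departure!"):
        # In the key state the visual warning can be backed up by a warning voice
        # and by a vibration device in the steering wheel or seat.
        play_voice(message)
        vibrate(duration_ms=500, strength=0.8)

    # Example with stand-in device functions:
    auxiliary_remind(lambda msg: print("VOICE:", msg),
                     lambda duration_ms, strength: print("VIBRATE", duration_ms, strength))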
Optionally, in the embodiment of the present invention, if the offset parameter of the vehicle is greater than the corresponding offset threshold, there may currently be an offset risk; meanwhile, if the difference between the offset parameter and the offset threshold is greater than a preset offset difference value, or the time length in the offset state exceeds a preset safe offset time length, the current offset of the vehicle is too large, or the vehicle has been running while offset for a long time, and the risk is higher. In this case the processor 100 may generate a braking signal or a deceleration signal and send it to an external driving system, so that the vehicle can be decelerated or braked, avoiding a traffic accident caused by a serious offset of the vehicle.
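The escalation from warning to braking could look like the following sketch; the margin and duration values are arbitrary placeholders, and send_to_driving_system stands in for the external driving-system interface.

    def maybe_request_braking(offset_param, offset_threshold, offset_duration,
                              send_to_driving_system,
                              offset_margin=0.5, safe_offset_duration=3.0):
        # Escalate only when the offset is far beyond its threshold or has
        # persisted too long; otherwise the visual/audio warning is enough.
        if offset_param <= offset_threshold:
            return False
        too_far = (offset_param - offset_threshold) > offset_margin
        too_long = offset_duration > safe_offset_duration
        if too_far or too_long:
            send_to_driving_system({"type": "decelerate", "reason": "serious lane offset"})
            return True
        return False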
On the basis of the above embodiments, the information collecting apparatus 200 may include one or more image collecting devices or distance sensors (the number is not limited, and a plurality is preferred), which may be arranged outside or inside the local vehicle as determined by actual conditions. Based on the image collecting device or the distance sensor, position information such as the current position of the external object or the current distance between the local vehicle and the external object may be determined. The information collecting apparatus 200 sends the acquired position information to the processor 100, so that the processor 100 can determine the key information that currently needs to be displayed based on the external environment information including the position information; this process may specifically include:
determining the relative speed of the external object according to the change value of the current position of the external object, and determining the meeting time according to the current distance; and when the meeting time is less than a preset meeting time threshold and/or the current distance is less than a preset distance threshold, determining that the current state is in a key state, and using the position information of the external object as key information.
In the embodiment of the invention, when other external objects, such as pedestrians or vehicles, exist outside the local vehicle, it can be predicted whether a collision with the external object is possible and when it might occur. In this embodiment, the description mainly takes the external object as another vehicle (such as a co-traveling vehicle), because the local vehicle generally travels at a higher speed while pedestrians are slower. Specifically, the head-up display device may determine the relative speed of the external object based on the change in its current position, where the relative speed is the speed of the external object relative to the local vehicle; if the speeds are the same, the relative speed is zero. After the relative speed of the external object is determined, the meeting time of the local vehicle and the external object can be determined based on the current distance. If the meeting time is smaller than a preset meeting time threshold, there is a collision risk between the local vehicle and the external object; the current state is then determined to be the key state, and key information related to the external object is displayed to remind the driver to pay attention to it. Alternatively, when the current distance is smaller than the preset distance threshold, it may be directly determined that the current state is the key state.
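A minimal sketch of the relative-speed and meeting-time computation, assuming two consecutive distance samples taken dt seconds apart (all thresholds are illustrative):

    def time_to_meet(prev_distance, curr_distance, dt):
        # Closing speed estimated from two distance samples; positive means
        # the external object and the local vehicle are getting closer.
        closing_speed = (prev_distance - curr_distance) / dt
        if closing_speed <= 0:
            return None          # not closing, no meeting expected
        return curr_distance / closing_speed

    def is_key_state(prev_distance, curr_distance, dt,
                     meet_time_threshold=3.0, distance_threshold=10.0):
        ttm = time_to_meet(prev_distance, curr_distance, dt)
        return ((ttm is not None and ttm < meet_time_threshold)
                or curr_distance < distance_threshold)

    print(is_key_state(prev_distance=25.0, curr_distance=20.0, dt=0.5))  # True: meets in about 2 s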
Wherein the detection range may be set in advance. Specifically, before determining the relative velocity of the external object according to the change value of the current position of the external object, the processor 100 is further configured to: and judging whether the external object is positioned in a preset detection range, and determining the relative speed of the external object according to the change value of the current position of the external object when the external object is positioned in the detection range.
In the embodiment of the invention, taking the external object as a co-traveling vehicle as an example, the position of the co-traveling vehicle can be determined from the collected co-traveling vehicle information. When a co-traveling vehicle enters the detection range, its moving track can be converted into a digital signal, and the relative speed and the expected meeting time are calculated; when the distance between the local vehicle and a nearby co-traveling vehicle is smaller than or equal to a preset threshold, or when the expected meeting time between them is smaller than or equal to a preset meeting time threshold, alarm information is triggered. Vehicles in adjacent lanes are preferentially selected as co-traveling vehicles, since their tracks have reference and recording value; vehicles ahead of and behind the local vehicle in the same lane are also selected, since monitoring their tracks and safety has warning value.
Optionally, the information collecting apparatus 200 may also obtain information about vehicles near the current environment via networking, to supplement the co-traveling vehicle information (i.e., the position information of external objects and the like) when encountering a sharp turn, an arched bridge, or a road with a large drop. Correspondingly, the head-up display device can also share the co-traveling vehicle information it has acquired, enabling mutual communication and information sharing among vehicles.
In this embodiment, the head-up display device displays information over a large area of the reflective imaging layer 20 and can mark general information that is helpful for driving. For example: the road width of the safe driving area and the driving central axis, with the local vehicle as the axis, can be marked, and nearby vehicles can be marked according to their distance, so that vehicles at a safe distance are marked with green safety marks and vehicles that are too close are marked with red warning marks. According to the real-time road conditions and networked road information, the driving route can be assisted in the picture, with auxiliary lines and steering signs marked on the correct driving road. The position for displaying the key information can be concentrated directly in front of the driver, or displayed in front of the front passenger to remind passengers. If a co-traveling vehicle exists within the detection range, its information can be displayed on the reflective imaging layer 20, including related information such as its speed and distance; if the co-traveling vehicle moves relative to the local vehicle within the detection range, its initial display position and movement track can be marked; if the initial display position of the co-traveling vehicle is on the side of the safe driving area line close to the driving central axis, its movement track approaches the central axis as the relative distance decreases. If the relative distance or the expected meeting time is less than or equal to the preset value, key information for alarming is displayed, and the key information may be displayed with deepened color, scrolling, flashing, jumping, and the like to enhance the warning effect.
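The detection-range filtering and green/red distance marking described above might be sketched as follows (range dimensions, safe distance, and field names are assumptions):

    def in_detection_range(rel_x, rel_y, max_forward=80.0, max_lateral=7.5):
        # True if a co-traveling vehicle lies inside the preset detection range
        # around the local vehicle (roughly the own lane plus adjacent lanes).
        return abs(rel_x) <= max_forward and abs(rel_y) <= max_lateral

    def mark_nearby_vehicles(vehicles, safe_distance=30.0):
        # vehicles: list of dicts with an id and relative x/y position in metres.
        # Returns display marks: green safety mark for a safe gap, red warning mark otherwise.
        marks = []
        for v in vehicles:
            if not in_detection_range(v["x"], v["y"]):
                continue
            distance = (v["x"] ** 2 + v["y"] ** 2) ** 0.5
            marks.append({"id": v["id"],
                          "color": "green" if distance >= safe_distance else "red",
                          "distance_m": round(distance, 1)})
        return marks

    print(mark_nearby_vehicles([{"id": "A", "x": 12.0, "y": 3.5},
                                {"id": "B", "x": 60.0, "y": -3.5}]))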
On the basis of the above embodiments, the information collecting apparatus 200 may include an image collecting device or a communication module. Information of an external traffic sign, or the information contained in a sign, may be collected by the image collecting device; alternatively, the traffic identification information for the current position may be acquired from a public database, such as a traffic broadcast, via the communication module. The traffic identification information comprises a traffic sign position and a traffic sign state, for example, the position and color of a traffic light. After the processor 100 acquires the traffic identification information, it may determine the key information that currently needs to be displayed according to the external environment information including the traffic identification information; this process specifically includes: selecting key traffic identification information from the traffic identification information, and converting the key traffic identification information into corresponding key information in image form or text form.
On roads with complex traffic environments, especially roads the driver is not familiar with, the driver may not observe critical traffic information in time. Traffic signs may also be difficult to see in poor visibility, the driver's view of a sign may be blocked by an obstacle while driving, or the driver may miss key information because many items demand attention at the same time and the information is not prioritized; all of these bring trouble and even accidents to the driver. In this embodiment, the corresponding traffic identification information is acquired and displayed by the head-up display device to assist the driver's driving operations, so that the driver can better judge the information and act on it.
In this embodiment, because various kinds of traffic sign information exist outside the vehicle, the key traffic identification information that needs to be displayed is selected from this information, so that excessive information does not interfere with the driver's normal driving. Specifically, traffic lights, traffic control notices (electronic notice boards), turning information at a fork intersection, a nearby accident ahead, speed limit indications, and the like can be used as key traffic identification information. Non-key information may include: scenic spot information, information about places unrelated to the trip, some service information, and the like.
Optionally, whether the prompt information is displayed may be determined according to the driving state. In this embodiment, the processor 100 selecting the key traffic identification information from the traffic identification information includes: acquiring vehicle state information, where the vehicle state information comprises one or more of vehicle speed, vehicle acceleration, vehicle position, vehicle angular speed, wheel steering angle, and vehicle parameters; and judging whether it is currently suitable to display information according to the vehicle state information, and when it is, selecting key traffic identification information from the traffic identification information, or selecting key traffic identification information corresponding to the vehicle state information from the traffic identification information.
In this embodiment, whether it is currently suitable to display information is determined based on vehicle state information such as vehicle speed and vehicle position. If the local vehicle is currently driving normally, the head-up display device is suitable for displaying information, and key traffic identification information, such as a traffic light, is selected from the traffic identification information and displayed. Alternatively, key traffic identification information corresponding to the vehicle state information can be selected from the traffic identification information; for example, if the current vehicle speed is 100 km/h and the sign ahead reads "speed limit 80", the "speed limit 80" sign can be displayed to the driver as key traffic identification information to remind the driver to slow down. Or, the vehicle parameters include the vehicle height; if the sign ahead reads "height limit 2 m", that sign may be displayed, and the driver may also be reminded based on whether the vehicle height satisfies the sign's requirement, for example reminded to stop or change to another lane when the vehicle is too tall. If it is not currently suitable to display information, the information may be displayed selectively or not displayed at all, to avoid affecting the driver's normal driving. For example, when the local vehicle is overtaking or turning, the driver needs to concentrate on observing the surrounding vehicles, and it is not appropriate to prompt information irrelevant to driving at that moment.
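The selection of key traffic identification information against the vehicle state can be sketched as below; the sign encoding and field names are assumed for illustration and are not the patent's data format.

    def select_key_signs(signs, vehicle_speed_kmh, vehicle_height_m):
        # signs: list of dicts such as {"type": "speed_limit", "value": 80}.
        key = []
        for sign in signs:
            if sign["type"] == "speed_limit" and vehicle_speed_kmh > sign["value"]:
                key.append(sign)   # e.g. driving 100 km/h past a "speed limit 80" sign
            elif sign["type"] == "height_limit" and vehicle_height_m > sign["value"]:
                key.append(sign)   # vehicle too tall for the clearance ahead
            elif sign["type"] in ("traffic_light", "traffic_control", "accident_ahead"):
                key.append(sign)   # always treated as key traffic identification information
        return key

    print(select_key_signs([{"type": "speed_limit", "value": 80},
                            {"type": "scenic_spot", "value": None}],
                           vehicle_speed_kmh=100, vehicle_height_m=1.6))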
On the basis of the above embodiment, after the processor 100 selects the key traffic identification information from the traffic identification information, it may convert the key traffic identification information into corresponding key information in image form or text form. For example, some traffic signs that are easily misread can be converted into text form to avoid misunderstanding by the driver; alternatively, the key traffic identification information can be converted into an image or the like that is more noticeable to the driver. In addition, if the driver belongs to a color-vision-deficient group, such as people with color weakness or color blindness, the key traffic identification information can be converted into key information in a corresponding image form or text form, so that the driver can read the traffic information normally. Specifically, when the driver belongs to the color-vision-deficient group, the processor 100 uses the traffic light as key traffic identification information and then converts the key traffic identification information into corresponding key information in text form or image form; or converts the key traffic identification information into key information with a preset color.
In the embodiment of the invention, when the driver has a color vision deficiency, the key traffic identification information can be converted into key information in an image form or text form readable by such drivers, according to the characteristics of the deficiency; for example, traffic lights are generally circular, and the red light can be converted into a square patch image while the green light is converted into a triangular patch image with a text label. Alternatively, the key traffic identification information may be converted into key information in a preset color that the color-vision-deficient driver can recognize. For example, for color-blind drivers, the red and green lights can be converted into images in colors they can identify, such as purple, blue, or black and white; for drivers with color weakness, the signal lamp image can be subjected to color mixing processing, adjusting the saturation of the image (with the hue essentially unchanged) and converting it into a highly saturated image, so that drivers with color weakness can recognize the signal lamp normally.
In this embodiment, the local vehicle may be provided with an identity recognition system and a database or memory for storing identity information. The driver stores personal identity information, such as whether he or she has color weakness or color blindness, in the in-vehicle memory in advance; the identity recognition system can recognize the identity of the current driver through information such as the iris, fingerprint, voiceprint, or face, query whether the user is color-vision deficient according to the recognized identity, and determine the mode of subsequent image information conversion: for a driver with color weakness, the color saturation of the image is enhanced; for a red-green color-blind driver, red and green images are converted; for a blue-yellow color-blind driver, yellow images are converted; and so on. Meanwhile, the generated conversion mode can be stored for direct use when the driver drives the local vehicle later.
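A sketch of how a traffic-light state could be converted per driver profile; the profiles, shapes, and colour substitutions below are illustrative assumptions rather than the patent's concrete mapping.

    def convert_signal_for_driver(signal_color, driver_profile):
        # signal_color: "red", "green" or "yellow"; driver_profile comes from the
        # identity recognition step, e.g. "normal", "red_green_blind", "color_weak".
        if driver_profile == "red_green_blind":
            # Replace colour with a shape plus text the driver can distinguish.
            shapes = {"red": ("square", "STOP"), "green": ("triangle", "GO"),
                      "yellow": ("circle", "WAIT")}
            shape, text = shapes[signal_color]
            return {"shape": shape, "text": text, "color": "black_white"}
        if driver_profile == "color_weak":
            # Keep the hue but boost saturation so the light stays recognisable.
            return {"shape": "circle", "text": "", "color": signal_color, "saturation": 1.0}
        return {"shape": "circle", "text": "", "color": signal_color}

    print(convert_signal_for_driver("red", "red_green_blind"))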
Optionally, the driver can also be reminded by voice. For example, the voice may announce "red light ahead", or count down "10 seconds until the green light, 9 seconds...", providing multi-mode auxiliary driving reminders for color-vision-deficient drivers.
On the basis of the above-described embodiments, the head-up display device may also display a bird's-eye view graph from a top view, which may include information about other vehicles around the local vehicle, the surrounding traffic and road conditions, and so on, so that the driver can grasp the overall situation around the vehicle. Specifically, the information collection device 200 may be an image collection device (e.g., an on-board camera or a driving recorder), a laser radar, an infrared camera, or the like arranged around the vehicle, and the external environment information around the local vehicle is sensed and acquired by the information collection device 200; the external environment information may be real-time surrounding vehicle distance information, road surface information, traffic signal lamps, pedestrian information, and the like, and the processor 100 generates the corresponding bird's-eye view graph based on the external environment information collected by the information collection device 200. Specifically, the external environment information may include position information of the external object, where the position information includes the current position of the external object and/or the current distance between the local vehicle and the external object. The processor 100 determining the key information that currently needs to be displayed according to the external environment information includes:
when the integral display instruction is obtained, determining the relative position between the external object and the local vehicle according to the position information of the external object, generating a bird's-eye view graph according to the relative position of the external object, and taking the bird's-eye view graph as the key information; the integral display instruction is an instruction input by the user, or an instruction automatically generated when the parking time exceeds a preset parking time threshold or the vehicle is currently on a congested road section.
In the embodiment of the invention, the driver can actively request the bird's-eye view graph, that is, the driver actively inputs the integral display instruction; or the head-up display device may automatically generate the integral display instruction, for example when the parking time exceeds a preset parking time threshold (e.g., 7 seconds or 10 seconds) or when the vehicle is currently on a congested road section, so that the driver can view the overall surrounding situation. The bird's-eye view graph includes the relative position between the external object and the local vehicle; the external object may be other vehicles, pedestrians, roads, buildings, or traffic lights around the vehicle, or a destination. When the external object is the destination, navigation information may be generated according to the current road conditions and displayed as part or all of the information in the bird's-eye view graph.
For example, the head-up display device monitors the positions and distances of all other vehicles around the local vehicle in real time and generates a bird's-eye view graph of the local vehicle, which schematically represents the positions of external objects in each direction around the local vehicle so that the driver can quickly view the surrounding environment. Referring to fig. 11, other vehicles around the local vehicle 73 may be displayed on the reflective imaging layer 20 in the bird's-eye view 503; for example, a rear vehicle 72 that is about to overtake the local vehicle 73 is displayed in the bird's-eye view 503, and an early warning text 501, namely "rear overtaking", may be displayed. In addition, the other vehicles around may be displayed in different colors to indicate different levels of danger.
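Finally, the trigger for the bird's-eye view graph and its schematic construction can be sketched as follows; the timing threshold and record fields are assumptions used only to make the example runnable.

    import time

    def should_show_birds_eye_view(user_request, stop_start_time, in_congestion,
                                   stop_threshold_s=7.0):
        # The integral display instruction is either entered by the user or generated
        # automatically after a long stop or on a congested road section.
        stopped_long = (stop_start_time is not None
                        and time.time() - stop_start_time > stop_threshold_s)
        return user_request or stopped_long or in_congestion

    def birds_eye_view(local_pos, external_objects):
        # Build a schematic top view: each entry holds the object's position
        # relative to the local vehicle (dx forward, dy lateral).
        return [{"id": o["id"], "kind": o["kind"],
                 "dx": o["x"] - local_pos[0], "dy": o["y"] - local_pos[1]}
                for o in external_objects]

    view = birds_eye_view((0.0, 0.0),
                          [{"id": "rear_72", "kind": "vehicle", "x": -8.0, "y": -3.2}])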
The above description is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (20)

1. A heads-up display device based on environment display, comprising: an imaging device and a processor; the imaging device is arranged on one side of the reflective imaging layer and comprises a backlight source component, a main optical axis adjusting component, a light beam expanding component and a liquid crystal panel;
the main optical axis adjusting component is used for converging light rays emitted by the backlight source component to the same preset area, and the preset area is a position or a range in the range of the eye box;
the light beam expanding component and the liquid crystal panel are arranged on one side of the main optical axis adjusting component close to the reflection imaging layer; the light beam expanding component is used for diffusing emergent light of the main optical axis adjusting component and forming light spots covering the range of the eye box; the liquid crystal panel is used for shielding or transmitting light rays, emitting imaging light rays facing the reflective imaging layer when the imaging light rays are transmitted, and reflecting the imaging light rays to the range of the eye box through the reflective imaging layer;
the processor is connected with the imaging device; the processor is used for acquiring external environment information, determining key information which needs to be displayed currently according to the external environment information, and controlling the imaging device to display the key information.
2. The head-up display device of claim 1,
the main optical axis adjusting component is a light focusing component arranged towards the preset area;
or, the main optical axis adjusting component comprises a light ray directing component; the light directional component is arranged between the backlight source component and the light beam expanding component and used for converging light rays emitted by different backlight source components to the same preset area.
3. The heads-up display device of claim 2 wherein the primary optical axis adjustment component includes a collimating component;
the collimation component is used for adjusting the emergent direction of the light rays emitted by the backlight source component to be within a preset angle range, and emitting the adjusted light rays to the light beam expanding component.
4. The heads-up display device of claim 2 wherein the main optical axis adjustment component further comprises a light reflecting component;
the light reflecting component comprises a lamp cup; the lamp cup is a hollow shell surrounded by a reflecting surface, and the opening direction of the lamp cup faces the light beam expanding part; the tail end of the lamp cup, which is far away from the opening, is used for arranging the backlight source component;
alternatively, the light reflecting member comprises a solid lamp cup; the solid lamp cup is a solid transparent component with a reflecting surface, and the refractive index of the solid transparent component is greater than 1; the opening direction of the solid lamp cup faces the light beam expanding component; the end part of the solid lamp cup far away from the opening is used for arranging the backlight source component; the light emitted by the backlight source component is totally reflected when being emitted to the reflecting surface.
5. The head-up display device of claim 4,
the end part of the solid transparent component far away from the opening of the solid lamp cup is provided with a cavity, and one surface of the cavity close to the opening of the solid lamp cup is a convex surface;
or the middle position of the solid transparent component close to the end part of the solid lamp cup opening is provided with a slot, and the bottom surface of the slot is a convex surface;
or, the main optical axis adjusting part further includes: a collimating component; the collimating component is arranged inside the lamp cup, and the size of the collimating component is smaller than the size of the opening of the lamp cup; the collimation component is used for collimating partial light rays emitted by the backlight source component in the lamp cup and then emitting the collimated partial light rays to the light beam expanding component.
6. The head-up display device according to claim 1, wherein the processor determines key information that needs to be displayed currently according to the external environment information, and controls the imaging device to display the key information, and includes:
the processor determines key information needing to be displayed currently according to the external environment information and determines a display position; controlling a liquid crystal unit corresponding to the display position in the imaging device to work so as to display the key information;
the display position is a preset position for displaying the key information; or the display position is an intersection position between the key object and a virtual image of the imaging device, which is formed by the reflective imaging layer when the key object is projected to the eye box range; the key object is an object selected from external objects.
7. The heads-up display device of claim 6, wherein the external environment information includes location information of an external object, the location information including a current location of the external object and/or a current distance between a local and the external object;
the processor determines the key information needing to be displayed currently according to the external environment information, and the key information comprises the following steps:
when the external object is a pedestrian, an animal or a non-motor vehicle, if the external object meets a key warning condition, determining that the external object is in a key state at present, and taking corresponding key warning information as the key information, wherein the key warning information comprises one or more of key warning characters, key warning images and key warning videos;
if the external object does not meet the key warning condition, determining that the external object is in a normal state at present, and taking display information for normal display as the key information, wherein the display information comprises one or more of an empty set, non-warning characters, non-warning images and non-warning videos;
the key warning condition comprises one or more of an empty set, the current distance between the local and the external object being smaller than a preset distance threshold, the external object moving towards the current driving lane, the external object being located in the current driving lane and located in an object-dense area, and the sight line position information in the local driving data not matching with the current position of the external object.
8. The heads-up display device of claim 7 wherein when the location information includes a current distance between the local and the external object, if the external object is located in front, the processor generates a braking signal or a deceleration signal and sends the braking signal or the deceleration signal to an external driving system when the current distance is not greater than a safe distance, and a difference between the safe distance and the current distance is greater than a preset distance difference and/or a duration of the key state exceeds a preset duration threshold.
9. The heads-up display device of claim 6 wherein the external environmental information includes lane position information;
the processor determines the key information needing to be displayed currently according to the external environment information, and the key information comprises the following steps:
the processor determines vehicle position information of a local vehicle, determines an offset parameter of the local vehicle deviating from a current driving lane according to the lane position information and the vehicle position information, and judges whether the offset parameter is greater than a corresponding offset threshold value; the offset parameter comprises an offset angle and/or an offset distance;
when the offset parameter is larger than the corresponding offset threshold value, determining that the current state is in a key state, and taking corresponding offset early warning information as the key information, wherein the offset early warning information comprises one or more of offset early warning characters, offset early warning images, offset early warning videos and priority driving lanes;
and when the offset parameter is not greater than the corresponding offset threshold value, determining that the vehicle is in a normal state currently, and taking corresponding normal prompt information as the key information, wherein the normal prompt information comprises one or more of an empty set, normal prompt characters, normal prompt images, normal prompt videos and a preferential driving lane.
10. The heads-up display device of claim 9 wherein the processor is further configured to obtain vehicle state information including one or more of vehicle speed, vehicle acceleration, turn signal light state, dual blinker state, yaw rate;
when the offset parameter is larger than the corresponding offset threshold value and meets the early warning condition, determining that the current state is in a key state;
when the offset parameter is larger than the corresponding offset threshold value and meets a normal condition, determining that the current state is in a normal state;
the early warning condition comprises one or more of the following: the vehicle speed is greater than a first preset speed value, the vehicle acceleration is not greater than zero, the turn signal lamp on the same side as the direction corresponding to the offset angle of the vehicle is not in an on state, the vehicle is currently in a state in which lane changing is not allowed, and the time length of lane departure is greater than a preset first departure time length threshold value;
the normal condition includes one or more of that the vehicle speed is less than a second preset speed value, the vehicle acceleration is less than zero, a turn signal lamp on the same side as the direction corresponding to the offset angle of the vehicle is in an on state, a double-flashing signal lamp is in an on state, the yaw rate is greater than a preset angular speed threshold value, the time length of lane departure is less than a preset second departure time length threshold value, and the direction corresponding to the sight line direction information of the driver is the same as the direction corresponding to the offset angle.
11. The head-up display device according to claim 9, wherein when the offset parameter is greater than a corresponding offset threshold value, and a difference between the offset parameter and the offset threshold value is greater than a preset offset difference value and/or a time length in an offset state exceeds a preset safe offset time length, a braking signal or a deceleration signal is generated and sent to an external driving system.
12. The heads-up display device of claim 6, wherein the external environment information includes location information of an external object, the location information including a current location of the external object and/or a current distance between a local and the external object;
the processor determines the key information needing to be displayed currently according to the external environment information, and the key information comprises the following steps:
determining the relative speed of the external object according to the change value of the current position of the external object, and determining the meeting time according to the current distance; and when the meeting time is smaller than a preset meeting time threshold and/or the current distance is smaller than a preset distance threshold, determining that the current state is in a key state, and taking the position information of the external object as the key information.
13. The heads-up display device of claim 12 wherein the processor, prior to determining the relative velocity of the foreign object based on the change in the current position of the foreign object, is further configured to:
and judging whether the external object is located in a preset detection range, and determining the relative speed of the external object according to the change value of the current position of the external object when the external object is located in the detection range.
14. The heads-up display device of claim 6 wherein the external environmental information includes traffic identification information including a traffic sign location and a traffic sign status;
the processor determines the key information needing to be displayed currently according to the external environment information, and the key information comprises the following steps:
and selecting key traffic identification information from the traffic identification information, and converting the key traffic identification information into corresponding key information in an image form or a character form.
15. The heads-up display device of claim 14 wherein the selecting key traffic identification information from the traffic identification information comprises:
acquiring vehicle state information, wherein the vehicle state information comprises one or more of vehicle speed, vehicle acceleration, vehicle position, vehicle angular speed, wheel steering angle and vehicle parameters;
and judging whether the display information is suitable at present according to the vehicle state information, and when the display information is suitable at present, selecting key traffic identification information from the traffic identification information, or selecting key traffic identification information corresponding to the vehicle state information from the traffic identification information.
16. The head-up display device according to claim 14, wherein the selecting key traffic identification information from the traffic identification information and converting the key traffic identification information into corresponding key information in an image form or a text form comprises:
when the driver belongs to the color vision defect crowd, the traffic indicator lamp is used as key traffic identification information;
converting the key traffic identification information into corresponding key information in a character form or an image form; or converting the key traffic identification information into key information with preset colors.
17. The heads-up display device according to any one of claims 6 to 16, wherein the external environment information includes position information of an external object, the position information including a current position of the external object and/or a current distance between a local and the external object;
the processor determines the key information needing to be displayed currently according to the external environment information, and the key information comprises the following steps:
when an integral display instruction is acquired, determining the relative position between the external object and the local according to the position information of the external object, generating a bird-eye view graph according to the relative position of the external object, and taking the bird-eye view graph as key information;
the integral display instruction is an instruction input by a user, or the integral display instruction is an instruction which is automatically generated when the parking time exceeds a preset parking time threshold value or is currently in a congested road section.
18. The head-up display device according to any one of claims 7 to 13,
when currently in the key state, the processor instructs the target display device to display the key information in a normal manner or a first highlighting manner, wherein the first highlighting manner comprises one or more of scrolling, jumping, flashing, highlighting and displaying in a first color;
when currently in the normal state, the processor instructs the target display device to display the key information in the normal manner or a second highlighting manner, wherein the second highlighting manner comprises displaying in a second color.
19. The heads-up display device of any of claims 7-13 wherein the processor, while currently in the critical state, is further configured to:
sending a reminding voice to a sound generating device and instructing the sound generating device to play the reminding voice;
or sending a vibration signal to a vibration device to indicate the vibration device to vibrate; the vibration device is a device that can be brought into contact with the driver.
20. The head-up display device of claim 1, further comprising an information acquisition device coupled to the processor for acquiring the external environment information;
the information acquisition device comprises one or more of image acquisition equipment, a vehicle-mounted radar, an infrared sensor, a laser sensor, an ultrasonic sensor, a speed sensor, a brightness sensor, a GPS, a V2X system and an ADAS.
CN202010039956.8A 2020-01-15 2020-01-15 Environment display-based head-up display device Active CN113126295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010039956.8A CN113126295B (en) 2020-01-15 2020-01-15 Environment display-based head-up display device

Publications (2)

Publication Number Publication Date
CN113126295A true CN113126295A (en) 2021-07-16
CN113126295B CN113126295B (en) 2024-08-13

Family

ID=76771208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010039956.8A Active CN113126295B (en) 2020-01-15 2020-01-15 Environment display-based head-up display device

Country Status (1)

Country Link
CN (1) CN113126295B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609973A (en) * 2021-08-04 2021-11-05 河南华辰智控技术有限公司 Social security platform wind control management system based on biological recognition technology
CN113835230A (en) * 2021-10-12 2021-12-24 上海仙塔智能科技有限公司 Display processing method and device for vehicle HUD, electronic equipment and medium
CN116052396A (en) * 2023-02-23 2023-05-02 阿波罗智联(北京)科技有限公司 Vehicle information prompting method, device, electronic equipment and medium
CN116909024A (en) * 2023-07-26 2023-10-20 江苏泽景汽车电子股份有限公司 Image display method, device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872068A (en) * 2009-04-02 2010-10-27 通用汽车环球科技运作公司 Daytime pedestrian on the full-windscreen head-up display detects
CN101881885A (en) * 2009-04-02 2010-11-10 通用汽车环球科技运作公司 Peripheral salient feature on the full-windscreen head-up display strengthens
CN101915990A (en) * 2009-04-02 2010-12-15 通用汽车环球科技运作公司 Enhancing road vision on the full-windscreen head-up display
TW201239394A (en) * 2011-03-21 2012-10-01 Nat Univ Tsing Hua Head-up display (HUD)
CN205982817U (en) * 2016-02-26 2017-02-22 长春车格斯科技有限公司 Image display device
CN107357040A (en) * 2017-08-22 2017-11-17 苏州车萝卜汽车电子科技有限公司 A kind of compact head-up display device and display methods
CN107577046A (en) * 2017-08-22 2018-01-12 苏州车萝卜汽车电子科技有限公司 A kind of HUD illuminators, head-up display device and implementation method
CN206960782U (en) * 2017-06-16 2018-02-02 江苏泽景汽车电子股份有限公司 A kind of HUD backlight display devices
CN207867138U (en) * 2017-10-20 2018-09-14 苏州车萝卜汽车电子科技有限公司 HUD lighting systems, head-up display device
CN109642707A (en) * 2016-09-12 2019-04-16 麦克赛尔株式会社 Light supply apparatus
CN208938530U (en) * 2018-08-03 2019-06-04 深圳前海智云谷科技有限公司 Imaging unit and its automobile head-up-display system
US20190293934A1 (en) * 2018-03-20 2019-09-26 Panasonic Intellectual Property Management Co., Ltd. Head-up display and moving object

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609973A (en) * 2021-08-04 2021-11-05 河南华辰智控技术有限公司 Social security platform wind control management system based on biological recognition technology
CN113609973B (en) * 2021-08-04 2024-02-20 河南华辰智控技术有限公司 Social security platform wind control management system based on biological recognition technology
CN113835230A (en) * 2021-10-12 2021-12-24 上海仙塔智能科技有限公司 Display processing method and device for vehicle HUD, electronic equipment and medium
CN116052396A (en) * 2023-02-23 2023-05-02 阿波罗智联(北京)科技有限公司 Vehicle information prompting method, device, electronic equipment and medium
CN116909024A (en) * 2023-07-26 2023-10-20 江苏泽景汽车电子股份有限公司 Image display method, device, electronic equipment and storage medium
CN116909024B (en) * 2023-07-26 2024-02-09 江苏泽景汽车电子股份有限公司 Image display method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113126295B (en) 2024-08-13

Similar Documents

Publication Publication Date Title
CN113126295B (en) Environment display-based head-up display device
CN112046391B (en) Image projection apparatus and method
KR101631963B1 (en) Head up display device and vehicle having the same
CN113109941B (en) Layered imaging head-up display system
WO2017134861A1 (en) Head-up display device
JP2019059248A (en) Head-up display device
JP2010143520A (en) On-board display system and display method
JP2019012483A (en) Display system, information presentation system having display system, method for controlling display system, program, and mobile body having display system
WO2019003929A1 (en) Display system, information presentation system, method for controlling display system, program and recording medium for display system, and mobile body device
CN113219655B (en) Vehicle display system that multi-view shows
JP2019113809A (en) Head-up display device
JP2019059247A (en) Head-up display device
CN115891644A (en) Display method, device, vehicle and storage medium
CN113119862B (en) Head-up display device for driving assistance
US20210268961A1 (en) Display method, display device, and display system
CN113219656B (en) Vehicle-mounted head-up display system
CN113219658B (en) Vehicle head-up display system with view-angle-division intelligent display function
JP6814416B2 (en) Information providing device, information providing method, and information providing control program
JP2021117089A (en) Display device and method for display
WO2021139792A1 (en) Head-up display system and control method therefor, and means of transport
CN115657308A (en) Vehicle head-up display system
CN113147595B (en) Vehicle driving control system based on stereoscopic vision display
CN112009357B (en) Image projection apparatus and method
JP2021117220A (en) Display device, mobile body, method for display, and program
CN113109940A (en) High-brightness head-up display system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant