CN116684565A - Display method, device, vehicle and storage medium

Publication number: CN116684565A
Application number: CN202310711554.1A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 董道明, 邓远博
Assignee: Nanjing Ruiweishi Technology Co., Ltd.
Prior art keywords: display, area, vehicle, indication information, virtual image

Application filed by Nanjing Ruiweishi Technology Co., Ltd.
Priority to CN202310711554.1A
Publication of CN116684565A

Abstract

The present application relates to the field of projection display technologies, and in particular to a display method, a device, a vehicle, and a storage medium. Based on condition information such as real-scene positions and terrain recorded in a map resource, the application simulates, at the real scene corresponding to the display area of a HUD display device, representation content that serves an indication purpose, and renders and projects a virtual image corresponding to that content on the display focal plane of the HUD display device according to the rectilinear light-propagation relation between the simulated position of the content in the real scene and the actual position of the user viewpoint, so that a driver looking through the windshield sees the virtual image accurately superimposed on the real scene. The application improves the degree to which projected display content fits the real scene outside the windshield, reduces the pressure of real-time processing, and improves the user's viewing experience.

Description

Display method, device, vehicle and storage medium
Technical Field
The present application relates to the field of projection display technologies, and in particular, to a display method, a device, a vehicle, and a storage medium.
Background
A HUD (Head Up Display) is a way of realizing vehicle-mounted display by reflection on the vehicle windshield: the optical engine of the HUD display device emits display light, which is projected onto the windshield through corresponding optical lenses to generate a virtual image, forming an augmented-reality effect with the real world outside the windshield. The virtual image may present traveling information of the vehicle (such as vehicle speed) and content intended to fit the real scene, such as navigation information (for example a navigation indication line). In practice, however, the fit between the projected virtual image and the real scene outside the windshield is often poor; taking projected navigation information as an example, the navigation indication line cannot be accurately fitted onto the intended road, so the driver cannot clearly understand the navigation route. In addition, fitting the virtual image typically requires real-time image processing of the scene scanned by a camera before the specified display content is rendered at the corresponding position, so the algorithm is complex and the computing-power requirement is high.
Disclosure of Invention
The application aims to provide a display method, a device, a vehicle and a storage medium, which solve the technical problems in the prior art that projected display content cannot be effectively fitted to the real scene outside the windshield and that processing efficiency does not meet real-time requirements.
In order to solve the technical problems, the application adopts the following technical scheme:
in a first aspect, the present application provides a display method, including:
the HUD display device projects a display area on a windshield of the vehicle;
the display area corresponds to a first live-action area outside the windshield;
determining first indication information according to a map resource of the first live-action area, wherein the first indication information is configured as indication content that serves an indication purpose within the first live-action area;
and displaying a first virtual image on a display focal plane of the display area, wherein the first virtual image corresponds to the first indication information.
According to the above description, the HUD display device can project onto the windshield a virtual image prepared in advance on the basis of the map resource, so camera scanning and image processing are not needed; only the virtual image corresponding to the first indication information needs to be determined from the optical relationship, which makes processing fast and improves the fit with the real scene.
In an optional implementation manner of the first aspect, the first virtual image is displayed in an animated manner.
According to the above description, displaying the virtual image in an animated manner, such as a periodically moving highlight, increases the driver's attention to it.
In an optional implementation manner of the first aspect, the display area corresponding to a first live-action area outside the windshield includes:
the display area is configured such that the first live-action area is viewed through the portion of the windshield in which the display area is located.
In an optional implementation manner of the first aspect, the display area corresponding to a first live-action area outside the windshield includes:
And determining the first live-action area according to the positioning information of the vehicle.
According to the above description, when the position of the vehicle is known, the position of the first live-action area corresponding to the display area can be determined.
In an optional implementation manner of the first aspect, the determining the first live-action area according to the positioning information of the vehicle includes:
the positioning information of the vehicle includes the coordinate position and the heading (vehicle-front orientation) of the vehicle, determined through a positioning module.
In an optional implementation manner of the first aspect, the determining the first live-action area according to the positioning information of the vehicle includes:
And determining the first live-action area according to the relative position relation between the user viewpoint and the display area.
According to the above description, this ensures that the user viewpoint, the display area and the first live-action area lie on a single line, consistent with the rectilinear propagation of light.
In an optional implementation manner of the first aspect, the map resource of the first live-action area includes a coordinate position and a topography of the first live-action area.
According to the above description, the map resource records the specific real-world situation, and the indication information is simulated from data stored in advance, which improves processing efficiency.
In an optional implementation manner of the first aspect, the first indication information is a simulation image corresponding to the first virtual image or structural data representing the simulation image.
According to the above description, the first indication information is not an object that actually exists in the first live-action area, but information corresponding to a virtual object and used for determining the first virtual image; it may be structural data that directly serves as the basis for rendering the first virtual image, or a simulation image in which that structural data has been simulated.
In an optional implementation manner of the first aspect, the determining of first indication information according to the map resource of the first live-action area, wherein the first indication information is configured as indication content that serves an indication purpose within the first live-action area, includes:
And determining the road condition leading to the destination direction according to the map resource of the first real scene area, so that the first indication information represents the route guidance on the road of the first real scene area.
According to the above description, the navigation virtual image can be closely fitted onto the road according to the navigation indication represented by the first indication information, and the driver can intuitively understand the driving route simply by observing the navigation virtual image fitted to the road.
In an optional implementation manner of the first aspect, the determining, according to the map resource of the first real scene area, a road condition leading to a destination direction, so that the first indication information indicates a route guidance on the road of the first real scene area includes:
the first live-action area includes a first lane in which the vehicle is currently located and a second lane leading toward the destination, and the first indication information represents guidance content for switching from the first lane to the second lane.
According to the above description, the navigation instruction information is more coherent and unambiguous, and the driver can easily understand the navigation route.
In an optional implementation manner of the first aspect, the determining of first indication information according to the map resource of the first live-action area, wherein the first indication information is configured as indication content that serves an indication purpose within the first live-action area, includes:
And in response to the vehicle being at a first position, determining first indication information according to the map resource of the first real-scene area, wherein the first position is a position of the vehicle before it passes a second position, and the display area corresponds to the first real-scene area outside the windshield when the vehicle is at the second position.
According to the description, the HUD display device can determine the first indication information in advance according to the map resource of the first real-scene area, and render the virtual image of the projection display, so that the efficiency of real-time processing can be improved.
In an alternative implementation of the first aspect, the responding to the vehicle being at the first position comprises:
triggering the processing when the coordinate position of the vehicle matches the coordinate position of the first position.
In an optional implementation manner of the first aspect, after the displaying the first virtual image on the display focal plane of the display area, the displaying method further includes:
responding to the situation that the vehicle is in a third position, wherein the display area corresponds to a second real-scene area, the third position is the position of the vehicle after passing through the second position, the display area corresponds to a first real-scene area outside the windshield when the vehicle is in the second position, and the first real-scene area at least partially coincides with the second real-scene area;
Determining second indication information according to the map resource of the second real-scene area, wherein the second indication information is configured as indication content that serves an indication purpose within the second real-scene area, and the first indication information is at least partially identical to the second indication information;
and displaying a second virtual image on a display focal plane of the display area, wherein the second virtual image corresponds to the second indication information.
According to the above description, the display area can present a coherent virtual image display as the vehicle moves, and the observed effect is as if an object corresponding to the virtual image actually existed at the corresponding position outside the windshield.
In an optional implementation manner of the first aspect, the first indication information and the second indication information are uniformly determined according to map resources corresponding to the real scene.
According to the above description, the map resource of the real scene shared by the second position and the third position (for example the same intersection) can be determined in advance at the first position, the indication information can be determined uniformly, and the corresponding first virtual image and second virtual image can then be determined from the first live-action area and the second live-action area respectively, achieving a smooth projection display effect.
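As a minimal illustration of this pre-computation idea (the cache structure and names below are assumptions for illustration, not part of the claimed method), the indication information for an upcoming intersection can be determined once, before the vehicle reaches it, and then reused while the display area sweeps over the first and second live-action areas:

```python
# Sketch: compute indication information once per intersection and reuse it
# while the vehicle passes through (hypothetical structures, for illustration only).
_indication_cache = {}

def get_indication_info(intersection_id, map_resource):
    """Return indication info for an intersection, computing it only once."""
    if intersection_id not in _indication_cache:
        # e.g. a guidance polyline along the lanes leading toward the destination
        _indication_cache[intersection_id] = {
            "type": "guidance_line",
            "points": map_resource[intersection_id]["route_points"],
        }
    return _indication_cache[intersection_id]

# Usage: at the first position the info is computed; at later positions the same
# info is reused and only the visible portion / virtual image is re-rendered.
map_resource = {"intersection_42": {"route_points": [(0.0, 0.0), (5.0, 0.0), (8.0, 2.0)]}}
info_first = get_indication_info("intersection_42", map_resource)   # computed here
info_later = get_indication_info("intersection_42", map_resource)   # reused, no recomputation
```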
In an optional implementation manner of the first aspect, the displaying a first virtual image on a display focal plane of the display area, where the first virtual image corresponds to the first indication information includes:
and determining the position and the size of the first virtual image displayed on the display focal plane according to the position of the user viewpoint and the position of the first indication information represented by the first real scene area.
According to the above description, the virtual image displayed on the display focal plane can be made to coincide visually with the live view outside the windshield, i.e. the virtual image is perceived as being located at the live-action position.
In a second aspect, the present application provides a display device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the display method of the first aspect when executing the computer program.
In a third aspect, the present application provides a vehicle comprising the display device of the second aspect.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the display method of the first aspect.
Compared with the prior art, the present application simulates, based on condition information such as real-scene positions and terrain recorded in a map resource, representation content that serves an indication purpose at the real scene corresponding to the display area of the HUD display device, and renders and projects the virtual image corresponding to that content on the display focal plane of the HUD display device according to the rectilinear light-propagation relation between the simulated position of the content in the real scene and the actual position of the user viewpoint, so that a driver looking through the windshield sees the virtual image accurately superimposed on the real scene. The application improves the degree to which projected display content fits the real scene outside the windshield, reduces the pressure of real-time processing, and improves the user's viewing experience.
Drawings
In order to more clearly illustrate the technical solution of the present application, the drawings used in its description are briefly introduced below. Evidently, the drawings in the following description are only some examples of the present application, and other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic view of a HUD projection display in some examples of the application.
Fig. 2 is a schematic diagram of a HUD display device in some examples of the application.
Fig. 3 is a schematic view of a HUD projection display scene in some examples of the application.
Fig. 4 is a schematic diagram of the modules of a HUD display device in some examples of the application.
Fig. 5 is a schematic diagram of simulated indication information in some examples of the application.
Fig. 6 is a schematic diagram of simulated indication information in some examples of the application.
Fig. 7 is a schematic diagram of simulated indication information in some examples of the application.
Fig. 8 is a schematic diagram of simulated indication information in some examples of the application.
Fig. 9 is a schematic diagram of simulated indication information in some examples of the application.
Fig. 10 is a schematic diagram of simulated indication information in some examples of the application.
Fig. 11 is a schematic diagram of the effect of a HUD projection display in some examples of the application.
Fig. 12 is a schematic illustration of vehicle dynamics during operation in some examples of the application.
Fig. 13 is a schematic diagram of the display contents of a display area in some examples of the application.
Fig. 14 is a schematic diagram of the effect of a HUD projection display in some examples of the application.
Fig. 15 is a schematic diagram of a virtual object simulation method in some examples of the application.
Fig. 16 is a schematic diagram of the optical mapping of a HUD projection display in some examples of the application.
Fig. 17 is a schematic diagram of the optical mapping of a HUD projection display in some examples of the application.
Fig. 18 is a schematic diagram of the transformation of area coordinates in some examples of the application.
Fig. 19 is a schematic view of a HUD projection display area in some examples of the application.
Fig. 20 is a schematic view of a HUD projection display area in some examples of the application.
Fig. 21 is a schematic diagram of the modules of a HUD display device in some examples of the application.
Fig. 22 is a schematic illustration of a vehicle in some examples of the application.
Description of the embodiments
The present application will be described in detail below with reference to the attached drawings, but the descriptions are only examples of the present application and are not limited to the application, and variations in structure, method or function etc. according to these examples are included in the protection scope of the present application.
It should be noted that in different examples, the same reference numerals or labels may be used, but these do not represent absolute relationships in terms of structure or function. Also, the references to "first," "second," etc. in the examples are for descriptive convenience only and do not represent absolute distinguishing relationships between structures or functions, nor should they be construed as indicating or implying a relative importance or number of corresponding objects. Unless specifically stated otherwise, reference to "at least one" in the description may refer to one or more than one, and "a plurality" refers to two or more than two.
In addition, when describing a feature, the character "/" may indicate an "or" relationship between the objects before and after it; for example, "head-up display/heads-up display" may be read as head-up display or heads-up display. When describing an operation, the character "/" may indicate division between the objects before and after it; for example, the magnification M = L/P means L (virtual image size) divided by P (image source size). Likewise, "and/or" in different examples merely describes an association between the objects before and after it, and such an association may cover three cases; for example, "a concave mirror and/or a convex mirror" may mean that only a concave mirror is present, that only a convex mirror is present, or that both a concave mirror and a convex mirror are present.
The HUD mainly exploits the principle of optical reflection: the imaging light to be displayed is reflected into the human eyes by a transparent surface, and the eyes observe the corresponding information along the reverse direction of that light, so no dedicated display screen is needed, providing another convenient way of presenting information. In particular, the transparent surface (such as a windshield) lies within the driver's forward field of view, so a driver who needs to read information while driving does not have to turn the gaze away from the road ahead, which improves driving safety. In some examples, a HUD display device may be fixedly mounted in the vehicle center console. The HUD display device includes an optical engine (optical machine), optical lenses and the like; the backlight of the optical engine may implement illumination based on LEDs (Light Emitting Diodes), lasers and the like, and the image source of the optical engine may implement display based on an LCD (Liquid Crystal Display), a DMD (Digital Micromirror Device), MEMS (Micro-Electro-Mechanical System) micromirrors, LCOS (Liquid Crystal on Silicon) and the like. The display surface of the optical engine corresponding to the image source displays the image (display content) to be projected at an imaging position and projects its display light; through the light-path planning of the optical lenses, this display light is finally reflected on the vehicle windshield. The windshield, as the transparent surface reflecting the display light, thus acts as the display screen, and the driver can directly observe through it a virtual image corresponding to the display content, which may be the running speed of the vehicle, navigation information and the like.
As shown in fig. 1, the HUD display device may at least include an optical machine 1, a first mirror 2, and a second mirror 3, where in this example, the first mirror 2 and the second mirror 3 are optical lens groups that cooperate to implement optical path transmission, and the first mirror 2 and the second mirror 3 in the optical lens groups may project display light projected by the optical machine 1 onto a windshield 4. In some examples, the first mirror 2, the second mirror 3 may be provided as a concave mirror, a convex mirror, or the like as required. In some examples, the optical lens group may also enable planning of the optical path by one or more transmissive lenses. The optical machine 1 projects light rays for displaying corresponding information, and the first reflecting mirror 2 and the second reflecting mirror 3 are used for realizing light path planning, so that light path customization can be carried out in a smaller space, and different projection display requirements are met. The display light projected by the optical machine 1 is finally projected on the windshield 4 of the vehicle through multiple reflections of the first reflecting mirror 2 and the second reflecting mirror 3, and a driver 6 in the vehicle can see a virtual image 5 formed by the projection light of the optical machine 1 passing through the windshield 4 against the windshield 4, and the virtual image can be corresponding to parameter information of the vehicle and the like. In some examples, the first mirror 2 and the second mirror 3 may also be adjusted to a certain degree of angle, so as to change the projection position of the projection light on the windshield 4, so as to adapt to the heights of different drivers 6. It should be added that, for the characteristics of different optical machines, a diffuse mirror can be correspondingly arranged to adjust the corresponding imaging effect. In some examples, fresnel lenses, waveguide optics, diffractive optics, holographic optics, tapered fibers, etc. may also be included in the HUD display device to enable light path planning and optimization.
As shown in fig. 2, the HUD display device 100 including the optical machine 1, the first mirror 2, and the second mirror 3 can be enveloped by the housing 101, and the optical machine and the optical lens are accommodated in the inner space of the housing 101 and stably fixed to the inside of the housing 101 by a bracket or the like. Referring to fig. 1, an optical engine 1, a first mirror 2, and a second mirror 3 are mutually matched to realize a certain light path planning in a housing 101, and finally display light is projected out through a window 102 formed in the housing 101. When the HUD display device 100 is embedded in a center console of an automobile, the window 102 on the housing 101 faces the vehicle windshield above the center console, and accordingly, display light projected from the window 102 is reflected on the windshield to form a virtual image that can be seen by the human eye.
As shown in fig. 3, a HUD display device as shown in fig. 2 may be provided in the center console 10. By projecting display light, the HUD display device forms a corresponding virtual image on the windshield 4, that is, the display image corresponding to the display area 50, which includes traveling information and navigation information of the vehicle. The driver in the driver's seat can learn the current running state of the vehicle by looking directly at the information on the windshield and determine the driving route from the indicator corresponding to the navigation information; in the figure, for example, the vehicle is running at 60 kilometers per hour and is prompted to continue straight ahead. In the example of fig. 3, the navigation indicator is not fitted to the real road, so in a complex road environment, such as one with multiple intersections, it is difficult for the driver to determine which intersection the indicator corresponds to; the driver has to work out the actual driving route by analyzing the actual road, which disperses driving attention and makes the navigation experience very poor. Moreover, outside the windshield there are not only roads but also facilities and buildings along both sides of the roads; because virtual-real fitting based on camera scanning is difficult, the HUD display device is not used there for effective information display, which greatly limits the application range of projection display. It should be noted that, in this description, "inside" the windshield refers to the side facing the vehicle cabin in which the driver is seated, and "outside" refers to the side away from that cabin. Taking the front windshield as an example, that is, the windshield through which the driver observes the road ahead in the traveling direction, the real scene outside the windshield may be the road ahead of the vehicle in the traveling direction, and so on.
The HUD display device for projection display may be added after the vehicle is purchased, or may be integrated directly into the center console before the vehicle leaves the factory. To display the parameter information of the vehicle, the HUD display device can also interact with the vehicle's head unit. Fig. 4 shows a schematic block diagram of the HUD display device and the vehicle 72; the HUD display device may obtain power and data from the vehicle 72, or may be powered and generate data on its own. The HUD display device may include a processor 71, an Ethernet interface 701, a CAN (Controller Area Network) interface 702, a power management module 703, a running memory 704, a storage memory 705, a motor 706, a backlight 707, an image source 708, a positioning module 709, a radar 710, a camera 711, and the like.
It should be noted that the various modules listed in fig. 4 are merely exemplary descriptions and not limiting in any way, and in some examples, the HUD display device may also include other modules. In addition, the modules described above may be implemented in one or more hardware in different examples, or a single module may be implemented by a combination of a plurality of hardware.
The processor 71, as the control center of the HUD display device, includes one or more processing units of any type, including but not limited to a micro control unit, a microcontroller, a DSP (Digital Signal Processor), or any combination thereof. The processor 71 is configured to generate operation control signals according to a computer program and to control the other modules.
The ethernet interface 701 is a network data connection port for lan communication, defining a series of software and hardware standards by which multiple electronic devices may be connected together, in this example, for information interaction with the vehicle 72.
The CAN interface 702 is a network data connection port of the controller area network; it provides a standard bus for the control system and embedded industrial control in the automobile and realizes communication between control nodes, and in this example it can also be used for information interaction with the vehicle 72.
The power management module 703 is connected to the vehicle 72 to receive the power it provides and supplies regulated power to each module of the HUD display device, protecting the processor 71 and the other modules from damage.
The running Memory 704 is used for storing computer programs executed by the processor 71, and temporarily storing operation data, data exchanged with the storage Memory, and may be SDRAM (Synchronous Dynamic Random-access Memory), or the like.
The storage memory 705 is used for storing resources such as related display content of the display device, and may be Flash, or may provide an interface to access an external memory.
The motor 706 is used to drive the optical lens in the display device to rotate, so as to implement corresponding light path customization, such as adjusting the projection position of the virtual image or the virtual image distance.
The backlight 707 is used for providing illumination light and adjusting the brightness of the illumination light according to the control of the processor 71, and cooperates with the image source 708 to realize the main functions of the projector projection display, including LEDs and the like.
An image source 708 for displaying an image of the corresponding information and projecting display light out, including an LCD or the like, according to the control of the processor 71.
The positioning module 709 is configured to accurately position and orient the vehicle. It may include a GPS (Global Positioning System), the BeiDou navigation satellite system, or other global navigation satellite systems, which determine the position of the receiver on the vehicle by measuring its distances to satellites at different positions. Optionally, an inertial navigation system may also be included, which, based on Newton's laws of mechanics, measures the acceleration of the vehicle in the inertial reference frame, integrates it over time and transforms it into the navigation coordinate system, thereby obtaining the speed, yaw angle, position and other information in the navigation coordinate system. In some examples, the inertial navigation system assists the global navigation satellite system in achieving more accurate vehicle positioning.
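As a rough sketch of how inertial measurements can assist satellite positioning between GNSS fixes (the constant sampling interval and simple Euler integration below are illustrative assumptions; a production system would use a proper filter such as a Kalman filter):

```python
# Sketch: dead-reckoning a vehicle position from acceleration samples between GNSS fixes.
def dead_reckon(position, velocity, accelerations, dt=0.01):
    """Integrate acceleration samples (m/s^2, navigation frame) over time step dt (s)."""
    x, y = position
    vx, vy = velocity
    for ax, ay in accelerations:
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return (x, y), (vx, vy)

# Usage: start from the last GNSS fix and propagate with one second of IMU samples.
pos, vel = dead_reckon(position=(0.0, 0.0), velocity=(10.0, 0.0),
                       accelerations=[(0.2, 0.0)] * 100)
```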
The radar 710 is used to determine the position of a target object by electromagnetic waves, typically giving the distance to the target object; however, since the electromagnetic waves are generally reflected from the surface of the target object facing the radar, it is not easy to obtain even the approximate height of the target object.
The camera 711 is used to determine the position of a target object through visual recognition, and may be a monocular camera or a binocular camera. The biggest difference between them is that a binocular camera captures images from two different viewing angles and can therefore recover distance information in three-dimensional space, while a monocular camera captures images from a single viewing angle and gives a comparatively flat visual result; on the other hand, the binocular camera has more image-sensing elements, so its cost is higher than that of the monocular camera.
In some examples, the positioning module 709, the radar 710 and the camera 711 may be connected directly to the vehicle 72 rather than to the processor 71 of the HUD display device; for example, the positioning module used for position tracking and the radar and camera used for automatic driving may be integrated on the vehicle 72, and the HUD display device can acquire their data in real time through communication with the vehicle 72, so that the data can be used to assist vehicle positioning and navigation functions.
In some examples, the software package implementing the display method may be integrated in the vehicle system or in the HUD display device system; the corresponding vehicle system or HUD display device system includes, but is not limited to, Linux-based variants, Android-based variants, QNX or other embedded vehicle-mounted operating systems, and the specific choice may be determined by the requirements of the vehicle manufacturer or the HUD display device vendor. In some examples, the rendering and drawing functions of the HUD display device require a graphics API, such as OpenGL ES, Vulkan or other graphics APIs commonly used on embedded platforms. Alternatively, the software may be integrated into the whole system in the form of source code, dynamic link libraries or static link libraries.
In some examples, to enhance the display capability of the HUD display device, some virtual objects may be simulated over the real scene (i.e., the actual scene outside the windshield). These may be markers specific to electronic media display, such as icons and lead lines, or objects used in real scenes, such as traffic cones (ice cream cones), guideboards and the like. After the shape and content of a virtual object have been simulated, the virtual object can be displayed in the display area corresponding to the HUD display device, so that when the driver looks at the windshield, he or she sees not only the real scene outside it but also the virtual image corresponding to the virtual object. The virtual image is visually superimposed on the real scene, so that it is perceived as if the virtual object itself were part of the real scene.
As shown in fig. 5, in some examples, a specific navigation indication line 81 may be superimposed on a live view outside the windshield 4, and the navigation indication line 81 may be determined according to the result of navigating to the destination and marked on the road through which the vehicle needs to pass in a lead-line manner, like a marking on the road itself. Since the navigation indication line 81 is determined according to the actual navigation destination, the navigation indication line 81 is more flexible than the marked line actually drawn on the road, and the driver does not deviate from the planned navigation route when driving along the navigation indication line 81. It should be noted that, in this example and the examples of fig. 6 to 10, the virtual object is merely an overall effect simulated for convenience of description, and is used to illustrate a case where the entire virtual object is superimposed on a real scene, the illustrated effect is not necessarily a final effect of the HUD display device projected on the windshield, and a specific display portion of the virtual object is determined by a display area projected by the HUD display device, which will be described in detail below.
As shown in fig. 6, in some examples, instead of superimposing similar markings on the road, a water-filled barrier (water horse) 82 carrying a right-turn arrow is simulated on the corresponding road at the intersection where the turn is actually required. Compared with the example of fig. 5, the water horse 82 is a virtually shaped object extending along the lane in which the vehicle is located up to the lane into which it must turn. Optionally, the turn arrow on the water horse 82 may also be periodically highlighted in a specific color to create a movement effect toward the right. The water horse 82 resembles the fences placed on real roads for road closure; accordingly, according to the navigation route, the water horse 82 is placed in front of the straight-ahead road to guide the driver to travel to the right. Because the water horse 82 in this example is not a fence that actually has to be placed at a specified position on the road, different barrier placements can be simulated for the navigation routes of different vehicles and, most importantly, the passage of other vehicles is not affected at all, so guidance and normal traffic are achieved at the same time. In some examples, the water horse 82 may be combined with the navigation indication line 81 of fig. 5 to increase the effectiveness of the navigation guidance. Alternatively, the water horse 82 may take a simplified form, keeping only the turn arrow without the outer shape of the barrier, which can be more concise and striking.
As shown in fig. 7, in some examples, another way of prompting a turn or lane change is to make the virtual object simulated in the live-action scene a realistic sign: the simulated signpost 83 is placed at the position where the turn is required and displays the driving direction, so that the driver, on reaching the corresponding position, knows the vehicle can no longer go straight and should drive to the right as prompted, while other vehicles, which cannot see the signpost 83, continue straight ahead. As shown in fig. 8, the signpost 83 is replaced by a traffic guideboard 84, which, as on an actual road, may be displayed at a position further ahead; the traffic guideboard 84 may indicate the next intersection at which to turn and, optionally, the distance to that intersection. Since the traffic guideboard 84 is virtual, the influence of where it stands on road traffic need not be considered; accordingly, it can stand directly in the middle of the road rather than at its sides. As illustrated in fig. 8, the traffic guideboard 84 stands on the left lane line of the lane in which the vehicle is located, so that it occupies the clearest position in the field of view and can be arranged flexibly.
As shown in fig. 9, in some examples, the simulated virtual object is not limited to objects serving navigation; it may also be a virtual billboard. To reduce the clash between the placement of a virtual billboard and the real scene, the billboard may be placed at a location of the same type as in a real scene. Taking fig. 9 as an example, the simulated billboard 85 may be arranged on the building surfaces on either side of the road and display corresponding advertising information; optionally, different delivery content may be adopted according to the identity of the driver and the like. Alternatively, the billboard may be modeled as a shape with posts standing at the roadside or spanning the road. As shown in fig. 10, the billboard 86 is simulated as mounted on the wall of a distant high-rise building; because the display is virtual, no consent from the building's manager is required, which greatly enriches the information the HUD display device can present and extends the reach of advertising content at no cost.
The above examples only describe how the simulated virtual object cooperates with the real scene outside the windshield to achieve a specific display purpose, such as visually guiding the driver onto the correct navigation route. In the projection display of an actual HUD display device, however, the driver inside the vehicle does not necessarily see the complete virtual object appear outside the windshield; the portion outside the field of view is not observed, just as when the outside real scene is actually viewed through the windshield. According to the propagation characteristics of light, only the real scene within the field of view can be seen and the real scene outside it cannot; accordingly, the virtual object can simulate the same effect as a real-scene display, i.e., the observable portion of the virtual object is determined by the range of the display area, which on the one hand produces the same sense of reality as the real scene and on the other hand keeps the display consistent with the real scene it is fitted to.
As shown in fig. 11, taking the water horse 82 scene of fig. 6 as an example, the complete water horse 82 is indication information simulated according to the navigation information and the situation of the intersection outside the windshield; it indicates a barrier of a specific shape formed in the illustrated intersection, and the specific indication information may be a simulation image as shown in fig. 6 or the structural data of that image, i.e., the information from which the virtual image observable on the windshield is generated for the display area. The display area 50 is the visual area formed by the HUD display device projecting on the windshield: information projected by the HUD display device can only be shown within the display area 50, and outside it the HUD display device cannot project and therefore cannot display an image of the virtual object. The extent of the display area 50 is determined by the display-surface range of the image source of the HUD display device and the optical imaging properties of the optical lenses; it is formed at a specified height on the windshield and is relatively fixed unless adjusted. Since the position of the simulated water horse 82 is also relatively fixed, namely at the intersection where the vehicle turns, only the part of the simulated water horse 82 lying in the real-scene area corresponding to the display area 50 is presented; the parts beyond the display area 50, although the corresponding real-scene area may also contain simulated portions of the water horse 82, cannot be displayed because of the projection limits of the HUD display device. Accordingly, the display area 50 can be understood as a visual window through which the virtual object is viewed; it is generally smaller than the windshield 4 and therefore smaller than the field of view through the windshield, so that only a partial image of the water horse 82 simulated on the road, and the corresponding real scene, can be seen in the display area 50.
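The role of the display area as a visual window can be sketched as a simple clipping step: after the simulated object's points have been mapped onto the display focal plane (see the line-of-sight computation later in the description), only the points that fall inside the display-area rectangle are actually shown. The bounds and point format below are illustrative assumptions, not part of the claimed method.

```python
# Sketch: keep only the portion of the mapped virtual object that lies inside
# the display area on the focal plane (coordinates in the focal-plane system).
def clip_to_display_area(points, x_min, x_max, y_min, y_max):
    """Return the subset of focal-plane points visible through the display area."""
    return [(x, y) for (x, y) in points
            if x_min <= x <= x_max and y_min <= y <= y_max]

# Usage: a water horse mapped onto the focal plane; only part of it is visible.
mapped_points = [(-1.5, 0.2), (-0.4, 0.1), (0.3, 0.0), (1.8, -0.1)]
visible = clip_to_display_area(mapped_points, x_min=-0.5, x_max=0.5,
                               y_min=-0.3, y_max=0.3)
# -> [(-0.4, 0.1), (0.3, 0.0)]; the rest lies outside the window and is not drawn.
```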
As shown in fig. 12, when the vehicle moves rightward in the direction indicated by the water horse 82, the live-action area outside the windshield thereof also changes, and the virtual image observable in the display area also changes. Referring to fig. 1, the display focal plane of the display area 50 generates a certain virtual image distance due to the projection principle of the HUD display device, that is, the display focal plane may be visually present outside the vehicle windshield, for example, at a position about 10 meters from the human eye, and the display focal plane of the display area 50 in fig. 12 is in front of the vehicle. Accordingly, the human eyes, the display focal plane (display area 50) and the water horses 82 form an optical three-point linear relation, the optical linear transmission relation changes along with the change of the position and the orientation of the vehicle, the corresponding real scene area also changes, and at the moment, the virtual image in the display area can be controlled to also change in display so as to form a visual effect consistent with the real scene, namely, the virtual object seen through the display area also moves along with the movement of the vehicle.
As shown in fig. 13, the water horse seen in the display area 50 in (a) is the partial image visible when the vehicle is at position A; because of the size limit of the display area 50, the full appearance of the water horse spanning several lanes cannot be shown. The visible portion of the water horse in the display area 50 indicates that the vehicle needs to travel to the right, showing roughly the 1-meter mark of the rightward course. As the vehicle continues to turn right toward position B, the visible portion of the water horse in the display area 50 also moves continuously to the right according to the optical properties; in (b), as the vehicle moves, the visible portion reaches about the 2-meter mark, compared with the 1-meter mark that was opposite the vehicle in (a). As the vehicle moves further to position C, the virtual image in the display area changes continuously to the state in (c), where the 3-meter and 4-meter marks of the water horse are displayed, indicating the state the vehicle has reached during the right turn. Fig. 14 shows more intuitively what the driver observes of the changing display area 50 from inside the cabin: through the windshield the driver sees not only the real scene ahead within the field of view, but also, through the display area 50, the virtual object simulated at the corresponding real-scene position, such as the water horse spanning several lanes. Since the position of the display area 50 on the windshield is determined by the projection of the HUD display device, it is unrelated to the position and orientation of the vehicle. Accordingly, as the position and orientation of the vehicle change, the real scene behind the display area 50 changes; compared with fig. 11, the view through the display area 50 in fig. 14 is deflected to the right, facing the position between the straight lane and the right-turn lane. Further, to represent the fit with the real scene, the corresponding virtual object display shows only the portion of the water horse simulated between the straight lane and the right-turn lane, whereas the portion previously in the straight lane has shifted outside the display area 50 because its visual range has changed, and is no longer displayed. This dynamic process gives the driver the experience that a real water horse is placed on the road ahead, whose visible portion changes with the field of view provided by the display area 50.
In order to achieve the display effect of the display area described above, it is necessary to simulate the virtual object referred to in fig. 5 to 10. In some examples, as shown in fig. 15, the virtual object simulation method specifically includes the following steps:
step S1501, determining that the HUD display device needs to display a live-action area where the virtual object is located. As described above, the HUD display device realizes image display by reflection projected on the windshield, and thus display can be realized in a display area corresponding to the windshield by projecting an image of a virtual object. The virtual object has the display effect that a driver observes that the virtual object is overlapped on a specified real scene area, so that the driver can intuitively know the corresponding indication meaning by combining the real scene area, for example, the virtual object is used for navigation indication in fig. 5-8, and the virtual object is used for advertisement information indication in fig. 9 and 10. It is therefore necessary to know first where the live-action area is, and then the corresponding virtual object can be arranged according to the live-action area. The determination of the real scene area can directly obtain the position according to the indication purpose, for example, the navigation can select the crossing along the way in advance, specifically, different virtual objects (such as navigation indication lines, water horses and the like) are arranged at the specific crossing according to the navigation requirements of different destinations. More precisely, a more specific live-action area can be judged by utilizing the vision determined by the position and the orientation of the vehicle according to the running route of the vehicle on the corresponding road, so that the unnecessary data amount can be prevented from being increased by simulating the whole intersection. In some examples, the determination of the live-action area may also determine the recorded live-action area condition in the map resource in real-time based on the current location and orientation of the vehicle.
Step S1502, obtaining the position and terrain of the live-action area from the map resource. Since the virtual object in the above examples does not change entirely with the specific running state of the vehicle, but is instead tied to the real situation at the specified location, it can be simulated not from real-time data acquired by a camera, radar or the like on the vehicle, but from map resources that already exist in advance; this allows processing to be done ahead of time and reduces the pressure of real-time processing. Accordingly, the live-action area determined in step S1501 is used to look up the specific position and terrain in the corresponding map resource, such as the number of lanes and the turning curvature of the lanes. In some examples, map resources may include GIS (Geographic Information System) and BIM (Building Information Modeling) data, which contain not only specific coordinate positions but even model information of buildings along the way, providing data support for the simulated placement of virtual objects on roads and building surfaces. Obtaining the position of the live-action area from the map resource provides position coordinates for the virtual object simulated in that area and, together with terrain and similar data, an accurate shape configuration for the simulated virtual object.
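A map resource of this kind might be represented, for illustration only, by a small structure holding the coordinates and terrain attributes the later steps need; the field names below are assumptions, and real GIS/BIM data is far richer.

```python
# Sketch: a minimal stand-in for the map-resource data used to simulate virtual objects.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneInfo:
    centerline: List[Tuple[float, float, float]]  # 3D points along the lane
    width_m: float
    turn_curvature: float                          # 0.0 for a straight lane

@dataclass
class LiveActionArea:
    origin: Tuple[float, float, float]             # coordinate position of the area
    lanes: List[LaneInfo] = field(default_factory=list)
    building_faces: List[List[Tuple[float, float, float]]] = field(default_factory=list)

area = LiveActionArea(
    origin=(320451.0, 3459872.0, 12.5),
    lanes=[LaneInfo(centerline=[(0, 0, 0), (10, 0, 0), (20, 1, 0)],
                    width_m=3.5, turn_curvature=0.02)],
)
```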
Step S1503, determining the model of the virtual object according to the indication purpose. The virtual object is determined by the specific indication purpose; for example, for navigation it may be the model of a navigation indication line, a water horse and the like. In some examples, the corresponding virtual object may also be determined by a configuration selected in advance by the user, such as whether a navigation indication line or a water horse is used for navigation, which can be configured by receiving user parameter input. In some examples, the model of the virtual object is a corresponding 3D prefab: a 3D mesh with material, where the material information may consist of different components such as albedo, metallic and normal maps, and the mesh, based mainly on 3D point-cloud data, represents the 3D object.
Step S1504, defining the relationship between the virtual object and the live-action area according to the position and terrain of the live-action area. After the model of the virtual object has been determined in step S1503, the specified virtual object can be simulated at the corresponding position in the live-action area. Taking fig. 6 as an example, when the navigation indicates that a right turn is needed at the corresponding intersection, the relevant parameters of the intersection are obtained, and from them the form of the virtual object to be located at the right-turn road position is simulated, such as its required length, height and curvature, these parameters being tied to the specific situation of the real-scene area. In some examples, the virtual object simulated above may also be stored in advance in the map resource as required and invoked directly after being looked up there. The simulated virtual object exists in the form of indication information, which may be a simulation image of the virtual object itself or structural data expressing that image. The indication information is configured as representation content serving the indication purpose within the real-scene area; from the coordinate position information in the indication information, the coordinate position of the human eyes and the like, the virtual image to be displayed on the display focal plane of the display area can be calculated, so that the HUD display device can project it onto the windshield and the driver's eyes perceive the image of the virtual object at the corresponding coordinate position in the real-scene area.
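Putting steps S1501 to S1504 together, one illustrative way (not the claimed implementation) to generate the indication information for a navigation indication line is to sample the centerlines of the lanes along the navigation route from the map resource and attach the chosen model to those points:

```python
# Sketch: simulate a navigation indication line as structural data anchored to the road.
def simulate_indication_line(route_lanes, line_width_m=0.3):
    """route_lanes: list of lane centerlines (each a list of (x, y, z) map coordinates).

    Returns structural data describing the simulated virtual object, i.e. the
    "first indication information" placed in the live-action area.
    """
    points = [p for lane in route_lanes for p in lane]   # concatenate along the route
    return {
        "model": "navigation_indication_line",           # step S1503: model chosen for the purpose
        "width_m": line_width_m,
        "anchor_points": points,                          # step S1504: tied to the terrain/positions
    }

# Usage with (hypothetical) map-resource lanes of the first live-action area.
route_lanes = [[(0, 0, 0), (10, 0, 0)], [(10, 0, 0), (18, 4, 0)]]  # straight, then right turn
indication_info = simulate_indication_line(route_lanes)
```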
As shown in fig. 16, the real-scene area 800 is the area of the actual scene in front of the vehicle that the human eye 6 sees through the vehicle windshield (see fig. 5 to 10 for specifics), and the display area 50 is the display focal plane that the HUD display device can form; since the display area 50 is smaller than the windshield, the corresponding real-scene area 80 is smaller than the real-scene area 800. In this example, the virtual object simulated in the real-scene area 80 can be displayed on the display area 50 after conversion, so that the line of sight of the human eye 6 passes through the display area 50 and finally reaches the real-scene area 80; the virtual image shown on the display area 50 is superimposed on the real scene of the real-scene area 80, and the human eye 6 perceives the virtual object corresponding to that virtual image as being located in the real-scene area 80. As shown in fig. 17, a virtual image 5 of an ice cream cone (traffic cone) is displayed on the display area 50 and, by the extension of the light, a real ice cream cone is perceived at the position of the real-scene area 80, achieving the effect of combining the virtual with the real. As described above, in an actual display implementation, the ice cream cone virtual object is first simulated at the position of the real-scene area 80; because the real-scene area 80 can be given a coordinate position based on map resources and the like, the human eye 6 in the vehicle can be assigned an accurate coordinate position from the vehicle position and viewpoint analysis, and the depth of the display focal plane of the display area 50 can be determined from the properties of the HUD display device. Therefore, from the relation of rectilinear light propagation, the position at which the ice cream cone must be displayed on the display area 50 can be obtained; owing to the near-far effect of light propagation, the ice cream cone originally simulated in the real-scene area 80 is reduced to a smaller virtual image of itself on the display area 50, so that what is actually displayed on the display area 50 is perceived as lying in the real-scene area 80.
Referring to fig. 16 and 17, the three key positions for achieving virtual-real fitting are the position of the human eye 6, the position of the display focal plane of the display area 50, and the position of the live-action area 80. The virtual object needs to be perceived visually as existing in the real scene area 80, which can be achieved by using the linear relation among the three positions: the line of sight between the position of the human eye 6 and the position of the virtual object in the real scene area 80 can be represented by a spatial function, and, with the depth information of the display focal plane of the display area 50 known, the specific position at which the corresponding virtual object is to be displayed on the display focal plane can be obtained from that spatial function. Alternatively, anchor positions such as the center point of the virtual object on the display focal plane can be calculated first, and the virtual object can then be scaled according to the degree of optical reduction and displayed at the corresponding anchor position. Alternatively, the virtual object simulated at the position of the real scene area 80 can be regarded as being composed of a plurality of points, the corresponding mapping point on the display area 50 is found for each point according to the spatial function, and all the mapping points together display the corresponding virtual object shape.
In some examples, the spatial function of the line of sight can be derived using the two-point form of a straight line in space. Assume that the position of the virtual object simulated in the live-action area 80 has been obtained from the map resource as (x1, y1, z1), where x1, y1 and z1 are the dimension information of the position in the first, second and third directions respectively; the first, second and third directions are the axis directions of a unified coordinate system, which in this example are the three mutually perpendicular coordinate axes of a three-dimensional Cartesian coordinate system. Assume further that the position of the viewpoint (human eye) has been obtained by the corresponding positioning module and the viewpoint-capturing camera as (x2, y2, z2), where x2, y2 and z2 are likewise the dimension information of the position in the first, second and third directions. Two points in space determine a straight line, so the spatial function representing the straight line between the viewpoint and the virtual object can be expressed by the following formula:
(x - x1) / (x2 - x1) = (y - y1) / (y2 - y1) = (z - z1) / (z2 - z1),
where (x, y, z) may represent any point on the straight line corresponding to the line of sight, including the point at which it intersects the display focal plane of the display area 50; as described above, the point at which the line of sight intersects the display focal plane is the position at which the virtual image of the virtual object is to be displayed on the display focal plane. As described above, the depth information of the display focal plane between the human eye 6 and the live-action area 80 is known, being determined by the attributes of the HUD display device, so at least one of the first, second and third directions of the virtual image position on the display focal plane is known. In this example, assuming that the dimension information of the displayed virtual image in the third direction can be determined from the virtual image distance of the display focal plane, the corresponding z value can be substituted into the above formula, and the dimension information of the virtual image position on the display focal plane in the first direction (x value) and the second direction (y value) can be calculated; a unique position point can then be determined in the three-dimensional Cartesian coordinate system from the x, y and z values. It should be noted that the calculation of this position is determined by the spatial function but is not limited to the above form; other manners may be adopted, for example vector calculation according to the following formula:
(x3, y3, z3) = (x2, y2, z2) + [(x1, y1, z1) - (x2, y2, z2)] * (zf - z2)/(z1 - z2),
where (x3, y3, z3) is the position of the virtual image (i.e., its position vector relative to the origin), and zf is the known dimension information of the display focal plane in the third direction.
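The intersection of the line of sight with the display focal plane can be computed directly from the vector form above. The following Python sketch is one possible implementation; the function name and the degenerate-case check are assumptions, while the coordinate conventions follow the text: (x1, y1, z1) is the simulated object point, (x2, y2, z2) the viewpoint, and zf the known depth of the display focal plane in the third direction.

from typing import Tuple

Point3D = Tuple[float, float, float]

def project_to_focal_plane(obj: Point3D, eye: Point3D, zf: float) -> Point3D:
    """Intersect the eye-to-object line with the plane z = zf (vector form of the formula above)."""
    x1, y1, z1 = obj
    x2, y2, z2 = eye
    if abs(z1 - z2) < 1e-9:
        raise ValueError("viewpoint and object must differ in the third direction")
    t = (zf - z2) / (z1 - z2)          # parameter along the eye-to-object segment
    x3 = x2 + (x1 - x2) * t
    y3 = y2 + (y1 - y2) * t
    return (x3, y3, zf)

# usage (illustrative numbers): object simulated 40 m ahead, eye at the origin, focal plane 7.5 m away
# project_to_focal_plane((2.0, -1.2, 40.0), (0.0, 0.0, 0.0), 7.5) -> (0.375, -0.225, 7.5)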
In this example, since the dimension information of the different positions is expressed relative to the three-dimensional Cartesian coordinate system, the determined position of the virtual image on the display focal plane is also expressed in that coordinate system, so the x and y values of the virtual image cannot be applied directly to the control of the HUD display device. The HUD display device uses a pixel coordinate system, because the content of the projection display is determined by the image source in the optical machine, the image source displays an image as a combination of a plurality of pixels, and the display is realized by a processor on a designated circuit (see fig. 4) controlling the brightness and color of the corresponding pixels. Therefore, as shown in fig. 18, if the coordinate position of the virtual image is obtained in the spatial coordinate system, the position of the virtual image needs to be converted from the spatial coordinate system to the pixel coordinate system. The pixel coordinate system comprises two mutually perpendicular axes, a U axis corresponding to the x axis of the three-dimensional Cartesian coordinate system and a V axis corresponding to its y axis, and the coordinate axes of the pixel coordinate system are in units of pixels. Each point of the display area on the display focal plane can be accurately represented in the pixel coordinate system, and the pixels of the display area correspond one-to-one with the pixels of the image source in the optical machine. The position of the display area on the display focal plane can be adjusted by adjusting the optical lens group, but the pixel coordinate system is positioned relative to the display area, so once the U and V values are determined they locate a point within the display area regardless of where the whole display area lies on the display focal plane. Optionally, the origin of the pixel coordinate system is located at the upper left corner of the display area, and a point is located by its U and V pixel values relative to that origin. To realize the conversion between the spatial coordinate system and the pixel coordinate system, the x and y values of the origin of the pixel coordinate system in the three-dimensional Cartesian coordinate system must also be known; if the origin of the pixel coordinate system is at the upper left corner of the display area, this is the dimension information of the upper left corner of the display area in the three-dimensional Cartesian coordinate system, see (x4, y4, z4) in fig. 18. This dimension information is related to the projection position of the display area and can be determined from the projection configuration parameters of the HUD display device; specifically, values calibrated in advance for the HUD display device can be stored in a database and called according to the projection adjustment parameters.
Assuming that the position of the virtual image on the display focal plane of the display area 50 has been obtained by calculation, i.e., (x3, y3, z3) is known, the U and V values of that position can be calculated according to the following formulas:
U=Lp*(x3-x4)/L,
V=Hp*(y3-y4)/H,
where L and H are optical parameters of the projection of the display area, representing its physical dimensions; they are fixed after the HUD display device is designed and shipped, and can therefore be determined by calling preset values, for example L is 21 cm and H is 12 cm. Lp and Hp are the pixel dimensions corresponding to the display area, which are likewise fixed parameters related to the image source of the optical machine and can be determined by calling preset values, for example Lp is 854 pixels and Hp is 480 pixels. As mentioned above, x3, y3, x4 and y4 are all obtainable known values, so the corresponding U and V values can easily be determined. Further, as shown in fig. 18, the virtual image 5 can be displayed at the corresponding position of the display area according to the U and V values. The display algorithm is relatively simple, unnecessary delay in information display can be reduced, the fitting effect can be greatly improved, and the user experience is well enhanced.
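The conversion from the focal-plane position to the pixel coordinates of the image source can be sketched in Python as follows. The function name is an assumption, the default values simply reuse the illustrative 21 cm x 12 cm physical size and 854 x 480 pixel resolution mentioned above, the spatial inputs are expected in the same length unit as L and H, and the sign convention follows the formulas given in the text.

def space_to_pixel(x3: float, y3: float, x4: float, y4: float,
                   L: float = 0.21, H: float = 0.12,   # physical size of the display area in metres
                   Lp: int = 854, Hp: int = 480) -> tuple:
    """Map a focal-plane position (x3, y3) to pixel coordinates (U, V), with (x4, y4) the
    spatial position of the pixel-coordinate origin (upper left corner of the display area)."""
    U = Lp * (x3 - x4) / L
    V = Hp * (y3 - y4) / H
    return (round(U), round(V))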
In some examples, only the corresponding virtual image is displayed in the display area 50, and the virtual image corresponds to the shape of the simulated virtual object; as described above, the virtual object is simulated according to the map resource of the real scene area, and through optical superposition it combines with the real scene area corresponding to the display area 50, so that the virtual object is perceived to be at the position of the real scene area. As shown in fig. 19, the size of the display area 50 determines the displayable range of virtual objects that can be superimposed on the reality outside the windshield of the vehicle: the larger the display area 50, the more completely the corresponding virtual object is displayed, and the more consistent it is with the effect of viewing the reality outside the windshield. Referring to the examples of fig. 9 and 10, if the size of the display area 50 is insufficient, it is difficult to achieve an adapted display in the display area 50 under a normal viewing angle. In some examples, the size of the display area 50 can be increased by changing the light path in the HUD display device; when the size of the display area 50 matches the size of the windshield 4, the observable virtual object effect is completely consistent with the real scene effect, and if the virtual image is rendered very realistically, the driver in the cabin may not be able to distinguish the virtual from the real, improving the user experience.
In some examples, as shown in fig. 20, the HUD display device can change the virtual image distance of the display focal plane where the display area 50 is located by changing the optical path length or the focal length of the optical lenses 2 and 3, so that the display focal plane can be moved a little closer to or a little farther from the human eye 6. Referring to the example of fig. 18, this distance determines the dimension information of the virtual image on the display focal plane in the third direction, so when the virtual image distance changes, the corresponding spatial function calculation parameters can be adjusted. In some examples, the virtual image distance of the display focal plane can be increased, which improves the degree of fit between the display focal plane and the real scene and reduces large display errors caused by shaking of the human eyes. Optionally, when the distance between the vehicle and the real scene is smaller than the virtual image distance of the display focal plane, the virtual image distance can be adjusted adaptively to meet the requirement of virtual-real fitting.
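A minimal sketch of such an adaptive adjustment is given below; the adjustable range and the selection rule are assumptions for illustration. A real HUD display device would drive its optical lens group or optical path length to realize the chosen virtual image distance and then update the zf value used in the spatial-function calculation.

def choose_virtual_image_distance(scene_distance: float, current_vid: float,
                                  min_vid: float = 3.0, max_vid: float = 15.0) -> float:
    """Keep the display focal plane no farther away than the real scene it must fit to."""
    if scene_distance < current_vid:
        # pull the focal plane closer so the virtual image does not lie beyond the real scene
        return max(min_vid, scene_distance)
    return min(current_vid, max_vid)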
In some examples, a display method applied to a HUD display device generates indication information in a corresponding real-scene area using map resources, wherein the indication information comprises the representation content of a virtual object in the real-scene area, and the virtual object is the object that, according to the indication purpose, is expected to be seen in the corresponding real-scene area. A virtual image to be projected and displayed by the HUD display device is determined on the basis of the indication information; the virtual image is formed in a display area on the windshield and is matched with the live-action area corresponding to the display area. The virtual image corresponds to the virtual object represented by the indication information. Optionally, the position and size of the virtual image displayed on the display focal plane can be determined from the position of the user's viewpoint and the position represented by the indication information in the real scene area; because the position of the display focal plane does not coincide with the position of the real scene area, the two must be converted to satisfy the visual requirement, so that the display effect of the virtual image on the display focal plane is equivalent to the display effect of the virtual object in the real scene area (see the examples of fig. 16 and 17).
It should be noted that the display area is bound to the vehicle itself, and the driver in the vehicle observes the real-scene area outside the windshield through the display area. If the display area has projected display content, the driver also sees the corresponding virtual image, and because the windshield is transparent, the virtual image is superimposed on the real-scene area outside the windshield. The real-scene area corresponding to the display area is related to the position of the vehicle (the coordinate position, heading direction and the like of the vehicle can be determined by the positioning module) and to the position of the human eyes in the vehicle (the relative positional relationship between the user's viewpoint and the display area can be determined by a camera in the vehicle). In order to superimpose the virtual image displayed in the display area on the specified live-action area, different virtual images are displayed according to the different live-action areas corresponding to the display area.
In some examples, when the vehicle is at a second position the display area corresponds to a first real-scene area, and when the vehicle is at a third position the display area corresponds to a second real-scene area, the first and second real-scene areas being located in the same range of adjacent positions, for example in the same intersection (see fig. 11 to 14). Part of the virtual object simulated at the intersection according to the map resource lies in the first real-scene area and part lies in the second real-scene area, so a first virtual image and a second virtual image can be displayed in the display area respectively, according to the optical relationships formed by the virtual object under the viewpoints at the different positions. Specifically, first indication information is determined according to the map resource of the first real-scene area, the first indication information being configured as indication content for meeting the indication purpose in the first real-scene area, and a first virtual image is determined from the first indication information and the user viewpoint information. Second indication information is determined according to the map resource of the second real-scene area, the second indication information being configured as indication content for meeting the indication purpose in the second real-scene area, and a second virtual image is determined from the second indication information and the user viewpoint information. When the second position is close to the third position, the corresponding virtual object can be simulated uniformly in the common real-scene area of the two positions, the indication information corresponding to that virtual object provides the first and second indication information respectively, and the first virtual image displayed at the second position and the second virtual image displayed at the third position are then formed. Accordingly, the closer the second position is to the third position, the more the first and second virtual images overlap, i.e., a coherent change is observed in the display area, just like the continuity the human eye experiences when shifting its gaze from the content of the first real-scene area to the content of the second real-scene area, giving the sense that the virtual object really exists outside the windshield. Optionally, the display area can not only show the simulated virtual object at the intersection, but also realize continuous information display along the journey: for example, according to the navigation information, the navigation indication line is superimposed on the corresponding road from the vehicle until the destination is reached, providing the driver with guidance for the whole route.
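The following Python sketch illustrates how the same simulated virtual object yields the first and second virtual images at the second and third positions, reusing project_to_focal_plane and space_to_pixel from the earlier sketches. The viewpoint coordinates, the focal-plane depth and the display-origin values for each position are assumptions; in practice they come from the positioning module, the viewpoint-capturing camera and the HUD calibration data.

def render_virtual_image(object_points, eye, zf, display_origin):
    """Project every sampled point of the simulated object onto the display focal plane
    for the given viewpoint, then convert each point to image-source pixel coordinates."""
    x4, y4 = display_origin
    pixels = []
    for p in object_points:
        x3, y3, _ = project_to_focal_plane(p, eye, zf)
        pixels.append(space_to_pixel(x3, y3, x4, y4))
    return pixels

# the object is simulated once at the intersection; only the viewpoint (and the focal-plane
# depth measured for that vehicle position) changes between the two displays
# first_image  = render_virtual_image(arrow.sample_points, eye_at_second_position, zf_2, origin_2)
# second_image = render_virtual_image(arrow.sample_points, eye_at_third_position, zf_3, origin_3)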
In some examples, the vehicle passes a first position before reaching the second position. At the first position the vehicle can predict that it will pass the second position and/or the third position, so the indication information for the intersection at which the second and third positions are located can be determined in advance at the first position. In examples where specific indication information is included in the map resource, the corresponding indication information can be called when the vehicle passes the first position. Optionally, when the vehicle passes the first position, the first virtual image and/or the second virtual image can be rendered in advance according to the indication information, ready for projection display in the display area of the HUD display device when the vehicle passes the second position and/or the third position.
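A small sketch of this pre-fetching step is given below, assuming (as in the example where the indication information already exists in the map resource) that the map resource can be treated as a mapping from position identifiers to prestored indication information; the names and the lookahead depth are illustrative only.

def prefetch_indication(route, current_index, map_resource, lookahead=2):
    """Fetch the indication information for the next few route positions in advance,
    so the corresponding virtual images can be rendered before those positions are reached."""
    cache = {}
    for pos in route[current_index + 1: current_index + 1 + lookahead]:
        cache[pos] = map_resource.get(pos)   # prestored indication information, if any
    return cache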
In some examples, a navigation method applied to the HUD display device can follow the display method in the above examples; referring to fig. 5 to 8 and fig. 11 to 14, a virtual object for navigation indication, such as a navigation indication line, a water-filled barrier or a guideboard, can be simulated in the real-scene area corresponding to the display area. Accordingly, the indication information representing the virtual object is based not only on the position and topography in the map resource but also on the navigation information provided by the map resource for reaching the destination, such as the need to turn right at a designated intersection. The virtual image corresponding to the indication information is then displayed on the display focal plane of the display area; a virtual image meeting the navigation purpose can fit perfectly onto the road, and the driver can grasp the driving route intuitively simply by observing the navigation virtual image attached to the road.
In some examples, as shown in fig. 21, the display device includes a processor 2101, a memory 2102, an input device 2103 and an output device 2104, and may specifically be a HUD display device integrated in the vehicle center console. The input device 2103 may include keys, a touch screen and the like on the console, and the display device can receive input control commands and data through the input device 2103. The output device 2104 may include the backlight, image source and the like of the HUD display device, and corresponding instructions or data can be output to the output device 2104. The memory 2102 stores a computer program that runs on the processor 2101, and the processor 2101 executes the computer program to implement the display method described above.
As shown in fig. 22, in some examples the vehicle may employ the display method described above or incorporate the display device described above, and the corresponding virtual image is projected onto the vehicle windshield, so that viewing the corresponding display area of the windshield from within the cockpit gives the effect of the corresponding virtual object superimposed on the real scene outside the windshield. In addition, the driver can check the projected vehicle-speed information and the like against the windshield while driving, without lowering the head to look at a traditional instrument panel, which improves driving safety. The vehicle is not limited to the automobile shown in fig. 22 and may also include buses, trucks, excavators, motorcycles, trains, high-speed rail trains, ships, yachts, airplanes, spacecraft and the like; the surface onto which the projection is made is not limited to the front windshield of an automobile and may be a transparent surface at another position.
In some examples, a computer readable storage medium stores a computer program that, when executed by a processor, implements the display method described above.
In connection with the above examples, the present application may be implemented directly in hardware, in a software module executed by a control unit, or in a combination of the two. A software module may correspond to a computer program flow; a hardware module may be, for example, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any suitable combination thereof. For convenience of description, the above is described as being divided functionally into various modules; of course, when implementing the present application, the functions of the modules may be implemented in one and the same piece, or in multiple pieces, of software and/or hardware.
From the above description of examples, it will be apparent to those skilled in the art that the present application may be implemented with software plus a necessary general hardware platform. Based on this understanding, the technical solutions of the application may, in essence or in the part contributing over the prior art, be embodied in the form of a software product. The software is executed by a micro-control unit, which may be of any type and number depending on the desired configuration, including but not limited to the micro-control unit 8, a microcontroller, a DSP (Digital Signal Processor), or any combination thereof. The software is stored in a memory, such as a volatile memory (e.g., random access memory), a non-volatile memory (e.g., read-only memory or flash memory), or any combination thereof.
In summary, according to the application, based on the situation information such as the real scene position and the topography recorded in the map resource, the representation content for meeting the indication purpose is simulated at the real scene corresponding to the display area of the HUD display device, and the virtual image corresponding to the representation content is rendered and projected on the display focal plane corresponding to the HUD display device according to the light transmission relation between the simulation position of the representation content in the real scene and the actual position of the user viewpoint, so that the driver can see the display effect of the virtual image perfectly superimposed on the real scene through the windshield. The application can well improve the degree of fit between the projection display content and the live action outside the windshield, reduce the pressure of instant processing, and improve the viewing experience of users.
It should be understood that although this specification is described in terms of examples, not every example contains only a single independent embodiment; the specification is described in this way merely for clarity. Those skilled in the art will recognize that the embodiments described herein may be combined with one another as appropriate to form other embodiments understandable to those skilled in the art.
The detailed descriptions listed above are only specific to practicable embodiments of the present application and are not intended to limit its scope; all equivalent embodiments or modifications that do not depart from the teachings of the present application shall fall within its scope.

Claims (11)

1. A display method, comprising:
the HUD display device projects a display area on a windshield of the vehicle;
the display area corresponds to a first live-action area outside the windshield;
determining first indication information according to map resources of the first live-action area, wherein the first indication information is configured to be used for meeting the indication content of the indication purpose in the first live-action area;
and displaying a first virtual image on a display focal plane of the display area, wherein the first virtual image corresponds to the first indication information.
2. The display method according to claim 1, wherein the display area corresponds to a first live-action area in front of the windshield, comprising:
the display area is configured to view the first live-action area through a windshield in which the display area is located.
3. The display method according to claim 1, wherein the first instruction information is a simulation image corresponding to the first virtual image or structure data representing the simulation image.
4. The display method according to claim 1, wherein the determining first indication information according to the map resource of the first live-action area, the first indication information being configured to indicate content for meeting an indication purpose in the first live-action area includes:
and determining the road condition leading to the destination direction according to the map resource of the first real scene area, so that the first indication information represents the route guidance on the road of the first real scene area.
5. The display method according to claim 4, wherein the determining a road condition to a destination direction from the map resource of the first real-world area so that the first indication information indicates a route guidance on the road of the first real-world area includes:
the first live-action area includes a first current lane of the vehicle and a second lane leading to a destination direction, and the first indication information represents guidance content for switching from the first lane to the second lane.
6. The display method according to claim 1, wherein the determining first indication information according to the map resource of the first live-action area, the first indication information being configured to indicate content for meeting an indication purpose in the first live-action area includes:
and when the vehicle is at a first position, determining first indication information according to map resources of the first real-scene area, wherein the first position is the position of the vehicle before passing through a second position, and the display area corresponds to the first real-scene area outside the windshield when the vehicle is at the second position.
7. The display method according to claim 1, wherein after the first virtual image is displayed on the display focal plane of the display area, the display method further comprises:
responding to the situation that the vehicle is in a third position, wherein the display area corresponds to a second real-scene area, the third position is the position of the vehicle after passing through the second position, the display area corresponds to a first real-scene area outside the windshield when the vehicle is in the second position, and the first real-scene area at least partially coincides with the second real-scene area;
determining second indication information according to map resources of the second real scene area, wherein the second indication information is configured to be used for meeting the indication content of the indication purpose in the second real scene area, and the first indication information is at least partially identical with the second indication information;
and displaying a second virtual image on a display focal plane of the display area, wherein the second virtual image corresponds to the second indication information.
8. The display method according to claim 1, wherein the displaying a first virtual image on a display focal plane of the display area, the first virtual image corresponding to the first indication information includes:
and determining the position and the size of the first virtual image displayed on the display focal plane according to the position of the user viewpoint and the position of the first indication information represented by the first real scene area.
9. A display device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the display method according to any one of claims 1-8 when executing the computer program.
10. A vehicle comprising the display device of claim 9.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of the display method according to any one of claims 1-8.