CN112242009A - Display effect fusion method, system, storage medium and main control unit


Info

Publication number
CN112242009A
CN112242009A
Authority
CN
China
Prior art keywords
virtual image
image
virtual
driving
display surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011121206.1A
Other languages
Chinese (zh)
Inventor
王兴
陈灵
姜豪
刘风雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Crystal Optech Co Ltd
Original Assignee
Zhejiang Crystal Optech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Crystal Optech Co Ltd filed Critical Zhejiang Crystal Optech Co Ltd
Priority to CN202011121206.1A
Publication of CN112242009A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Instrument Panels (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a display effect fusion method, a display effect fusion system, a storage medium and a main control unit. The method is applied to a head-up display system and comprises the following steps: acquiring a driving image of a driving vehicle, wherein the driving image comprises an attention object; acquiring the eye position of a driver in the driving vehicle; and displaying an object virtual image in a preset virtual image display surface according to the eye position, the driving image and the virtual image display surface, so that the object virtual image observed from the eye position coincides with the attention object in the actual scene. In this way, factors such as the height, driving habits and visual habits of different people are taken into account, so that the virtual image is well fused with the real environment and virtual image offset is avoided as far as possible. Moreover, the display of the virtual image can be adjusted in real time as the driver's eye position changes, so the method adapts to drivers of different heights, driving habits and visual habits, ensures the virtual image display effect, and improves the driver's driving experience.

Description

Display effect fusion method, system, storage medium and main control unit
Technical Field
The application relates to the field of intelligent automobile driving, in particular to a display effect fusion method, a display effect fusion system, a storage medium and a main control unit.
Background
Head-up display, abbreviated HUD and also referred to as a parallel display system, is a driver-centered multifunctional instrument display that the driver can read without looking away from the road. The AR HUD is a HUD technology that incorporates AR (Augmented Reality) technology.
HUD technology is already used in vehicles. A HUD system is an integrated automotive electronic display device composed of electronic components, optical display components, a control unit and the like. A vehicle-mounted HUD can present information from the instrument cluster and the interior of the vehicle, together with object fusion frames, on the front windshield, so that the driver can easily obtain driving information while driving. For example, the vehicle speed, navigation information, active safety warning signals and the like are projected in front of the driver through the optical assembly in the form of images and characters to form a virtual image, so that the driver sees the images, characters and the like superimposed on the external scene while observing road conditions outside the vehicle.
For HUDs currently on the market, drivers differ in height, driving habits and visual habits. Because of these differences, the virtual image observed by the driver cannot be well fused with reality, which results in a poor virtual image display effect.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, a system, a storage medium, and a main control unit for fusing display effects to improve a virtual image display effect of an automotive HUD.
In order to achieve the above object, embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a display effect fusion method, which is applied to a head-up display system, and the method includes: acquiring a driving image of a driving vehicle, wherein the driving image comprises an attention object, and the attention object comprises one or more of a lane line, a road sign, other vehicles and pedestrians; acquiring the eye position of a driver in the driving vehicle; and displaying a virtual object image in the virtual image display surface according to the eye position, the driving image and a preset virtual image display surface, so that the virtual object image observed from the eye position coincides with the attention object in an actual scene, wherein the virtual object image represents a virtual image displayed by the attention object in the virtual image display surface, and the actual scene represents a real scene containing the attention object observed by the eyes of the driver.
In the embodiment of the present application, a driving image of the driving vehicle (containing one or more of a lane line, a road sign, another vehicle, a pedestrian, and the like) and the eye position of the driver are acquired, and an object virtual image (a virtual image of the attention object displayed in the virtual image display surface) is displayed on a preset virtual image display surface based on the eye position and the driving image, so that the object virtual image observed from the eye position coincides with the attention object in the actual scene (the real scene containing the attention object as observed by the driver's eyes). In this way, factors such as the height, driving habits and visual habits (eye position, line of sight, etc.) of different people can be taken into account, so that the virtual image is well fused with the real environment and virtual image offset (i.e., a deviation between the virtual image seen by the driver and the actual scene) is avoided as far as possible. Moreover, the display of the virtual image can be adjusted in real time as the driver's eye position changes, so the method adapts to drivers of different heights, driving habits and visual habits, ensures the virtual image display effect, and improves the driver's driving experience.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the acquiring the eye position of the driver in the driving vehicle includes: acquiring a person image containing the driver in the driving vehicle; and determining the eye position of the eyes of the driver in a world coordinate system according to the personnel image, wherein the world coordinate system is established based on the driving vehicle.
In this implementation, the eye position of the driver's eyes in the world coordinate system (established based on the driving vehicle) is determined by acquiring a person image containing the driver within the driving vehicle. The driver's eye position can thus be acquired in real time, which facilitates real-time adjustment of the virtual image display: when the driver's eye position changes, the displayed object virtual image can be adjusted quickly, ensuring the virtual image display effect and preventing a prolonged misalignment between the object virtual image and the attention object from adversely affecting the driver, thereby improving the driver's experience.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the displaying a virtual image of an object in a virtual image display surface according to the eye position, the driving image, and a preset virtual image display surface includes: determining the object position of the attention object in the world coordinate system according to the driving image; determining a conversion matrix according to the eye position and the virtual image display surface, wherein the conversion matrix is used for converting position coordinates in the world coordinate system into virtual image coordinates of the virtual image display surface; determining an object virtual image position of the attention object on the virtual image display surface according to the object position and the conversion matrix; and displaying the virtual object image in the virtual image display surface according to the virtual object image position.
In this implementation, the object position of the object of interest in the world coordinate system is determined based on the driving image, and is combined with a conversion matrix (for converting position coordinates in the world coordinate system into virtual image coordinates of the virtual image display surface) determined based on the eye position and the virtual image display surface, so as to determine the object virtual image position of the object of interest on the virtual image display surface and display the virtual image accordingly. In this way, the position, size and the like of the object virtual image to be displayed on the virtual image display surface can be accurately determined, so that, from the driver's viewing angle, the object virtual image accurately coincides with the attention object, and good fusion of the virtual image with reality is achieved.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the determining a conversion matrix according to the eye position and the virtual image display surface includes: determining a sight line vector of the eye position facing the virtual image display surface according to the eye position and the virtual image display surface; determining a view matrix according to the eye position and the sight line vector, wherein the view matrix is used for correcting the object position; acquiring the size of a virtual image and near-far interface parameters when the virtual image is displayed, and determining a projection matrix according to the size of the virtual image and the near-far interface parameters; and determining the conversion matrix according to the view matrix and the projection matrix.
In this implementation, a gaze vector (i.e., a vector from the eye position toward the virtual image display surface) may be determined from the eye position and the virtual image display surface, and the view matrix may then be determined based on the eye position and the gaze vector. On the one hand, this guarantees the accuracy of the view matrix and facilitates the display fusion of the object virtual image with the attention object; on the other hand, the method is simple, accurate and efficient, so that the view matrix (used for correcting the object position) can be determined quickly in order to correct the object position of the attention object. After the projection matrix is determined according to the virtual image size and the near-far interface parameters, the conversion matrix is determined according to the view matrix and the projection matrix, so the conversion matrix can be determined quickly and accurately, enabling rapid adjustment of the virtual image display and helping to ensure the real-time performance of display fusion.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the determining the view matrix according to the eye position and the gaze vector includes: determining an eye position matrix according to the eye positions; determining an eye visual angle matrix according to the sight line vector; and the eye visual angle matrix is multiplied by the eye position matrix to determine the view matrix.
In this implementation, the view matrix may be determined quickly and accurately by left-multiplying the eye perspective matrix determined based on the gaze vector by the eye position matrix determined based on the eye position.
With reference to the third possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the determining the view matrix according to the eye position and the gaze vector includes: acquiring a reference vector of the virtual image display surface, wherein the reference vector represents a vector which is perpendicular to the center of the virtual image display surface and faces the outside of the driving vehicle; and calculating the view matrix through OpenGL according to the eye position, the sight line vector and the reference vector.
In this implementation, the view matrix can be calculated quickly and efficiently by OpenGL based on the eye position, the sight line vector, and a reference vector (a vector perpendicular to the virtual image display surface center and directed to the outside of the driving vehicle). Moreover, the calculation of the conversion matrix is carried out by adopting OpenGL, so that the acceleration of a GPU is favorably realized when virtual image display is carried out, the virtual image display speed is increased, and the visual experience of drivers is improved.
With reference to the second possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, after the determining, according to the driving image, an object position of the object of interest in the world coordinate system, the method further includes: constructing an object three-dimensional model of the object of interest containing the object position according to the object of interest and the object position; and after determining a conversion matrix according to the eye position and the virtual image display surface, the method further comprises: determining a two-dimensional object graph used by the object three-dimensional model to be displayed on the virtual image display surface according to the object three-dimensional model and the conversion matrix; correspondingly, the displaying the virtual object image in the virtual image display surface according to the virtual object image position includes: and displaying the virtual object image in the virtual image display surface according to the virtual object image position and the two-dimensional object graph.
In this implementation, by constructing a three-dimensional model of an object of interest (including an object position), a two-dimensional object graph of the three-dimensional model of the object (for display on a virtual image display surface) can be determined in combination with the transformation matrix, and further, according to the object virtual image position and the two-dimensional object graph, a virtual object image is displayed on the virtual image display surface. In such a way, on one hand, the accuracy of virtual image display of the object can be ensured; on the other hand, the consistency of the display effect of the virtual image of the object and the visual effect of the attention object in practice can be ensured, so that the fusion effect of the virtual image and reality can be improved.
In a second aspect, an embodiment of the present application provides a storage medium, where one or more programs are stored, and the one or more programs are executable by one or more processors to implement the display effect fusion method according to any one of the first aspect or possible implementation manners of the first aspect.
In a third aspect, an embodiment of the present application provides a main control unit, which includes a memory and a processor, where the memory is configured to store information including program instructions, and the processor is configured to control execution of the program instructions, where the program instructions are loaded and executed by the processor, to implement the display effect fusion method according to the first aspect or any one of possible implementation manners of the first aspect.
In a fourth aspect, an embodiment of the present application provides a head-up display system, including: a first camera mechanism for capturing a driving image of a driving vehicle, wherein the driving image contains an attention object, and the attention object comprises one or more of a lane line, a road sign, other vehicles and pedestrians; a second camera mechanism for capturing a person image of a driver in the driving vehicle; and a main control unit for obtaining the driving image and the person image, determining the eye position of the driver's eyes in a world coordinate system according to the person image, and displaying an object virtual image in a virtual image display surface so that the object virtual image observed from the eye position coincides with the attention object in the actual scene, wherein the world coordinate system is established based on the driving vehicle, the object virtual image represents a virtual image of the attention object displayed in the virtual image display surface, and the actual scene represents the real scene containing the attention object as observed by the driver's eyes.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic diagram of a head-up display system according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a display effect fusion method according to an embodiment of the present application.
Fig. 3 is a process diagram of an exemplary display effect fusion method provided in an embodiment of the present application.
Fig. 4 is a block diagram of a main control unit according to an embodiment of the present disclosure.
Reference numerals: 100-head-up display system; 110-first camera mechanism; 120-second camera mechanism; 130-main control unit; 131-memory; 132-communication module; 133-bus; 134-processor; 140-virtual image display unit.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of a head-up display system 100 according to an embodiment of the present disclosure.
In this embodiment, the head-up display system 100 may include a first image capturing mechanism 110, a second image capturing mechanism 120, a main control unit 130, and a virtual image display unit 140, where the first image capturing mechanism 110, the second image capturing mechanism 120, and the virtual image display unit 140 are respectively connected to the main control unit 130.
Illustratively, the first image capturing mechanism 110 is configured to capture driving images of a driving vehicle, and the driving images may include an object of interest (including one or more of a lane line, a road sign, another vehicle, and a pedestrian) as a basis for a virtual image object that needs to be displayed by the virtual image display unit 140.
To obtain the driving image as comprehensively as possible, the first camera mechanism 110 may, for example, be disposed above the windshield of the driving vehicle, as close as possible to the center of the windshield (e.g., exactly at the center). Alternatively, it can be arranged above the windshield but offset toward the driver's position, so as to reduce as much as possible the difference between the driving image and the scene actually observed by the driver, which helps ensure the quality of the driving image.
In this embodiment, the first camera mechanism 110 may adopt a driving assistance module that includes an imaging function, so that it can be combined as far as possible with the assistance units already present on the driving vehicle, improving the adaptability of the head-up display system 100 to the driving vehicle. For example, the first imaging mechanism 110 may be an Advanced Driving Assistance System (ADAS) module, such as the blessing X1 or another ADAS. Of course, other driving assistance modules that include an image capturing function may also be used, and the present application is not limited in this respect.
Illustratively, the second camera mechanism 120 is used to capture person images of the driver of the vehicle. By acquiring images of the driver, the driver's state can be obtained in real time and taken into account during virtual image display, which allows the head-up display system 100 to adjust the virtual image display in real time based on the driver's state and helps improve the display fusion effect (the fusion of the displayed virtual image with the actual scene).
For example, in order to better acquire the image of the driver in the driving vehicle, the second camera mechanism 120 may be disposed inside the vehicle, for example suspended above and in front of the driver, or mounted on the windshield above and in front of the driver, so that the person image can be acquired as accurately as possible, which facilitates real-time adjustment of the virtual image display by the head-up display system 100.
In order to better acquire the status information (e.g., position, distance, etc.) of the driver, the second camera 120 may employ a depth camera, such as a 3D structured light module, a binocular vision camera, etc., which is not limited herein. The obtained personnel image also contains depth information, which is beneficial to accurately sensing the real-time state of the driver.
In this embodiment, the main control unit 130 is configured to perform the corresponding processing based on the driving image and the person image and then realize the display of the virtual image, so that the displayed virtual image is well fused with the real scene, thereby improving the virtual image display quality of the head-up display system 100 and the user experience.
For example, the main control unit 130 may be an independent control unit (for example, a purpose-built intelligent terminal, such as a controller designed around an NXP development board, or a commonly used intelligent terminal such as a smartphone or tablet computer; the main control unit 130 may also be a server, in which case the first camera mechanism 110 and the second camera mechanism 120 may be wirelessly connected to the server), so that the system is suitable for different vehicles and places low configuration requirements on the vehicle. Of course, in order to improve the adaptability of the head-up display system 100 to the vehicle, save cost, and the like, the main control unit 130 may also be the on-board computer of the vehicle, so as to improve the adaptability of the head-up display system 100 to the vehicle as much as possible.
Illustratively, the connection between the first camera 110 and the main control unit 130 may be implemented by a CAN (Controller Area Network), so that the communication quality and reliability CAN be ensured. The requirement of the second camera 120 may be slightly lower, for example, a USB (Universal Serial Bus) may be used to implement communication with the main control unit 130, which is convenient for the second camera 120 to use and may also ensure communication quality; of course, the CAN may also be used to implement communication with the master control unit, or other communication methods, and is not limited herein.
In this embodiment, in order to improve the virtual image display effect and the display quality of the head-up display system 100, the head-up display system 100 may be further connected to other auxiliary function modules of the driving vehicle: for example, the head-up display system 100 may be connected to a controller (an on-board computer, a chassis control module, or the like) of the vehicle, and obtain vehicle state parameters (such as a vehicle speed, a steering angle, and other parameters) and driving environment parameters (such as a vehicle distance) of the vehicle, so as to adjust the virtual image display in real time, so that the displayed virtual image is better merged with the real image, and the virtual image display effect is further improved.
And, in the present embodiment, a HUD unit (i.e., the virtual image display unit 140) may be employed as a component for implementing virtual image display, and then, the HUD unit may be connected with the main control unit 130. The main control unit 130 may receive the driving image sent by the first camera mechanism 110 and the person image sent by the second camera mechanism 120, and determine a virtual image that needs to be displayed based on the driving image and the person image, so that the HUD unit displays the virtual image.
For example, in order to facilitate observation by a driver, the HUD unit can be arranged below a windshield of a vehicle body, so that a virtual image display surface determined by the HUD unit is positioned in the sight line of the driver and on a surface which is a certain distance (for example, 1-3 meters) away from the windshield of the driving vehicle, and the observation by the driver is facilitated.
The above is an introduction to the head-up display system 100 provided in this embodiment, and in order to ensure the display effect of the virtual image, the embodiment of the present application further provides a display effect fusion method, where the display effect fusion method may be executed by the main control unit 130 (or the method may be executed by the whole head-up display system 100, and is not limited here), so that the displayed virtual image is fused with the real scene, and the display effect of the head-up display system 100 is improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a display effect fusion method according to an embodiment of the present disclosure. The display effect fusion method may include step S10, step S20, step S30.
Before the main control unit executes step S10, a driving image of the driving vehicle may be captured by the first camera, and the driving image includes an object of interest (including one or more of a lane line, a road sign, another vehicle, and a pedestrian), and of course, other objects may be selected as the object of interest according to needs, such as a road boundary, a road gradient, and the like, which is not limited herein. And a person image of a driver in the driving vehicle may be captured by the second imaging mechanism.
After the first image capturing mechanism (i.e., the first image capturing mechanism 110) captures the driving image and the second image capturing mechanism (i.e., the second image capturing mechanism 120) captures the person image, the main control unit may perform steps S10 and S20.
Step S10: the method comprises the steps of obtaining driving images of a driving vehicle, wherein the driving images contain attention objects, and the attention objects comprise one or more of lane lines, road signs, other vehicles and pedestrians.
Step S20: the eye position of a driver in driving a vehicle is acquired.
It should be noted that the execution order of steps S10 and S20 is not limited: step S10 may be executed first and step S20 second, step S20 may be executed first and step S10 second, or steps S10 and S20 may be executed at the same time, which is not limited herein. Step S10 can be executed after the first camera mechanism captures the driving image, and step S20 can be executed after the second camera mechanism captures the person image.
Of course, in order to ensure the fusion effect, the driving image and the person image should be captured as close to the same moment as possible, so as to avoid scene changes caused by a time difference between them and thereby preserve, as far as possible, the fusion of the displayed virtual image with the actual scene.
Step S10 is described first. The main control unit may obtain the driving image, for example, by receiving the driving image captured and sent by the first camera mechanism, or by receiving the driving image forwarded by another device; it is only necessary to ensure that the driving image is the one captured by the first camera mechanism at the current moment.
With regard to the execution of step S20, the main control unit may acquire a person image and determine the eye position of the driver from the person image.
For example, the main control unit may receive a person image, containing the driver, captured and transmitted by the second camera mechanism. The main control unit can then determine the position of the driver's eyes in a world coordinate system (a three-dimensional coordinate system established based on the driving vehicle, through which the positions of the driver, the first camera mechanism, the second camera mechanism, the virtual image display surface and the like can all be expressed relative to a fixed point on the driving vehicle).
For example, the positions of the eyes of the driver in the images can be determined by performing face feature extraction on the images of the people, and then the positions of the eyes of the driver in the world coordinate system can be determined by combining the positions of the second camera shooting mechanism in the world coordinate system.
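For illustration only, the minimal C++ sketch below shows one way the back-projection described above could be implemented, assuming the driver's eye has already been detected at pixel (u, v) with depth d in the second camera's image; the intrinsic parameters fx, fy, cx, cy and the camera-to-world matrix camToWorld are assumed quantities not specified by the patent.

    #include <QMatrix4x4>
    #include <QVector3D>

    // Back-project a detected eye pixel (u, v) with depth d (metres) into the
    // vehicle world coordinate system, using the depth camera's pinhole
    // intrinsics and its known mounting pose (camToWorld). All parameters are
    // illustrative.
    QVector3D eyePositionInWorld(float u, float v, float d,
                                 float fx, float fy, float cx, float cy,
                                 const QMatrix4x4 &camToWorld)
    {
        const float xc = (u - cx) * d / fx;   // pinhole model: pixel -> camera X
        const float yc = (v - cy) * d / fy;   // pinhole model: pixel -> camera Y
        const QVector3D eyeInCamera(xc, yc, d);
        return camToWorld.map(eyeInCamera);   // camera frame -> world frame
    }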
The acquisition of the driver's eye position is not limited to the above-described method; the eye position may also be transmitted by another device. For example, a dedicated eye-position determination device may capture the driver's eye position in real time and send the real-time eye position to the main control unit. Therefore, the above should not be considered as limiting the present application.
By acquiring a person image containing the driver in the driving vehicle, the eye position of the driver's eyes in the world coordinate system (established based on the driving vehicle) can be determined from it. The driver's eye position can thus be acquired in real time, which facilitates real-time adjustment of the virtual image display: when the driver's eye position changes, the displayed object virtual image can be adjusted quickly, ensuring the virtual image display effect and preventing a prolonged misalignment between the object virtual image and the attention object from adversely affecting the driver, thereby improving the driver's experience.
After acquiring the driving image and acquiring the eye position, the main control unit may perform step S30.
Step S30: and displaying a virtual object image in the virtual image display surface according to the eye position, the driving image and a preset virtual image display surface, so that the virtual object image observed from the eye position coincides with the attention object in an actual scene, wherein the virtual object image represents a virtual image displayed by the attention object in the virtual image display surface, and the actual scene represents a real scene containing the attention object observed by the eyes of the driver.
In this embodiment, the main control unit may display a virtual image of a subject (i.e., a virtual image in which the object of interest is displayed in a virtual image display surface) in accordance with the eye position, the driving image, and a preset virtual image display surface (i.e., one surface for displaying the virtual image) such that the virtual image of the subject viewed from the eye position coincides with the object of interest in an actual scene (i.e., a real scene containing the object of interest viewed by the eyes of the driver).
In this way, factors such as the height, driving habits and visual habits (eye position, line of sight, etc.) of different people can be taken into account, so that the virtual image is well fused with the real environment and virtual image offset (i.e., a deviation between the virtual image seen by the driver and the actual scene) is avoided as far as possible. Moreover, the display of the virtual image can be adjusted in real time as the driver's eye position changes, so the method adapts to drivers of different heights, driving habits and visual habits, ensures the virtual image display effect, and improves the driver's driving experience.
For example, the main control unit may determine the object position of the attention object in the world coordinate system according to the driving image.
For example, one or more calibration points (e.g., one or more fixed points on the vehicle head) can be located in the image. Because such a fixed point does not move relative to the first camera mechanism, its position in the image is also fixed, so the position information of the other pixels in the image, and hence the position of the object of interest in the image, can be determined. Then, the main control unit may convert the position of the object of interest in the image into a position in the world coordinate system, in combination with the position of the first camera mechanism in the world coordinate system, to obtain the object position of the object of interest in the world coordinate system.
Of course, the position of the object of interest in the world coordinate system may be determined in other ways. For example, when the vehicle image is a depth image, the distance information of the object of interest may be specified so that the object position of the object of interest in the world coordinate system may be specified in association with the position of the first imaging means in the world coordinate system. For another example, the main control unit may further acquire distance information acquired by a distance sensor provided in the vehicle body, determine the distance of the object of interest, and thereby determine the object position of the object of interest in the world coordinate system in combination with the position of the first imaging mechanism in the world coordinate system.
Therefore, the manner of determining the object position of the object of interest in the world coordinate system should not be considered as a limitation of the present application. In this way, the object position of the attention object in the world coordinate system can be accurately determined.
After determining the object position of the attention object in the world coordinate system, the main control unit may construct an object three-dimensional model of the attention object including the object position according to the attention object and the object position. Of course, the manner of constructing the three-dimensional object model of the object of interest here may be to extract contour features of the object of interest to construct a corresponding three-dimensional object model. The three-dimensional model of the object of interest may also be obtained by identifying the object of interest, determining a corresponding model from a preset three-dimensional model, and then matching the corresponding model with the size, angle, and the like of the object of interest in the image (of course, the model may not be matched, but is converted by a subsequent conversion matrix, which is not limited herein). Therefore, the manner in which the three-dimensional model of the object is constructed herein should not be construed as limiting the application.
By constructing the object three-dimensional model of the attention object in the above manner, the model of the object in the driving image can be well restored, and the virtual image display effect is ensured.
And the main control unit can determine the conversion matrix according to the eye position and the virtual image display surface. Here, the conversion matrix is used to convert the position coordinates in the world coordinate system into virtual image coordinates of the virtual image display surface. Of course, in some other possible ways, the parameter for converting the position coordinate in the world coordinate system into the virtual image coordinate of the virtual image display surface may not be a matrix, for example, a conversion function, and therefore, the conversion matrix is taken as an example in this embodiment, but should not be taken as a limitation to this application.
For example, the main control unit may determine, according to the eye position and the virtual image display surface, a sight line vector from the eye position toward the virtual image display surface. For example, the sight line vector may be the vector between the eye position and the center position of the virtual image display surface in the world coordinate system, directed from the eye position toward the center of the virtual image display surface.
The master control unit may then determine a view matrix (for correcting the object position) from the eye position and the gaze vector.
As for the specific manner of determining the view matrix, two exemplary manners are provided in this embodiment, but they should not be construed as limiting the present application.
The first way to determine the view matrix is: the main control unit can determine an eye position matrix according to the eye position and determine an eye visual angle matrix according to the sight line vector; and then, the eye visual angle matrix is multiplied by the eye position matrix to determine a view matrix.
For example, if the eye position is (x, y, z), an eye position matrix T (a 4×4 translation matrix built from (x, y, z)) can be determined, and if the parameters of the sight line vector are (x1, y1, z1), an eye perspective matrix R (a 4×4 rotation matrix built from the sight line vector) can be determined; the full matrices are given as figures in the original publication. Then, the view matrix can be determined as C = RT. From this, the view matrix C can be determined.
By left-multiplying the eye view angle matrix determined based on the gaze vector with the eye position matrix determined based on the eye position, the view matrix can be determined quickly and accurately. It should be noted that the order of determining the eye position matrix and the eye view angle matrix is not limited.
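As a minimal sketch only, the construction below builds T and R under standard view-matrix conventions (T translates the eye to the origin; R is an orthonormal basis derived from the sight line vector with an assumed up direction of (0, 1, 0)); the exact matrices used by the patent are given only as figures, so this is an illustrative reconstruction rather than the patented formula.

    #include <QMatrix4x4>
    #include <QVector3D>

    // View matrix C = R * T: R is the eye perspective (view-angle) matrix built
    // from the sight line vector, T is the eye position matrix built from the
    // eye position. The up direction (0, 1, 0) is an assumption for illustration.
    QMatrix4x4 viewMatrix(const QVector3D &eyePos, const QVector3D &gaze)
    {
        QMatrix4x4 T;
        T.translate(-eyePos);                              // move the eye to the origin

        const QVector3D f = gaze.normalized();             // forward (sight line)
        const QVector3D r = QVector3D::crossProduct(f, QVector3D(0, 1, 0)).normalized();
        const QVector3D u = QVector3D::crossProduct(r, f); // recomputed up
        QMatrix4x4 R( r.x(),  r.y(),  r.z(), 0.0f,
                      u.x(),  u.y(),  u.z(), 0.0f,
                     -f.x(), -f.y(), -f.z(), 0.0f,
                      0.0f,   0.0f,   0.0f,  1.0f);

        return R * T;                                      // C = R·T (R left-multiplied onto T)
    }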
Second way of determining the view matrix: the main control unit may acquire a reference vector of the virtual image display surface (i.e., a vector perpendicular to the center of the virtual image display surface and oriented to the outside of the driving vehicle, or may be understood as a normal vector of the virtual image display surface), and calculate the view matrix through OpenGL according to the eye position, the sight line vector, and the reference vector.
For example, eye position: cameraPos = QVector3D(x, y, z).
The sight line vector: cameraFront = QVector3D(x1, y1, z1).
Reference vector: cameraUp = QVector3D(0, 1, 0).
Then, the main control unit may compute the view matrix through OpenGL:
lookAt(cameraPos, cameraPos + cameraFront, cameraUp).
therefore, the main control unit can calculate the view matrix lookAt through OpenGL.
The view matrix can be calculated quickly and efficiently by OpenGL based on the eye position, the sight line vector, and a reference vector (a vector perpendicular to the virtual image display surface center and directed to the outside of the driving vehicle). Moreover, the calculation of the conversion matrix is carried out by adopting OpenGL, so that the acceleration of a GPU is favorably realized when virtual image display is carried out, the virtual image display speed is increased, and the visual experience of drivers is improved.
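For illustration, the same computation can be sketched with Qt's QMatrix4x4::lookAt, which is assumed here as a concrete stand-in for the OpenGL-style lookAt call mentioned above; the patent does not prescribe this particular API.

    #include <QMatrix4x4>
    #include <QVector3D>

    // Second way: view matrix from eye position, sight line vector and
    // reference vector via a lookAt call.
    QMatrix4x4 viewMatrixLookAt(const QVector3D &cameraPos,    // eye position
                                const QVector3D &cameraFront,  // sight line vector
                                const QVector3D &cameraUp)     // reference vector, e.g. (0, 1, 0)
    {
        QMatrix4x4 view;
        view.lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
        return view;
    }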
After the view matrix is determined, the main control unit can acquire the virtual image size (i.e., parameters such as the height and width at which the virtual image needs to be displayed, the distance of the virtual image display surface, and the like) and the near-far interface parameters used when displaying the virtual image (i.e., the nearest and farthest distances at which an object virtual image needs to be displayed in the virtual image display interface), and determine the projection matrix according to the virtual image size and the near-far interface parameters.
The determination of the corresponding projection matrix is likewise described here in two exemplary ways.
Illustratively, corresponding to the first way of determining the view matrix in the foregoing, the first way of determining the projection matrix here is:
The main control unit can acquire the virtual image size, namely the virtual image width l and the virtual image height t, and acquire the near-far interface parameters, namely the virtual image plane distance n and the farthest distance f. Based on l, t, n and f, a projection matrix P can be calculated (a 4×4 perspective projection matrix; the full matrix is given as a figure in the original publication). In this way, the projection matrix can be accurately calculated.
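A minimal sketch of this first way is given below; since the matrix P itself appears only as a figure, the sketch assumes a symmetric OpenGL-style frustum in which the virtual image width l and height t are measured at the virtual image plane distance n, which is an interpretation rather than the patent's exact definition.

    #include <QMatrix4x4>

    // First way: projection matrix from virtual image width l, height t,
    // virtual image plane distance n and farthest distance f (symmetric frustum).
    QMatrix4x4 projectionFromVirtualImage(float l, float t, float n, float f)
    {
        QMatrix4x4 P;
        P.frustum(-l / 2.0f, l / 2.0f,   // left, right
                  -t / 2.0f, t / 2.0f,   // bottom, top
                   n, f);                // near (virtual image plane), far
        return P;
    }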
Illustratively, corresponding to the second way of determining the view matrix in the foregoing, the second way of determining the projection matrix here is:
The main control unit can acquire a field angle parameter (i.e., the field angle of the first camera mechanism, also called the field of view in optical engineering, whose size determines the field range of the optical instrument), the virtual image size (i.e., parameters such as the height and width at which the virtual image needs to be displayed), and the near-far interface parameters used when displaying the virtual image (i.e., the nearest and farthest distances at which an object virtual image needs to be displayed in the virtual image display interface), and determine the projection matrix according to the field angle parameter, the virtual image size, and the near-far interface parameters.
For example, the field angle parameter: fov (denoted aspect in the call below).
Virtual image size: width (virtual image width) and height (virtual image height).
Parameters of the near-far interface: including n (near), f (far).
Then, the master control unit may compute the projection matrix through OpenGL:
projectMat.perspective(aspect, width()/height(), n, f).
Thus, the main control unit can calculate the projection matrix projectMat through OpenGL.
By adopting OpenGL, the projection matrix can be determined quickly and efficiently according to the field angle parameter, the virtual image size and the near-far interface parameter. Moreover, the calculation of the conversion matrix is carried out by adopting OpenGL, so that the acceleration of a GPU is favorably realized when virtual image display is carried out, the virtual image display speed is increased, and the visual experience of drivers is improved.
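As a sketch, the call above matches the signature of Qt's QMatrix4x4::perspective(verticalAngle, aspectRatio, nearPlane, farPlane); treating the field angle parameter as a vertical angle in degrees is an assumption drawn from that signature, not a statement from the patent.

    #include <QMatrix4x4>

    // Second way: projection matrix from the field angle parameter (fov, passed
    // as "aspect" above), the virtual image width/height and the near-far
    // interface parameters n and f.
    QMatrix4x4 projectionFromFov(float fov, float width, float height, float n, float f)
    {
        QMatrix4x4 projectMat;
        projectMat.perspective(fov, width / height, n, f);
        return projectMat;
    }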
In the present embodiment, the view matrix is calculated first, and then the projection matrix is calculated, but the present application is not limited thereto. In practical applications, the order of determining the view matrix and the projection matrix is not limited. And what kind of method is used to calculate the view matrix and the projection matrix is not specifically limited, and may be selected according to actual needs, for example, the view matrix is calculated by the first method, and the projection matrix is calculated by the second method.
After the view matrix and the projection matrix are determined, the main control unit can determine the conversion matrix according to the view matrix and the projection matrix. For example, the main control unit may obtain the conversion matrix by left-multiplying the projection matrix by the view matrix: X = CP. In this way, the conversion matrix can be calculated efficiently and accurately.
After the conversion matrix is determined, the main control unit can determine the object virtual image position of the attention object on the virtual image display surface according to the object position and the conversion matrix.
For example, the main control unit may determine, according to the object three-dimensional model and the transformation matrix, a two-dimensional object graph that the object three-dimensional model is used for displaying on the virtual image display surface. That is, by converting the matrix, the conversion of the object three-dimensional model into the two-dimensional object graphic can be realized.
Then, the main control unit may display a virtual image of the subject in the virtual image display surface according to the position of the virtual image of the subject. For example, the main control unit may display the virtual object image in the virtual image display surface according to the virtual object image position and the two-dimensional object figure.
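For illustration, the sketch below maps one world-space vertex of the object three-dimensional model onto the virtual image display surface by applying the view matrix and then the projection matrix; whether the two are pre-composed into a single conversion matrix, as described above, or applied in sequence is an implementation detail, and the final pixel mapping is a hypothetical placeholder.

    #include <QMatrix4x4>
    #include <QPointF>
    #include <QVector3D>
    #include <QVector4D>

    // Project a vertex of the object 3D model (world coordinates) onto the
    // virtual image display surface using the view matrix C and projection
    // matrix P, with an explicit homogeneous divide.
    QPointF projectVertex(const QVector3D &worldPoint,
                          const QMatrix4x4 &view,        // C
                          const QMatrix4x4 &projection)  // P
    {
        const QVector3D corrected = view.map(worldPoint);              // C corrects the object position
        const QVector4D clip = projection * QVector4D(corrected, 1.0f);
        // Normalized coordinates on the virtual image display surface; mapping
        // these to actual HUD pixels depends on the virtual image resolution.
        return QPointF(clip.x() / clip.w(), clip.y() / clip.w());
    }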
The method comprises the steps of determining the object position of an attention object in a world coordinate system based on a driving image, combining a conversion matrix determined based on the eye position and a virtual image display surface to determine the object virtual image position of the attention object on the virtual image display surface, and displaying a virtual image according to the object virtual image position. The position, the size and the like of the virtual image of the object to be displayed on the virtual image display surface can be accurately determined in such a mode, so that the virtual image object and the attention object at the visual angle of the driver are accurately superposed, and the fusion effect of the virtual image and the reality can be well realized. And an object three-dimensional model of the attention object is established, a two-dimensional object graph of the object three-dimensional model can be determined by combining the transformation matrix, and the object virtual image is displayed in the virtual image display surface according to the object virtual image position and the two-dimensional object graph. In such a way, on one hand, the accuracy of virtual image display of the object can be ensured; on the other hand, the consistency of the display effect of the virtual image of the object and the visual effect of the attention object in practice can be ensured, so that the fusion effect of the virtual image and reality can be improved.
In the above, the display effect fusion method provided in the embodiment of the present application is introduced, and here, an application flow of the method will be briefly described with reference to a specific example.
Referring to fig. 3, fig. 3 is a process diagram of an exemplary display effect fusion method according to an embodiment of the present disclosure.
For example, on the one hand, the ADAS (i.e., the first camera mechanism) may capture driving images and send them to the main control unit through the CAN bus (i.e., converted into CAN data); the main control unit may then parse the CAN data and construct the three-dimensional model of the attention object (e.g., a lane line 3D model, a vehicle 3D model, a pedestrian 3D model, etc.). On the other hand, the depth camera (i.e., the second camera mechanism) may capture a person image (containing the driver), and the main control unit may process the person image (e.g., locating the eyes) to determine the corresponding conversion parameters (i.e., the conversion matrix). Using the conversion parameters, the main control unit can perform virtual image conversion on the object three-dimensional model to determine the corresponding object virtual image (for example, a lane line virtual image, a vehicle virtual image, a pedestrian virtual image, etc.), and then display the virtual image. The virtual image display realized in this way coincides with the actual scene observed by the driver, thereby improving the virtual image display effect and the driver's experience.
Referring to fig. 4, an embodiment of the present application further provides a main control unit, and fig. 4 is a block diagram of a structure of a main control unit 130 according to the embodiment of the present application.
For example, the main control unit 130 may include: a communication module 132 connected to the outside through a network, one or more processors 134 for executing program instructions, a bus 133, and a memory 131 in any of various forms, such as a magnetic disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), or any combination thereof. The memory 131, the communication module 132 and the processor 134 are connected by the bus 133.
Illustratively, the memory 131 has stored therein a program. The processor 134 may call and run the programs from the memory 131, so that the display effect fusion method may be performed by running the programs to enhance the display effect of the virtual image.
The embodiment of the present application further provides a storage medium, where one or more programs are stored, and the one or more programs may be executed by one or more processors to implement the display effect fusion method in the embodiment.
To sum up, the present application provides a display effect fusion method, system, storage medium and main control unit. A driving image of the driving vehicle (containing one or more attention objects such as lane lines, road signs, other vehicles and pedestrians) and the eye position of the driver are acquired, and an object virtual image (a virtual image of the attention object displayed in the virtual image display surface) is displayed on a preset virtual image display surface based on the eye position and the driving image, so that the object virtual image observed from the eye position coincides with the attention object in the actual scene (the real scene containing the attention object as observed by the driver's eyes). In this way, factors such as the height, driving habits and visual habits (eye position, line of sight, etc.) of different people can be taken into account, so that the virtual image is well fused with the real environment and virtual image offset (i.e., a deviation between the virtual image seen by the driver and the actual scene) is avoided as far as possible. Moreover, the display of the virtual image can be adjusted in real time as the driver's eye position changes, so the method adapts to drivers of different heights, driving habits and visual habits, ensures the virtual image display effect, and improves the driver's driving experience.
In the embodiments provided in the present application, it should be understood that the disclosed method can be implemented in other ways, and the above-described device embodiments are merely illustrative.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A display effect fusion method is applied to a head-up display system, and comprises the following steps:
acquiring a driving image of a driving vehicle, wherein the driving image comprises an attention object, and the attention object comprises one or more of a lane line, a road sign, other vehicles and pedestrians;
acquiring the eye position of a driver in the driving vehicle;
and displaying a virtual object image in the virtual image display surface according to the eye position, the driving image and a preset virtual image display surface, so that the virtual object image observed from the eye position coincides with the attention object in an actual scene, wherein the virtual object image represents a virtual image displayed by the attention object in the virtual image display surface, and the actual scene represents a real scene containing the attention object observed by the eyes of the driver.
2. The display effect fusion method according to claim 1, wherein the obtaining of the eye position of the driver in the driving vehicle comprises:
acquiring a person image containing the driver in the driving vehicle;
and determining the eye position of the eyes of the driver in a world coordinate system according to the personnel image, wherein the world coordinate system is established based on the driving vehicle.
3. The display effect fusion method according to claim 1, wherein the eye position is a position of the eyes of the driver in a world coordinate system, and the displaying a virtual image of the subject in the virtual image display surface according to the eye position, the driving image and a preset virtual image display surface comprises:
determining the object position of the attention object in the world coordinate system according to the driving image;
determining a conversion matrix according to the eye position and the virtual image display surface, wherein the conversion matrix is used for converting position coordinates in the world coordinate system into virtual image coordinates of the virtual image display surface;
determining an object virtual image position of the attention object on the virtual image display surface according to the object position and the conversion matrix;
and displaying the virtual object image in the virtual image display surface according to the virtual object image position.
4. The display effect fusion method according to claim 3, wherein the determining a conversion matrix according to the eye position and the virtual image display surface comprises:
determining a sight line vector of the eye position facing the virtual image display surface according to the eye position and the virtual image display surface;
determining a view matrix according to the eye position and the sight line vector, wherein the view matrix is used for correcting the object position;
acquiring the size of a virtual image and near-far interface parameters when the virtual image is displayed, and determining a projection matrix according to the size of the virtual image and the near-far interface parameters;
and determining the conversion matrix according to the view matrix and the projection matrix.
5. The display effect fusion method according to claim 4, wherein the determining a view matrix according to the eye position and the sight line vector comprises:
determining an eye position matrix according to the eye position;
determining an eye visual angle matrix according to the sight line vector;
and multiplying the eye visual angle matrix by the eye position matrix to determine the view matrix.
6. The display effect fusion method according to claim 4, wherein the determining a view matrix according to the eye position and the sight line vector comprises:
acquiring a reference vector of the virtual image display surface, wherein the reference vector represents a vector perpendicular to the virtual image display surface at its center and pointing toward the outside of the driving vehicle;
and calculating the view matrix through OpenGL according to the eye position, the sight line vector and the reference vector.
7. The display effect fusion method according to claim 3, wherein after the determining the object position of the attention object in the world coordinate system according to the driving image, the method further comprises:
constructing, according to the attention object and the object position, an object three-dimensional model of the attention object that contains the object position;
and after determining a conversion matrix according to the eye position and the virtual image display surface, the method further comprises:
determining, according to the object three-dimensional model and the conversion matrix, a two-dimensional object graph used for displaying the object three-dimensional model on the virtual image display surface;
correspondingly, the displaying the virtual object image in the virtual image display surface according to the virtual object image position includes:
and displaying the virtual object image in the virtual image display surface according to the virtual object image position and the two-dimensional object graph.
8. A storage medium storing one or more programs executable by one or more processors to implement the display effect fusion method according to any one of claims 1 to 7.
9. A main control unit, comprising a memory for storing information comprising program instructions and a processor for controlling execution of the program instructions, wherein the program instructions are loaded and executed by the processor to implement the display effect fusion method of any one of claims 1 to 7.
10. A head-up display system, comprising:
a first camera mechanism, configured to capture a driving image of a driving vehicle, wherein the driving image comprises an attention object, and the attention object comprises one or more of a lane line, a road sign, other vehicles and pedestrians;
a second camera mechanism, configured to capture a person image of a driver in the driving vehicle; and
a main control unit, configured to acquire the driving image and the person image, determine an eye position of the eyes of the driver in a world coordinate system according to the person image, and display a virtual object image in a virtual image display surface, so that the virtual object image observed from the eye position coincides with the attention object in an actual scene, wherein the world coordinate system is established based on the driving vehicle, the virtual object image represents a virtual image displayed by the attention object in the virtual image display surface, and the actual scene represents a real scene containing the attention object observed by the eyes of the driver.
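For illustration of claims 4 to 6 only, the sketch below shows one way a view matrix could be built from the eye position and sight line vector (in the style of OpenGL's lookAt) and a projection matrix from the virtual image size and the near-far interface parameters, with the two composed into the conversion matrix. It is a minimal sketch under assumed conventions (right-handed coordinates, a symmetric frustum sized by the virtual image at the near interface); all names and values are hypothetical, and no actual OpenGL API is called:

```python
import numpy as np

def view_matrix(eye, sight_vector, up=np.array([0.0, 0.0, 1.0])):
    """View matrix from the eye position and the sight-line vector toward the virtual image surface."""
    f = sight_vector / np.linalg.norm(sight_vector)   # normalized line of sight
    s = np.cross(f, up); s /= np.linalg.norm(s)       # right axis
    u = np.cross(s, f)                                # corrected up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f           # eye visual-angle (rotation) part
    m[:3, 3] = m[:3, :3] @ (-eye)                     # eye position (translation) part
    return m

def projection_matrix(image_width, image_height, near, far):
    """Symmetric perspective frustum, assuming the virtual image size defines the near interface."""
    p = np.zeros((4, 4))
    p[0, 0] = 2.0 * near / image_width
    p[1, 1] = 2.0 * near / image_height
    p[2, 2] = -(far + near) / (far - near)
    p[2, 3] = -2.0 * far * near / (far - near)
    p[3, 2] = -1.0
    return p

# Conversion matrix as described in claim 4: projection composed with the eye-dependent view.
# conversion = projection_matrix(1.2, 0.4, 2.0, 100.0) @ view_matrix(eye_pos, sight_vec)
```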
CN202011121206.1A 2020-10-19 2020-10-19 Display effect fusion method, system, storage medium and main control unit Pending CN112242009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011121206.1A CN112242009A (en) 2020-10-19 2020-10-19 Display effect fusion method, system, storage medium and main control unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011121206.1A CN112242009A (en) 2020-10-19 2020-10-19 Display effect fusion method, system, storage medium and main control unit

Publications (1)

Publication Number Publication Date
CN112242009A (en) 2021-01-19

Family

ID=74169039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011121206.1A Pending CN112242009A (en) 2020-10-19 2020-10-19 Display effect fusion method, system, storage medium and main control unit

Country Status (1)

Country Link
CN (1) CN112242009A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434620A (en) * 2021-06-25 2021-09-24 阿波罗智联(北京)科技有限公司 Display method, device, equipment, storage medium and computer program product
CN113655618A (en) * 2021-08-04 2021-11-16 杭州炽云科技有限公司 ARHUD image display method and device based on binocular vision
CN115171412A (en) * 2022-08-09 2022-10-11 阿波罗智联(北京)科技有限公司 Method, system and device for displaying vehicle running state
CN115171412B (en) * 2022-08-09 2024-04-12 阿波罗智联(北京)科技有限公司 Method, system and device for displaying running state of vehicle
CN115665400A (en) * 2022-09-06 2023-01-31 东软集团股份有限公司 Augmented reality head-up display imaging method, device, equipment and storage medium
CN115665400B (en) * 2022-09-06 2024-05-28 东软集团股份有限公司 Augmented reality head-up display imaging method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US11241960B2 (en) Head up display apparatus and display control method thereof
CN109649275B (en) Driving assistance system and method based on augmented reality
WO2021197189A1 (en) Augmented reality-based information display method, system and apparatus, and projection device
CN112242009A (en) Display effect fusion method, system, storage medium and main control unit
US9563981B2 (en) Information processing apparatus, information processing method, and program
CN110478901B (en) Interaction method and system based on augmented reality equipment
EP3845861A1 (en) Method and device for displaying 3d augmented reality navigation information
US10539790B2 (en) Coordinate matching apparatus for head-up display
WO2023071834A1 (en) Alignment method and alignment apparatus for display device, and vehicle-mounted display system
CN111462249B (en) Traffic camera calibration method and device
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
EP3811326B1 (en) Heads up display (hud) content control system and methodologies
WO2021197190A1 (en) Information display method, system and apparatus based on augmented reality, and projection device
CN113483774A (en) Navigation method, navigation device, electronic equipment and readable storage medium
CN115525152A (en) Image processing method, system, device, electronic equipment and storage medium
CN112484743B (en) Vehicle-mounted HUD fusion live-action navigation display method and system thereof
CN115493614B (en) Method and device for displaying flight path line, storage medium and electronic equipment
CN212873085U (en) Head-up display system
CN107848460A (en) For the system of vehicle, method and apparatus and computer-readable medium
CN209290277U (en) DAS (Driver Assistant System)
CN114463832A (en) Traffic scene sight tracking method and system based on point cloud
JP6385621B2 (en) Image display device, image display method, and image display program
KR20180026418A (en) Apparatus for matching coordinate of head-up display
CN117173252A (en) AR-HUD driver eye box calibration method, system, equipment and medium
CN116597425B (en) Method and device for determining sample tag data of driver and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination