CN115984122A - HUD backlight display system and method - Google Patents

HUD backlight display system and method

Info

Publication number
CN115984122A
CN115984122A
Authority
CN
China
Prior art keywords
image
hud
virtual
coordinates
coordinate system
Prior art date
Legal status
Pending
Application number
CN202211483882.2A
Other languages
Chinese (zh)
Inventor
曹俊威
刘子平
冯超
Current Assignee
Shenzhen Hadbest Electronics Co ltd
Original Assignee
Shenzhen Hadbest Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Hadbest Electronics Co ltd
Priority to CN202211483882.2A
Publication of CN115984122A
Status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a HUD backlight display system and method. Light is reflected to the driver's sight frame through a half-reflective, half-transmissive reflecting screen so that the driver sees a HUD projected virtual image in front of the screen. The HUD projected virtual image is preprocessed to obtain a HUD image; a binocular camera pair is placed horizontally within the eye movement range, the two cameras symmetric about its center, and each captures a picture of the HUD image. The two images are rectified and matched with the camera calibration parameters, a disparity map is computed from them, and the virtual image distance is calculated from the disparity value and distance of the virtual-image region in the disparity map. An on-board AR HUD model is built from the HUD projected virtual image and the virtual image distance so that the image projected by the HUD and observed by the driver is fused with the real scene, realizing scene-enhanced display. Image information is displayed in the driver's forward field of view, which reduces the time the driver spends looking down at instrument information while the vehicle is moving and improves driving safety.

Description

HUD backlight display system and method
Technical Field
The invention belongs to the technical field of head-up display, and particularly relates to a HUD backlight display system and method.
Background
Vehicle environment perception technology uses sensors such as radar, vision sensors and GPS to collect information about the vehicle's surroundings, helping the driver judge potential dangers in advance, stay focused on the current driving task and keep the vehicle safe. Because this information is presented on devices such as the central control display and the instrument panel, it cannot be delivered to the driver accurately in real time: during normal driving the driver must shift their gaze from the road ahead to the instrument panel, and it takes 4-7 seconds to return to the road after reading the instrument information, which creates a serious safety hazard.
At present, the interior of an automobile generally provides communication functions and an entertainment system, and social networking greatly extends the automobile's functions. The rich information enhances the driver's senses and improves the driver's emotional experience, but it also imposes a larger cognitive load: complex information can overload the driver's cognitive processing and cognitive resources, so that the user makes wrong decisions and actions that interfere with the primary driving task. If the driver looks down at entertainment information on a screen, or makes a call and ignores the road conditions, traffic accidents can result. It is therefore necessary to provide a HUD display system that reasonably superimposes driving information on the actual traffic situation within the driver's sight area, extends the driver's perception of the driving environment, and reduces or avoids the "field-of-view blind zone" caused by looking down at instrument information, so as to effectively improve driving safety.
Disclosure of Invention
In view of the above, the present invention provides a HUD backlight display system and method that improve the real-time performance of target detection in an embedded environment and enhance the stability and safety of the driving-environment information and of the system, so as to solve the above technical problems. The invention is specifically implemented by the following technical solutions.
In a first aspect, the present invention provides a HUD backlight display system comprising:
an image generation unit, configured to reflect light to the driver's sight frame through a half-reflective, half-transmissive reflecting screen so that the driver sees a HUD projected virtual image in front of the reflecting screen, the HUD projected virtual image including a virtual image shape and position;
an image preprocessing unit, configured to preprocess the HUD projected virtual image to obtain a HUD image, the preprocessing including dynamic region-of-interest selection, color feature extraction, gray-scale processing, smoothing and image binarization;
a virtual image measuring unit, configured to place a binocular camera pair horizontally within the eye movement range, the two cameras being symmetric about the center of the eye movement range and each capturing a picture of the HUD image; the two images are rectified and matched with the camera calibration parameters, a disparity map is computed from them, and the virtual image distance is calculated from the disparity value and distance of the virtual-image region in the disparity map (a sketch of this measurement follows the unit list below); the eye movement range is the space in which the driver can move freely without affecting the visualization effect of the virtual image, and includes a horizontal movement distance and a vertical movement distance;
and a model building unit, configured to build an on-board AR HUD model from the HUD projected virtual image and the virtual image distance so that the image projected by the HUD and observed by the driver is fused with the real scene, realizing scene-enhanced display.
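The binocular virtual-image distance measurement performed by the virtual image measuring unit above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: it assumes OpenCV stereo rectification and SGBM matching, a pre-measured stereo calibration dictionary calib (K1, D1, K2, D2, R, T), 8-bit grayscale frames from the two cameras, and a known pixel region roi of the virtual image in the left view. The distance follows the standard stereo relation Z = f * B / d.

import cv2
import numpy as np

def virtual_image_distance(img_l, img_r, calib, roi):
    # Rectify the pair with the binocular calibration parameters.
    h, w = img_l.shape[:2]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        calib["K1"], calib["D1"], calib["K2"], calib["D2"], (w, h),
        calib["R"], calib["T"])
    m1l, m2l = cv2.initUndistortRectifyMap(calib["K1"], calib["D1"], R1, P1, (w, h), cv2.CV_32FC1)
    m1r, m2r = cv2.initUndistortRectifyMap(calib["K2"], calib["D2"], R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, m1l, m2l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, m1r, m2r, cv2.INTER_LINEAR)

    # Dense matching; SGBM disparities are fixed-point, scaled by 16.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    disp = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0

    # Robust disparity of the virtual-image patch, then Z = f * B / d.
    x, y, rw, rh = roi
    d = disp[y:y + rh, x:x + rw]
    d_med = np.median(d[d > 0])
    f = P1[0, 0]                            # focal length after rectification (pixels)
    baseline = np.linalg.norm(calib["T"])   # camera baseline, same unit as T
    return f * baseline / d_med             # estimated virtual image distance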
As a further improvement of the above technical solution, building the on-board AR HUD model from the HUD projected virtual image and the virtual image distance, so that the image projected by the HUD and observed by the driver is fused with the real scene, includes:
first training, with a nonlinear fitting function, a network model that can predict the vertex coordinates of the AR HUD virtual projection screen; then training, with a nonlinear fitting function, a network model that predicts the predistortion mapping table of the AR HUD projected virtual image; and finally predistorting the image according to the mapping relation of the predistortion mapping table, so as to realize continuous AR HUD virtual-image distortion correction under dynamic eye positions. The algorithm model is as follows:
the driver's eye position coordinates are preset as E(E_x, E_y, E_z); the coordinates of a point on the AR HUD virtual equivalent plane are P(x_i, y_i, z_i) (i = 1, 2, ..., n), and the pixel coordinates of this point in the original input image are U_i,j(u_i,j, v_i,j) (i = 1, 2, ..., n; j = 1, 2, ..., m); the mapping from the virtual image plane to the original input image plane, i.e. the predistortion transformation of the virtual image, is given by [formula image not reproduced];
the network structure comprises an input layer, several hidden layers and an output layer; the network input is formed from the eye position coordinates and the virtual-plane point coordinates, the ideal output of the network model is the corresponding pixel coordinates in the original input image, and the actual output is the network's prediction of those pixel coordinates [formula images not reproduced];
because the driver's eye position changes continuously, the mean square error is used to measure the network loss, and the network error function E is the mean square error between the ideal output and the actual output [formula image not reproduced];
the mapping between the original input image and the HUD output virtual image is established under several eye positions and used as the training sample set for neural network learning: n feature points of the input dot matrix map are preset as the original image, together with the corresponding distorted point set in the virtual projection screen under the current eye position; m eye positions are selected, each of which establishes n correspondences, forming the neural network input/output sample set [formula image not reproduced];
the network model is trained offline by iterative neural-network error back-propagation until the error E is less than or equal to the value β, yielding the network weight coefficients W; the learning rate is adjusted dynamically by exponential decay with the adjustment expression L_r = L_r * g^e, where g is the base of the learning-rate adjustment multiple and e is the number of training steps;
and the trained neural network then continuously maps any point on the virtual image plane, under dynamic eye-position conditions, to a point on the original input image plane, and interpolates continuously in the high-dimensional space by virtue of the nonlinear fitting property of the neural network, thereby realizing predistortion of the image on the virtual image plane.
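A minimal sketch of the nonlinear-fitting network described above, under the assumption that it is a small fully connected model mapping (eye position, virtual-plane point) to a source pixel (u, v), trained with mean-square-error loss and per-epoch exponential learning-rate decay (L_r = L_r * g^e). PyTorch is used only for illustration; the layer sizes, activation, learning rate, decay base and stopping threshold are assumptions, not values given in the patent.

import torch
import torch.nn as nn

class PredistortNet(nn.Module):
    # MLP: (eye position xyz, virtual-plane point xyz) -> pixel (u, v) in the input image
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2))

    def forward(self, eye_xyz, point_xyz):
        return self.net(torch.cat([eye_xyz, point_xyz], dim=-1))

def train(model, samples, epochs=2000, lr=1e-3, gamma=0.98, tol=1e-4):
    # samples: (eye_xyz, point_xyz, uv_target) tensors built from the m x n
    # eye-position / feature-point correspondences; stop when MSE <= tol (the "beta" bound).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=gamma)  # lr <- lr * gamma each epoch
    loss_fn = nn.MSELoss()
    eye, pts, uv = samples
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(eye, pts), uv)
        loss.backward()
        opt.step()
        sched.step()
        if loss.item() <= tol:
            break
    return model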
As a further improvement of the above technical solution, performing continuous interpolation in the high-dimensional space according to the nonlinear fitting property of the neural network to realize predistortion of the image on the virtual image plane includes:
inputting a virtual-image-plane point P(x_i, y_0, z_i) (i = 1, 2, ..., n) together with the eye position coordinates E(E_x, E_y, E_z) into the network model to obtain the network model output, i.e. the predicted pixel coordinates in the original input image [formula images not reproduced];
the three-dimensional eye-position coordinate data are input into the AR HUD network model to determine the image data in the HUD input image.
As a further improvement of the above technical solution, performing continuous interpolation in the high-dimensional space according to the nonlinear fitting property of the neural network further includes:
calculating the driver's eye position with a pupil detection algorithm and generating a different distortion mapping table for each eye position, the pupil detection algorithm being as follows:
acquiring the distortion mapping tables under K eye positions and regarding these K eye positions as standard eye positions; measuring the spatial coordinates of each of the K set eye positions to obtain K eye-position spatial coordinates;
taking the normalized dot matrix map as the input image and acquiring its virtual image under each of the K eye positions, obtaining K virtual images; based on the spatial coordinates of each of the K eye positions, acquiring the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane under each eye position, obtaining K coordinate sets; acquiring the three-dimensional coordinates of the four vertices of the virtual projection screen from each of the K coordinate sets, and obtaining, by a linear interpolation algorithm, the inverse mapping from the virtual image under the corresponding current eye position to the input image;
obtaining the multi-eye-position mapping table from the virtual images under the K eye positions to the input image based on the inverse mapping from the virtual image under each eye position to the input image; the driver's eye position obtained by the pupil detection algorithm is preset as E(E_x, E_y, E_z), and the three adjacent convex-combination standard eye positions are E_1, E_2 and E_3, so that eye position E can be expressed as E = β_1 E_1 + β_2 E_2 + β_3 E_3, where β_1, β_2 and β_3 are the weights of the three adjacent convex-combination standard eye positions; the coordinates in the input image of any point q of the virtual projection screen under eye position E then satisfy (u, v)^T = β_1 (u_1, v_1)^T + β_2 (u_2, v_2)^T + β_3 (u_3, v_3)^T, where (u, v)^T are the coordinates of point q in the input image under eye position E and (u_1, v_1)^T, (u_2, v_2)^T and (u_3, v_3)^T are its coordinates under eye positions E_1, E_2 and E_3;
and calculating the pixel coordinates in the input image corresponding to each pixel under the current eye position, forming a mapping table between each pixel of the virtual projection screen under the current eye position and the original input image, and processing the image according to this mapping table, so as to realize dynamic-eye-position distortion correction of the AR HUD image.
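The convex-combination step above can be illustrated as follows, assuming each standard eye position k has a precomputed mapping table std_maps[k] of shape (H, W, 2) giving, for every virtual-screen pixel, its (u, v) coordinates in the input image. Selecting the three standard eye positions by nearest Euclidean distance is a simplification of the patent's "adjacent" triple; the weights β are solved so that the current eye position is (approximately) their convex combination, and the mapping tables are blended with the same weights.

import numpy as np

def nearest_three(eye, std_eyes):
    # Nearest three standard eye positions by Euclidean distance (simplification).
    d = np.linalg.norm(np.asarray(std_eyes) - np.asarray(eye), axis=1)
    return np.argsort(d)[:3]

def barycentric_weights(eye, e1, e2, e3):
    # Weights b1..b3 with b1 + b2 + b3 = 1 so that eye ~ b1*e1 + b2*e2 + b3*e3,
    # solved in the least-squares sense.
    A = np.column_stack([e1, e2, e3])
    A = np.vstack([A, np.ones(3)])      # extra row enforces the sum-to-one constraint
    b = np.append(np.asarray(eye, dtype=float), 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def blend_mapping_tables(eye, std_eyes, std_maps):
    # Distortion mapping table for the current eye position, blended from the
    # tables of the three neighbouring standard eye positions.
    k1, k2, k3 = nearest_three(eye, std_eyes)
    b = barycentric_weights(eye, std_eyes[k1], std_eyes[k2], std_eyes[k3])
    return b[0] * std_maps[k1] + b[1] * std_maps[k2] + b[2] * std_maps[k3]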
As a further improvement of the above technical solution, the multiple linear interpolation algorithm for AR HUD image predistortion under dynamic eye-position conditions includes:
measuring the spatial coordinates of each of the K set eye positions to obtain K eye-position spatial coordinates, the eye position being the midpoint between the two eyes and the spatial coordinates being coordinates in the vehicle coordinate system;
taking the normalized dot matrix map as the input image and acquiring its virtual image under each of the K eye positions, obtaining K virtual images;
based on the K eye-position spatial coordinates, acquiring the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane under each eye position, obtaining K coordinate sets;
acquiring the three-dimensional coordinates of the four vertices of the virtual projection screen from each of the K coordinate sets, and obtaining the inverse mapping from the virtual image under the corresponding current eye position to the input image by a linear interpolation algorithm;
acquiring the multi-eye-position mapping table from the virtual images under the K eye positions to the input image based on the inverse mapping from the virtual image under each eye position to the original input image;
acquiring the driver's real-time eye position with a pupil detection algorithm and, based on the multi-eye-position mapping table from the virtual images under the K eye positions to the input image, generating by interpolation the distortion mapping table for AR HUD predistortion processing under the current eye position;
and transforming the output image pixel by pixel into the corresponding pixels of the virtual projection screen according to the generated distortion mapping table to realize predistortion of the image.
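Once a distortion mapping table for the current eye position exists, the pixel-by-pixel transformation in the last step above is a per-pixel lookup. A minimal sketch with OpenCV's remap follows; the (H, W, 2) table layout, where entry [i, j] holds the (u, v) source pixel for virtual-screen pixel (j, i), is an assumption for illustration.

import cv2
import numpy as np

def predistort(src_img, mapping_table):
    # Apply the per-pixel mapping with bilinear interpolation; pixels that map
    # outside the source image are filled with black.
    map_x = mapping_table[..., 0].astype(np.float32)
    map_y = mapping_table[..., 1].astype(np.float32)
    return cv2.remap(src_img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)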
As a further improvement of the foregoing technical solution, the specific execution procedure of the virtual image measurement unit includes:
completely covering the eye-movement region of the AR HUD projected image with a dot matrix according to the resolution set by the AR HUD hardware and the driver, acquiring the dot matrix projected by the AR HUD with an auxiliary camera, and solving the coordinates of each feature point in the vehicle coordinate system;
presetting the depth of field of the AR HUD virtual projection screen at the auxiliary-camera position in the vehicle coordinate system as Y_0, mapping all feature points onto the plane with depth of field Y_0, and denoting the coordinates of each feature point in the vehicle coordinate system as P_i (i = 1, 2, ..., n), where n is the number of feature points;
presetting the projection matrix of the auxiliary camera as M; for a point q among the feature points P_i (i = 1, 2, ..., n), its coordinates in the auxiliary-camera image are (u, v) and its coordinates in the vehicle coordinate system are (x, y, z); the projection relation then yields a linear system A X = B in the unknown coordinates, i.e. X = A^{-1} B [formula images not reproduced], from which the coordinates in the vehicle coordinate system of all feature points of the dot matrix map, mapped onto the set depth-of-field screen, i.e. the virtual equivalent plane, are calculated;
and selecting the maximum inscribed rectangular area as the virtual projection screen according to the distribution of the feature points on the virtual equivalent plane, and discretizing it to the set resolution.
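A sketch of the back-projection implied by X = A^{-1} B above: given the auxiliary camera's 3x4 projection matrix M and a feature point's pixel (u, v), the vehicle-frame coordinates on the plane with depth of field Y_0 are recovered by solving a 3x3 linear system in (x, z, s). The row/column layout of M follows the usual pinhole convention and is an assumption, not specified in the patent.

import numpy as np

def backproject_to_plane(uv, M, y0):
    # From s * [u, v, 1]^T = M [x, y0, z, 1]^T, collect the unknowns (x, z, s)
    # into A X = B and solve X = A^{-1} B.
    u, v = uv
    A = np.array([
        [M[0, 0], M[0, 2], -u],
        [M[1, 0], M[1, 2], -v],
        [M[2, 0], M[2, 2], -1.0]])
    B = -(M[:, 1] * y0 + M[:, 3])
    x, z, _s = np.linalg.solve(A, B)
    return np.array([x, y0, z])   # feature point on the virtual equivalent plane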
As a further improvement of the above technical solution, calculating the coordinates in the vehicle coordinate system of all feature points of the dot matrix map after mapping onto the set depth-of-field screen, i.e. the virtual equivalent plane, includes:
the three-dimensional coordinates in the AR HUD system comprise the vehicle coordinate system, the forward-looking camera coordinate system and the behavior-detection camera coordinate system, and the conversion between them is expressed by rotation and translation of the coordinate systems; the vehicle coordinate system of the AR HUD system is preset as C_W, the forward-looking camera coordinate system as C_F and the behavior-detection camera coordinate system as C_E; the coordinates of a point p in C_W are (x_w, y_w, z_w), the rotation matrix in the conversion between the forward-looking camera coordinate system and the vehicle coordinate system is R_1 and the translation matrix is T_1, so the coordinates (x_F, y_F, z_F) of point p in C_F are obtained by applying this rotation and translation to (x_w, y_w, z_w) [formula image not reproduced];
any point in the vehicle coordinate system is represented by a three-dimensional coordinate vector; all such vectors can be mapped onto the same two-dimensional plane, so any imaging point of the AR HUD projected virtual image can be mapped onto the virtual projection screen, with the mapped point uniquely represented in the vehicle coordinate system;
accordingly, after the forward-looking camera acquires the coordinates of a target in the image coordinate system, the corresponding coordinates of the target in the camera coordinate system and the vehicle coordinate system are determined by coordinate transformation, and the driver's eye position is obtained by transforming the pupil-detection result into the vehicle coordinate system, so that the conversion relations among the camera coordinate systems of the whole system are determined.
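A minimal sketch of the rotation-plus-translation conversion described above, assuming R and T are calibrated so that they map camera-frame coordinates into the vehicle frame C_W; the direction convention is an assumption, and the inverse mapping is included for the opposite case.

import numpy as np

def cam_to_vehicle(p_cam, R, T):
    # p_w = R @ p_cam + T
    return R @ np.asarray(p_cam) + np.asarray(T)

def vehicle_to_cam(p_w, R, T):
    # Inverse rigid transform: p_cam = R^T (p_w - T)
    return R.T @ (np.asarray(p_w) - np.asarray(T))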
As a further improvement of the above technical solution, determining the coordinates of a target in the camera coordinate system and the vehicle coordinate system by coordinate transformation after the forward-looking camera acquires its coordinates in the image coordinate system includes:
calculating the conversion from each camera coordinate system to the vehicle coordinate system using a multi-camera joint calibration algorithm together with the coordinate-system conversion relations; placing one calibration board in front of the vehicle and one in the cab, the two boards being parallel to each other; obtaining the relative position between the two boards with a laser rangefinder; and calculating, by the Zhang Zhengyou calibration method, the conversion from each of the two cameras to the coordinate system of the board it images, so as to establish the relation between the two cameras. The calibration procedure includes:
calculating the distortion coefficients and the intrinsic and extrinsic parameters of each camera with the Zhang Zhengyou calibration algorithm, placing the calibration boards in front of the vehicle and in the cab respectively, and collecting images of the boards with the corresponding cameras;
undistorting the collected pictures with the calibrated distortion coefficients, obtaining the conversion from each camera to the coordinate system of its calibration board, deriving the conversion between the two camera coordinate systems by coordinate transformation, and establishing the conversion of each camera coordinate system into the vehicle coordinate system, which is taken as the camera's extrinsic parameters;
obtaining, from the calibration algorithm and the coordinate-system conversion algorithm, the intrinsic, extrinsic and distortion parameters of the forward-looking camera and the behavior-detection camera, together with the rotation matrices R_1 and R_2 and the translation matrices T_1 and T_2;
all coordinates lie in the same vehicle coordinate system: for a point p_0 in the real world, the coordinates of p_0 in the vehicle coordinate system are (x_w, y_w, z_w)^T and its coordinates in the pupil-detection camera coordinate system are (x_E, y_E, z_E)^T.
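A sketch of the joint calibration described above, assuming OpenCV's implementation of Zhang's method and the standard board-to-camera pose convention returned by calibrateCamera; the board-to-vehicle pose is assumed to come from the laser-rangefinder offsets between the two parallel boards, and the pattern and square sizes are placeholder values.

import cv2
import numpy as np

def zhang_calibrate(images, pattern=(9, 6), square=0.03):
    # Per-camera Zhang calibration: intrinsics K, distortion dist, and the pose
    # (R, t) of the board in the camera frame for the last accepted view.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ok, corners = cv2.findChessboardCorners(gray, pattern)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
    R, _ = cv2.Rodrigues(rvecs[-1])
    return K, dist, R, tvecs[-1].reshape(3)

def camera_to_vehicle(R_board_cam, t_board_cam, R_board_vehicle, t_board_vehicle):
    # Chain the calibrated board->camera pose with the (rangefinder-derived)
    # board->vehicle pose to get the camera->vehicle extrinsics.
    R_cam_board = R_board_cam.T                       # invert board->camera
    t_cam_board = -R_board_cam.T @ t_board_cam
    R = R_board_vehicle @ R_cam_board                 # compose with board->vehicle
    t = R_board_vehicle @ t_cam_board + t_board_vehicle
    return R, t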
As a further improvement of the above technical solution, reflecting the light to the driver's sight frame through the half-reflective, half-transmissive reflecting screen so that the driver sees the HUD projected virtual image in front of the reflecting screen includes:
presetting that a light ray in space passes through two parallel planes; the coordinates of its intersection with the first screen are recorded as (u_0, v_0) and the coordinates of its intersection with the second plane as (s_0, t_0); by the optical properties, the coordinates (u_0, v_0, s_0, t_0) determine the direction and position of the light ray.
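The two-plane parameterisation (u_0, v_0, s_0, t_0) above fixes a ray's position and direction directly from its two intersections. A minimal sketch, assuming the two parallel planes sit at z = z1 and z = z2:

import numpy as np

def ray_from_two_planes(uv0, st0, z1=0.0, z2=1.0):
    # (u0, v0): intersection with the first screen; (s0, t0): with the second plane.
    p1 = np.array([uv0[0], uv0[1], z1])
    p2 = np.array([st0[0], st0[1], z2])
    d = p2 - p1
    return p1, d / np.linalg.norm(d)   # a point on the ray and its unit direction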
In a second aspect, the present invention further provides a HUD backlight display method, including the steps of:
the method comprises the steps that environmental information of a forward-looking camera shooting vehicle running direction is obtained, a shot picture is transmitted to an information processing module, and a driving state of a driver is shot through a behavior detection camera of the driver;
analyzing the driving state of the driver by adopting a pupil detection algorithm, acquiring an image needing HUD projection by an information processing module according to a vehicle driving scene and the driving state information of the driver, and rendering and distortion removing the image;
and transmitting the image to the HUD for projection, so that the driver observes the image projected by the HUD and is fused with a real scene to realize scene enhancement.
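A high-level sketch of one cycle of this method, showing only the order of the steps. All arguments are caller-supplied objects or callables standing in for the pupil detection, information-processing, predistortion and projection components described in this document; none of these names come from the patent.

def hud_display_step(front_cam, driver_cam, detect_pupil, plan_overlay, predistort, hud):
    road_frame = front_cam.read()                 # environment in the driving direction
    driver_frame = driver_cam.read()              # driver state from the behaviour camera
    eye_pos = detect_pupil(driver_frame)          # driver eye position
    overlay = plan_overlay(road_frame, eye_pos)   # image that needs HUD projection
    hud.project(predistort(overlay, eye_pos))     # rendered and undistorted image to the HUD
    return overlay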
The invention provides a HUD backlight display system and method. Light is reflected to the driver's sight frame through a half-reflective, half-transmissive reflecting screen so that the driver sees a HUD projected virtual image in front of the screen. The HUD projected virtual image is preprocessed to obtain a HUD image; a binocular camera pair is placed horizontally within the eye movement range, the two cameras symmetric about its center, and each captures a picture of the HUD image. The two images are rectified and matched with the camera calibration parameters, a disparity map is computed from them, and the virtual image distance is calculated from the disparity value and distance of the virtual-image region in the disparity map. An on-board AR HUD model is built from the HUD projected virtual image and the virtual image distance so that the image projected by the HUD and observed by the driver is fused with the real scene, realizing scene-enhanced display. The vehicle body information and the target-detection data are processed, the data that need to be displayed in the driver's sight are classified, the target registration information is obtained by three-dimensional registration, and the visual design of the output image enhances the driver's perception of the target object. The AR HUD system classifies pedestrians and vehicles and segments them at the image-pixel level; after the vehicle environment information is obtained, the computer-generated virtual image is combined with the real scene, enhancing the driver's perception of the environment. The image information is displayed in the driver's forward field of view, which reduces the time spent looking down at instrument information while the vehicle is moving, keeps the driver's attention on the direction of travel and improves driving safety.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope; other related drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a block diagram of a HUD backlight display system according to the present invention;
FIG. 2 is a process diagram of distortion correction according to the present invention;
FIG. 3 is a flowchart of the HUD backlight display method of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Referring to fig. 1, the present invention provides a HUD backlight display system including:
an image generation unit, configured to reflect light to the driver's sight frame through the half-reflective, half-transmissive reflecting screen so that the driver sees a HUD projected virtual image in front of the reflecting screen, the HUD projected virtual image including a virtual image shape and a virtual image position;
an image preprocessing unit, configured to preprocess the HUD projected virtual image to obtain a HUD image, the preprocessing including dynamic region-of-interest selection, color feature extraction, gray-scale processing, smoothing and image binarization;
a virtual image measuring unit, configured to place a binocular camera pair horizontally within the eye movement range, the two cameras being symmetric about the center of the eye movement range and each capturing a picture of the HUD image; the two images are rectified and matched with the camera calibration parameters, a disparity map is computed from them, and the virtual image distance is calculated from the disparity value and distance of the virtual-image region in the disparity map; the eye movement range is the space in which the driver can move freely without affecting the visualization effect of the virtual image, and includes a horizontal movement distance and a vertical movement distance;
and a model building unit, configured to build an on-board AR HUD model from the HUD projected virtual image and the virtual image distance so that the image projected by the HUD and observed by the driver is fused with the real scene, realizing scene-enhanced display.
In this embodiment, the specific execution procedure of the virtual image measurement unit includes: completely covering the eye-movement region of the AR HUD projected image with a dot matrix according to the resolution set by the AR HUD hardware and the driver, acquiring the dot matrix projected by the AR HUD with an auxiliary camera, and solving the coordinates of each feature point in the vehicle coordinate system; presetting the depth of field of the AR HUD virtual projection screen at the auxiliary-camera position in the vehicle coordinate system as Y_0, mapping all feature points onto the plane with depth of field Y_0, and denoting the coordinates of each feature point in the vehicle coordinate system as P_i (i = 1, 2, ..., n), where n is the number of feature points; presetting the projection matrix of the auxiliary camera as M; for a point q among the feature points P_i, its pixel coordinates in the auxiliary-camera image are (u, v) and its coordinates in the vehicle coordinate system are (x, y, z), which yields a linear system A X = B, i.e. X = A^{-1} B [formula images not reproduced], from which the coordinates in the vehicle coordinate system of all feature points of the dot matrix map mapped onto the set depth-of-field screen, i.e. the virtual equivalent plane, are calculated; and selecting the maximum inscribed rectangular area as the virtual projection screen according to the distribution of the feature points on the virtual equivalent plane and discretizing it to the set resolution. Reflecting the light to the driver's sight frame through the half-reflective, half-transmissive reflecting screen so that the driver sees the HUD projected virtual image in front of the reflecting screen includes: presetting that a light ray in space passes through two parallel planes, recording the coordinates of its intersection with the first screen as (u_0, v_0) and of its intersection with the second plane as (s_0, t_0); by the optical properties, the coordinates (u_0, v_0, s_0, t_0) determine the direction and position of the light ray.
It should be noted that AR projects computer-generated image information through a device so that the driver sees virtual information fused with real information. After the three-dimensional registration result of an object in the real scene is obtained, the virtual image is processed at the registered position using computer graphics and similar techniques so that it achieves both a real-time effect and a "visual enhancement" effect. Real-time means that the virtual image changes promptly as the observer's pose changes and stays matched to the real environment; "visual enhancement" means that the virtual image gives the driver different visual prompts for different objects in the real scene. To fuse virtual information with the real scene in real time, the object to be visually enhanced must be located, and a three-dimensional registration algorithm is used to obtain its registration points; the precondition for three-dimensional registration, however, is that the virtual image projected by the AR HUD system must be spatially located and measured, and the virtual image must also be distortion-corrected so as not to affect the AR HUD's imaging effect and registration accuracy. The image projected by the HUD is not a real image: after being reflected by the HUD's complex optical lens group, the light is reflected by the car windshield into the driver's sight and forms a virtual image that overlays the real scene ahead from the driver's viewing angle and cannot be seen from in front of the windshield. The HUD projected virtual image is preprocessed to obtain the HUD image, the preprocessing including dynamic region-of-interest selection, color feature extraction, gray-scale processing, smoothing and image binarization, which improves the accuracy of image processing.
It should be understood that the image projected by the HUD is an irregular image; to present a good display effect, the image projected by the HUD must be predistorted, i.e. preprocessed before HUD projection so that the processed image has a good visual effect after projection. The preset dot matrix map projected by the AR HUD is captured with a camera, the spatial coordinates of each feature point in the dot matrix map are calculated, the virtual projection screen coordinates of the AR HUD are obtained from them as described above, and the projection screen is discretized to the preset resolution. A mapping between each pixel of the virtual projection screen and the original input image is then established from the correspondence between the feature points of the virtual projection screen and the feature points of the original input image, and stored as a mapping table; finally, the image projected by the AR HUD is processed according to the correspondences in the mapping table, mitigating the distortion produced by the AR HUD system. The system improves the driver's perception of the driving environment, reduces the time spent looking down at the vehicle instruments, keeps the driver's attention focused ahead of the vehicle, reduces adjustments of the driver's visual focus and improves driving safety.
Optionally, building the on-board AR HUD model from the HUD projected virtual image and the virtual image distance, so that the image projected by the HUD and observed by the driver is fused with the real scene, includes:
first training, with a nonlinear fitting function, a network model that can predict the vertex coordinates of the AR HUD virtual projection screen; then training, with a nonlinear fitting function, a network model that predicts the predistortion mapping table of the AR HUD projected virtual image; and finally predistorting the image according to the mapping relation of the predistortion mapping table, so as to realize continuous AR HUD virtual-image distortion correction under dynamic eye positions. The algorithm model is as follows:
the driver's eye position coordinates are preset as E(E_x, E_y, E_z); the coordinates of a point on the AR HUD virtual equivalent plane are P(x_i, y_i, z_i) (i = 1, 2, ..., n), and the pixel coordinates of this point in the original input image are U_i,j(u_i,j, v_i,j) (i = 1, 2, ..., n; j = 1, 2, ..., m); the mapping from the virtual image plane to the original input image plane, i.e. the predistortion transformation of the virtual image, is given by [formula image not reproduced];
the network structure comprises an input layer, several hidden layers and an output layer; the network input is formed from the eye position coordinates and the virtual-plane point coordinates, the ideal output of the network model is the corresponding pixel coordinates in the original input image, and the actual output is the network's prediction of those pixel coordinates [formula images not reproduced];
because the driver's eye position changes continuously, the mean square error is used to measure the network loss, and the network error function E is the mean square error between the ideal output and the actual output [formula image not reproduced];
the mapping between the original input image and the HUD output virtual image is established under several eye positions and used as the training sample set for neural network learning: n feature points of the input dot matrix map are preset as the original image, together with the corresponding distorted point set in the virtual projection screen under the current eye position; m eye positions are selected, each of which establishes n correspondences, forming the neural network input/output sample set [formula image not reproduced];
the network model is trained offline by iterative neural-network error back-propagation until the error E is less than or equal to the value β, yielding the network weight coefficients W; the learning rate is adjusted dynamically by exponential decay with the adjustment expression L_r = L_r * g^e, where g is the base of the learning-rate adjustment multiple and e is the number of training steps;
and the trained neural network then continuously maps any point on the virtual image plane, under dynamic eye-position conditions, to a point on the original input image plane, and interpolates continuously in the high-dimensional space by virtue of the nonlinear fitting property of the neural network, thereby realizing predistortion of the image on the virtual image plane.
In this embodiment, performing continuous interpolation in the high-dimensional space according to the nonlinear fitting property of the neural network to realize predistortion of the image on the virtual image plane includes: inputting a virtual-image-plane point P(x_i, y_0, z_i) (i = 1, 2, ..., n) together with the eye position coordinates E(E_x, E_y, E_z) into the network model to obtain the network model output, i.e. the predicted pixel coordinates in the original input image [formula images not reproduced]; the three-dimensional eye-position coordinate data are input into the AR HUD network model to determine the image data in the HUD input image. Performing continuous interpolation in the high-dimensional space according to the nonlinear fitting property of the neural network further includes: calculating the driver's eye position with a pupil detection algorithm and generating a different distortion mapping table for each eye position, the pupil detection algorithm being as follows: acquiring the distortion mapping tables under K eye positions and regarding these K eye positions as standard eye positions; measuring the spatial coordinates of each of the K set eye positions to obtain K eye-position spatial coordinates; taking the normalized dot matrix map as the input image and acquiring its virtual image under each of the K eye positions, obtaining K virtual images; based on the spatial coordinates of each of the K eye positions, acquiring the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane under each eye position, obtaining K coordinate sets; acquiring the three-dimensional coordinates of the four vertices of the virtual projection screen from each of the K coordinate sets, and obtaining, by a linear interpolation algorithm, the inverse mapping from the virtual image under the corresponding current eye position to the input image; obtaining the multi-eye-position mapping table from the virtual images under the K eye positions to the input image based on the inverse mapping from the virtual image under each eye position to the input image; the driver's eye position obtained by the pupil detection algorithm is preset as E(E_x, E_y, E_z), and the three adjacent convex-combination standard eye positions are E_1, E_2 and E_3, so that eye position E can be expressed as E = β_1 E_1 + β_2 E_2 + β_3 E_3, where β_1, β_2 and β_3 are the weights of the three adjacent convex-combination standard eye positions; the coordinates in the input image of any point q of the virtual projection screen under eye position E then satisfy (u, v)^T = β_1 (u_1, v_1)^T + β_2 (u_2, v_2)^T + β_3 (u_3, v_3)^T, where (u, v)^T are the coordinates of point q in the input image under eye position E and (u_1, v_1)^T, (u_2, v_2)^T and (u_3, v_3)^T are its coordinates under eye positions E_1, E_2 and E_3; the pixel coordinates in the input image corresponding to each pixel under the current eye position are then calculated, forming a mapping table between each pixel of the virtual projection screen under the current eye position and the original input image, and the image is processed according to this mapping table, so as to realize dynamic-eye-position distortion correction of the AR HUD image.
It should be noted that the process of calculating the coordinates of the feature points on the virtual-image equivalent plane is as follows: a calibrated camera is installed at the driver's eye position and its coordinates are set as E_0(X_e, Y_e, Z_e); the dot-matrix image projected by the HUD is acquired with the camera and the calibration-board image is preprocessed; the region of interest is determined, the dot-matrix region is cropped from the captured image, gray-scale processing and smoothing are applied, adaptive binarization is performed, contours are detected and noise points removed, and the center of each dot contour, i.e. the pixel coordinate of the feature point, is calculated. The spatial coordinates of the feature points of the acquired AR HUD projected virtual image on the virtual-image equivalent plane are calculated by visual measurement, the maximum inscribed rectangle of the feature-point distribution area in the virtual-image equivalent plane is taken as the virtual projection screen, the resolution of the virtual projection screen is set to 800 × 500, and the area of the virtual projection screen is represented discretely. The execution procedure of the linear interpolation algorithm for AR HUD image predistortion under a static eye position includes: extracting the features of the dot-matrix image and calculating the three-dimensional coordinates of all feature points in the vehicle coordinate system; determining the three-dimensional coordinates of the four vertices of the virtual projection screen from the maximum inscribed rectangle of the region formed by the extracted feature points and discretizing the region at the preset resolution; establishing the correspondence between the extracted feature points of the virtual projection screen and the feature points of the original input image and storing it in dictionary form; determining, by linear interpolation over convex combinations of adjacent feature points, the pixel coordinates in the original input image of every pixel of the virtual projection screen other than the feature points; traversing the pixels of the virtual projection screen to build the mapping between each pixel of the virtual projection screen and the original input image and storing it as a mapping table; and transforming the output image pixel by pixel into the corresponding pixels of the virtual projection screen, realizing predistortion of the image and improving the virtual-image imaging effect observed by the driver.
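The feature-point extraction described above (gray-scale, smoothing, adaptive binarization, contour filtering, centroid of each dot) can be sketched with OpenCV as follows; the ROI layout and the area thresholds used to reject noise are assumptions for illustration.

import cv2
import numpy as np

def dot_centers(image, roi, min_area=20, max_area=2000):
    # Crop the region of interest, binarize adaptively, and take the centroid
    # of every plausible dot contour as a feature-point pixel coordinate.
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:          # reject noise specks and merged blobs
            m = cv2.moments(c)
            centers.append((x + m["m10"] / m["m00"], y + m["m01"] / m["m00"]))
    return np.array(centers)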
Referring to FIG. 2, optionally, the multiple linear interpolation algorithm for AR HUD image predistortion under dynamic eye-position conditions includes:
s20: respectively measuring the spatial coordinates of each of K set eye positions to obtain K eye position spatial coordinates, wherein the eye position is the middle position of two eyes, and the spatial coordinates are coordinates in a vehicle coordinate system;
s21: taking the normalized dot matrix map as an input image, and respectively obtaining virtual images of the normalized dot matrix map under K eye positions to obtain K virtual images;
s22: respectively acquiring the spatial coordinates of each feature point in the input image and the corresponding coordinates of the feature point in the virtual image equivalent plane under each eye position based on the spatial coordinates of the K eye positions to obtain K coordinate sets;
s23: acquiring three-dimensional coordinates of four vertexes of the virtual projection screen according to each coordinate set in the K coordinate sets, and acquiring the inverse mapping relation from a virtual image under the corresponding current eye position to an input image by adopting a linear interpolation algorithm;
s24: acquiring a multi-eye-position mapping table from the virtual images under the K eye positions to the input image based on the inverse mapping relation from the virtual image under each eye position to the original input image;
s25: acquiring the real-time eye position of the driver through a pupil detection algorithm, and generating, by an interpolation algorithm based on the multi-eye-position mapping table from the virtual images under the K eye positions to the input image, the distortion mapping table for AR HUD predistortion processing under the current eye position;
s26: and transforming the output image into pixels corresponding to the virtual projection screen pixel by pixel according to the generated distortion mapping table so as to realize the pre-distortion processing of the image.
In this embodiment, the multiple linear interpolation algorithm for AR HUD image predistortion under dynamic eye positions uses an interpolation algorithm to calculate the distortion mapping table for the corresponding eye position in real time, achieving distortion correction. Its accuracy, however, is limited by the number and positions of the standard eye positions, and when the eye position changes continuously the undistorted image may jump. Therefore, a network model that can predict the vertex coordinates of the AR HUD virtual projection screen is first trained with a nonlinear fitting function, then a network model that can predict the predistortion mapping table of the AR HUD projected virtual image is trained with a nonlinear fitting function, and finally the image is predistorted according to the mapping relation of the predistortion mapping table, realizing continuous AR HUD virtual-image distortion correction under dynamic eye positions. The eye-position coordinates are obtained by a pupil detection algorithm, the mapping tables for AR HUD virtual-image predistortion under different driver eye positions are calibrated, and the mapping table for the driver's current eye position is obtained by interpolation, thereby solving the problem of dynamic AR HUD image distortion correction under dynamic eye positions.
Optionally, calculating the coordinates in the vehicle coordinate system of all feature points of the dot matrix map after mapping onto the set depth-of-field screen, i.e. the virtual equivalent plane, includes:
the three-dimensional coordinates in the AR HUD system comprise the vehicle coordinate system, the forward-looking camera coordinate system and the behavior-detection camera coordinate system, and the conversion between them is expressed by rotation and translation of the coordinate systems; the vehicle coordinate system of the AR HUD system is preset as C_W, the forward-looking camera coordinate system as C_F and the behavior-detection camera coordinate system as C_E; the coordinates of a point p in C_W are (x_w, y_w, z_w), the rotation matrix in the conversion between the forward-looking camera coordinate system and the vehicle coordinate system is R_1 and the translation matrix is T_1, so the coordinates (x_F, y_F, z_F) of point p in C_F are obtained by applying this rotation and translation to (x_w, y_w, z_w) [formula image not reproduced];
any point in the vehicle coordinate system is represented by a three-dimensional coordinate vector; all such vectors can be mapped onto the same two-dimensional plane, so any imaging point of the AR HUD projected virtual image can be mapped onto the virtual projection screen, with the mapped point uniquely represented in the vehicle coordinate system;
accordingly, after the forward-looking camera acquires the coordinates of a target in the image coordinate system, the corresponding coordinates of the target in the camera coordinate system and the vehicle coordinate system are determined by coordinate transformation, and the driver's eye position is obtained by transforming the pupil-detection result into the vehicle coordinate system, so that the conversion relations among the camera coordinate systems of the whole system are determined.
In this embodiment, determining the coordinates of a target in the camera coordinate system and the vehicle coordinate system by coordinate transformation after the forward-looking camera acquires its coordinates in the image coordinate system includes: calculating the conversion from each camera coordinate system to the vehicle coordinate system using a multi-camera joint calibration algorithm together with the coordinate-system conversion relations; placing one calibration board in front of the vehicle and one in the cab, the two boards being parallel to each other; obtaining the relative position between the two boards with a laser rangefinder; and calculating, by the Zhang Zhengyou calibration method, the conversion from each of the two cameras to the coordinate system of the board it images, so as to establish the relation between the two cameras. The calibration procedure includes: calculating the distortion coefficients and the intrinsic and extrinsic parameters of each camera with the Zhang Zhengyou calibration algorithm, placing the calibration boards in front of the vehicle and in the cab respectively, and collecting images of the boards with the corresponding cameras; undistorting the collected pictures with the calibrated distortion coefficients, obtaining the conversion from each camera to the coordinate system of its calibration board, deriving the conversion between the two camera coordinate systems by coordinate transformation, and establishing the conversion of each camera coordinate system into the vehicle coordinate system, which is taken as the camera's extrinsic parameters; obtaining, from the calibration algorithm and the coordinate-system conversion algorithm, the intrinsic, extrinsic and distortion parameters of the forward-looking camera and the behavior-detection camera, together with the rotation matrices R_1 and R_2 and the translation matrices T_1 and T_2; all coordinates lie in the same vehicle coordinate system: for a point p_0 in the real world, the coordinates of p_0 in the vehicle coordinate system are (x_w, y_w, z_w)^T and its coordinates in the pupil-detection camera coordinate system are (x_E, y_E, z_E)^T.
It should be noted that the purpose of the multi-camera joint calibration is to obtain the intrinsic parameters and distortion coefficients of each camera, i.e. the forward-looking camera and the pupil-detection camera, and to calculate the conversion from each camera coordinate system to the vehicle coordinate system by the coordinate-transformation principle, which is taken as the camera's extrinsic parameters. Any imaging point of the AR HUD projected virtual image can be mapped onto the virtual projection screen and is uniquely represented in the vehicle coordinate system; after the forward-looking camera acquires the coordinates of a target in the image coordinate system, the corresponding coordinates of the target in the camera coordinate system and the vehicle coordinate system can be determined by coordinate-system transformation, and the driver's eye position can likewise be determined by transforming the pupil-detection result, so the conversion relations among the camera coordinate systems of the whole system are determined. This effectively mitigates AR HUD virtual-image distortion under dynamic eye positions. The neural-network learning algorithm for AR HUD virtual-image predistortion under dynamic eye-position conditions establishes, through machine learning, a predistortion model with continuous mapping in a multi-dimensional space: when the eye position changes continuously, the predistorted image also changes continuously, which prevents the driver from observing jumps in the undistorted image projected by the AR HUD and effectively improves the virtual-image effect observed at different eye positions.
Referring to fig. 3, the present invention also provides a HUD backlight display method, including the steps of:
s30: capturing, with the forward-looking camera, environmental information in the vehicle's direction of travel, transmitting the captured picture to the information processing module, and capturing the driver's driving state with the driver behavior-detection camera;
s31: analyzing the driver's driving state with a pupil detection algorithm; the information processing module obtains the image to be projected by the HUD according to the vehicle driving scene and the driver's driving-state information, and renders and undistorts the image;
s32: and transmitting the image to the HUD for projection, so that the driver observes that the image projected by the HUD is fused with a real scene to realize scene enhancement.
In this embodiment, the vehicle body information and the target-detection data are processed, the data that need to be displayed in the driver's sight are classified, and the target registration information is obtained by three-dimensional registration; the visual design of the output image can then enhance the driver's perception of the target object. The AR HUD system classifies pedestrians and vehicles and segments them at the image-pixel level; after the vehicle environment information is obtained, the computer-generated virtual image is combined with the real scene, enhancing the driver's perception of the environment. The image information is displayed in the driver's forward field of view, which reduces the time spent looking down at instrument information while the vehicle is moving, keeps the driver's attention on the direction of travel and improves driving safety.
In all examples shown and described herein, any particular value should be construed as exemplary only and not as a limitation; other examples of the exemplary embodiments may therefore have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above examples merely illustrate several embodiments of the present invention, and although their description is specific and detailed, it is not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (10)

1. A HUD backlight display system, comprising:
the image generation unit is used for reflecting the light rays to the position of a driver sight frame through the semi-reflective and semi-transparent reflecting screen so that the driver sees a HUD projection virtual image in front of the reflecting screen, wherein the HUD projection virtual image comprises a virtual image shape and a position;
the image preprocessing unit is used for carrying out image preprocessing on the HUD projection virtual image to obtain an HUD image, wherein the image preprocessing comprises dynamic region-of-interest selection, color feature extraction and gray level processing, smoothing processing and image binarization;
the virtual image measuring unit is used for horizontally placing binocular cameras within the eye movement range, the two cameras being symmetrical about the center of the eye movement range and respectively collecting pictures of the HUD image; the two images are corrected and matched using the camera calibration parameters, a disparity map is calculated from the two images, and the virtual image distance is calculated from the disparity value and distance of the virtual image part in the disparity map, wherein the eye movement range is the space within which the driver can move freely without affecting the virtual image visualization effect, and the eye movement range comprises a horizontal movement distance and a vertical movement distance;
and the model building unit is used for building the vehicle-mounted AR HUD model according to the HUD projection virtual image and the virtual image distance, so that the driver observes the image projected by the HUD fused with the real scene, realizing scene-enhanced display.
2. The HUD backlight display system of claim 1, wherein constructing an on-board AR HUD model from the HUD projected virtual images and virtual image distances to enable the driver to observe the image projected by the HUD to blend with the real scene comprises:
a nonlinear fitting function is first used to train a network model that can predict the vertex coordinates of the AR HUD virtual projection screen, and a nonlinear fitting function is then used to train a network model that predicts the predistortion mapping table of the AR HUD projection virtual image; the image is predistorted according to the mapping relation of the predistortion mapping table, so as to realize continuous AR HUD virtual image distortion correction under dynamic eye positions, the algorithm model being as follows:
presetting the coordinates of the driver's eye position as E_j (j = 1,2,…,m); the coordinate of a point on the AR HUD virtual equivalent plane is P_i(x_i, y_i, z_i) (i = 1,2,…,n), and the pixel coordinate of that point in the original input image is U_{i,j}(u_{i,j}, v_{i,j}) (i = 1,2,…,n; j = 1,2,…,m); the predistortion transformation of the virtual image is then the mapping from the virtual image plane to the original input image plane, U_{i,j} = f(P_i, E_j);
the network structure comprises an input layer, a plurality of hidden layers and an output layer; the network input is X_{i,j} = (x_i, y_i, z_i, E_j), the ideal output of the network model is U_{i,j}(u_{i,j}, v_{i,j}), and the actual output of the network model is Û_{i,j}(û_{i,j}, v̂_{i,j}); since the driver's eye position changes continuously, the mean square error is used to measure the network loss, and the network error function is E = (1/(n·m)) Σ_{i,j} ‖U_{i,j} − Û_{i,j}‖²;
establishing the mapping relation between the original input image and the HUD output virtual image under a plurality of eye positions and using it as the neural network training sample set: n feature points of the input dot matrix are preset as the original image together with the corresponding distortion point set in the virtual projection screen under the current eye position, and m eye positions are selected, each eye position establishing n correspondences, so that the neural network input and output sample set is expressed as S = {(X_{i,j}, U_{i,j}) | i = 1,2,…,n; j = 1,2,…,m};
performing offline iterative learning of the network model through neural network error back-propagation until the error E is smaller than or equal to a value β, obtaining the network weight coefficients W, and dynamically adjusting the learning rate by exponential decay, the adjustment expression being L_r = L_r · g^e, where g represents the base of the learning-rate adjustment multiple and e represents the number of training steps;
and applying the trained neural network to continuously map any point on the virtual image plane to a point on the original input image plane under dynamic eye positions, interpolating continuously in the high-dimensional space according to the nonlinear fitting characteristic of the neural network, so as to realize the predistortion processing of the image on the virtual image plane.
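A training sketch matching this algorithm model (Python/PyTorch; the layer sizes, the decay base g, the stopping value beta and the tensors X of rows (x_i, y_i, z_i, E_j) with targets Y of rows (u, v) are illustrative assumptions, not values fixed by the claim):

```python
import torch
import torch.nn as nn

# Fully connected network: virtual-plane point plus eye position -> source-image pixel.
model = nn.Sequential(
    nn.Linear(6, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),            # predicted (u, v) in the original input image
)
loss_fn = nn.MSELoss()           # mean square error measures the network loss
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.98)   # L_r = L_r * g^e

def train(X, Y, beta=1e-4, max_steps=5000):
    for _ in range(max_steps):
        opt.zero_grad()
        err = loss_fn(model(X), Y)
        err.backward()            # error back-propagation
        opt.step()
        sched.step()              # exponential learning-rate decay per training step
        if err.item() <= beta:    # stop once the error E is at or below beta
            break
    return model
```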
3. The HUD backlight display system according to claim 2, wherein continuous interpolation in a high-dimensional space is performed according to a nonlinear fitting characteristic of a neural network to realize predistortion processing of an image on a virtual image plane, comprising:
if the virtual image plane point P_i(x_i, y_0, z_i) (i = 1,2,…,n) and the eye position coordinates E(E_x, E_y, E_z) are input into the network model, the output of the network model is the corresponding pixel coordinate Û_i(û_i, v̂_i) in the original input image; the three-dimensional eye position coordinate data are thus input into the AR HUD network model to determine the image data in the HUD input image.
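A sketch of this inference step (Python with PyTorch and OpenCV; the virtual-plane sampling grids x_grid/z_grid, the plane depth y0 and the eye vector are placeholders): each output pixel's virtual-plane point together with the current eye position is fed to the trained model, and the predicted source coordinates drive a remap of the input image.

```python
import numpy as np
import cv2
import torch

def predistort(model, src, eye, x_grid, z_grid, y0):
    # One virtual-plane sample per output pixel (depth fixed at y0)
    xs, zs = np.meshgrid(x_grid, z_grid)
    pts = np.stack([xs.ravel(), np.full(xs.size, y0), zs.ravel()], axis=1)
    inp = np.hstack([pts, np.tile(eye, (pts.shape[0], 1))]).astype(np.float32)
    with torch.no_grad():
        uv = model(torch.from_numpy(inp)).numpy()     # predicted (u, v) in the input image
    map_x = uv[:, 0].reshape(xs.shape).astype(np.float32)
    map_y = uv[:, 1].reshape(xs.shape).astype(np.float32)
    # Sample the source image at the predicted coordinates -> predistorted frame
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)
```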
4. The HUD backlight display system of claim 3, wherein the continuous interpolation in the high-dimensional space is performed according to a non-linear fitting characteristic of a neural network, further comprising:
calculating the eye position of the driver by adopting a pupil detection algorithm, and generating different distortion mapping tables according to different eye positions, wherein the pupil detection algorithm is as follows:
acquiring distortion mapping tables under K eye positions, regarding the K eye positions as standard eye positions, and respectively measuring the spatial coordinates of each of the K set eye positions to obtain K eye position spatial coordinates;
taking the normalized dot matrix map as the input image, respectively obtaining its virtual image under each of the K eye positions to obtain K virtual images; based on each of the K eye position spatial coordinates, obtaining the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual image equivalent plane under that eye position, obtaining K coordinate sets; obtaining the three-dimensional coordinates of the four vertices of the virtual projection screen from each of the K coordinate sets, and obtaining, by a linear interpolation algorithm, the inverse mapping relation from the virtual image under the current eye position to the input image;
obtaining a multi-eye-position mapping table from the virtual images under the K eye positions to the input image based on the inverse mapping relation from the virtual image under each eye position to the input image; presetting the driver's eye position coordinate obtained by the pupil detection algorithm as E(E_x, E_y, E_z) and the three adjacent convex-combination standard eye positions as E_1, E_2 and E_3, the eye position E can be expressed as E = β_1·E_1 + β_2·E_2 + β_3·E_3, where β_1, β_2 and β_3 represent the weights of the three adjacent convex-combination standard eye positions; the correspondence of any point q of the virtual projection screen under eye position E to the input image is (u, v)^T = β_1·(u_1, v_1)^T + β_2·(u_2, v_2)^T + β_3·(u_3, v_3)^T, where (u, v)^T represents the coordinates of point q in the input image under eye position E, and (u_1, v_1)^T, (u_2, v_2)^T and (u_3, v_3)^T represent the coordinates of point q under eye positions E_1, E_2 and E_3;
and calculating, for each pixel point under the current eye position, the corresponding pixel coordinates in the input image, forming a mapping table of the mapping relation between each pixel point of the virtual projection screen under the current eye position and the original input image, and processing the image according to the mapping table, so as to realize distortion correction of the AR HUD image under dynamic eye positions.
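A sketch of this convex-combination step (Python/NumPy; solving for the weights by constrained least squares is an assumption, since the claim fixes only the combination itself):

```python
import numpy as np

def convex_weights(E, E1, E2, E3):
    """Solve E = b1*E1 + b2*E2 + b3*E3 with b1 + b2 + b3 = 1 (least squares)."""
    A = np.column_stack([E1, E2, E3])            # 3x3: standard eye positions as columns
    A = np.vstack([A, np.ones(3)])               # affine constraint row
    b = np.append(np.asarray(E, float), 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w                                     # (beta1, beta2, beta3)

def blend_maps(w, map1, map2, map3):
    """Per-pixel mapping table at E from the three standard-eye-position tables."""
    return w[0] * map1 + w[1] * map2 + w[2] * map3
```

Applying the same weights per pixel reproduces the correspondence (u, v)^T = β_1·(u_1, v_1)^T + β_2·(u_2, v_2)^T + β_3·(u_3, v_3)^T over the whole mapping table.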
5. The HUD backlight display system of claim 3, wherein the multiple linear interpolation algorithm for AR HUD image pre-distortion under dynamic eye position conditions comprises:
respectively measuring the spatial coordinates of each of K set eye positions to obtain K eye position spatial coordinates, wherein the eye position is the middle position of two eyes, and the spatial coordinates are coordinates in a vehicle coordinate system;
taking the normalized dot matrix map as an input image, and respectively obtaining virtual images of the normalized dot matrix map under K eye positions to obtain K virtual images;
respectively acquiring the spatial coordinates of each feature point in the input image and the corresponding coordinates of the feature point in the virtual image equivalent plane under each eye position based on the spatial coordinates of the K eye positions to obtain K coordinate sets;
acquiring three-dimensional coordinates of four vertexes of the virtual projection screen according to each coordinate set in the K coordinate sets, and acquiring the inverse mapping relation from a virtual image under the corresponding current eye position to an input image by adopting a linear interpolation algorithm;
acquiring a multi-eye-position mapping table from the virtual images under the K eye positions to the input image based on the inverse mapping relation from the virtual image under each eye position to the original input image;
acquiring the driver's real-time eye position through a pupil detection algorithm, and generating, by an interpolation algorithm based on the multi-eye-position mapping table from the virtual images under the K eye positions to the input image, a distortion mapping table for AR HUD predistortion processing under the current eye position;
and transforming the output image pixel by pixel into the pixel corresponding to the virtual projection screen according to the generated distortion mapping table to realize the pre-distortion processing of the image.
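A sketch of densifying one standard eye position's sparse dot-matrix correspondences into a per-pixel inverse mapping table (Python with SciPy; the normalized screen grid and the 'linear' method stand in for the claim's linear interpolation algorithm, and the argument names are illustrative):

```python
import numpy as np
from scipy.interpolate import griddata

def build_mapping_table(screen_pts, image_pts, screen_w, screen_h):
    """screen_pts: (n, 2) normalized virtual-screen positions of the dot-matrix features;
    image_pts: (n, 2) matching pixel coordinates in the input image."""
    gx, gy = np.meshgrid(np.linspace(0, 1, screen_w), np.linspace(0, 1, screen_h))
    query = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Linear interpolation of the sparse correspondences over the full screen grid
    u = griddata(screen_pts, image_pts[:, 0], query, method='linear')
    v = griddata(screen_pts, image_pts[:, 1], query, method='linear')
    return np.stack([u, v], axis=1).reshape(screen_h, screen_w, 2)   # NaN outside the feature hull
```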
6. The HUD backlight display system of claim 1, wherein the specific implementation of the virtual image measurement unit comprises:
generating a dot matrix map that completely covers the eye movement region of the AR HUD projection image according to the resolution set by the AR HUD hardware equipment and the driver's eye movement region, acquiring the dot matrix projected by the AR HUD with an auxiliary camera, and solving the coordinates of each feature point in the vehicle coordinate system;
presetting the depth of field of the AR HUD virtual projection screen, at the auxiliary camera position in the vehicle coordinate system, as Y_0; mapping all feature points onto the plane whose depth of field is Y_0, and denoting the coordinate of each feature point in the vehicle coordinate system as P_i (i = 1,2,…,n), where n represents the number of feature points;
presetting the projection matrix of the auxiliary camera as M; for a feature point P_i (i = 1,2,…,n) imaged at point q with coordinates (u, v) in the auxiliary camera image and coordinates (x, y, z) in the vehicle coordinate system, the projection relation is s·(u, v, 1)^T = M·(x, y, z, 1)^T, where s is a scale factor; with the depth of field fixed at Y_0, eliminating s leaves a linear system A·X = B in the remaining unknown coordinates X, i.e. X = A⁻¹·B, from which the coordinates of all feature points of the dot matrix map in the vehicle coordinate system, after they are mapped onto the screen at the set depth of field, namely the virtual equivalent plane, are calculated;
and selecting the maximum inscribed rectangle area as a virtual projection screen according to the distribution of the characteristic points on the virtual equivalent plane, and discretizing the maximum inscribed rectangle area into a set resolution.
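A sketch of the X = A⁻¹B solve (Python/NumPy), under the assumption that Y_0 fixes one vehicle coordinate so only (x, z) remain unknown for each imaged feature point:

```python
import numpy as np

def feature_point_vehicle_coords(M, u, v, Y0):
    """Vehicle coordinates of a feature point seen at (u, v), with its depth fixed at y = Y0.
    M is the 3x4 projection matrix of the auxiliary camera."""
    m1, m2, m3 = M
    # From s*(u, v, 1)^T = M*(x, Y0, z, 1)^T, eliminate s: (m1 - u*m3).X = 0 and (m2 - v*m3).X = 0
    A = np.array([
        [m1[0] - u * m3[0], m1[2] - u * m3[2]],
        [m2[0] - v * m3[0], m2[2] - v * m3[2]],
    ])
    B = -np.array([
        (m1[1] - u * m3[1]) * Y0 + (m1[3] - u * m3[3]),
        (m2[1] - v * m3[1]) * Y0 + (m2[3] - v * m3[3]),
    ])
    x, z = np.linalg.solve(A, B)      # X = A^-1 * B
    return np.array([x, Y0, z])
```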
7. The HUD backlight display system of claim 6, wherein the calculating of the coordinates of the vehicle coordinate system after all the feature points in the dot-matrix map are mapped to the set depth-of-field screen, i.e. the virtual equivalent plane, comprises:
the three-dimensional coordinate systems in the AR HUD system include the vehicle coordinate system, the forward-looking camera coordinate system and the behavior detection camera coordinate system, and the conversion relations between them are represented by rotations and translations of the coordinate systems; presetting the vehicle coordinate system in the AR HUD system as C_W, the forward-looking camera coordinate system as C_F and the behavior detection camera coordinate system as C_E, the coordinate of a point p in the C_W coordinate system as (x_w, y_w, z_w), and the rotation matrix and translation matrix in the conversion from the forward-looking camera coordinate system to the vehicle coordinate system as R_1 and T_1, the coordinate (x_F, y_F, z_F) of point p in the C_F coordinate system can be expressed through R_1 and T_1, in homogeneous form (x_w, y_w, z_w, 1)^T = [[R_1, T_1], [0, 1]] · (x_F, y_F, z_F, 1)^T;
Any point under a vehicle coordinate system is represented by a three-dimensional coordinate vector, all three-dimensional coordinate vectors can be mapped onto the same two-dimensional plane, any imaging point in the AR HUD projection virtual image can be mapped onto the virtual projection screen, and the mapping point is uniquely represented under the vehicle coordinate system;
according to the method, after the forward-looking camera acquires the coordinates of a target in the image coordinate system, the corresponding coordinates of the target in the camera coordinate system and the vehicle coordinate system are determined through coordinate transformation, and the coordinates of the driver in the vehicle coordinate system are determined by transforming the pupil detection result, so as to determine the conversion relations among the camera coordinate systems in the whole system.
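A sketch of this frame change (Python/NumPy), taking R_1 and T_1 in the direction stated above, from the forward-looking camera frame to the vehicle frame; inverting the homogeneous matrix gives the opposite direction:

```python
import numpy as np

def front_camera_to_vehicle(p_F, R1, T1):
    """Carry a point from the forward-looking camera frame C_F into the vehicle frame C_W."""
    return R1 @ np.asarray(p_F, float) + np.asarray(T1, float).ravel()

def homogeneous(R, T):
    """4x4 block matrix [[R, T], [0, 1]] used for chaining coordinate systems."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(T, float).ravel()
    return H
```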
8. The HUD backlight display system of claim 7, wherein determining, through coordinate transformation, the coordinates of the target in the camera coordinate system and the vehicle coordinate system after the forward-looking camera acquires the coordinates of the target in the image coordinate system comprises:
calculating the conversion relation between each camera coordinate system and the vehicle coordinate system by a multi-camera joint calibration algorithm and the conversion relations between the coordinate systems; placing a calibration plate in front of the vehicle and another in the vehicle cab, the calibration plates being parallel to each other; acquiring the relative position information between the two calibration plates with a laser range finder; and calculating, by the Zhang Zhengyou calibration method, the conversion relation between each camera and the coordinate system of the calibration plate it photographs, so as to establish the relation between the two cameras, wherein the calibration process comprises the following steps:
calculating the distortion coefficients and the intrinsic and extrinsic parameters of each camera with the Zhang Zhengyou calibration algorithm, placing a calibration plate in front of the vehicle and another in the vehicle cab, and collecting images of the calibration plates with the respective cameras;
de-distorting the acquired pictures with the calibrated distortion coefficients, respectively obtaining the conversion relation of each camera to its calibration plate coordinate system, obtaining the conversion relation between the two camera coordinate systems by coordinate transformation, establishing the conversion relation of each camera coordinate system to the vehicle coordinate system, and taking this relation as the extrinsic parameters of the cameras;
obtaining, according to the calibration algorithm and the conversion algorithm between coordinate systems, the intrinsic and extrinsic parameters and distortion parameters of the forward-looking camera and the behavior detection camera, as well as the rotation matrices R_1, R_2 and translation matrices T_1, T_2;
all coordinates are unified in the same vehicle coordinate system: if there is a point p_0 in the real world, the coordinates of p_0 in the vehicle coordinate system are (x_w, y_w, z_w)^T and its coordinates in the pupil detection camera coordinate system are (x_E, y_E, z_E)^T.
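A calibration sketch for this step (Python/OpenCV; objpoints/imgpoints are the collected board correspondences, and R_boards/t_boards is the front-board-to-cab-board transform derived from the laser range finder measurement, all placeholders for measured data):

```python
import numpy as np
import cv2

def calibrate(objpoints, imgpoints, image_size):
    """Zhang's method: intrinsics K, distortion coefficients, and the pose of the board in this camera."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, image_size, None, None)
    R, _ = cv2.Rodrigues(rvecs[0])             # board -> camera rotation of the first view
    return K, dist, R, tvecs[0].reshape(3)

def chain_extrinsics(R_f, t_f, R_e, t_e, R_boards, t_boards):
    """Transform mapping points from the behavior detection camera frame into the front-view camera frame.

    R_f, t_f: front board -> front camera;  R_e, t_e: cab board -> behavior camera;
    R_boards, t_boards: front board -> cab board (from the laser measurement).
    """
    R_fe = R_e @ R_boards                      # front board -> behavior camera
    t_fe = R_e @ t_boards + t_e
    R = R_f @ R_fe.T                           # behavior camera -> front camera
    t = t_f - R @ t_fe
    return R, t
```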
9. The HUD backlight display system of claim 1, wherein reflecting light rays to the driver viewing frame via a semi-reflective and semi-transparent reflective screen causes the driver to see a virtual HUD projection image in front of the reflective screen, comprises:
presetting that a ray in space passes through two parallel planes simultaneously; the coordinates of the intersection point of the ray with the first plane are recorded as (u_0, v_0), and the coordinates of the intersection point with the second plane as (s_0, t_0); by the optical properties, the coordinates (u_0, v_0, s_0, t_0) determine the direction and position of the light ray.
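A sketch of this two-plane parameterization (Python/NumPy; the plane separation d along the optical axis is an assumed parameter):

```python
import numpy as np

def ray_from_two_planes(u0, v0, s0, t0, d=1.0):
    """Recover a ray's position and direction from its intersections with two parallel planes."""
    origin = np.array([u0, v0, 0.0])                 # intersection with the first plane
    direction = np.array([s0 - u0, t0 - v0, d])      # towards the second plane, d apart
    return origin, direction / np.linalg.norm(direction)
```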
10. A HUD backlight display method of a HUD backlight display system according to any of claims 1-9, comprising the steps of:
a forward-looking camera captures environment information in the vehicle's running direction, the captured picture is transmitted to the information processing module, and the driver's driving state is captured by a driver behavior detection camera;
the driver's driving state is analyzed with a pupil detection algorithm, and the information processing module obtains the image to be projected by the HUD according to the vehicle driving scene and the driver's driving state information and renders and de-distorts the image;
and the image is transmitted to the HUD for projection, so that the driver observes the image projected by the HUD fused with the real scene, realizing scene enhancement.
CN202211483882.2A 2022-11-23 2022-11-23 HUD backlight display system and method Pending CN115984122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211483882.2A CN115984122A (en) 2022-11-23 2022-11-23 HUD backlight display system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211483882.2A CN115984122A (en) 2022-11-23 2022-11-23 HUD backlight display system and method

Publications (1)

Publication Number Publication Date
CN115984122A true CN115984122A (en) 2023-04-18

Family

ID=85963665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211483882.2A Pending CN115984122A (en) 2022-11-23 2022-11-23 HUD backlight display system and method

Country Status (1)

Country Link
CN (1) CN115984122A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117073543A (en) * 2023-10-17 2023-11-17 深圳华海达科技有限公司 Appearance measurement method, device and equipment of double-rotation flatness measuring machine
CN117073543B (en) * 2023-10-17 2023-12-15 深圳华海达科技有限公司 Appearance measurement method, device and equipment of double-rotation flatness measuring machine

Similar Documents

Publication Publication Date Title
CN109688392B (en) AR-HUD optical projection system, mapping relation calibration method and distortion correction method
US6570566B1 (en) Image processing apparatus, image processing method, and program providing medium
US6717728B2 (en) System and method for visualization of stereo and multi aspect images
KR101629479B1 (en) High density multi-view display system and method based on the active sub-pixel rendering
CN111476104B (en) AR-HUD image distortion correction method, device and system under dynamic eye position
JP4764305B2 (en) Stereoscopic image generating apparatus, method and program
JP6023801B2 (en) Simulation device
CN105432078B (en) Binocular gaze imaging method and equipment
US20160307374A1 (en) Method and system for providing information associated with a view of a real environment superimposed with a virtual object
CN108171673A (en) Image processing method, device, vehicle-mounted head-up-display system and vehicle
CN111739101B (en) Device and method for eliminating dead zone of vehicle A column
WO2023071834A1 (en) Alignment method and alignment apparatus for display device, and vehicle-mounted display system
US11961250B2 (en) Light-field image generation system, image display system, shape information acquisition server, image generation server, display device, light-field image generation method, and image display method
JP2013024662A (en) Three-dimensional range measurement system, three-dimensional range measurement program and recording medium
CN113240592A (en) Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position
CN113421346A (en) Design method of AR-HUD head-up display interface for enhancing driving feeling
CN106570852A (en) Real-time 3D image situation perception method
WO2018222122A1 (en) Methods for perspective correction, computer program products and systems
KR20200056721A (en) Method and apparatus for measuring optical properties of augmented reality device
CN115984122A (en) HUD backlight display system and method
CN109764888A (en) Display system and display methods
US20160127718A1 (en) Method and System for Stereoscopic Simulation of a Performance of a Head-Up Display (HUD)
KR101947372B1 (en) Method of providing position corrected images to a head mount display and method of displaying position corrected images to a head mount display, and a head mount display for displaying the position corrected images
US11651506B2 (en) Systems and methods for low compute high-resolution depth map generation using low-resolution cameras
CN108760246B (en) Method for detecting eye movement range in head-up display system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination