CN113240592A - Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position - Google Patents
Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position
- Publication number
- CN113240592A (application CN202110402114.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- virtual
- hud
- virtual image
- plane
- Prior art date: 2021-04-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tessellation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Instrument Panels (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The invention discloses a distortion correction method that calculates a virtual image plane based on the AR-HUD dynamic eye position. The distortion correction method includes the steps of: S1, positioning the vehicle body and calibrating the human eye simulation equipment; S2, capturing the projected virtual image, preprocessing it to obtain the pixel coordinates of its characteristic points, and establishing a mapping relation between the characteristic points of the projected virtual image and the corresponding points of the output image; S3, setting a virtual-image curved-surface perspective equivalent plane, acquiring the coordinates of the observation equivalent point of each characteristic point on this plane, and setting a virtual projection screen according to the set of observation equivalent points; S4, calculating the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image; S5, calibrating this mapping relation for different eye positions and different display picture heights to obtain a calibration mapping table; and S6, performing predistortion processing on the output image according to the mapping table.
Description
Technical Field
The invention relates to the field of automobile instrument display, in particular to a distortion correction method for calculating a virtual image plane based on an AR-HUD dynamic eye position.
Background
After a driver's sight shifts from the road ahead to the instrument panel and the instrument information is read, it takes about 4-7 seconds to return attention to the forward view. During this period the driver is effectively driving blind, which poses a serious safety hazard. An augmented reality head-up display (AR-HUD) uses augmented reality technology to overlay information such as vehicle speed, navigation, driver-assistance system status and the surrounding environment onto the driver's field of view. It presents information more intuitively and vividly, enhances the driver's awareness of the environment, reduces the time spent switching the gaze between the road and the instrument panel, lets the driver keep more attention on the road, and improves driving safety. However, because of the HUD (head-up display) optical system and the varying curvature of the windshield, the image projected by the HUD onto the windshield is distorted, so the HUD projected image observed by the driver is deformed.
At present, most approaches to the distortion of the HUD projected image correct distortion only at the central eye position, i.e. they use a distortion correction method for a fixed eye position. However, drivers' eye positions differ, and so do the reflection points (and reflection curvatures) of the HUD projection light on the windshield, so the size and distortion of the virtual image seen by the driver differ between viewpoints. With a fixed-eye-position correction method, the HUD projected image observed by the driver therefore remains distorted whenever the driver's viewpoint deviates from the calibrated viewpoint, which makes it difficult to meet users' actual requirements.
In addition, there are multi-eye-position calibration methods, which calibrate the distortion parameters of the HUD projected image at multiple viewpoints and use techniques such as pupil detection to determine the driver's current eye position so as to match the corresponding distortion parameters. However, these methods mostly divide the driver's observation area into a few small regions and match parameters according to the driver's eye position, so the HUD output picture often jumps when the driver's eye position changes, which affects driving safety. Moreover, neither of the above approaches considers the influence of factors such as changes in the HUD display height on the distortion of the HUD projected image.
Disclosure of Invention
The invention aims to overcome the shortcoming that the prior art cannot adequately correct the distortion of the HUD projected image, and provides a distortion correction method for calculating a virtual image plane based on the AR-HUD dynamic eye position.
The method calculates the HUD virtual image plane and corrects distortion under dynamic eye positions. According to the driver's eye position, the spatial position of the HUD projected virtual image plane and the mapping relation between the actual screen and the virtual projection screen are calculated in real time, the distortion of the HUD projected virtual image seen by the driver from different positions is corrected, and the true imaging effect is restored.
In order to achieve the above purpose, the invention provides the following technical scheme:
an AR-HUD dynamic eye position-based distortion correction method for calculating a virtual image plane comprises the following steps:
s1, positioning the vehicle body, and calibrating the human eye simulation equipment at the current position;
s2, using the human eye simulation equipment, capturing an image of the virtual image projected from the output image of the HUD system, and recording it as the projected virtual image; preprocessing the projected virtual image to obtain the pixel coordinates of its characteristic points; establishing a mapping relation between the characteristic points of the HUD projected virtual image and the corresponding points of the output image;
s3, setting a plane in front of the projected virtual image, recorded as the virtual-image curved-surface perspective equivalent plane, and acquiring the coordinates of the observation equivalent point of each characteristic point of the HUD projected virtual image on this plane; setting a virtual projection screen according to the set of all observation equivalent points;
s4, establishing a mapping relation between an observation equivalent point on a virtual image curved surface perspective equivalent plane and a corresponding point of an output image, and calculating the mapping relation between each pixel point of a virtual projection screen and the corresponding pixel point of the output image by using an interpolation algorithm;
s5, adjusting the position of the human eye simulation equipment and the height of the display picture, repeating steps S1 to S4, and calibrating the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image for different eye positions and different display picture heights, so as to form a calibration mapping table;
s6, pre-distorting the output image of the HUD system according to the mapping table.
Preferably, the output image is a dot matrix image in which regular patterns are arranged in a matrix; the centers of the regular patterns are the feature points.
Preferably, the step S2 is to pre-process the projected virtual image corresponding to the captured dot matrix image by:
s21, determining an area of interest, and intercepting a dot matrix area in the captured projection virtual image;
s22, processing image gray scale and smoothing;
s23, self-adaptive binarization;
s24, contour detection;
s25, removing noise points;
s26, calculating the center pixel coordinates of the outline in the dot matrix;
and S27, establishing a mapping relation between the dot matrix of the HUD projection virtual image and the dot matrix of the output image.
Preferably, the spatial position of the virtual-image curved-surface perspective equivalent plane is between the reflection point of the HUD projection on the windshield and the imaging depth of the virtual image, and the plane is not parallel to the Y axis of the vehicle coordinate system, where the Y axis points straight ahead of the vehicle.
Preferably, the virtual image curved surface perspective equivalent plane is perpendicular to the Y-axis of the vehicle coordinate system.
Preferably, the calculation method of the mapping relationship between each pixel point of the virtual projection screen and the corresponding pixel point of the output image in step S4 is as follows:
triangulating the virtual projection screen according to the dot matrix of the dot matrix image; any non-feature point of the virtual projection screen is denoted Q″_i; according to the virtual-image curved-surface perspective equivalent plane, the three-dimensional coordinates of Q″_i are obtained and denoted (x″_i, y″_i, z″_i); the feature points adjacent to Q″_i are P″_{i1}, P″_{i2} and P″_{i3}, with three-dimensional coordinates (x″_{i1}, y″_{i1}, z″_{i1}), (x″_{i2}, y″_{i2}, z″_{i2}) and (x″_{i3}, y″_{i3}, z″_{i3}); by formula (1), Q″_i is expressed linearly in terms of P″_{i1}, P″_{i2} and P″_{i3}, where α_{i1}, α_{i2} and α_{i3} are the weights of the adjacent feature points P″_{i1}, P″_{i2} and P″_{i3} respectively:
Q″_i = α_{i1}·P″_{i1} + α_{i2}·P″_{i2} + α_{i3}·P″_{i3}    (1)
let the point of the output image corresponding to Q″_i of the virtual projection screen be Q_i(u_i, v_i), and the 3 feature points corresponding to P″_{i1}, P″_{i2} and P″_{i3} be P_{i1}(u_{i1}, v_{i1}), P_{i2}(u_{i2}, v_{i2}) and P_{i3}(u_{i3}, v_{i3}); then the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image is given by formula (2):
Q_i = α_{i1}·P_{i1} + α_{i2}·P_{i2} + α_{i3}·P_{i3}    (2)
preferably, the step S5 adjusts the position of the human eye simulation device according to the EyeBox range of the HUD system, and adjusts the height of the display screen according to the height adjustment range of the display screen; the step S5 specifically includes the following steps:
s51, dividing the display picture height adjusting range and the EyeBox range into regions, and respectively setting a plurality of display picture height gears and eye positions;
s52, setting a display picture height gear, repeating steps S1 to S4, and calibrating, for each eye position at the current display picture height gear, the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image, so as to form a multi-eye-position calibration mapping table;
and S53, adjusting and updating the height gears of the display images, and repeating the step S52 until the multi-eye calibration mapping table of each height gear of the display images of the HUD system is obtained, so as to form a multi-gear multi-eye calibration mapping table.
Preferably, the step S6 includes the steps of:
s61, acquiring real-time eye positions and display picture height gears, and acquiring a real-time distortion mapping table and real-time virtual image plane representation according to the calibration mapping table and the virtual image plane representation;
and S62, pre-distorting the output image of the HUD system according to the real-time distortion mapping table.
Compared with the prior art, the invention has the beneficial effects that:
1. the HUD projected virtual image dot matrix is accurately located by means of the virtual-image curved-surface perspective equivalent plane, the mapping relation between the actual screen and the virtual projection screen is established, and the HUD image distortion problem is solved by predistortion processing;
2. by calibrating the virtual image plane representation of the HUD system and the pixel mapping relation between the actual screen and the virtual projection screen at multiple eye positions within the EyeBox range to form a multi-eye-position mapping table, the virtual image plane representation and the pixel mapping relation corresponding to any position within the EyeBox range can be obtained. This solves the problem that different drivers' eye positions differ and therefore the virtual image distortion they see differs; at the same time, while the eye position changes, multiple eye positions can be collected in real time and distortion corrected for each of them, which avoids jumps of the imaged picture when the eye position changes and improves the imaging effect of the HUD system.
Description of the drawings:
fig. 1 is a flowchart of the distortion correction method for calculating a virtual image plane based on the AR-HUD dynamic eye position according to exemplary embodiment 1 of the present invention;
FIG. 2 is a checkerboard pattern calibration plate as described in exemplary embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the distortion of the virtual image projected by the HUD system shown in exemplary embodiment 1 of the present invention;
FIG. 4 is a captured dot matrix image shown in exemplary embodiment 1 of the present invention;
fig. 5 is a schematic diagram of a mapping relationship between a feature point of a virtual projection image and a corresponding point of an original input dot matrix image according to exemplary embodiment 1 of the present invention;
fig. 6 is a schematic diagram of triangulation of the virtual projection screen shown in exemplary embodiment 1 of the present invention;
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
Example 1
As shown in fig. 1, the present embodiment provides a distortion correction method for calculating a virtual image plane based on the AR-HUD dynamic eye position, which specifically includes the following steps:
s1, positioning the vehicle body, and calibrating the human eye simulation equipment at the current position;
the calibration auxiliary equipment is used for positioning the vehicle body, and the calibration plate is used for calibrating the human eye simulation equipment by adopting a Zhang Zhengyou calibration method. As shown in fig. 2, the calibration board is a checkerboard pattern calibration board, and the human eye simulation device is a camera, which may be a monocular camera or a binocular camera; if the camera is a binocular camera, the distance between the left camera and the right camera is 61mm according to the average value of domestic eye distance statistics. The parameters calibrated for the human eye simulation equipment comprise internal parameters, distortion coefficients and external parameters, wherein the internal parameters are used for expressing the projection relation of a camera from a three-dimensional space (a camera coordinate system) to a two-dimensional image; the external reference is used to represent a relative positional relationship between the camera coordinate system and the world coordinate system (vehicle coordinate system).
S2, using the human eye simulation equipment, capturing an image of the virtual image projected from the output image of the HUD system, and recording it as the projected virtual image; preprocessing the projected virtual image to obtain the pixel coordinates of its characteristic points; establishing a mapping relation between the characteristic points of the HUD projected virtual image and the corresponding points of the output image;
As shown in fig. 3, the output image is projected by the HUD system, reflected by the windshield into the driver's field of vision, and forms a HUD projected virtual image in front of the driver. Because of the HUD optical system and the varying curvature of the windshield, the projected virtual image is distorted and deformed, and its points do not lie on a single plane, which complicates subsequent distortion correction and UI design. For subsequent distortion correction, the pixel coordinates of the feature points of the captured HUD projected virtual image are obtained, and a mapping relation between these feature points and the corresponding points of the output image is established.
For example, in order to conveniently and quickly obtain the mapping relation between the feature points of the HUD projected virtual image and the corresponding points of the output image, the output image adopted in this embodiment is a dot matrix image in which regular patterns are arranged in a matrix; the centers of the regular patterns are the feature points, i.e. the dots of the dot matrix image. Step S2 preprocesses the projected virtual image corresponding to the captured dot matrix image as follows:
s21, determining an area of interest, and intercepting a dot matrix area in the captured projection virtual image;
s22, processing image gray scale and smoothing;
s23, self-adaptive binarization;
s24, contour detection;
s25, removing noise points;
s26, calculating the center pixel coordinates of the outline in the dot matrix;
and S27, establishing a mapping relation between the dot matrix of the HUD projection virtual image and the dot matrix of the output image.
As shown in fig. 4, the captured dot matrix image also contains surrounding scene information. To eliminate this interference, preprocessing first extracts the region of interest, i.e. the area of the image where the dot matrix lies; a binary image is then obtained by grayscale conversion, smoothing, binarization and similar operations; contour detection is performed on the binary image and noise points are removed to obtain the contour of each regular pattern in the dot matrix; the center pixel of each regular-pattern contour is recorded as a feature point and its coordinates are calculated; and, as shown in fig. 5, the mapping relation between the feature points of the HUD projected virtual image and the corresponding points of the output image is established.
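A minimal OpenCV sketch of steps S21-S26, returning the feature-point (dot-center) pixel coordinates; the region of interest, blur kernel, threshold block size and minimum blob area are assumed values, and OpenCV 4 is assumed for the findContours return signature.

```python
import cv2

def dot_centers(capture_bgr, roi):
    """S21-S26: crop the ROI, grayscale + smooth, adaptive binarization,
    contour detection, noise rejection, contour-centre computation."""
    x, y, w, h = roi                                                       # S21 region of interest
    gray = cv2.cvtColor(capture_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY) # S22 grayscale
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                               # S22 smoothing
    binary = cv2.adaptiveThreshold(gray, 255,                              # S23 adaptive binarization
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,              # S24 contour detection
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < 20:                                        # S25 drop noise blobs
            continue
        m = cv2.moments(c)                                                 # S26 centre of the contour
        centers.append((x + m["m10"] / m["m00"], y + m["m01"] / m["m00"]))
    return centers
```

Step S27 then pairs each detected center with the known dot of the output image, e.g. by ordering both sets row by row.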
S3, setting a plane in front of the projected virtual image, recorded as the virtual-image curved-surface perspective equivalent plane, and acquiring the coordinates of the observation equivalent point of each characteristic point of the HUD projected virtual image on this plane; setting a virtual projection screen according to the set of all observation equivalent points;
As shown in fig. 3, the virtual image projected by the HUD system lies on an irregular spatial curved surface. A plane is set in front of the projected virtual image and recorded as the virtual-image curved-surface perspective equivalent plane. In the vehicle coordinate system, the Y axis points straight ahead of the vehicle, the X axis to the right of the vehicle, and the Z axis straight up. The spatial position of the perspective equivalent plane lies between the reflection point of the HUD projection on the windshield and the imaging depth of the virtual image, and the plane must not be parallel to the Y axis of the vehicle coordinate system. In this embodiment, to simplify the calculation of the perspective equivalent plane and the virtual projection screen, the plane is set perpendicular to the Y axis of the vehicle coordinate system, i.e. it is expressed as y = y0, where y0 is a constant. When each point P_i of the HUD projected virtual-image curved surface is observed from a single viewpoint E, the line of sight intersects the perspective equivalent plane at a point P′_i; seen from viewpoint E, P′_i coincides with P_i, so P′_i is recorded as the observation equivalent point of P_i. The depth y0 of the perspective equivalent plane is set according to the imaging distance of the HUD virtual image, and the three-dimensional coordinates of the HUD-projected dot matrix image on the perspective equivalent plane are calculated using the monocular ranging principle.
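A minimal sketch of this geometry, assuming the vehicle frame described above (X right, Y forward, Z up) and hypothetical coordinates: the observation equivalent point is the intersection of the sight line from the viewpoint E through P_i with the plane y = y0.

```python
import numpy as np

def observation_equivalent_point(eye, p, y0):
    """Intersect the sight line eye -> p with the perspective-equivalent
    plane y = y0; the result coincides with p when viewed from `eye`."""
    eye, p = np.asarray(eye, float), np.asarray(p, float)
    d = p - eye                         # direction of the line of sight
    t = (y0 - eye[1]) / d[1]            # ray parameter at the plane y = y0
    return eye + t * d                  # observation equivalent point P'_i

# Hypothetical numbers: eye near the eyebox centre, virtual-image point 7.5 m ahead.
print(observation_equivalent_point(eye=[0.0, 0.0, 1.2], p=[0.3, 7.5, 1.0], y0=7.0))
```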
The figure represented by the set of all points P′_i is irregular. To guarantee the correctness of the corrected HUD projection picture, the maximum inscribed rectangle of the figure formed by all points P′_i is selected as the display area of the HUD system; this area is used as the virtual projection screen and recorded as the virtual-image equivalent area.
The virtual projection screen can effectively represent the spatial position and the size of the virtual projection image of the HUD system, and important parameter information is provided for virtual image predistortion treatment, virtual-real registration (treatment process in an AR-HUD system) and the like of a follow-up HUD system.
S4, establishing a mapping relation between an observation equivalent point on a virtual image curved surface perspective equivalent plane and a corresponding point of an output image, and calculating the mapping relation between each pixel point of a virtual projection screen and the corresponding pixel point of the output image by using an interpolation algorithm;
Specifically, taking the output image as a dot matrix image as an example, step S4 calculates the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image by linear interpolation from the mapping relation between the dot matrix of the HUD projected virtual image and the dot matrix of the original input dot matrix image, as follows:
As shown in fig. 6, the virtual projection screen is triangulated according to the n dots of the dot matrix image. Let any non-feature point of the virtual projection screen (i.e. an observation equivalent point that does not correspond to a dot of the dot matrix image) be Q″_i, with pixel coordinates (u″_i, v″_i). From the virtual-image curved-surface perspective equivalent plane, the three-dimensional coordinates (x″_i, y″_i, z″_i) of Q″_i are obtained. The feature points (dots) adjacent to Q″_i are P″_{i1}, P″_{i2} and P″_{i3}, with three-dimensional coordinates (x″_{i1}, y″_{i1}, z″_{i1}), (x″_{i2}, y″_{i2}, z″_{i2}) and (x″_{i3}, y″_{i3}, z″_{i3}). From the dot distribution, P″_{i1}, P″_{i2} and P″_{i3} are not collinear, so Q″_i can be expressed linearly in terms of P″_{i1}, P″_{i2} and P″_{i3}, as in formula (1), where α_{i1}, α_{i2} and α_{i3} are the weights of the adjacent feature points (dots) P″_{i1}, P″_{i2} and P″_{i3} respectively:
Q″_i = α_{i1}·P″_{i1} + α_{i2}·P″_{i2} + α_{i3}·P″_{i3}    (1)
Let the point of the output image (dot matrix image) corresponding to Q″_i of the virtual projection screen be Q_i(u_i, v_i), and the 3 feature points corresponding to P″_{i1}, P″_{i2} and P″_{i3} be P_{i1}(u_{i1}, v_{i1}), P_{i2}(u_{i2}, v_{i2}) and P_{i3}(u_{i3}, v_{i3}). Then the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image is given by formula (2):
Q_i = α_{i1}·P_{i1} + α_{i2}·P_{i2} + α_{i3}·P_{i3}    (2)
Here the pixel coordinates (u″_i, v″_i) of the non-feature point Q″_i are in the coordinate system of the virtual projection screen; the three-dimensional coordinates (x″_i, y″_i, z″_i) of Q″_i are in the vehicle coordinate system; and the coordinates Q_i(u_i, v_i) of the point in the output image are in the image pixel coordinate system.
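A small numerical sketch of formulas (1) and (2): since every point of the equivalent plane satisfies y = y0, the three weights sum to 1, so they can be solved as barycentric coordinates of Q″_i in the triangle of its neighbouring dots (here using the two in-plane coordinates) and then reused on the corresponding output-image pixels. All numbers are hypothetical.

```python
import numpy as np

def barycentric_weights(q, p1, p2, p3):
    """Solve q = a1*p1 + a2*p2 + a3*p3 with a1 + a2 + a3 = 1 (formula (1))
    for a 2D point q inside the triangle (p1, p2, p3)."""
    A = np.array([[p1[0], p2[0], p3[0]],
                  [p1[1], p2[1], p3[1]],
                  [1.0,   1.0,   1.0]])
    return np.linalg.solve(A, np.array([q[0], q[1], 1.0]))

def map_to_output(weights, P1, P2, P3):
    """Formula (2): apply the same weights to the corresponding output-image
    feature points to obtain the mapped pixel Q_i."""
    a1, a2, a3 = weights
    return (a1 * np.asarray(P1, float)
            + a2 * np.asarray(P2, float)
            + a3 * np.asarray(P3, float))

w = barycentric_weights(q=(0.4, 0.3), p1=(0.0, 0.0), p2=(1.0, 0.0), p3=(0.0, 1.0))
print(map_to_output(w, (100, 200), (150, 200), (100, 260)))   # mapped output pixel
```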
S5, adjusting the position of the human eye simulation equipment and the height of the display picture, repeating steps S1 to S4, and calibrating the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image for different eye positions and different display picture heights, so as to form a calibration mapping table;
specifically, the step S5 is to adjust the position of the human eye simulation device according to the EyeBox range (eye movement range) of the HUD system, and adjust the height of the display screen according to the adjustment range of the height of the display screen, and specifically includes the following steps:
s51, dividing the display picture height adjusting range and the EyeBox range into regions, and respectively setting a plurality of display picture height gears and eye positions;
s52, setting a display picture height gear, repeating steps S1 to S4, and calibrating, for each eye position at the current display picture height gear, the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image, so as to form a multi-eye-position calibration mapping table;
and S53, adjusting and updating the height gears of the display images, and repeating the step S52 until the multi-eye calibration mapping table of each height gear of the display images of the HUD system is obtained, so as to form a multi-gear multi-eye calibration mapping table.
Taking the height adjustment range of the display picture as an example: generally, the height adjustment range of the HUD display picture is divided into n equal-length intervals, giving n+1 display picture height gears from low to high, numbered 0 to n. The distortion mapping table and the virtual-image curved-surface perspective equivalent plane of the HUD system at every eye position of every gear are then obtained by the offline calibration process of the HUD system.
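One possible (not patent-specified) way to hold the offline calibration results in memory is a table keyed by display picture height gear, with one record per calibrated eye position; the field names below are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EyePositionCalibration:
    """Offline calibration record for one eye position at one height gear."""
    eye_xyz: np.ndarray     # calibrated eye position in the vehicle frame
    plane_y0: float         # depth of the virtual-image perspective-equivalent plane
    map_x: np.ndarray       # per-pixel mapping, virtual screen -> output image (x)
    map_y: np.ndarray       # per-pixel mapping, virtual screen -> output image (y)

# calibration[gear] holds the records of every calibrated eye position of that
# display picture height gear; gears are numbered 0..n as described above.
calibration: dict[int, list[EyePositionCalibration]] = {}
```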
S6, pre-distorting the output image of the HUD system according to the mapping table.
The virtual image projected by the HUD system is distorted, and the mapping table of the present application reflects the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image. Predistorting the output image according to this mapping relation makes the projected virtual image of the predistorted image appear undistorted to the observer, thereby achieving distortion correction of the projected virtual image of the HUD system.
Specifically, step S6 includes the following steps:
s61, acquiring real-time eye positions and display picture height gears, and acquiring a real-time distortion mapping table and real-time virtual image plane representation according to the calibration mapping table and the virtual image plane representation;
First, the current display picture height gear of the HUD system and the driver's real-time eye position obtained by pupil detection are collected. Then, from the calibration mapping table and virtual image plane representations obtained by the offline calibration process, the calibration data of the three calibrated eye positions that belong to the same display picture height gear and are closest to the current eye position are selected, and an interpolation algorithm is used to compute online, in real time, the distortion mapping table and virtual image plane representation of the HUD system at the current eye position of the current gear.
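A sketch of this online step, reusing the EyePositionCalibration records from the earlier sketch. The patent only states that an interpolation algorithm is applied over the three nearest calibrated eye positions; the inverse-distance weighting below is an assumed choice of that interpolation.

```python
import numpy as np

def realtime_maps(entries, eye_now):
    """Blend the calibration data of the three eye positions (same display
    picture height gear) nearest to the current eye position `eye_now`."""
    eye_now = np.asarray(eye_now, float)
    d = np.array([np.linalg.norm(e.eye_xyz - eye_now) for e in entries])
    nearest = np.argsort(d)[:3]                 # three closest calibrated eye positions
    w = 1.0 / (d[nearest] + 1e-6)               # inverse-distance weights (assumption)
    w /= w.sum()
    map_x = sum(wi * entries[i].map_x for wi, i in zip(w, nearest))
    map_y = sum(wi * entries[i].map_y for wi, i in zip(w, nearest))
    y0 = float(sum(wi * entries[i].plane_y0 for wi, i in zip(w, nearest)))
    return map_x, map_y, y0                     # real-time distortion map + plane depth
```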
And S62, pre-distorting the output image of the HUD system according to the real-time distortion mapping table.
According to the mapping relation between each pixel point of the output image and the corresponding pixel point of the virtual projection screen, texture mapping is performed over the preset triangulation to generate a predistorted version of the HUD system output image, thereby correcting the HUD projected virtual image.
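The patent generates the predistorted image by texture mapping over the preset triangulation; once the calibrated mapping has been resampled into dense per-pixel map_x/map_y arrays at the virtual-screen resolution, an equivalent sketch is a single cv2.remap call.

```python
import cv2
import numpy as np

def predistort(output_img, map_x, map_y):
    """For each pixel (u'', v'') of the virtual projection screen, sample the
    HUD output image at the mapped pixel (u, v) = (map_x, map_y)[v'', u''],
    producing the predistorted image that appears undistorted after projection."""
    return cv2.remap(output_img,
                     map_x.astype(np.float32), map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)
```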
Under a unified coordinate system, the invention accurately locates the HUD projected virtual image dot matrix by means of the virtual-image curved-surface perspective equivalent plane, establishes the mapping relation between the actual screen (i.e. the output image of the HUD system) and the virtual projection screen, and solves the HUD image distortion problem by predistortion processing. In addition, by calibrating the virtual image plane representation of the HUD system and the pixel mapping relation between the actual screen and the virtual projection screen at multiple eye positions within the EyeBox range, a multi-eye-position mapping table is formed, and the virtual image plane representation and pixel mapping relation corresponding to any position within the EyeBox range can then be obtained by linear interpolation. This solves the problem that different drivers' eye positions differ and therefore the virtual image distortion they see differs; at the same time, while the eye position changes, multiple eye positions can be collected in real time and distortion corrected for each of them, which avoids jumps of the imaged picture when the eye position changes and improves the imaging effect of the HUD system.
The foregoing is merely a detailed description of specific embodiments of the invention and is not intended to limit the invention. Various alterations, modifications and improvements will occur to those skilled in the art without departing from the spirit and scope of the invention.
Claims (8)
1. An AR-HUD dynamic eye position-based distortion correction method for calculating a virtual image plane is characterized by comprising the following steps of:
s1, positioning the vehicle body, and calibrating the human eye simulation equipment at the current position;
s2, using the human eye simulation equipment, capturing an image of the virtual image projected from the output image of the HUD system, and recording it as the projected virtual image; preprocessing the projected virtual image to obtain the pixel coordinates of its characteristic points; establishing a mapping relation between the characteristic points of the HUD projected virtual image and the corresponding points of the output image;
s3, setting a plane in front of the projected virtual image, recorded as the virtual-image curved-surface perspective equivalent plane, and acquiring the coordinates of the observation equivalent point of each characteristic point of the HUD projected virtual image on this plane; setting a virtual projection screen according to the set of all observation equivalent points;
s4, establishing a mapping relation between an observation equivalent point on a virtual image curved surface perspective equivalent plane and a corresponding point of an output image, and calculating the mapping relation between each pixel point of a virtual projection screen and the corresponding pixel point of the output image by using an interpolation algorithm;
s5, adjusting the position of the human eye simulation equipment and the height of the display picture, repeating steps S1 to S4, and calibrating the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image for different eye positions and different display picture heights, so as to form a calibration mapping table;
s6, pre-distorting the output image of the HUD system according to the mapping table.
2. The method of claim 1, wherein the output image is a dot matrix image in which regular patterns are arranged in a matrix; the centers of the regular patterns are the feature points.
3. The method for distortion correction based on calculation of a virtual image plane under an AR-HUD dynamic eye position according to claim 2, wherein said step S2 is performed by preprocessing the projected virtual image corresponding to the captured dot matrix image by:
s21, determining an area of interest, and intercepting a dot matrix area in the captured projection virtual image;
s22, processing image gray scale and smoothing;
s23, self-adaptive binarization;
s24, contour detection;
s25, removing noise points;
s26, calculating the center pixel coordinates of the outline in the dot matrix;
and S27, establishing a mapping relation between the dot matrix of the HUD projection virtual image and the dot matrix of the output image.
4. The method of claim 2, wherein the spatial position of the virtual-image curved-surface perspective equivalent plane is between the reflection point of the HUD projection on the windshield and the imaging depth of the virtual image, and the plane is not parallel to the Y axis of the vehicle coordinate system, where the Y axis points straight ahead of the vehicle.
5. The method of claim 4, wherein the virtual image perspective equivalent plane is perpendicular to the Y-axis of the vehicle coordinate system.
6. The method for correcting distortion based on calculating a virtual image plane under an AR-HUD dynamic eye position according to claim 5, wherein the step S4 is performed by calculating the mapping relationship between each pixel point of the virtual projection screen and the corresponding pixel point of the output image as follows:
triangulating the virtual projection screen according to the dot matrix of the dot matrix image; any non-feature point of the virtual projection screen is denoted Q″_i; according to the virtual-image curved-surface perspective equivalent plane, the three-dimensional coordinates of Q″_i are obtained and denoted (x″_i, y″_i, z″_i); the feature points adjacent to Q″_i are P″_{i1}, P″_{i2} and P″_{i3}, with three-dimensional coordinates (x″_{i1}, y″_{i1}, z″_{i1}), (x″_{i2}, y″_{i2}, z″_{i2}) and (x″_{i3}, y″_{i3}, z″_{i3}); by formula (1), Q″_i is expressed linearly in terms of P″_{i1}, P″_{i2} and P″_{i3}, where α_{i1}, α_{i2} and α_{i3} are the weights of the adjacent feature points P″_{i1}, P″_{i2} and P″_{i3} respectively:
Q″_i = α_{i1}·P″_{i1} + α_{i2}·P″_{i2} + α_{i3}·P″_{i3}    (1)
let the point of the output image corresponding to Q″_i of the virtual projection screen be Q_i(u_i, v_i), and the 3 feature points corresponding to P″_{i1}, P″_{i2} and P″_{i3} be P_{i1}(u_{i1}, v_{i1}), P_{i2}(u_{i2}, v_{i2}) and P_{i3}(u_{i3}, v_{i3}); then the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image is given by formula (2):
Q_i = α_{i1}·P_{i1} + α_{i2}·P_{i2} + α_{i3}·P_{i3}    (2)
7. the method for correcting distortion based on calculation of a virtual image plane in an AR-HUD dynamic eye position according to claim 1, wherein the step S5 is to adjust the position of the human eye simulation apparatus according to the EyeBox range of the HUD system, and to adjust the height of the display screen according to the adjustment range of the height of the display screen; the step S5 specifically includes the following steps:
s51, dividing the display picture height adjusting range and the EyeBox range into regions, and respectively setting a plurality of display picture height gears and eye positions;
s52, setting a display picture height gear, repeating steps S1 to S4, and calibrating, for each eye position at the current display picture height gear, the mapping relation between each pixel point of the virtual projection screen and the corresponding pixel point of the output image, so as to form a multi-eye-position calibration mapping table;
and S53, adjusting and updating the height gears of the display images, and repeating the step S52 until the multi-eye calibration mapping table of each height gear of the display images of the HUD system is obtained, so as to form a multi-gear multi-eye calibration mapping table.
8. The method for correcting distortion based on calculating a virtual image plane in an AR-HUD dynamic eye position according to claim 1, wherein the step S6 comprises the steps of:
s61, acquiring real-time eye positions and display picture height gears, and acquiring a real-time distortion mapping table and real-time virtual image plane representation according to the calibration mapping table and the virtual image plane representation;
and S62, pre-distorting the output image of the HUD system according to the real-time distortion mapping table.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110402114.9A CN113240592A (en) | 2021-04-14 | 2021-04-14 | Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110402114.9A CN113240592A (en) | 2021-04-14 | 2021-04-14 | Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113240592A true CN113240592A (en) | 2021-08-10 |
Family
ID=77128274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110402114.9A Pending CN113240592A (en) | 2021-04-14 | 2021-04-14 | Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113240592A (en) |
- 2021-04-14: CN application CN202110402114.9A filed; publication CN113240592A (status: active, pending)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476104A (en) * | 2020-03-17 | 2020-07-31 | 重庆邮电大学 | AR-HUD image distortion correction method, device and system under dynamic eye position |
CN112655024A (en) * | 2020-10-30 | 2021-04-13 | 华为技术有限公司 | Image calibration method and device |
Non-Patent Citations (1)
Title |
---|
Zhou Zhongkui (周中奎): "Research on Object Detection and Scene Enhancement Technology for Intelligent Vehicles Based on Machine Learning", China Master's Theses Full-text Database, Engineering Science and Technology II *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023071834A1 (en) * | 2021-10-28 | 2023-05-04 | 虹软科技股份有限公司 | Alignment method and alignment apparatus for display device, and vehicle-mounted display system |
CN113918024A (en) * | 2021-11-12 | 2022-01-11 | 合众新能源汽车有限公司 | Distortion removing method and device for transparent A-column curved screen and storage medium |
CN113918024B (en) * | 2021-11-12 | 2024-03-05 | 合众新能源汽车股份有限公司 | De-distortion method and device for transparent A-pillar curved surface screen and storage medium |
CN113920145A (en) * | 2021-12-08 | 2022-01-11 | 天津大学 | Projection image quality evaluation and calculation method for projection system |
CN113920145B (en) * | 2021-12-08 | 2022-03-08 | 天津大学 | Projection image quality evaluation and calculation method for projection system |
CN114821723A (en) * | 2022-04-27 | 2022-07-29 | 江苏泽景汽车电子股份有限公司 | Projection image plane adjusting method, device, equipment and storage medium |
CN115202476A (en) * | 2022-06-30 | 2022-10-18 | 泽景(西安)汽车电子有限责任公司 | Display image adjusting method and device, electronic equipment and storage medium |
WO2024040398A1 (en) * | 2022-08-22 | 2024-02-29 | 京东方科技集团股份有限公司 | Correction function generation method and apparatus, and image correction method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113240592A (en) | Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position | |
CN109688392B (en) | AR-HUD optical projection system, mapping relation calibration method and distortion correction method | |
CN111476104B (en) | AR-HUD image distortion correction method, device and system under dynamic eye position | |
US10874297B1 (en) | System, method, and non-transitory computer-readable storage media related to correction of vision defects using a visual display | |
US11854171B2 (en) | Compensation for deformation in head mounted display systems | |
CN113421346B (en) | Design method of AR-HUD head-up display interface for enhancing driving feeling | |
WO2023071834A1 (en) | Alignment method and alignment apparatus for display device, and vehicle-mounted display system | |
CN111242866B (en) | Neural network interpolation method for AR-HUD virtual image distortion correction under dynamic eye position condition of observer | |
WO2019140945A1 (en) | Mixed reality method applied to flight simulator | |
CN104537616A (en) | Correction Method of Fisheye Image Distortion | |
CN113366491B (en) | Eyeball tracking method, device and storage medium | |
Itoh et al. | Light-field correction for spatial calibration of optical see-through head-mounted displays | |
CN109855845B (en) | Binocular eye lens measurement vehicle-mounted HUD virtual image distance and correction method | |
CN105404011B (en) | A kind of 3D rendering bearing calibration of HUD and HUD | |
US12080012B2 (en) | Systems and methods for low compute high-resolution depth map generation using low-resolution cameras | |
CN111664839B (en) | Vehicle-mounted head-up display virtual image distance measuring method | |
CN111652959B (en) | Image processing method, near-to-eye display device, computer device, and storage medium | |
WO2021227969A1 (en) | Data processing method and device thereof | |
CN110264527A (en) | Real-time binocular stereo vision output method based on ZYNQ | |
US20220233070A1 (en) | Method and device for generating refractive pattern, and computer-readable storage medium | |
CN114155300A (en) | Projection effect detection method and device for vehicle-mounted HUD system | |
JP4042356B2 (en) | Image display system and image correction service method for image display system | |
CN113844365A (en) | Method for visualizing front-view bilateral blind areas of automobile | |
JP2018157276A (en) | Image display device, image display method, and program | |
WO2021132298A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210810