CN117830171A - High-precision AR-HUD distortion calibration method

Publication number: CN117830171A
Application number: CN202311862626.9A (China)
Legal status: Pending
Inventors: 郭俊佳, 桂冬冬, 许钦雄, 张紫锋
Assignee: SHENZHEN ROADROVER TECHNOLOGY CO LTD
Abstract

The invention belongs to the technical field of image processing and particularly relates to a high-precision AR-HUD distortion calibration method. By means of the calibrated relationship between the viewpoint camera and the vehicle coordinate system, the reference lattice and the viewpoint reference lattice are acquired simply, with adjustable density. By exploiting the relationship between each predistortion value in the predistortion matrix and its position, the search for the effective area is extended beyond the lattice range, so that a larger effective display area can be found while the integrity of the displayed content is guaranteed. During iteration, the method judges whether a lattice can still be detected from the current round's predistorted image and provides a corresponding processing mechanism, which makes the iteration robust and feasible. In this way the reference lattice is obtained more simply, the lattice density is easily adjustable, the effective display area is comparatively larger, and a feasible AR-HUD calibration scheme is determined iteratively.

Description

High-precision AR-HUD distortion calibration method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a high-precision AR-HUD distortion calibration method.
Background
The optical path of an AR-HUD is complex, involving multiple transmissions and reflections. Owing to the non-uniformity of transmissive and reflective materials and of the windshield, mounting tolerances of the individual components, and the free-form surface of the windshield, the final image typically exhibits distortion that must be calibrated.
The main steps of prior-art calibration techniques similar to this patent are: 1) Place the calibration device at the target position in front of the vehicle and control the viewpoint camera to capture an image of the device; the lattice in this image is stored as the viewpoint reference lattice and used in the subsequent AR-HUD calibration calculations. After the photograph is taken, the calibration device is removed. 2) Display the projection original. A dot-matrix image similar to fig. 2 is projected; a definite correspondence exists between this lattice and the viewpoint reference lattice. 3) Acquire a sample image. The viewpoint camera captures the image of the projection original formed in front of the windshield. 4) Extract the sample lattice. The lattice in the sample image is extracted, yielding a sample lattice formed by the centers of the hollow circles in fig. 3. 5) Calculate the predistortion amounts. The predistortion value of each point is calculated from the difference between the sample lattice and the viewpoint reference lattice (e.g. the lattice formed by the solid circles in fig. 3). 6) Obtain the predistortion matrix. The predistortion value of each pixel is obtained by interpolation or similar methods, giving the predistortion matrix.
The solutions adopted in patents such as CN115546057A, CN107527324B and CN115205138A are essentially the same as the steps above, but placing the standard lattice at the target position usually requires a high-precision mechanical structure or a tedious process of repeated adjustment, so obtaining the viewpoint reference lattice is difficult. When the image is de-distorted, some pixels may be adjusted beyond the projection range, so the corresponding virtual image suffers from missing content, similar to the missing lower edge of the checkerboard in fig. 6. To solve this problem, CN115546057A compresses the display area into the maximum inscribed rectangle determined by the outermost lattice points before calculating the predistortion matrix; this guarantees the integrity of the projected content but objectively shrinks the display area.
Regarding calibration accuracy, the iterative method in CN115205138A does achieve higher precision, but it offers nothing on the simple acquisition of a viewpoint reference lattice with controllable density, on easily adjustable lattice density, or on effectively guaranteeing the integrity of the displayed content. More importantly, it does not consider the case in which the lattice in the corresponding sample image cannot be detected because content is missing from the predistorted image, i.e. the case in which iteration cannot proceed.
Disclosure of Invention
The invention aims to provide a high-precision AR-HUD distortion calibration method that introduces iteration and adds an iteration-feasibility judgment with a corresponding processing method to the iterative process, thereby guaranteeing and improving the robustness and feasibility of the iteration and solving the problems of the prior art described in the background.
In order to achieve the above purpose, the invention adopts the following technical scheme. A high-precision AR-HUD distortion calibration method comprises:
S1: calibrating the relationship between the viewpoint camera coordinate system and the vehicle coordinate system to obtain a coordinate-system conversion relationship;
S2: generating the data required for AR-HUD calibration according to the conversion relationship and the set parameters, the data comprising a reference lattice, a projection original image and a viewpoint reference lattice;
S3: projecting and displaying the projection original image through the AR-HUD;
S4: capturing the virtual image in front of the windshield with the viewpoint camera and extracting the lattice coordinates in the image to obtain a sample lattice;
S5: judging whether the degree of distortion is within tolerance according to the difference between the sample lattice and the viewpoint reference lattice; if not, entering S6; if yes, judging whether the distances between the viewpoint reference points and the corresponding sample-lattice points meet the requirement and jumping to S11;
S6: calculating the predistortion amount at each lattice point according to the relationships among the reference lattice, the viewpoint reference lattice, the sample lattice and the projection lattice, and obtaining the predistortion matrix at every pixel from the per-point predistortion amounts by fitting or interpolation;
S7: finding the edges of the effective display area according to the pixel positions and their corresponding predistortion values, and determining the effective display area;
S8: searching within the effective display area for a rectangular area that has the same aspect ratio as the projection original image and meets the position and other requirements, to serve as the target display area holding the content to be displayed;
S9: setting a maximum number of iterations for calculating the predistortion amount; if the iteration count reaches the maximum, entering S11; if not, entering S10;
S10: calculating the predistorted image of the projection original image using the predistortion matrix and the effective display area, taking the predistorted image as the projection original image, jumping to S3 and entering the next iteration;
S11: judging whether the current AR-HUD needs calibration according to the number of iterations: if the iteration count is 0, the current AR-HUD does not need calibration; if it is 1 or more, calculating the final calibration parameters by combining the predistortion matrices of all rounds and determining the target display area based on the calibration parameters.
Calibrating the relationship between the viewpoint camera coordinate system and the vehicle coordinate system comprises:
placing a calibration device in front of the vehicle according to the placement requirements; capturing an image of the calibration device with the viewpoint camera and determining the rotation and translation between the viewpoint camera coordinate system and the calibration-device coordinate system from 2D-3D or 3D-3D point correspondences; and calculating the relationship between the viewpoint camera coordinate system and the vehicle coordinate system.
The reference lattice is generated according to the number of points, the arrangement, the target position and the like; the projection original image is a dot-matrix image, generated according to the image-type parameter, whose correspondence with the reference lattice can be determined; the viewpoint reference lattice is generated by re-projecting the reference lattice onto the viewpoint camera image, combining the determined relationship between the viewpoint camera coordinate system and the vehicle coordinate system with the intrinsics of the viewpoint camera.
Judging whether the degree of distortion is within tolerance according to the difference between the sample lattice and the viewpoint reference lattice, entering S6 if not, and, if yes, judging whether the distances between the viewpoint reference points and the corresponding sample-lattice points meet the requirement and jumping to S11, comprises:
moving the sample lattice according to the difference between the coordinate mean of the viewpoint reference points and the coordinate mean of the sample lattice;
calculating a scaling value by obtaining the distance sum of the sample lattice and the distance sum of the viewpoint reference points;
keeping the coordinate mean of the sample lattice unchanged, scaling the sample lattice by the ratio of the two distance sums, and judging whether the distances between the scaled lattice points and the corresponding viewpoint reference points meet the distortion requirement: if so, jumping to S11; if not, entering S6.
Calculating the predistortion amount at each lattice point according to the relationships among the reference lattice, the viewpoint reference lattice, the sample lattice and the projection lattice comprises:
converting the sample lattice onto the projection original image using the conversion relationship between the viewpoint reference lattice and the projection lattice, calculating the predistortion amount of each point of the projection lattice from the difference between the projection lattice and the converted sample lattice, and finally obtaining the predistortion amount of every pixel of the projection original image by interpolation.
Finding the edges of the effective display area according to the pixel positions and their corresponding predistortion values, and determining the effective display area, comprises:
setting the x coordinate on the leftmost boundary of the predistortion matrix to 0 and the x coordinate on the rightmost boundary to N-1, and denoting the offsets of the edges of the effective display area relative to the matrix boundaries by δ_left, δ_right, δ_top and δ_bottom;
the left and right edges of the maximum display area are then δ_left and (N-1) + δ_right in turn;
the upper and lower edges are likewise δ_top and (M-1) + δ_bottom;
the offsets δ are calculated in either of two modes: a fast mode and a precise mode.
Searching within the effective display area for a rectangular area that has the same aspect ratio as the projection original image and meets the position and other requirements, to serve as the target display area, comprises:
taking the coordinate mean of the projection lattice as the center of the rectangle to be searched, so that consistency of the display position across different vehicles is achieved through the coordinate mean of the reference lattice; and taking the aspect ratio of the projection original image as the aspect ratio of the rectangle to be searched, then finding a rectangular area meeting these requirements within the maximum effective area.
The maximum number of iterations is settable.
Calculating the predistorted image of the projection original image using the predistortion matrix and the effective display area, taking the predistorted image as the projection original image, jumping to S3 and entering the next iteration, comprises:
performing the predistortion operation on the projection original image with the predistortion matrix to obtain a predistorted image, and performing lattice extraction on the predistorted image;
if the extraction succeeds, taking the predistorted image directly as the projection original image and jumping to S3 for the next iteration;
if the lattice cannot be extracted, adjusting the projection original image according to the effective display area so that lattice extraction will succeed, then taking the adjusted image as the projection original image and jumping to S3 for the next iteration.
The invention has the following technical effects and advantages over the prior art:
By means of the calibrated relationship between the viewpoint camera and the vehicle coordinate system, the invention acquires the reference lattice and the viewpoint reference lattice simply, with adjustable density. By exploiting the relationship between each predistortion value in the predistortion matrix and its position, the search for the effective area is extended beyond the lattice range, so a larger effective display area can be found while the integrity of the displayed content is guaranteed. During iteration, the method judges whether a lattice can still be detected from the current round's predistorted image and provides a corresponding processing mechanism, making the iteration robust and feasible. With iteration introduced and an iteration-feasibility judgment added to the process, the reference lattice is obtained more simply, the lattice density is easily adjustable, the effective display area is comparatively larger, and a feasible AR-HUD calibration scheme is determined iteratively.
Drawings
FIG. 1 is a flow chart of a high-precision AR-HUD distortion calibration method of the present invention;
FIG. 2 is a diagram showing an example of sample lattice movement in the present invention;
FIG. 3 is a diagram showing an example of distortion at the boundary in the X direction in the present invention;
FIG. 4 is a view showing an example of the edge of the effective area in the X direction according to the present invention;
FIG. 5 is a diagram illustrating an exemplary active area and a rectangular display area according to the present invention;
fig. 6 is a diagram illustrating an example of image compression in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the invention; the specific embodiments described here are merely illustrative and are not intended to limit the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
The embodiment of the invention provides a high-precision AR-HUD distortion calibration method comprising an offline part and an online part. The offline part covers calibrating the coordinate-system relationships and generating content. The former determines the conversion relationships between coordinate systems, which are then used to convert the lattices between coordinate systems; the latter generates content and converts what needs converting according to those relationships. Together they make the density of the reference lattice and the viewpoint reference lattice controllable and their acquisition simple. The online part is the complete iterative AR-HUD distortion-calibration flow that computes the concrete distortion calibration parameters.
Because AR-HUD calibration involves many stages whose methods depend on or affect one another, the key content and principles (the calibration of the relationship between the viewpoint camera coordinate system and the vehicle coordinate system, the simple density-controllable generation of the viewpoint reference lattice based on that relationship, the determination of the display area from the predistortion matrix, and the processing measures that make the iteration robust) are introduced in the corresponding sections below. The overall flow is shown in fig. 1.
In this embodiment, a high-precision AR-HUD distortion calibration method includes the following steps:
1) Calibrate the coordinate-system relationship. The purpose is to obtain the coordinate-system conversion relationship, which is then used to convert between the coordinate systems. Assuming the vehicle coordinate system is the base coordinate system and the reference lattice is expressed in it, the relationship between the viewpoint camera and the vehicle coordinate system must be calibrated. For the same vehicle model and other situations that do not require recalibration, the calibrated relationship is reusable.
In an embodiment, the relationship between the viewpoint camera coordinate system and the vehicle coordinate system is calibrated as follows. First, a calibration device (such as a checkerboard or circular-spot calibration plate) is placed in front of the vehicle, with two requirements: a. the three axes of the calibration device are respectively parallel to the three axes of the vehicle coordinate system; b. the viewpoint camera can reliably detect the lattice coordinates on the calibration device. The rotation between the calibration-device coordinate system and the vehicle coordinate system is then the identity matrix, and the translation is t_board2car. Next, the viewpoint camera captures an image of the calibration device, and the rotation R_camera2board and translation t_camera2board between the viewpoint camera coordinate system and the calibration-device coordinate system are determined from 2D-3D or 3D-3D point correspondences. Finally, the relationship between the viewpoint camera coordinate system and the vehicle coordinate system is calculated, namely R_camera2car = R_camera2board and t_camera2car = t_camera2board + t_board2car.
The specific parameters and method are as follows: a circular-spot calibration plate is placed 2 meters in front of the vehicle and adjusted with an angle-measuring level until its pose meets the requirements. Two modes are implemented for calculating the relationship between the viewpoint camera and the calibration device: a. PnP mode, suitable for monocular or multi-camera cases, which computes R_camera2board and t_camera2board by the PnP method from the lattice coordinates (2D points) in the image captured by the viewpoint camera and the lattice coordinates (3D points) on the calibration device; b. registration mode, suitable for multi-camera scenes, in which the lattice coordinates (3D points) on the calibration device are known and R_camera2board and t_camera2board are computed by the iterative closest point (ICP) method from the point cloud (3D points) of the lattice reconstructed by the multiple cameras.
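As a concrete illustration of the PnP branch, the chain R_camera2car = R_camera2board, t_camera2car = t_camera2board + t_board2car can be computed as below. This is a minimal sketch assuming OpenCV conventions; the intrinsics K, distortion coefficients dist and measured translation t_board2car are inputs, and all names are illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def calibrate_camera_to_car(board_pts_3d, img_pts_2d, K, dist, t_board2car):
    """Viewpoint-camera -> vehicle rotation/translation via the PnP mode.

    board_pts_3d: (N, 3) lattice coordinates in the calibration-board frame.
    img_pts_2d:   (N, 2) detected lattice coordinates in the camera image.
    The board axes are parallel to the vehicle axes, so R_board2car = I.
    """
    ok, rvec, tvec = cv2.solvePnP(board_pts_3d, img_pts_2d, K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R_board2camera, _ = cv2.Rodrigues(rvec)            # board -> camera
    R_camera2board = R_board2camera.T                  # invert the pose
    t_camera2board = -R_board2camera.T @ tvec.ravel()
    R_camera2car = R_camera2board                      # since R_board2car = I
    t_camera2car = t_camera2board + t_board2car
    return R_camera2car, t_camera2car
```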
2) Generate content. The purpose is to generate the data required for AR-HUD calibration according to the set parameters and the calibrated relationship. This step may be run once when the procedure starts or after parameters change; it need not be repeated in every calibration operation.
In the embodiment, the reference lattice, the projection original image and the viewpoint reference lattice are generated. Specifically:
a. The reference lattice is generated from parameters such as the number of points, the arrangement and the target position. The reference virtual image could be generated in this step instead, but that takes more steps than generating the reference lattice directly, so direct generation of the reference lattice is chosen here.
b. The projection original image is a dot-matrix image, generated according to parameters such as the image type (checkerboard, circular spots or others), whose correspondence with the reference lattice can be determined.
c. The viewpoint reference lattice. The reference lattice is re-projected onto the viewpoint camera image by combining the relationship between the viewpoint camera coordinate system and the vehicle coordinate system determined in step 1 with the intrinsics of the viewpoint camera (assumed known). Let X be the coordinates of a reference-lattice point in the vehicle coordinate system and Y its coordinates in the viewpoint camera coordinate system; the two satisfy
X = R_camera2car * Y + t_camera2car,
from which the reference-lattice coordinates in the viewpoint camera frame are obtained as Y = R_camera2car^T * (X - t_camera2car).
Then, with viewpoint-camera focal lengths f_x and f_y, principal point (c_x, c_y), and a reference-lattice point at (x_c, y_c, z_c) in viewpoint-camera coordinates, the pinhole model
u = f_x * x_c / z_c + c_x, v = f_y * y_c / z_c + c_y
is used to compute the re-projection of all reference-lattice points into the viewpoint camera image, yielding the viewpoint reference lattice.
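The re-projection above can be sketched directly. This is an illustrative NumPy fragment assuming R_camera2car and t_camera2car from step 1, known intrinsics, and z_c > 0 for all lattice points:

```python
import numpy as np

def viewpoint_reference_lattice(X_car, R_camera2car, t_camera2car, fx, fy, cx, cy):
    """Re-project reference-lattice points (vehicle frame) into the viewpoint image.

    Inverts X = R_camera2car @ Y + t_camera2car to get camera coordinates Y,
    then applies the pinhole model u = fx*xc/zc + cx, v = fy*yc/zc + cy.
    X_car: (N, 3) array of reference-lattice coordinates in the vehicle frame.
    """
    Y = (X_car - t_camera2car) @ R_camera2car          # row-wise R^T @ (X - t)
    u = fx * Y[:, 0] / Y[:, 2] + cx
    v = fy * Y[:, 1] / Y[:, 2] + cy
    return np.stack([u, v], axis=1)                    # the viewpoint reference lattice
```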
Specific parameters and method: for reference-lattice generation, the target position is determined from the spatial model, and a uniformly distributed reference lattice of 18 rows and 9 columns is generated at that position. For the projection original, the lattice is likewise arranged in 18 uniformly distributed rows and 9 columns, finally producing the projection original shown in fig. 2. The viewpoint reference lattice is computed with the algorithm in c.
Finally, the parameters related to the generated content are stored as configuration parameters. After these parameters and the calibration relationship from step 1 are loaded, the reference lattice, the projection original, the viewpoint reference lattice and the other content can be regenerated from this configuration. This design guarantees that the content generated after each load of the configuration file matches the configuration parameters, makes parameters easy to modify, and reduces storage usage.
3) Display the projection original, i.e. project the projection original image through the AR-HUD.
4) Capture the sample image and extract the sample lattice. The viewpoint camera captures the virtual image in front of the windshield, and the lattice coordinates in the image are extracted to obtain the sample lattice.
5) Judge the degree of distortion. If the distortion is within tolerance, jump directly to step 11 for result processing; if not, continue to the next step. The degree of distortion can be judged in various ways, for example: from the difference between the sample lattice and the viewpoint reference lattice; from the difference between the sample lattice converted onto the projection image and the projection lattice; or from the difference between the intersection lattice of the sample lattice on the reference-lattice plane and the reference lattice.
In an embodiment, the judgment is based on the difference between the sample lattice and the viewpoint reference lattice. First, judge whether the distance between the coordinate mean of the viewpoint reference points, mean(P_bench), and the coordinate mean of the sample lattice, mean(P_sample), is within a threshold threshold_0. If not, go to step 6; if yes, judge whether the distance between each viewpoint reference point and the corresponding sample-lattice point meets the requirement. Specifically:
First, shift the sample lattice according to mean(P_bench) - mean(P_sample), as shown in fig. 2: the left image is before shifting and the middle image after shifting.
Then calculate a scaling value. Assuming the lattice has m rows and n columns, the sum of the distances between horizontally adjacent points and the sum between vertically adjacent points are
d_x = Σ_{i=0..m-1} Σ_{j=0..n-2} d(p_(i,j), p_(i,j+1)),
d_y = Σ_{j=0..n-1} Σ_{i=0..m-2} d(p_(i,j), p_(i+1,j)),
where d(,) denotes the distance between two points. The total distance sum is then
D = d_x + d_y.
Using this formula, obtain the distance sum D_sample of the sample lattice and D_bench of the viewpoint reference points (D_bench may be calculated in advance in step 2). Finally, keeping the coordinate mean of the sample lattice unchanged, scale the sample lattice by D_bench / D_sample (obtaining the result shown in the rightmost image of fig. 2) and check whether the distances between the scaled lattice points and the corresponding viewpoint reference points meet the distortion requirement: if all are within a threshold threshold_1, jump to step 11; if not, go to step 6.
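The shift-and-scale check of this step can be sketched as follows. This is a minimal illustration: lattices are (m*n, 2) arrays in row-major order, and the threshold names mirror threshold_0 and threshold_1 above.

```python
import numpy as np

def distortion_within_tolerance(sample, bench, m, n, threshold_0, threshold_1):
    """Step-5 sketch: shift the sample lattice onto the viewpoint reference
    lattice, scale by D_bench / D_sample about the (unchanged) mean, then
    test every point distance against threshold_1."""
    mean_s, mean_b = sample.mean(axis=0), bench.mean(axis=0)
    if np.linalg.norm(mean_b - mean_s) > threshold_0:
        return False                                   # coarse check fails -> step 6

    def dist_sum(p):                                   # D = d_x + d_y from the text
        g = p.reshape(m, n, 2)
        d_x = np.linalg.norm(np.diff(g, axis=1), axis=-1).sum()
        d_y = np.linalg.norm(np.diff(g, axis=0), axis=-1).sum()
        return d_x + d_y

    shifted = sample + (mean_b - mean_s)               # mean now equals mean_b
    scale = dist_sum(bench) / dist_sum(shifted)
    scaled = (shifted - mean_b) * scale + mean_b       # coordinate mean unchanged
    return bool(np.all(np.linalg.norm(scaled - bench, axis=1) <= threshold_1))
```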
6) Calculate the predistortion matrix. First, the predistortion amount at each lattice point is calculated from the relationships between the lattices; then, from these per-point amounts, the predistortion amount at every pixel (i.e. the predistortion matrix) is obtained by fitting or interpolation.
The per-point predistortion amounts are calculated from the relationships among the reference lattice, the viewpoint reference lattice, the sample lattice, the projection lattice and so on. These relationships include at least: conversion relationships determined by point correspondences between lattices (original or converted), and distortion or predistortion relationships determined by the differences between corresponding points of lattices (original or converted). Different combinations of these relationships yield different schemes for computing the predistortion amount.
In this embodiment, the sample lattice is converted onto the projection original using the conversion relationship between the viewpoint reference lattice and the projection lattice; the predistortion amount of each projection-lattice point is calculated from the difference between the projection lattice and the converted sample lattice; and the predistortion amount of every pixel of the projection original is finally obtained by interpolation.
Concretely, homography transformation is used to describe the relationship between the viewpoint reference lattice and the projection lattice, the sample lattice is transferred onto the projection original by this homography, and bicubic spline interpolation is used for the interpolation operation.
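A sketch of this step under the stated choices: the homography via OpenCV, with SciPy's cubic griddata standing in for the bicubic spline; the sign convention of the predistortion amount and all names are illustrative assumptions.

```python
import cv2
import numpy as np
from scipy.interpolate import griddata

def predistortion_matrices(proj_pts, bench_pts, sample_pts, width, height):
    """Step-6 sketch: transfer the sample lattice onto the projection original
    with the homography between the viewpoint reference lattice and the
    projection lattice, then interpolate per-point differences to all pixels.

    proj_pts, bench_pts, sample_pts: (N, 2) float lattices.
    Returns the X- and Y-direction predistortion matrices (height x width).
    """
    H, _ = cv2.findHomography(bench_pts, proj_pts)
    sample_on_proj = cv2.perspectiveTransform(
        sample_pts.reshape(1, -1, 2).astype(np.float64), H)[0]
    delta = proj_pts - sample_on_proj                  # predistortion at lattice points

    gx, gy = np.meshgrid(np.arange(width), np.arange(height))
    vx = griddata(proj_pts, delta[:, 0], (gx, gy), method="cubic", fill_value=0.0)
    vy = griddata(proj_pts, delta[:, 1], (gx, gy), method="cubic", fill_value=0.0)
    return vx, vy
```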
7) Determine the effective display area. The predistortion amount of a pixel is the positional adjustment applied to that pixel so that it appears undistorted in the virtual image in front of the windshield. If some pixels are adjusted beyond the image boundary, the part that exceeds it cannot be projected and the projected content is lost. To prevent such loss while keeping the projected picture as large as possible, this patent determines the effective display area from the pixel positions and their corresponding predistortion values.
In fig. 3, a hollow circle is an element of the projection original and a solid circle is the position obtained by adding the predistortion amount to the corresponding hollow circle; the figure shows that pixels on the leftmost boundary move beyond the projection original and therefore cannot be displayed on the windshield.
In fig. 4, adding the X-direction predistortion amount to the X coordinate of each hollow dot gives the corresponding gray hollow dot. Two observations follow: a. because of the distortion, the predistortion amounts differ between columns of the same row; b. on each row of the predistortion matrix there is some point whose X coordinate is closest to its X-direction predistortion amount. The edges of the effective display area can therefore be found more accurately from the predistortion amounts.
In an embodiment, the following scheme is used to find edges.
Let the x coordinate on the leftmost boundary of the predistortion matrix be 0 and the x coordinate on the rightmost boundary be N-1, and denote the offsets of the edges of the effective display area relative to the matrix boundaries by δ_left, δ_right, δ_top and δ_bottom. The left and right edges of the maximum display area are then δ_left and (N-1) + δ_right in turn; similarly, the upper and lower edges are δ_top and (M-1) + δ_bottom.
For the offsets δ, this patent proposes two modes, a fast mode and a precise mode, described below (a code sketch of both follows the precise mode). The fast-mode principle is simpler and easier to implement, but because of the distortion the precise mode can find more accurate edges.
a. Fast mode. The element values on the boundary of the predistortion matrix are taken directly as the offsets of the edges of the effective display area.
From the X-direction predistortion matrix: for the left side, the offset is obtained from the values v_(i,0), where v_(i,0) is the predistortion value at row i, column 0 (the leftmost column) of the predistortion matrix and i ranges over [0, M-1]; for the right side, the offset is obtained from the values v_(i,N-1), the values at row i of the rightmost column.
Similarly, from the Y-direction predistortion matrix: for the upper side, the offset is obtained from the values v_(0,j), where v_(0,j) is the value at row 0 (the uppermost row), column j, and j ranges over [0, N-1]; for the lower side, the offset is obtained from the values v_(M-1,j), the values in the j-th column of the last row.
b. Precise mode. The edges are determined jointly from the predistortion values and their positions in the predistortion matrix.
Left side: on each row i of the X-direction predistortion matrix, search columns 0 through T_left for the column whose x coordinate differs least from its predistortion value; T_left equals the x coordinate of the leftmost column of the lattice of the projection original.
Right side: on each row i, search columns T_right through N-1 for the column at which the sum of the predistortion value and N-1-j is smallest; T_right equals the x coordinate of the rightmost column of the lattice of the projection original.
Upper side: on each column j of the Y-direction predistortion matrix, search rows 0 through T_top for the row whose y coordinate differs least from its predistortion value; T_top equals the y coordinate of the uppermost row of the lattice of the projection original.
Lower side: on each column j, search rows T_bottom through M-1 for the row at which the sum of the predistortion value and M-1-i is smallest; T_bottom equals the y coordinate of the lowest row of the lattice of the projection original.
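The following sketch illustrates both modes under one assumed sign convention (a positive predistortion value pushes content toward larger coordinates); only the left edge of the precise mode is shown, the other sides being symmetric. It is an interpretation of the formulas above, not a verbatim implementation.

```python
import numpy as np

def fast_mode_offsets(vx, vy):
    """Fast mode: boundary values of the predistortion matrices supply the
    edge offsets of the effective display area directly."""
    M, N = vx.shape
    left   = max(0.0, float(np.max(-vx[:, 0])))     # largest pull past the left border
    right  = (N - 1) - max(0.0, float(np.max(vx[:, -1])))
    top    = max(0.0, float(np.max(-vy[0, :])))
    bottom = (M - 1) - max(0.0, float(np.max(vy[-1, :])))
    return left, right, top, bottom

def precise_mode_left(vx, T_left):
    """Precise mode, left edge: on each row, find the column j in [0, T_left]
    whose x coordinate comes closest to cancelling its predistortion value
    (where j + v(i, j) crosses 0); take the worst case over all rows."""
    M, _ = vx.shape
    cols = np.arange(T_left + 1)
    per_row = [int(cols[np.argmin(np.abs(cols + vx[i, :T_left + 1]))])
               for i in range(M)]
    return max(per_row)
```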
Furthermore, the predistortion amounts interpolated where no point pairs exist may deviate. To eliminate this effect and further guarantee the integrity of the display content, a set of S parameters is defined as follows:
S_left = α_left * N,
S_right = -α_right * N,
S_top = β_top * M,
S_bottom = -β_bottom * M,
where S_left, S_right, S_top and S_bottom are the adjustment amounts of the left, right, upper and lower edges of the effective display area, and α_left, α_right, β_top and β_bottom are scale factors relative to the number of columns or rows of the matrix. Their theoretical range is (-1, 1), but to keep the effective display area near its maximum they should be close to 0; the defaults in this embodiment are α_left = α_right = β_top = β_bottom = 0.001.
Finally, the edges of the maximum display area are obtained by adding the corresponding S parameter to each edge found above.
as shown in fig. 5, from outside to inside, the width and height of the black rectangular frame of the outermost layer are the same as those of the resolution of the original projection image, the envelope of the secondary outer layer is the edge of the effective display area obtained before the S parameter is added, and the envelope of the broken line of the third layer is the envelope after the S parameter is added.
8) Calculate the target display area. The goal of this step is to find, within the effective display area, a satisfactory area that will hold the content to be displayed. For the integrity of the displayed content and uniformity of the pixel mapping during scaling, a rectangular area with the same (or a similar) aspect ratio as the projection original is typically chosen, with parameters such as aspect ratio and position set for it.
In the embodiment, a rectangular area with the same aspect ratio as the projection original that meets the position and other requirements is sought as the target display area. Two modes are implemented (a code sketch of the standard-mode search follows the two modes).
a. Standard mode. The key point is that the consistency of display position and size across different vehicles of the same model meets the requirement. The specific operation is as follows:
First, the coordinate mean of the projection lattice (denoted P_center) is taken as the center of the rectangle to be searched; across vehicles, the coordinate mean of the reference lattice (whose corresponding point in the viewpoint camera coincides with P_center after the homography transformation) is used to achieve consistency of the display position. The aspect ratio of the projection original is taken as the aspect ratio of the rectangle to be searched, to guarantee uniform scaling of the projection original.
Then a rectangular area meeting these requirements is found within the maximum effective area. Taking fig. 5 as the illustration: (1) find the largest inscribed rectangle of the effective display area (the innermost rectangle in fig. 5, inside the area enclosed by the dashed line); this is a mature technique and is not repeated here. (2) Calculate in turn the perpendicular distances d_top, d_bottom, d_left and d_right from P_center (the circular spot in fig. 5) to the top, bottom, left and right sides of the largest inscribed rectangle, and obtain the width and height of the rectangle:
W = 2 * min(d_left, d_right)
H = 2 * min(d_top, d_bottom)
where min(,) returns the smaller of the two values.
More specifically, to guarantee the consistency of the display area's position and size, two parameters are set: a position-consistency tolerance P_offset and a size-consistency tolerance S_offset. Within the range P_offset around P_center, a rectangle is sought that satisfies the aspect ratio and whose width differs from the theoretical width (determined by simulation software) by at most S_offset. P_offset is defined as
P_offset = μ * N,
where μ is a proportionality coefficient with example value 0.01, and S_offset is defined as
S_offset = η * N,
where η is a proportionality coefficient with example value 0.001. As shown in fig. 5, the inner gray rectangular box is the rectangle found in the standard mode.
b. Maximizing mode. The focus is on maximizing the display area; the operation is simply to find the largest rectangle of the required aspect ratio directly within the maximum display area.
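A minimal sketch of the standard-mode search, assuming the largest inscribed rectangle (x0, y0, x1, y1) has already been found; the tolerance search with P_offset and S_offset is omitted, and the names are illustrative.

```python
def standard_mode_rectangle(p_center, inscribed, aspect):
    """Center the target rectangle on P_center inside the largest inscribed
    rectangle of the effective display area, with W = 2*min(d_left, d_right)
    and H = 2*min(d_top, d_bottom), then clamp to the aspect ratio W/H."""
    x0, y0, x1, y1 = inscribed
    cx, cy = p_center
    d_left, d_right = cx - x0, x1 - cx
    d_top, d_bottom = cy - y0, y1 - cy
    W = 2 * min(d_left, d_right)
    H = 2 * min(d_top, d_bottom)
    if W / H > aspect:
        W = H * aspect        # too wide: shrink the width
    else:
        H = W / aspect        # too tall: shrink the height
    return cx - W / 2, cy - H / 2, W, H   # (left, top, width, height)
```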
9) Iteration stop condition. Set the maximum number of iterations Iter; if the iteration count reaches Iter, go to step 11; if not, jump to step 10.
In the embodiment, Iter defaults to 2.
10) Calculate the predistorted image. The predistorted image is obtained by applying the predistortion to the projection original; it replaces the projection original for projection onto the windshield and is used to compute a more refined predistortion amount for the system. In this step, the patent adds an iteration-feasibility judgment and a corresponding processing mechanism.
In the embodiment, the predistorted image is computed as follows:
a. Perform the predistortion operation on the projection original with the predistortion matrix to obtain the predistorted image.
b. Perform lattice extraction on the predistorted image.
(1) If the circular-spot lattice coordinates can be extracted effectively, the predistortion operation does not hinder detection of the circular spots or extraction of the lattice from the corresponding sample image, so the predistorted image is taken as the projection original and the flow jumps to step 3.
(2) If the lattice cannot be detected, some circular spots lie on or beyond the edge of the image; the corresponding sample image would yield no lattice and the next iteration could not proceed. In this case a new area (called the imaging area) is determined from the rectangular area calculated in step 8 and the radius of the circular spots. Assume the distances from the upper, lower, left and right edges of the rectangular area to the corresponding image boundaries are d_top, d_bottom, d_left and d_right; the radius of a circular spot is r; and, so that the circular spots can be detected reliably, the required offset of a circular spot from the image boundary is r_offset. The initial edges of the imaging area are then obtained in turn by clamping each edge of the rectangular area so that it stays at least r + r_offset inside the image boundary,
where max(,) returns the larger of the two values and r_offset is defined as r_offset = λ * r, with default λ = 0.5.
Further, within the initial imaging area so determined, the final imaging area with the same center and aspect ratio as the rectangular area of step 8 is found. Specifically, the perpendicular distances d'_top, d'_bottom, d'_left and d'_right from P_center (the circular spot in fig. 5) to the sides of the initial imaging area are calculated in turn, and the width and height of the imaging area are obtained as
W' = 2 * min(d'_left, d'_right)
H' = 2 * min(d'_top, d'_bottom)
where min(,) returns the smaller of the two values.
The projection original is compressed into the imaging area, whose edge distances are all greater than r + r_offset; the result of compressing fig. 2 is an image such as fig. 6. The predistortion operation is then performed on the compressed image (fig. 6), the predistortion result is taken as the projection original, and the flow jumps to step 3.
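The imaging-area fallback can be sketched as follows. This is illustrative only: rect is assumed to be the step-8 rectangle as (x0, y0, x1, y1), and the clamping realizes the r + r_offset clearance described above.

```python
def initial_imaging_area(rect, img_w, img_h, r, lam=0.5):
    """Step-10 fallback: when circular spots sit on or beyond the image
    border, shrink the drawing region so every spot keeps a clearance of
    r + r_offset from the boundary, with r_offset = lam * r (lam = 0.5)."""
    x0, y0, x1, y1 = rect
    margin = r + lam * r                   # r + r_offset
    left   = max(x0, margin)
    top    = max(y0, margin)
    right  = min(x1, img_w - 1 - margin)
    bottom = min(y1, img_h - 1 - margin)
    return left, top, right, bottom        # initial imaging-area edges
```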
11) Calculate the calibration parameters. The aim is to obtain the calibration parameters of the current AR-HUD.
If the iteration count is 0, the current vehicle is undistorted, or distorted within tolerance, and the current AR-HUD needs no calibration. If there were one or more iterations, the final calibration parameters are calculated by combining the predistortion matrices of all rounds, and the target display area is determined from them. The calibration parameters can be defined in different ways, e.g. as the predistortion matrix or as a warping map derived from it; in essence they are always a map of pixel-position adjustments.
In the embodiment, the calibration parameters are defined as a predistortion matrix. If the iteration count is 0, the current AR-HUD needs no calibration parameters. If there were one or more iterations:
a. The final predistortion matrix is the superposition of the per-round predistortion matrices, i.e.
PM = Σ_i PreMat(i),
where PreMat(i) is the result of the i-th iteration and PM is the sum of the predistortion matrices of all rounds.
b. The final rectangular area is found on the predistortion matrix obtained in step a using the methods of steps 7 and 8.
Further, compressing the original into the rectangular area is, like the AR-HUD predistortion itself, essentially a pixel shift; it is therefore expressed as a predistortion matrix PM_rect and merged into the final predistortion matrix, giving
PreMat = PM + PM_rect.
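The superposition amounts to an element-wise sum, e.g. in this trivial sketch, where premats stands for the list of per-round matrices PreMat(i) and pm_rect for the pixel shift of the final compression:

```python
import numpy as np

def final_predistortion(premats, pm_rect):
    """PreMat = PM + PM_rect, with PM = sum_i PreMat(i)."""
    PM = np.sum(np.stack(premats), axis=0)   # superpose the per-round matrices
    return PM + pm_rect
```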
In summary, by means of the calibrated relationship between the viewpoint camera and the vehicle coordinate system, the invention acquires the reference lattice and the viewpoint reference lattice simply, with adjustable density; by exploiting the relationship between each predistortion value in the predistortion matrix and its position, the effective-area search is extended beyond the lattice range, so a larger effective display area can be found while the integrity of the displayed content is guaranteed; and during iteration the method judges whether a lattice can still be detected from the current round's predistorted image and provides a corresponding processing mechanism, making the iteration robust and feasible.
Furthermore, the terms referred to in this application have the following meanings:
AR-HUD (Augmented Reality Head-Up Display): the integration of AR technology with a HUD. The image is formed by reflection off the windshield and blends with the environment outside the vehicle, so the driver sees the display content merged with the real scene.
Viewpoint camera: a camera placed at the position of the eyes (viewpoint) of the driver for capturing the picture seen by the human eye.
Projection source (projection original): the image, such as a checkerboard or a circular-spot lattice, projected by the AR-HUD during de-distortion; an example is a circular-spot lattice used as the projection source.
Projection lattice: the lattice in the projection original. If the projection source is a checkerboard, the projection lattice consists of the corner coordinates; if it is a circular-spot lattice, it consists of the circle-center coordinates.
Sample image: the image, captured by the viewpoint camera, of the virtual image presented in front of the windshield (the virtual image being the AR-HUD's image formed in front of the windshield). Because of the complicated optical path of the AR-HUD, the sample image exhibits a certain deformation (i.e. distortion).
Sample lattice: the lattice extracted from the sample image, e.g. the lattice formed by the circle centers.
Reference image: an image, captured by the viewpoint camera, of a known calibration device with a definite lattice layout, such as a checkerboard or circular-spot calibration plate. In this patent the reference image is used only to calibrate the relationship between the viewpoint camera coordinate system and the vehicle coordinate system.
Reference lattice: a lattice generated from parameters such as the arrangement, the number of points and the position. It provides the calibration reference for the sample lattice; the solid circles above are an example.
Target position: the position at which the reference lattice is placed. Its value is settable, but it must guarantee that, in the viewpoint camera's view, the region delimited by the reference lattice at this position overlaps the virtual-image region; for example, the reference lattice (solid circles) falling within the area delimited by the sample lattice (hollow circles, i.e. the virtual-image area) in the camera's field of view is a valid target position.
Reference virtual image: an image generated from the reference lattice and parameters such as the image type (checkerboard, circular spots or others). The reference virtual image may be stored as a reference file; when a calibration operation is performed, the image can be loaded and the reference lattice (or viewpoint reference lattice) extracted for subsequent calculation.
Viewpoint reference lattice: the re-projection of the reference lattice onto the viewpoint camera image. During re-projection, the line connecting a reference-lattice point to the origin of the viewpoint camera coordinate system intersects the image plane; the intersection point is the corresponding point of the viewpoint reference lattice.
Predistortion: according to the distortion characteristics of the AR-HUD, a reverse distortion operation applied to the projection source in advance so as to cancel the AR-HUD's distortion and let the human eye see a normal picture.
Predistortion amount: assume the image width direction corresponds to X and the height direction to Y; a black solid circle is a point of the projection lattice, and a black hollow circle is the corresponding sample-lattice point converted onto the projection original. For the sample-lattice points to coincide with the corresponding viewpoint-reference points (i.e. to be de-distorted), each black hollow circle must be moved to its black solid circle on the projection source; the amount of that movement is the predistortion amount.
The predistortion amount of a point consists of an X-direction component and a Y-direction component, called the X-direction and Y-direction predistortion amounts; that is, the predistortion of a pixel is composed of an X-direction amount and a Y-direction amount.
Predistortion matrix: based on the relationships between the lattices, the predistortion amount of every pixel of the projection source is obtained by interpolating the per-point lattice predistortion amounts. It has the same size as the projection original and, represented as a matrix, is called the predistortion matrix.
X-direction and Y-direction predistortion matrices: since the predistortion at each pixel consists of an X-direction amount and a Y-direction amount, the predistortion matrix of an image likewise consists of an X-direction matrix and a Y-direction matrix.
Predistorted image: the image obtained by applying the predistortion matrix to the projection source; it is deformed relative to the projection source but makes the sample image undistorted. The predistortion concept is illustrated with a checkerboard mainly to show that the predistorted image may lose projected content, e.g. a partially missing portion at the lower edge of the checkerboard.
Effective display area: the area into which the projection source is scaled (generally shrunk) so as to guarantee that no projected content is missing after the de-distortion operation.
Vehicle coordinate system: the Z axis points directly ahead of the vehicle and the Y axis vertically downward; the coordinate system follows the right-hand rule, so the X axis is determined from the Z and Y axes.
Calibration device and its coordinate system: a tool for calibrating camera parameters, coordinate-system conversion relationships and the like; checkerboard and circular-spot calibration plates are common, the latter being a physical board with known circle-center spacing. In the calibration-device coordinate system, the X axis points horizontally from left to right and the Y axis vertically from top to bottom; the system follows the right-hand rule, so the Z axis is determined from the X and Y axes.
M, N: the numbers of rows and columns of the predistortion matrix, which are also the height and width of the projection original's resolution.
PnP algorithm: the Perspective-n-Point algorithm. Two-dimensional points on the image (a, b, c, ...) correspond to three-dimensional points in space (A, B, C, ...); from these correspondences the PnP algorithm computes the conversion between the camera coordinate system and the coordinate system of the spatial points.
ICP algorithm: the Iterative Closest Point algorithm. Unlike PnP, ICP computes the conversion between two spatial coordinate systems from the correspondence between two three-dimensional point sets.
Homography transformation: a mapping of points on one projective plane to another under which straight lines remain straight (it is line-preserving). It can be represented by a 3x3 non-singular matrix and is a linear transformation from one plane to another.
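For illustration, applying a homography to a point in homogeneous coordinates looks like this; H below is an arbitrary non-singular example matrix, not one from the patent.

```python
import numpy as np

# p' ~ H @ p in homogeneous coordinates; divide by the last component.
H = np.array([[1.02, 0.01,  3.0],
              [0.00, 0.98, -2.0],
              [1e-5, 0.00,  1.0]])
p = np.array([100.0, 50.0, 1.0])      # point (x, y) = (100, 50)
q = H @ p
print(q[:2] / q[2])                   # the mapped point (x', y')
```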
Finally, it should be noted that the foregoing describes only preferred embodiments of the invention. Although the invention has been described in detail with reference to these embodiments, those skilled in the art may still modify the technical solutions described or substitute equivalents for some of their features; any modification, equivalent substitution or improvement made within the spirit and principles of the invention falls within its scope of protection.

Claims (9)

1. A high-precision AR-HUD distortion calibration method, comprising:
S1: calibrating the relation between the viewpoint camera coordinate system and the vehicle coordinate system to obtain a coordinate system conversion relation;
S2: generating the data required for AR-HUD calibration according to the coordinate system conversion relation and the set parameters, the data comprising a reference dot matrix, a projection original image and a viewpoint reference dot matrix;
S3: projecting and displaying the projection original image through the AR-HUD;
S4: collecting the virtual image in front of the windshield with a viewpoint camera, and extracting the lattice coordinates in the image to obtain a sample dot matrix;
S5: judging whether the distortion degree is within tolerance according to the difference between the sample dot matrix and the viewpoint reference dot matrix; if not, entering S6; if yes, jumping to S11 to judge whether the distance between each viewpoint reference point and the corresponding point of the sample dot matrix meets the requirement;
S6: calculating the predistortion amount of each point of the projection dot matrix according to the relation among the reference dot matrix, the viewpoint reference dot matrix, the sample dot matrix and the projection dot matrix, and obtaining the predistortion matrix at each pixel by fitting or interpolation based on the per-point predistortion amounts;
S7: finding the edges of the effective display area according to the pixel positions and their corresponding predistortion values, and determining the effective display area;
S8: searching within the effective display area for a rectangular area that has the same aspect ratio as the projection original image and meets position and other requirements, as a target display area for holding the content to be displayed;
S9: comparing the iteration count against the set maximum number of predistortion iterations; if the maximum has been reached, entering S11; if not, entering S10;
S10: calculating a predistortion image of the projection original image using the predistortion matrix and the effective display area, taking the predistortion image as the projection original image, jumping to S3 and entering the next iteration;
S11: judging whether the current AR-HUD needs to be calibrated according to the number of iterations; if the iteration number is 0, the current AR-HUD does not need calibration; if the iteration number is 1 or more, calculating the final calibration parameters by combining the predistortion matrices of all rounds, and determining the target display area based on the calibration parameters.
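By way of illustration only, claim 1 can be read as the control flow sketched below; every helper name is hypothetical shorthand for the corresponding step, not a published API.

```python
# All helper names are hypothetical placeholders for steps S1-S11.
def calibrate_ar_hud(hud, viewpoint_camera, max_iters=5):
    T = calibrate_camera_to_vehicle(viewpoint_camera)           # S1
    ref, artwork, vp_ref = generate_calibration_data(T)         # S2
    predistortions, target = [], None
    for _ in range(max_iters):                                  # S9 bound
        hud.project(artwork)                                    # S3
        sample = extract_lattice(viewpoint_camera.capture())    # S4
        if distortion_within_tolerance(sample, vp_ref):         # S5
            break
        P = fit_predistortion(ref, vp_ref, sample)              # S6
        area = find_effective_display_area(P)                   # S7
        target = find_target_rect(area, artwork)                # S8
        predistortions.append(P)
        artwork = apply_predistortion(artwork, P, area)         # S10
    if not predistortions:                                      # S11
        return None                        # AR-HUD needs no calibration
    return compose_calibration(predistortions), target
```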
2. The high-precision AR-HUD distortion calibration method according to claim 1, wherein said calibrating the relation between the viewpoint camera coordinate system and the vehicle coordinate system comprises:
placing the calibration device in front of the vehicle according to the placement requirements; acquiring an image of the calibration device with the viewpoint camera, and determining the rotation and translation between the viewpoint camera coordinate system and the standard lattice device coordinate system by means of the correspondence between 2D points and 3D points, or between 3D points and 3D points; and calculating the relation between the viewpoint camera coordinate system and the vehicle coordinate system.
3. The high-precision AR-HUD distortion calibration method according to claim 2, wherein the reference dot matrix is generated according to the number of lattice points, their arrangement, the target position, etc.; the projection original image is a dot matrix image generated according to the image type parameters, whose correspondence with the reference dot matrix can be determined; and the viewpoint reference dot matrix is generated by re-projecting the reference dot matrix onto the image of the viewpoint camera, combining the determined relation between the viewpoint camera coordinate system and the vehicle coordinate system with the intrinsic parameters of the viewpoint camera.
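An illustrative sketch of this re-projection using OpenCV's projectPoints; R_vc and t_vc stand for the assumed vehicle-to-camera transform obtained in S1, and all geometry and intrinsics below are placeholders.

```python
import cv2
import numpy as np

# Reference dot matrix in the vehicle frame: a 5 x 3 grid of points on a
# virtual plane 7.5 m ahead (placeholder geometry, not patent data).
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.2, 0.2, 3))
ref_lattice = np.column_stack([xs.ravel(), ys.ravel(),
                               np.full(xs.size, 7.5)])

R_vc, t_vc = np.eye(3), np.zeros(3)        # assumed S1 result (placeholder)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])            # viewpoint-camera intrinsics

rvec, _ = cv2.Rodrigues(R_vc)
vp_ref, _ = cv2.projectPoints(ref_lattice, rvec, t_vc, K, None)
viewpoint_reference_lattice = vp_ref.reshape(-1, 2)
```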
4. The high-precision AR-HUD distortion calibration method according to claim 3, wherein said judging whether the distortion degree is within tolerance according to the difference between the sample dot matrix and the viewpoint reference dot matrix, entering S6 if not, and jumping to S11 if yes to judge whether the distance between each viewpoint reference point and the corresponding point of the sample dot matrix meets the requirement, comprises:
moving the sample dot matrix according to the difference between the coordinate mean of the viewpoint reference points and the coordinate mean of the sample dot matrix;
calculating a scaling value by obtaining respectively the distance sum of the sample dot matrix and the distance sum of the viewpoint reference points;
keeping the coordinate mean of the sample dot matrix unchanged, scaling the sample dot matrix by the ratio of the two distance sums, and judging whether the distances between the scaled dot matrix and the corresponding viewpoint reference points meet the distortion requirement; if yes, jumping to S11; if not, entering S6.
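One possible realization of this translate-then-scale comparison, under the assumption that "distance sum" denotes the summed point-to-centroid distances:

```python
import numpy as np

def align_sample_lattice(sample: np.ndarray, vp_ref: np.ndarray) -> np.ndarray:
    """Sketch of claim 4 (assumed reading): translate the sample dot
    matrix onto the viewpoint-reference centroid, then scale it about
    its own centroid by the ratio of summed centroid distances."""
    moved = sample + (vp_ref.mean(axis=0) - sample.mean(axis=0))
    c = moved.mean(axis=0)
    scale = (np.linalg.norm(vp_ref - vp_ref.mean(axis=0), axis=1).sum()
             / np.linalg.norm(moved - c, axis=1).sum())
    return c + (moved - c) * scale   # centroid is preserved by construction

# per-point residuals then decide between S11 and S6, e.g.:
# ok = np.linalg.norm(align_sample_lattice(s, ref) - ref, axis=1).max() < tol
```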
5. The high-precision AR-HUD distortion calibration method according to claim 4, wherein said calculating the predistortion amount of each point of the projection dot matrix according to the relation among the reference dot matrix, the viewpoint reference dot matrix, the sample dot matrix and the projection dot matrix comprises:
converting the sample dot matrix onto the projection original image using the conversion relation between the viewpoint reference dot matrix and the projection dot matrix, calculating the predistortion amount of each point of the projection dot matrix from the difference between the projection dot matrix and the converted sample dot matrix, and finally obtaining the predistortion amount at each pixel of the projection original image by interpolation.
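An illustrative sketch of the final interpolation step, densifying sparse per-point predistortion vectors into a per-pixel predistortion matrix; SciPy's griddata is one possible tool, and the nearest-neighbour fallback is an assumption for pixels outside the lattice hull.

```python
import numpy as np
from scipy.interpolate import griddata

def dense_predistortion(lattice_xy, offsets, width, height):
    """lattice_xy: N x 2 dot positions on the projection original image;
    offsets: N x 2 per-point predistortion vectors (assumed inputs).
    Returns per-pixel predistortion maps dx, dy of shape (height, width)."""
    gx, gy = np.meshgrid(np.arange(width), np.arange(height))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    maps = []
    for k in range(2):                        # x then y component
        d = griddata(lattice_xy, offsets[:, k], grid, method='cubic')
        hole = np.isnan(d)                    # cubic yields NaN outside the hull
        d[hole] = griddata(lattice_xy, offsets[:, k],
                           grid[hole], method='nearest')
        maps.append(d.reshape(height, width))
    return maps[0], maps[1]
```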
6. The high-precision AR-HUD distortion calibration method according to claim 5, wherein said finding the edges of the effective display area according to the pixel positions and their corresponding predistortion values and determining the effective display area comprises:
setting the x coordinate of the leftmost boundary of the predistortion matrix to 0 and the x coordinate of the rightmost boundary to N−1, and denoting the offset of an effective display area edge relative to the matrix boundary by δ;
the left and right edges of the maximum display area are then x_left = δ_l and x_right = (N−1) − δ_r respectively;
the upper and lower edges are obtained in the same way from the row boundaries.
7. The high-precision AR-HUD distortion calibration method according to claim 6, wherein the calculation of the offset δ comprises: a fast mode and a precise mode.
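Under one reading of claims 6 and 7 (a source pixel is usable only if its predistorted position stays inside the matrix), the offsets δ_l and δ_r can be searched as below; the "precise mode" here checks every row, while a "fast mode" might sample only the border rows. This is an assumed interpretation, not the patented computation.

```python
import numpy as np

def left_right_edges(dx: np.ndarray):
    """dx: H x W predistortion of the x coordinate (per claim 5).
    A column belongs to the effective area iff x + dx stays inside
    [0, N-1] for every row; assumes at least one column qualifies."""
    H, W = dx.shape                              # W == N
    target = np.arange(W)[None, :] + dx          # landing x of each pixel
    col_ok = ((target >= 0) & (target <= W - 1)).all(axis=0)
    left = int(np.argmax(col_ok))                # first valid column: delta_l
    right = int(W - 1 - np.argmax(col_ok[::-1])) # last valid column: N-1-delta_r
    return left, right
```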
8. The high-precision AR-HUD distortion calibration method according to claim 6, wherein said searching within the effective display area for a rectangular area that has the same aspect ratio as the projection original image and meets position and other requirements, as the target display area, comprises:
taking the coordinate mean of the projection dot matrix as the center of the rectangle to be searched, so that consistency of display positions across different vehicles is achieved through the coordinate mean of the reference dot matrix; and taking the aspect ratio of the projection original image as the aspect ratio of the rectangle to be searched, and searching for a rectangular area meeting the requirements within the maximum effective area.
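A sketch of one way to perform this rectangle search, growing the largest aspect-preserving rectangle centred on the dot matrix mean inside the effective area (edge values as returned by the claim-6 step):

```python
def target_rect(center, aspect, left, right, top, bottom):
    """center: projection dot matrix mean (cx, cy); aspect: width/height
    of the projection original image; remaining args: effective-area edges."""
    cx, cy = center
    half_w = min(cx - left, right - cx)          # room towards each side
    half_h = min(cy - top, bottom - cy)
    half_w = min(half_w, half_h * aspect)        # clamp to the aspect ratio
    half_h = half_w / aspect
    return cx - half_w, cy - half_h, cx + half_w, cy + half_h
```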
9. The high-precision AR-HUD distortion calibration method according to claim 8, wherein said calculating a predistortion image of the projection original image using the predistortion matrix and the effective display area, taking the predistortion image as the projection original image and jumping to S3 to enter the next iteration, comprises:
performing the predistortion operation on the projection original image with the predistortion matrix to obtain a predistortion image, and performing dot matrix extraction on the predistortion image;
if the extraction succeeds, taking the predistortion image directly as the projection original image and jumping to S3 for the next iteration;
if the dot matrix cannot be extracted, adjusting the projection original image according to the effective display area so that the dot matrix can be extracted successfully, then taking the adjusted image as the projection original image and jumping to S3 for the next iteration.
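An illustrative sketch of this predistort-and-check step; cv2.remap performs backward mapping, so map_x and map_y are assumed to hold, for each output pixel, the source coordinate to sample, derived from the claim-5 predistortion matrix. extract_lattice is a hypothetical detector returning None on failure.

```python
import cv2
import numpy as np

def predistort_and_check(artwork, map_x, map_y, extract_lattice):
    """Apply the dense predistortion and verify the dot matrix is still
    detectable; the caller shrinks the artwork into the effective
    display area first when detection fails."""
    pre = cv2.remap(artwork,
                    map_x.astype(np.float32), map_y.astype(np.float32),
                    interpolation=cv2.INTER_LINEAR)
    if extract_lattice(pre) is not None:   # dot matrix still detectable
        return pre                         # usable directly as next artwork
    return None
```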

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311862626.9A CN117830171A (en) 2023-12-29 2023-12-29 High-precision AR-HUD distortion calibration method


Publications (1)

Publication Number Publication Date
CN117830171A 2024-04-05

Family

ID=90512962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311862626.9A Pending CN117830171A (en) 2023-12-29 2023-12-29 High-precision AR-HUD distortion calibration method

Country Status (1)

Country Link
CN (1) CN117830171A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination