CN111476104B - AR-HUD image distortion correction method, device and system under dynamic eye position

Info

Publication number: CN111476104B (granted publication of CN111476104A)
Application number: CN202010187422.XA, filed 2020-03-17 (priority date 2020-03-17)
Publication dates: CN111476104A published 2020-07-31; CN111476104B granted 2022-07-01
Authority: CN (China); original language: Chinese (zh)
Inventors: 李银国 (Li Yinguo), 周中奎 (Zhou Zhongkui), 罗啟飞 (Luo Qifei), 史豪豪 (Shi Haohao), 李科 (Li Ke)
Applicant and current assignee: Chongqing University of Posts and Telecommunications
Legal status: Active (granted)

Classifications

    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness (scenes inside a vehicle)
    • G06T7/70: Determining position or orientation of objects or cameras (image analysis)
    • G06V10/20: Image preprocessing
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V10/247: Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; quadrilaterals, e.g. trapezoids

Abstract

The invention relates to a method, device and system for correcting AR-HUD image distortion under a dynamic eye position, belonging to the technical field of image processing. The method comprises: selecting several eye positions within the adjustable eye-position space (EyeBox); for each selected eye position, establishing a mapping from points on the projected virtual image plane to points on the original input image plane; and, for the actual eye position within the EyeBox at the current moment, estimating linear weight coefficients from its position relative to the selected eye positions and using the same coefficients to estimate the mapping from each point on the virtual image plane to the corresponding point in the original input image at the current eye position. The image to be fed into the input image plane is obtained from this mapping, thereby achieving distortion correction. The method solves the problem that the AR-HUD virtual image distorts differently when the driver's eyes are at different positions, and corrects AR-HUD virtual image distortion under a dynamic eye position with high precision and accuracy.

Description

AR-HUD image distortion correction method, device and system under dynamic eye position
Technical Field
The invention belongs to the technical field of image processing, and relates to a method, a device and a system for correcting AR-HUD image distortion under a dynamic eye position.
Background
According to statistical data, shifting the driver's gaze from the road ahead to the instrument panel, reading the instruments, and returning to the forward view takes roughly 4-7 seconds. During this period the driver cannot acquire information about the environment ahead, which poses a serious safety hazard. The augmented reality head-up display (AR-HUD) uses augmented reality technology to overlay information such as vehicle speed, navigation, driving-assistance system status and the surrounding environment onto the driver's field of view, presenting it more intuitively and vividly, enhancing the driver's perception of the environment, and eliminating or greatly reducing gaze shifts between the road surface and the instrument panel, so that the driver can keep more attention on the road ahead and driving safety improves. AR-HUD technology therefore has important application value. However, owing to design and manufacturing errors in the AR-HUD optical system and the uneven curvature of the windshield, the image projected by the AR-HUD onto the windshield is distorted, and the virtual image seen by the observer distorts in different ways as the observer's eye position changes dynamically. Because the chain of causes behind the distortion of the observed virtual image is complex, dynamic distortion correction of the AR-HUD image is difficult; it is key to the practical effect of AR-HUD and one of the main challenges of AR-HUD technology.
At present, AR-HUD image correction in engineering applications takes two main approaches. The first improves the distortion of the output virtual image by adjusting the optical system; this demands high manufacturing precision, is inflexible and costly, can only mitigate distortion caused by the optical system itself, and cannot adapt to the distortion observed at varying eye positions caused by the uneven curvature of the windshield. The second performs distortion correction in software, producing a pre-distorted image whose projection appears undistorted; although the algorithm is more involved, this approach is easy to implement in engineering, highly flexible, and low-cost.
Existing software-based distortion correction methods only consider the simple case: distortion correction of the single-viewpoint AR-HUD image at a fixed eye position. There is as yet no method that detects the spatial coordinates of the driver's eyes in real time, computes the mapping between the original image and the virtual projection screen, and pre-distorts the image to be displayed, so that the AR-HUD virtual image seen by the driver from different positions is restored to the intended imaging effect.
Disclosure of Invention
In view of this, the present invention provides a method, a device and a system for correcting AR-HUD image distortion under a dynamic eye position, so as to solve the problem that the prior art cannot achieve dynamic AR-HUD image distortion correction over multiple eye positions.
In order to achieve the above object, in one aspect, the present invention provides the following technical solutions:
an AR-HUD image distortion correction method under a dynamic eye position comprises the following steps:
S1: measuring the spatial coordinates of each of K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis;
S2: taking a normalized dot-matrix map as the input image and capturing its virtual image at each of the K eye positions, obtaining K virtual images;
S3: obtaining, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets;
S4: for each of the K coordinate sets, obtaining by linear interpolation the inverse mapping from the virtual image at the corresponding eye position to the input image;
S5: building, from the per-eye-position inverse mappings, a multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
S6: acquiring the driver's real-time eye position through a pupil-tracking algorithm, and performing AR-HUD visual image distortion correction based on the multi-eye-position mapping table from the virtual images at the K eye positions to the input image.
Further, in the step S1, for each of the K set eye positions, a binocular camera is placed at the current eye position, and the binocular camera is calibrated by a calibration board, so as to obtain the spatial coordinates of the current eye position.
Further, step S3 specifically comprises the following steps:
S31: capturing, with the calibrated binocular camera, a picture of the virtual image of the input image at each of the K eye positions;
S32: extracting the feature points from the virtual-image picture of the input image, and obtaining the spatial coordinates of each feature point by binocular-camera space-point measurement;
S33: setting the Y value of the virtual-image equivalent plane based on the mean Y of the feature points' spatial coordinates, and obtaining the corresponding coordinates of each feature point of the input image in the virtual-image equivalent plane at each eye position.
Further, step S4 specifically comprises the following steps:
S41: taking the largest inscribed rectangle of the feature-point distribution area in the virtual-image equivalent plane as the virtual projection screen;
S42: for each pixel point P of the virtual projection screen other than the feature points, obtaining its 3 adjacent feature points, and obtaining the weights between the coordinates of the pixel point P in the virtual projection screen and the 3 adjacent feature points by linear interpolation based on the convex combination of the 3 adjacent feature points;
S43: obtaining the coordinates of the point P in the input image based on the coordinates of its 3 adjacent feature points in the virtual image and in the input image, the coordinates of the pixel point P in the virtual projection screen, and the weights between the pixel point P and the 3 adjacent feature points, thereby obtaining the inverse mapping from the virtual image at the current eye position to the input image.
Further, step S42 specifically includes the following steps:
first, the coordinates of a pixel point P in the virtual projection screen are expressed as a convex combination:
P(x, y, z) = α1·P1(x1, y1, z1) + α2·P2(x2, y2, z2) + α3·P3(x3, y3, z3), with α1 + α2 + α3 = 1
where P1, P2, P3 are the 3 feature points surrounding and nearest to the pixel point P, (x, y, z) are the coordinates of the pixel point P in the virtual image, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are the spatial coordinates of the feature points P1, P2, P3, and α1, α2, α3 are the corresponding weights;
rearranging, the weights between the coordinates of the pixel point P in the virtual projection screen and its 3 adjacent feature points are solved from the linear system (the equivalent plane has a fixed Y, so the X and Z components suffice):
(α1, α2, α3)ᵀ = [x1 x2 x3; z1 z2 z3; 1 1 1]⁻¹ · (x, z, 1)ᵀ
Further, in step S43, the coordinates of the point P in the input image are:
(u, v) = α1·(u1, v1) + α2·(u2, v2) + α3·(u3, v3)
where (u, v) are the coordinates of the pixel point P in the input image, and (u1, v1), (u2, v2), (u3, v3) are the coordinates of the feature points P1, P2, P3 in the input image;
and the set of these relations between the coordinates of all pixel points in the virtual projection screen and their coordinates in the input image is the inverse mapping from the virtual image to the input image.
Further, in step S5, the three-point convex-combination linear interpolation for an arbitrary eye position is:
E(x, y, z) = β1·E1(x1, y1, z1) + β2·E2(x2, y2, z2) + β3·E3(x3, y3, z3), with β1 + β2 + β3 = 1
where E(x, y, z) denotes the spatial coordinates of an arbitrary point E in the eye-position plane formed by the K set eye positions, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are the spatial coordinates of the eye positions E1, E2, E3 adjacent to E, and β1, β2, β3 are the weights of E1, E2, E3;
the correspondence between the coordinates of any point Q in the virtual image and in the input image at eye position E is:
(u, v) = β1·(u1, v1) + β2·(u2, v2) + β3·(u3, v3)
where (u, v) are the coordinates of Q in the input image at eye position E, and (u1, v1), (u2, v2), (u3, v3) are the coordinates of the point Q at eye positions E1, E2, E3 respectively.
In another aspect, the invention provides an AR-HUD image distortion correction system under a dynamic eye position, comprising an eye position calibration module, a dot-matrix map and virtual image acquisition module, a coordinate correspondence module, a single-eye-position linear interpolation module, a multi-eye-position linear interpolation module, a distortion correction module and an output module;
the eye position calibration module is configured to measure the spatial coordinates of each of the K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis;
the dot-matrix map and virtual image acquisition module is configured to take the normalized dot-matrix map as the input image and capture its virtual image at each of the K eye positions, obtaining K virtual images;
the coordinate correspondence module is configured to obtain, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets;
the single-eye-position linear interpolation module is configured to obtain, for each of the K coordinate sets, the inverse mapping from the virtual image at the corresponding eye position to the input image by linear interpolation;
the multi-eye-position linear interpolation module is configured to build, from the per-eye-position inverse mappings, a multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
the distortion correction module is configured to acquire the driver's real-time eye position through a pupil-tracking algorithm and perform AR-HUD visual image distortion correction based on the multi-eye-position mapping table;
and the output module is configured to output the distortion-corrected AR-HUD visual image.
In a third aspect of the present invention, a processing system is provided, comprising a processor and a storage device; the processor is adapted to execute programs, and the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the above method for AR-HUD image distortion correction under a dynamic eye position.
In a fourth aspect of the present invention, a storage device is provided, in which a plurality of programs are stored, the programs being adapted to be loaded and executed by a processor to implement the above method for AR-HUD image distortion correction under a dynamic eye position.
The invention has the following beneficial effects:
(1) The method for correcting AR-HUD image distortion under a dynamic eye position calibrates the mappings between the pixels of the HUD's actual screen and the virtual projection screen at multiple eye positions within the EyeBox, forming a multi-eye-position mapping table. Using three-point convex-combination linear interpolation over eye positions, the pixel mapping between the HUD's actual screen and the virtual projection screen at any eye position within the EyeBox can then be obtained. This solves the problem that drivers whose eye positions differ observe differently distorted HUD virtual images, fully reflects the real situation of dynamic eye positions and dynamically varying virtual-image distortion, and, combined with a sound computation method and program design, improves the de-distortion of the AR-HUD virtual image under a dynamic eye position.
(2) The method uses a four-point linear interpolation for virtual-image pre-distortion to achieve high-precision, high-accuracy AR-HUD virtual image distortion correction at each single eye position within the dynamic eye-position range.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of the AR-HUD image distortion correction method under a dynamic eye position according to the present invention;
FIG. 2 is a schematic diagram of the HUD imaging process according to an embodiment of the method;
FIG. 3 shows a normalized dot-matrix map and its corresponding virtual image according to an embodiment of the method;
FIG. 4 is a schematic diagram of the convex combination of 3 adjacent feature points according to an embodiment of the method;
FIG. 5 is a schematic diagram of the set eye positions in the EyeBox according to an embodiment of the method;
FIG. 6 compares the distortion correction effect at a fixed eye position with that of the method of the present invention according to an embodiment.
Detailed Description
The following describes embodiments of the present invention by way of specific examples; other advantages and effects of the invention will be readily apparent to those skilled in the art from this disclosure. The invention may also be implemented or applied in other, different embodiments, and the details herein may be modified in various respects without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the invention schematically, and the features of the following embodiments and examples may be combined with one another in the absence of conflict.
The drawings are provided for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention denote the same or similar components. In the description of the present invention, it should be understood that terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientation or positional relationship shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; such terms are therefore illustrative only and are not to be construed as limiting the present invention, and their specific meanings may be understood by those skilled in the art according to the specific situation.
The invention discloses a method for correcting AR-HUD image distortion under a dynamic eye position, comprising the following steps:
Step S10, measuring the spatial coordinates of each of K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis;
Step S20, taking a normalized dot-matrix map as the input image and capturing its virtual image at each of the K eye positions, obtaining K virtual images;
Step S30, obtaining, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets;
Step S40, for each of the K coordinate sets, obtaining by linear interpolation the inverse mapping from the virtual image at the corresponding eye position to the input image;
Step S50, building, from the per-eye-position inverse mappings, a multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
Step S60, acquiring the driver's real-time eye position through a pupil-tracking algorithm, and performing AR-HUD visual image distortion correction based on the multi-eye-position mapping table from the virtual images at the K eye positions to the input image.
In order to more clearly describe the method for correcting AR-HUD image distortion under dynamic eye position according to the present invention, the following describes the steps in the embodiment of the method according to the present invention with reference to fig. 1.
The method for correcting AR-HUD image distortion under a dynamic eye position comprises steps S10-S60, described in detail as follows:
Step S10, measuring the spatial coordinates of each of K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis.
For each of the K set eye positions, a binocular camera is placed at the current eye position and calibrated with a calibration board, so as to obtain the spatial coordinates of the current eye position.
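For illustration, the per-eye-position calibration can be sketched with OpenCV as follows. This is a minimal sketch rather than the patent's implementation: the checkerboard geometry, square size and file paths are assumptions, and a production setup would also validate the calibration residuals.

```python
# Sketch: calibrating the binocular camera placed at one set eye position.
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # inner corners of the calibration board (assumed)
SQUARE = 0.025      # square size in metres (assumed)

# Template of the board's 3D points in its own coordinate frame
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, BOARD)
    okr, cr = cv2.findChessboardCorners(gr, BOARD)
    if okl and okr:                      # keep only views seen by both cameras
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]
# Per-camera intrinsics, then the stereo extrinsics (R, T) between the cameras
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```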
Step S20, taking the normalized dot matrix map as an input image, and acquiring virtual images of the normalized dot matrix map at K eye positions, respectively, to acquire K virtual images.
Fig. 2 is a schematic view of the HUD imaging process according to an embodiment of the method for correcting AR-HUD image distortion under a dynamic eye position. A binocular camera is placed at one of the set eye positions (each set eye position is chosen so that a complete virtual image can be observed from it in a normal sitting posture), and the binocular camera is calibrated with a calibration board. After calibration, the virtual checkerboard image projected by the HUD is photographed.
Step S30, obtaining, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets.
Each of the K coordinate sets comprises n coordinate groups, each group containing the spatial coordinates of one feature point and its corresponding coordinates in the virtual-image equivalent plane at the current eye position.
Step S31, capturing, with the calibrated binocular camera, a picture of the virtual image of the input image at each of the K eye positions.
Step S32, extracting the feature points from the virtual-image picture of the input image, and obtaining the spatial coordinates of each feature point by binocular-camera space-point measurement.
Step S33, setting the Y value of the virtual-image equivalent plane based on the mean Y of the feature points' spatial coordinates, and obtaining the corresponding coordinates of each feature point of the input image in the virtual-image equivalent plane at each eye position.
In an embodiment of the present invention, the physical resolution of the HUD is 864 × 480. Fig. 3 shows the normalized dot-matrix map and its corresponding virtual image: the left image of fig. 3 is the selected normalized dot-matrix map, of size 864 × 480 and containing 40 × 20 feature points, which is used as the input image; the right image of fig. 3 is the corresponding virtual image.
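As an illustrative sketch of steps S31-S33, the dot centres can be located with a blob detector and triangulated into spatial coordinates. P_l and P_r stand for the 3x4 projection matrices derived from the stereo calibration above, and the detector parameters are assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def dot_centers(gray):
    """Detect the centres of the dot-matrix features (illustrative parameters)."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 10.0
    detector = cv2.SimpleBlobDetector_create(params)
    return np.array([kp.pt for kp in detector.detect(gray)], np.float32)

def feature_spatial_coords(P_l, P_r, pts_l, pts_r):
    """Triangulate matched dot centres into Nx3 spatial coordinates.
    Assumes pts_l and pts_r have been sorted into the same grid order."""
    Xh = cv2.triangulatePoints(P_l, P_r, pts_l.T, pts_r.T)   # 4xN homogeneous
    return (Xh[:3] / Xh[3]).T

# Step S33: the Y value of the virtual-image equivalent plane is the mean Y
# of the triangulated points, e.g.  Y_eq = X[:, 1].mean()
```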
Step S40, for each of the K coordinate sets, the inverse mapping from the virtual image at the corresponding eye position to the input image is obtained by linear interpolation.
Step S41, the largest inscribed rectangle of the feature-point distribution area in the virtual-image equivalent plane is taken as the virtual projection screen.
Step S42, for each pixel point P of the virtual projection screen other than the feature points, its 3 adjacent feature points are obtained, and the weights between the coordinates of the pixel point P in the virtual projection screen and the 3 adjacent feature points are obtained by linear interpolation based on the convex combination of the 3 adjacent feature points.
First, the coordinates of a pixel point P in the virtual projection screen are expressed as a convex combination, as shown in formula (1):
P(x, y, z) = α1·P1(x1, y1, z1) + α2·P2(x2, y2, z2) + α3·P3(x3, y3, z3), with α1 + α2 + α3 = 1    formula (1)
where P1, P2, P3 are the 3 feature points surrounding and nearest to the pixel point P, (x, y, z) are the coordinates of the pixel point P in the virtual image, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are the spatial coordinates of the feature points P1, P2, P3, and α1, α2, α3 are the corresponding weights.
Rearranging, the weights between the coordinates of the pixel point P in the virtual projection screen and its 3 adjacent feature points are solved from the linear system in formula (2) (the equivalent plane has a fixed Y, so the X and Z components suffice):
(α1, α2, α3)ᵀ = [x1 x2 x3; z1 z2 z3; 1 1 1]⁻¹ · (x, z, 1)ᵀ    formula (2)
Fig. 4 is a schematic diagram of the convex combination of 3 adjacent feature points according to an embodiment of the AR-HUD image distortion correction method of the present invention; P1, P2 and P3 are the 3 feature points adjacent to the pixel point P, and these three points are not collinear.
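A minimal sketch of the weight computation in step S42, assuming the convex-combination constraint α1 + α2 + α3 = 1 and working in the (X, Z) coordinates of the equivalent plane; the function name is illustrative:

```python
import numpy as np

def barycentric_weights(p, p1, p2, p3):
    """Weights (alpha1, alpha2, alpha3) such that
    p = a1*p1 + a2*p2 + a3*p3 and a1 + a2 + a3 = 1.
    All points are (x, z) pairs in the virtual-image equivalent plane."""
    M = np.array([[p1[0], p2[0], p3[0]],
                  [p1[1], p2[1], p3[1]],
                  [1.0,   1.0,   1.0]])
    return np.linalg.solve(M, np.array([p[0], p[1], 1.0]))
```

The weights all lie in [0, 1] exactly when P falls inside the triangle P1P2P3; collinear points would make the matrix singular, which is why the 3 adjacent feature points must not be collinear.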
Step S43, the coordinates of the point P in the input image are obtained from the coordinates of its 3 adjacent feature points, the coordinates of the pixel point P in the virtual projection screen, and the weights between the pixel point P and the 3 adjacent feature points, as shown in formula (3):
(u, v) = α1·(u1, v1) + α2·(u2, v2) + α3·(u3, v3)    formula (3)
where (u, v) are the coordinates of the pixel point P in the input image, and (u1, v1), (u2, v2), (u3, v3) are the coordinates of the feature points P1, P2, P3 in the input image.
The set of these relations between the coordinates of all pixel points in the virtual projection screen and their coordinates in the input image is the inverse mapping from the virtual image to the input image.
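Putting steps S41-S43 together, one illustrative way to build the per-eye-position inverse mapping table is a Delaunay triangulation over the feature points, which supplies the 3 adjacent feature points for every pixel of the virtual projection screen; barycentric_weights is the sketch above, and the array layouts are assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_inverse_map(feat_xz, feat_uv, grid_xz):
    """feat_xz: Nx2 feature coords (X, Z) on the virtual projection screen;
    feat_uv: Nx2 matching coords in the input image;
    grid_xz: Mx2 pixel coords of the virtual projection screen.
    Returns Mx2 input-image coordinates (NaN outside the feature hull)."""
    tri = Delaunay(feat_xz)
    simplex = tri.find_simplex(grid_xz)        # containing triangle per pixel
    out = np.full(grid_xz.shape, np.nan)
    for i, s in enumerate(simplex):
        if s < 0:                              # pixel outside the feature hull
            continue
        v = tri.simplices[s]                   # indices of the 3 feature points
        w = barycentric_weights(grid_xz[i], *feat_xz[v])
        out[i] = w @ feat_uv[v]                # formula (3)
    return out
```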
Step S50, based on the inverse mapping from the virtual image at each eye position to the input image, the multi-eye-position mapping table from the virtual images at the K eye positions to the input image is obtained.
The three-point convex-combination linear interpolation for an arbitrary eye position is shown in formula (4):
E(x, y, z) = β1·E1(x1, y1, z1) + β2·E2(x2, y2, z2) + β3·E3(x3, y3, z3), with β1 + β2 + β3 = 1    formula (4)
where E(x, y, z) denotes the spatial coordinates of an arbitrary point E in the eye-position plane formed by the K set eye positions, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are the spatial coordinates of the eye positions E1, E2, E3 adjacent to E, and β1, β2, β3 are the weights of E1, E2, E3, solved analogously to formula (2).
The correspondence between the coordinates of any point Q in the virtual image and in the input image at eye position E is shown in formula (5):
(u, v) = β1·(u1, v1) + β2·(u2, v2) + β3·(u3, v3)    formula (5)
where (u, v) are the coordinates of Q in the input image at eye position E, and (u1, v1), (u2, v2), (u3, v3) are the coordinates of the point Q at eye positions E1, E2, E3 respectively.
Fig. 5 is a schematic view of the set eye positions in the EyeBox according to an embodiment of the method for correcting AR-HUD image distortion under a dynamic eye position; each set eye position is chosen so that a complete virtual image can be observed from it in a normal sitting posture.
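An illustrative sketch of the eye-position interpolation of formulas (4) and (5): the K set eye positions are triangulated once, the triangle containing the current eye position supplies E1, E2, E3, and the resulting weights blend the stored per-eye-position maps. Working in the (X, Z) coordinates of the EyeBox plane is an assumption:

```python
import numpy as np
from scipy.spatial import Delaunay

def interpolate_map(eye_xz, maps, e_xz):
    """eye_xz: Kx2 set eye positions in the EyeBox plane; maps: list of K
    per-eye-position inverse maps of identical shape, built as above;
    e_xz: the current eye position. Returns the blended inverse map."""
    tri = Delaunay(eye_xz)
    s = int(tri.find_simplex(np.atleast_2d(e_xz))[0])
    if s < 0:
        raise ValueError("eye position outside the calibrated EyeBox")
    v = tri.simplices[s]                          # adjacent eye positions E1..E3
    w = barycentric_weights(e_xz, *eye_xz[v])     # formula (4)
    return w[0]*maps[v[0]] + w[1]*maps[v[1]] + w[2]*maps[v[2]]   # formula (5)
```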
Step S60, acquiring the driver's real-time eye position through a pupil-tracking algorithm, and performing AR-HUD visual image distortion correction based on the multi-eye-position mapping table from the virtual images at the K eye positions to the input image.
The driver's real-time eye position is detected with the pupil-tracking algorithm; using the multi-eye-position mapping table together with the above formulas, the mapping between the two-dimensional coordinates in the virtual projection screen at the driver's eye position and the pixel coordinates of the original input image is established. Treating the visually designed image as the content of the actual screen, the designed image is adjusted according to the pixel mapping between the actual screen and the virtual projection screen, thereby achieving distortion correction of the HUD-projected virtual image under a dynamic eye position.
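A runtime sketch of step S60 under the assumptions above. The blended table gives, for every virtual-screen pixel, the input-image pixel it originates from, so one simple way to form the pre-distorted HUD frame is to scatter the designed image through that table; a production system would interpolate and fill holes, and sampling the virtual screen at the HUD's 864 × 480 resolution is an assumption of this sketch:

```python
import numpy as np

HUD_W, HUD_H = 864, 480    # HUD physical resolution from the embodiment

def predistort(designed, inv_map):
    """designed: HxWx3 frame laid out on the virtual projection screen
    (sampled here at the HUD resolution); inv_map: HxWx2 table of
    input-image (u, v) per virtual-screen pixel (the Mx2 table above,
    reshaped). Returns the pre-distorted frame for the HUD input plane."""
    out = np.zeros((HUD_H, HUD_W) + designed.shape[2:], designed.dtype)
    uv = np.rint(np.nan_to_num(inv_map, nan=-1.0)).astype(int)
    ok = ((uv[..., 0] >= 0) & (uv[..., 0] < HUD_W) &
          (uv[..., 1] >= 0) & (uv[..., 1] < HUD_H))
    qy, qx = np.nonzero(ok)
    out[uv[qy, qx, 1], uv[qy, qx, 0]] = designed[qy, qx]   # scatter through f(q)
    return out
```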
According to the method, the mappings between the pixels of the HUD's actual screen and the virtual projection screen at multiple eye positions within the EyeBox are calibrated to form a multi-eye-position mapping table, and the pixel mapping at any eye position within the EyeBox can then be obtained by linear interpolation. This solves the problem that drivers whose eye positions differ observe differently distorted HUD virtual images, and improves the HUD imaging effect.
Fig. 6 compares the distortion correction effect at a fixed eye position with that of the method of the present invention: the upper-left image shows correction at the standard eye position under fixed-eye-position correction, the lower-left image correction at a non-standard eye position under fixed-eye-position correction, the upper-right image correction at the standard eye position with the method of the present invention, and the lower-right image correction at a non-standard eye position with the method of the present invention. It can be seen that the method yields a better de-distortion effect and is particularly superior for image distortion correction at non-standard eye positions.
The second embodiment of the invention provides an AR-HUD image distortion correction system under a dynamic eye position, comprising an eye position calibration module, a dot-matrix map and virtual image acquisition module, a coordinate correspondence module, a single-eye-position linear interpolation module, a multi-eye-position linear interpolation module, a distortion correction module and an output module;
the eye position calibration module is configured to measure the spatial coordinates of each of the K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis;
the dot-matrix map and virtual image acquisition module is configured to take the normalized dot-matrix map as the input image and capture its virtual image at each of the K eye positions, obtaining K virtual images;
the coordinate correspondence module is configured to obtain, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets;
the single-eye-position linear interpolation module is configured to obtain, for each of the K coordinate sets, the inverse mapping from the virtual image at the corresponding eye position to the input image by linear interpolation;
the multi-eye-position linear interpolation module is configured to build, from the per-eye-position inverse mappings, a multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
the distortion correction module is configured to acquire the driver's real-time eye position through a pupil-tracking algorithm and perform AR-HUD visual image distortion correction based on the multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
and the output module is configured to output the distortion-corrected AR-HUD visual image.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that the AR-HUD image distortion correction system under a dynamic eye position provided in the foregoing embodiment is illustrated only in terms of the above division of functional modules. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the modules or steps in the embodiments of the present invention may be recombined or further split (for example, the modules above may be combined into one module, or split into multiple sub-modules) to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments are only for distinguishing them and are not to be construed as unduly limiting the present invention.
A processing system according to a third embodiment of the present invention comprises a processor and a storage device; the processor is adapted to execute programs, and the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the above method for correcting AR-HUD image distortion under a dynamic eye position.
A storage device according to a fourth embodiment of the present invention stores a plurality of programs adapted to be loaded and executed by a processor to implement the above method for correcting AR-HUD image distortion under a dynamic eye position.
Finally, the above embodiments are only intended to illustrate rather than limit the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from the spirit and scope of those solutions, and all such modifications should be covered by the claims of the present invention.

Claims (10)

1. An AR-HUD image distortion correction method under a dynamic eye position, characterized in that the method comprises the following steps:
S1: measuring the spatial coordinates of each of K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis;
S2: taking a normalized dot-matrix map as the input image and capturing its virtual image at each of the K eye positions, obtaining K virtual images;
S3: obtaining, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets;
S4: for each of the K coordinate sets, obtaining by linear interpolation the inverse mapping from the virtual image at the corresponding eye position to the input image;
S5: building, from the per-eye-position inverse mappings, a multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
S6: acquiring the driver's real-time eye position through a pupil-tracking algorithm, and performing AR-HUD visual image distortion correction based on the multi-eye-position mapping table from the virtual images at the K eye positions to the input image.
2. The method for correcting AR-HUD image distortion under a dynamic eye position according to claim 1, characterized in that: in step S1, for each of the K set eye positions, a binocular camera is placed at the current eye position and calibrated with a calibration board, so as to obtain the spatial coordinates of the current eye position.
3. The method for correcting AR-HUD image distortion under a dynamic eye position according to claim 1, characterized in that step S3 specifically comprises the following steps:
S31: capturing, with the calibrated binocular camera, a picture of the virtual image of the input image at each of the K eye positions;
S32: extracting the feature points from the virtual-image picture of the input image, and obtaining the spatial coordinates of each feature point by binocular-camera space-point measurement;
S33: setting the Y value of the virtual-image equivalent plane based on the mean Y of the feature points' spatial coordinates, and obtaining the corresponding coordinates of each feature point of the input image in the virtual-image equivalent plane at each eye position.
4. The method for correcting AR-HUD image distortion under a dynamic eye position according to claim 1, characterized in that step S4 specifically comprises the following steps:
S41: taking the largest inscribed rectangle of the feature-point distribution area in the virtual-image equivalent plane as the virtual projection screen;
S42: for each pixel point P of the virtual projection screen other than the feature points, obtaining its 3 adjacent feature points, and obtaining the weights between the coordinates of the pixel point P in the virtual projection screen and the 3 adjacent feature points by linear interpolation based on the convex combination of the 3 adjacent feature points;
S43: obtaining the coordinates of the point P in the input image based on the coordinates of its 3 adjacent feature points in the virtual image and in the input image, the coordinates of the pixel point P in the virtual projection screen, and the weights between the pixel point P and the 3 adjacent feature points, thereby obtaining the inverse mapping from the virtual image at the current eye position to the input image.
5. The method for correcting AR-HUD image distortion under a dynamic eye position according to claim 4, characterized in that step S42 specifically comprises the following steps:
first, the coordinates of a pixel point P in the virtual projection screen are expressed as a convex combination:
P(x, y, z) = α1·P1(x1, y1, z1) + α2·P2(x2, y2, z2) + α3·P3(x3, y3, z3), with α1 + α2 + α3 = 1
where P1, P2, P3 are the 3 feature points surrounding and nearest to the pixel point P, (x, y, z) are the coordinates of the pixel point P in the virtual image, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are the spatial coordinates of the feature points P1, P2, P3, and α1, α2, α3 are the corresponding weights;
rearranging, the weights between the coordinates of the pixel point P in the virtual projection screen and its 3 adjacent feature points are solved from the linear system (the equivalent plane has a fixed Y, so the X and Z components suffice):
(α1, α2, α3)ᵀ = [x1 x2 x3; z1 z2 z3; 1 1 1]⁻¹ · (x, z, 1)ᵀ
6. The method for correcting AR-HUD image distortion under a dynamic eye position according to claim 5, characterized in that: in step S43, the coordinates of the point P in the input image are:
(u, v) = α1·(u1, v1) + α2·(u2, v2) + α3·(u3, v3)
where (u, v) are the coordinates of the pixel point P in the input image, and (u1, v1), (u2, v2), (u3, v3) are the coordinates of the feature points P1, P2, P3 in the input image;
and the set of these relations between the coordinates of all pixel points in the virtual projection screen and their coordinates in the input image is the inverse mapping from the virtual image to the input image.
7. The method for correcting AR-HUD image distortion under a dynamic eye position according to claim 1, characterized in that: in step S5, the three-point convex-combination linear interpolation for an arbitrary eye position is:
E(x, y, z) = β1·E1(x1, y1, z1) + β2·E2(x2, y2, z2) + β3·E3(x3, y3, z3), with β1 + β2 + β3 = 1
where E(x, y, z) denotes the spatial coordinates of an arbitrary point E in the eye-position plane formed by the K set eye positions, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are the spatial coordinates of the eye positions E1, E2, E3 adjacent to E, and β1, β2, β3 are the weights of E1, E2, E3;
the correspondence between the coordinates of any point Q in the virtual image and in the input image at eye position E is:
(u, v) = β1·(u1, v1) + β2·(u2, v2) + β3·(u3, v3)
where (u, v) are the coordinates of Q in the input image at eye position E, and (u1, v1), (u2, v2), (u3, v3) are the coordinates of the point Q at eye positions E1, E2, E3 respectively.
8. An AR-HUD image distortion correction system under a dynamic eye position, characterized in that: the system comprises an eye position calibration module, a dot-matrix map and virtual image acquisition module, a coordinate correspondence module, a single-eye-position linear interpolation module, a multi-eye-position linear interpolation module, a distortion correction module and an output module;
the eye position calibration module is configured to measure the spatial coordinates of each of the K set eye positions to obtain K eye-position spatial coordinates; the eye position is the midpoint between the two eyes; the spatial coordinate system takes the direction the driver faces when sitting upright and looking straight ahead as the Y axis, the direction from the left eye to the right eye as the X axis, and the vertical upward direction as the Z axis;
the dot-matrix map and virtual image acquisition module is configured to take the normalized dot-matrix map as the input image and capture its virtual image at each of the K eye positions, obtaining K virtual images;
the coordinate correspondence module is configured to obtain, from each of the K eye-position spatial coordinates, the spatial coordinates of each feature point of the input image and its corresponding coordinates in the virtual-image equivalent plane at that eye position, obtaining K coordinate sets;
the single-eye-position linear interpolation module is configured to obtain, for each of the K coordinate sets, the inverse mapping from the virtual image at the corresponding eye position to the input image by linear interpolation;
the multi-eye-position linear interpolation module is configured to build, from the per-eye-position inverse mappings, a multi-eye-position mapping table from the virtual images at the K eye positions to the input image;
the distortion correction module is configured to acquire the driver's real-time eye position through a pupil-tracking algorithm and perform AR-HUD visual image distortion correction based on the multi-eye-position mapping table;
and the output module is configured to output the distortion-corrected AR-HUD visual image.
9. A processing system, comprising a processor and a storage device, the processor being adapted to execute programs and the storage device being adapted to store a plurality of programs, characterized in that: the programs are adapted to be loaded and executed by the processor to implement the method for AR-HUD image distortion correction under a dynamic eye position according to any one of claims 1-7.
10. A storage device storing a plurality of programs, characterized in that: the programs are adapted to be loaded and executed by a processor to implement the method for AR-HUD image distortion correction under a dynamic eye position according to any one of claims 1-7.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant