CN114998157B - Image processing method, image processing device, head-up display and storage medium - Google Patents

Image processing method, image processing device, head-up display and storage medium

Info

Publication number
CN114998157B
Authority
CN
China
Prior art keywords
distortion correction
image
target
output
correction data
Legal status
Active
Application number
CN202210838359.0A
Other languages
Chinese (zh)
Other versions
CN114998157A (en)
Inventor
冯学贵 (Feng Xuegui)
张宁波 (Zhang Ningbo)
吕涛 (Lü Tao)
Current Assignee
Jiangsu Zejing Automobile Electronic Co ltd
Original Assignee
Jiangsu Zejing Automobile Electronic Co ltd
Application filed by Jiangsu Zejing Automobile Electronic Co ltd
Priority to CN202210838359.0A
Publication of CN114998157A
Application granted
Publication of CN114998157B

Classifications

    • G06T5/80
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Abstract

The application discloses an image processing method, an image processing device, a head-up display and a storage medium, relates to the field of vehicle technologies, and can correct the different distortions of virtual images projected onto the windshield at different distances. The method may be applied to a head-up display in which optical elements form N groups of optical paths for projecting an image to be displayed onto N regions of a windshield. The method comprises: acquiring N groups of target distortion correction data; acquiring an output image and determining N output partitions of the output image; performing distortion correction processing on the N output partitions based on the N groups of target distortion correction data, determining the N predistortion partitions corresponding to the N output partitions, and merging the N predistortion partitions to obtain a predistorted image of the output image; and determining the predistorted image as the current image to be displayed and projecting it through the N groups of optical paths.

Description

Image processing method, image processing device, head-up display and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to an image processing method and apparatus, a head-up display, and a storage medium.
Background
With the integration of augmented reality (AR) technology and head-up display (HUD) technology, head-up displays are used more and more widely in vehicles. At present, an AR-based head-up display may use the vehicle windshield as its display screen and, through the different optical paths formed by its internal optical elements, display various driving information generated while the vehicle is moving (for example, real-time vehicle speed, information about the vehicle ahead, and the like) as virtual images at different distances.
However, because of factors such as the uneven curvature of the windshield, the virtual images projected onto the windshield at different distances exhibit different distortions, which degrades the user's visual experience. How to correct the different distortions of virtual images projected onto the windshield at different distances has therefore become an urgent technical problem.
Disclosure of Invention
The application provides an image processing method, an image processing device, a head-up display and a storage medium that can correct the different distortions of virtual images projected onto the windshield at different distances, thereby improving the user's visual experience.
To this end, the application adopts the following technical solutions:
In a first aspect, the present application provides an image processing method that may be applied to a head-up display in which optical elements form N groups of optical paths for projecting an image to be displayed onto N areas of a windshield. The image processing method may include: acquiring N groups of target distortion correction data, where one group of target distortion correction data represents the degree of distortion one group of optical paths produces when projecting the image to be displayed, and N is a positive integer greater than 1; acquiring an output image and determining N output partitions of the output image, where one output partition corresponds to one group of optical paths; performing distortion correction processing on the N output partitions based on the N groups of target distortion correction data, determining the N predistortion partitions corresponding to the N output partitions, and merging the N predistortion partitions to obtain a predistorted image of the output image; and determining the predistorted image as the current image to be displayed and projecting it through the N groups of optical paths.
With this technical solution, the N groups of target distortion correction data and the N output partitions of the output image are acquired first. The N output partitions are then corrected based on the N groups of target distortion correction data to obtain N predistortion partitions, which are merged into a predistorted image of the output image. The predistorted image is determined as the current image to be displayed and projected through the N groups of optical paths. Because each group of target distortion correction data represents the degree of distortion one group of optical paths produces when projecting the image to be displayed, and each output partition corresponds to one group of optical paths, correcting each output partition with its own group of target distortion correction data and projecting the merged predistorted image amounts to first converting the output image into an inversely distorted image and then projecting it; the predistortion cancels, or at least weakens, the distortion produced during projection. In other words, the distortion produced on each optical path is corrected with different target distortion correction data, realizing partition-wise correction of the image to be displayed. The different distortions of virtual images projected onto the windshield at different distances can therefore be corrected, improving the user's visual experience.
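As a minimal sketch of this pipeline (an illustration only, not the patent's implementation: the vertical left/right split, the use of square correction matrices applied by left multiplication per the D = C × I relation described later, and all names are assumptions), the flow could look like this in Python:

```python
import numpy as np

def predistort(output_image: np.ndarray,
               correction_matrices: list[np.ndarray]) -> np.ndarray:
    """Split the output image into N output partitions (one per group of
    optical paths), predistort each partition with its target distortion
    correction matrix, and merge the results into the predistorted image."""
    n = len(correction_matrices)
    # One output partition per group of optical paths (a left/right split here).
    partitions = np.array_split(output_image, n, axis=1)
    predistorted = [c @ p for c, p in zip(correction_matrices, partitions)]
    return np.concatenate(predistorted, axis=1)

# With identity correction matrices the image passes through unchanged.
image = np.random.rand(480, 1280)
assert np.allclose(predistort(image, [np.eye(480), np.eye(480)]), image)
```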
Optionally, in a possible implementation, the N groups of target distortion correction data may be N target distortion correction matrices, and "performing distortion correction processing on the N output partitions based on the N groups of target distortion correction data and determining the N predistortion partitions corresponding to the N output partitions" may include:
determining the pixel matrix of a first output partition by taking the distortion correction reference point of the first output partition as the coordinate origin, where the first output partition is any one of the N output partitions;
performing distortion correction processing on the pixel matrix of the first output partition based on a first target distortion correction matrix, and determining the first predistortion partition corresponding to the first output partition, where the first target distortion correction matrix is the one of the N target distortion correction matrices that corresponds to a first optical path, and the first optical path is the one of the N groups of optical paths that corresponds to the first output partition.
Optionally, in another possible embodiment, the distortion correction reference point of the first output partition is located at a geometric center of the first output partition.
Optionally, in another possible embodiment, the acquiring N sets of target distortion correction data may include:
determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position, and acquiring N groups of distortion correction data corresponding to the target eyepoint position from a distortion correction database; the distortion correction database comprises at least one distortion correction data set, one distortion correction data set corresponds to a preset eyepoint position, and one distortion correction data set comprises N groups of distortion correction data;
and determining N groups of target distortion correction data based on the N groups of distortion correction data corresponding to the target eyepoint position.
Optionally, in another possible embodiment, before "determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position, and acquiring N sets of distortion correction data corresponding to the target eyepoint position from the distortion correction database", the method may further include:
acquiring N projected images of a template image, and determining N template partitions of the template image; the N projected images are projected images of the template image acquired from the target eyepoint position while the template image is projected onto the N areas of the windshield through the N groups of optical paths; one template partition corresponds to one group of optical paths;
determining N groups of distortion correction data corresponding to the target eyepoint position based on N pixel matrixes of N template partitions and N pixel matrixes of N projection images;
and determining N groups of distortion correction data corresponding to the target eyepoint position as a first distortion correction data set, and storing the corresponding relation between the target eyepoint position and the first distortion correction data set into a distortion correction database.
Optionally, in another possible embodiment, the "determining N sets of target distortion correction data based on N sets of distortion correction data corresponding to the target eyepoint position" may include:
determining N groups of distortion correction data corresponding to the target eyepoint position as N groups of target distortion correction data;
or determining N sets of target distortion correction data by adopting an interpolation algorithm based on the distance difference between the target eye point position and the current eye point position and N sets of distortion correction data corresponding to the target eye point position.
Optionally, in another possible implementation, the N groups of optical paths include a first optical path and a second optical path; the optical elements in the head-up display include a first mirror, a second mirror, and a third mirror, the first mirror and the third mirror forming a first optical path, and the first mirror, the second mirror, and the third mirror forming a second optical path.
In a second aspect, the present application provides an image processing apparatus for a head-up display in which optical elements form N sets of optical paths for projecting an image to be displayed on N areas of a windshield, the image processing apparatus may include: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring N groups of target distortion correction data; a set of target distortion correction data is used for representing the distortion degree of a set of optical paths when the image to be displayed is projected; n is a positive integer greater than 1;
the acquisition module is also used for acquiring the output image and determining N output partitions of the output image; one output partition corresponds to one set of optical paths;
the processing module is used for performing distortion correction processing on the N output partitions based on the N groups of target distortion correction data, determining the N predistortion partitions corresponding to the N output partitions, and merging the N predistortion partitions to obtain a predistorted image of the output image;
and the processing module is further used for determining the predistorted image as the current image to be displayed and projecting it through the N groups of optical paths.
Optionally, in a possible implementation, the N groups of target distortion correction data may be N target distortion correction matrices, and the processing module is specifically configured to:
determining the pixel matrix of a first output partition by taking the distortion correction reference point of the first output partition as the coordinate origin, where the first output partition is any one of the N output partitions;
performing distortion correction processing on the pixel matrix of the first output partition based on a first target distortion correction matrix, and determining the first predistortion partition corresponding to the first output partition, where the first target distortion correction matrix is the one of the N target distortion correction matrices that corresponds to a first optical path, and the first optical path is the one of the N groups of optical paths that corresponds to the first output partition.
Optionally, in another possible embodiment, the distortion correction reference point of the first output partition is located at a geometric center of the first output partition.
Optionally, in another possible implementation, the obtaining module is specifically configured to:
determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position, and acquiring N groups of distortion correction data corresponding to the target eyepoint position from a distortion correction database; the distortion correction database comprises at least one distortion correction data set, one distortion correction data set corresponds to a preset eyepoint position, and one distortion correction data set comprises N groups of distortion correction data;
and determining N groups of target distortion correction data based on the N groups of distortion correction data corresponding to the target eyepoint positions.
Optionally, in another possible implementation, the image processing apparatus may further include a storage module;
the acquisition module is further configured to: before a target eyepoint position is determined from at least one preset eyepoint position based on the current eyepoint position and the N groups of distortion correction data corresponding to the target eyepoint position are acquired from the distortion correction database, acquire N projected images of a template image and determine N template partitions of the template image; the N projected images are projected images of the template image acquired from the target eyepoint position while the template image is projected onto the N areas of the windshield through the N groups of optical paths; one template partition corresponds to one group of optical paths;
the processing module is further used for determining N groups of distortion correction data corresponding to the target eyepoint position based on the N pixel matrixes of the N template partitions and the N pixel matrixes of the N projection images;
and the storage module is used for determining the N groups of distortion correction data corresponding to the target eyepoint position as a first distortion correction data set and storing the corresponding relation between the target eyepoint position and the first distortion correction data set into a distortion correction database.
Optionally, in another possible implementation, the obtaining module is further specifically configured to:
determining N groups of distortion correction data corresponding to the target eyepoint position as N groups of target distortion correction data;
or determining N sets of target distortion correction data by adopting an interpolation algorithm based on the distance difference between the target eye point position and the current eye point position and N sets of distortion correction data corresponding to the target eye point position.
Optionally, in another possible implementation, the N groups of optical paths include a first optical path and a second optical path; the optical elements in the head-up display comprise a first reflector, a second reflector and a third reflector, wherein the first reflector and the third reflector form a first optical path, and the first reflector, the second reflector and the third reflector form a second optical path.
In a third aspect, the present application provides a head-up display, comprising a memory, a processor, a bus, and a communication interface; the memory is used for storing computer execution instructions, and the processor is connected with the memory through a bus; when the heads-up display is operating, the processor executes computer-executable instructions stored by the memory to cause the heads-up display to perform the image processing method as provided in the first aspect above.
In a fourth aspect, the present application provides a computer-readable storage medium having instructions stored therein, which when executed by a computer, cause the computer to perform the image processing method as provided in the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the image processing method as provided in the first aspect.
It should be noted that all or part of the computer instructions may be stored on the computer readable storage medium. The computer readable storage medium may be packaged with a processor of the head-up display, or may be packaged separately from the processor of the head-up display, which is not limited in this application.
For the description of the second, third, fourth and fifth aspects in this application, reference may be made to the detailed description of the first aspect; in addition, for the beneficial effects described in the second aspect, the third aspect, the fourth aspect and the fifth aspect, reference may be made to beneficial effect analysis of the first aspect, and details are not repeated here.
In the present application, the names of the above-mentioned devices or function modules do not constitute limitations, and in actual implementation, these devices or function modules may appear by other names. Insofar as the functions of the individual devices or functional modules are similar to those of the present application, they are within the scope of the claims and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic partial structural view of a vehicle according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a head-up display according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an output image according to an embodiment of the present application;
fig. 5 is a schematic diagram of a pre-distorted image according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a head-up display according to an embodiment of the disclosure.
Detailed Description
An image processing method, an apparatus, a device, and a storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
In addition, the data acquisition, storage, use, processing and the like in the technical scheme of the application all conform to relevant regulations of national laws and regulations.
At present, due to the existence of factors such as uneven curvature of windshield glass, virtual images projected on the windshield glass at different distances have different distortions, and the visual experience of a user is influenced. Therefore, it is an urgent technical problem to correct different distortions of virtual images projected on a windshield at different distances.
In view of the problems in the prior art, an embodiment of the present application provides an image processing method in which N groups of target distortion correction data are used to perform distortion correction processing on N output partitions respectively, yielding N predistortion partitions, and the predistorted image obtained by merging the N predistortion partitions is projected as the current image to be displayed. That is, the output image is first converted into an inversely distorted image, and that image is then projected, so that the distortion produced during projection is cancelled or at least weakened. The embodiment of the application can thus correct the different distortions of virtual images projected onto the windshield at different distances, improving the user's visual experience.
The image processing method provided by the embodiment of the present application may be executed by the image processing apparatus provided by the embodiment of the present application, and the image processing apparatus may be implemented by software and/or hardware and integrated in a head-up display that executes the method.
An image processing method provided by the embodiment of the present application is described below with reference to the drawings.
Referring to fig. 1, an image processing method provided by the embodiment of the present application includes S101-S104:
s101, acquiring N groups of target distortion correction data.
One group of target distortion correction data can be used to represent the degree of distortion that one group of optical paths produces when the image to be displayed is projected; N is a positive integer greater than 1.
The image processing method provided by the embodiment of the application can be applied to a head-up display, the head-up display can comprise a plurality of optical elements, the plurality of optical elements can form N groups of optical paths, and the N groups of optical paths can be used for projecting an image to be displayed on N areas of a windshield.
Optionally, in this embodiment of the application, N may be 2, and the N groups of optical paths may include a first optical path and a second optical path; the optical elements in the head-up display comprise a first reflector, a second reflector and a third reflector, wherein the first reflector and the third reflector form the first optical path, and the first reflector, the second reflector and the third reflector form the second optical path.
Illustratively, referring to FIG. 2, a schematic view of a portion of a vehicle is provided. As shown in fig. 2, a head-up display may be provided in the vehicle, and the head-up display may include a picture generation unit (PGU), a first mirror, a second mirror, and a third mirror. To describe the optical path design inside the head-up display more clearly, fig. 3 provides an enlarged schematic view of the head-up display in fig. 2. As shown in fig. 2 and 3, the PGU may convert one part of the image to be displayed (for example, its left half) into a first optical signal, which is projected onto the windshield through the first optical path to form virtual image one. Meanwhile, the PGU may convert another part of the image to be displayed (for example, its right half) into a second optical signal, which is projected onto the windshield through the second optical path to form virtual image two. Specifically, the first mirror has a light-splitting function. In the first optical path, the first optical signal travels from the PGU to the first mirror, is reflected by the first mirror to the third mirror, and is reflected by the third mirror onto the windshield. In the second optical path, the second optical signal travels from the PGU to the first mirror, is transmitted through the first mirror to the second mirror, is reflected by the second mirror back to the first mirror, is transmitted through the first mirror to the third mirror, and is reflected by the third mirror onto the windshield. When the driver's eyepoint is positioned as shown in fig. 2, virtual image one and virtual image two can be observed simultaneously. Virtual image one is a near-field image, generally used to display information such as vehicle speed; virtual image two is a far-field image, generally used to display information such as lane lines and navigation arrows.
It can be seen that virtual image one and virtual image two correspond to different optical paths and different imaging distances, so their distortions differ. For this reason, in this embodiment of the application, two groups of target distortion correction data may be acquired for the first optical path corresponding to virtual image one and the second optical path corresponding to virtual image two, respectively, and the distortion produced by each optical path may be corrected based on its own group of target distortion correction data.
Optionally, in the embodiment of the present application, N sets of target distortion correction data may be obtained as follows: determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position, and acquiring N groups of distortion correction data corresponding to the target eyepoint position from a distortion correction database; then, based on the N sets of distortion correction data corresponding to the target eyepoint position, N sets of target distortion correction data are determined.
The distortion correction database comprises at least one distortion correction data set, one distortion correction data set corresponds to a preset eyepoint position, and one distortion correction data set comprises N groups of distortion correction data. The at least one preset eyepoint position may be a plurality of predetermined eyepoint positions, and the current eyepoint position is an actual eyepoint position of the driver during the driving of the vehicle.
Drivers of different heights have eyepoints at different levels and therefore observe different degrees of image distortion. If the N groups of distortion correction data corresponding to a single preset eyepoint position were always used for distortion correction, the correction effect would suffer. Therefore, in this embodiment of the application, multiple distortion correction data sets corresponding to multiple preset eyepoint positions may be obtained in advance; in actual use, the target eyepoint position corresponding to the current eyepoint position is determined from the preset eyepoint positions according to the driver's current eyepoint position, and distortion correction is performed with the distortion correction data set of that target eyepoint position. The influence of changes in eyepoint position on the distortion correction effect can thus be reduced.
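For concreteness, the distortion correction database might be organized as a simple mapping from preset eyepoint positions to their distortion correction data sets (a sketch under assumed conventions: N = 2 optical paths, eyepoint positions as (horizontal, vertical) offsets in millimetres, and identity matrices standing in for real calibration data):

```python
import numpy as np

# One entry per preset eyepoint position; each entry is one distortion
# correction data set holding N groups of distortion correction data
# (here, one correction matrix per optical path).
distortion_db: dict[tuple[float, float], list[np.ndarray]] = {
    (0.0, 0.0):   [np.eye(480), np.eye(480)],   # central eye box partition
    (0.0, 50.0):  [np.eye(480), np.eye(480)],   # upper eye box partition
    (0.0, -50.0): [np.eye(480), np.eye(480)],   # lower eye box partition
}
```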
Further optionally, in this embodiment of the application, the eye box region may be divided into a plurality of eye box zones, and a preset eyepoint position is determined in each eye box zone, for example, a center position of each eye box zone may be determined as the preset eyepoint position of the eye box zone.
The eye box region is a display parameter of the head-up display: it specifies the effective area within which the eyepoint may move. The driver can see the virtual images on the windshield only while the eyepoint position stays inside this effective area.
Illustratively, the eye box region may be divided into three eye box partitions, namely an upper, a lower and a central eye box partition, with three preset eyepoint positions determined correspondingly. Alternatively, the eye box region may be divided into six eye box partitions in three rows and two columns, with six preset eyepoint positions determined correspondingly. It should be noted that when the eye box region is divided only in the longitudinal direction (for example, into the upper, lower and central eye box partitions), the current eyepoint position can be put in one-to-one correspondence with the rotation angle of the third mirror of the head-up display. When the eye box region is also divided in the horizontal direction, for example into six eye box partitions in three rows and two columns, a tracking module is required to track the current eyepoint position.
In a possible implementation manner, a collecting device may be disposed at a junction position of the windshield and the roof, and is configured to track a current eyepoint position of the driver, so that an eyebox partition to which the current eyepoint position belongs in the eyebox area may be determined according to the tracked current eyepoint position, and a preset eyepoint position corresponding to the eyebox partition is determined as the target eyepoint position.
In another possible implementation, the head-up display may allow the driver to adjust the angle parameters of the optical elements. By adjusting these parameters, the driver can bring the center positions of the virtual images projected onto the windshield to approximately the same level as the current eyepoint position. Therefore, in this embodiment of the application, the current eyepoint position can be derived from the parameters of the optical elements, and the target eyepoint position can then be determined from the distance between the current eyepoint position and each preset eyepoint position. For example, in the head-up display shown in fig. 2 or 3, the driver may be allowed to rotate the third mirror to adjust the heights at which virtual image one and virtual image two are projected on the windshield.
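One plausible selection rule, sketched below, picks as the target eyepoint position the preset nearest to the current eyepoint (the Euclidean-distance criterion and the coordinate convention are assumptions; the patent only requires that a preset be chosen according to the current position):

```python
import numpy as np

def select_target_eyepoint(current: tuple[float, float],
                           presets: list[tuple[float, float]]) -> tuple[float, float]:
    """Return the preset eyepoint position nearest to the current eyepoint."""
    distances = [np.hypot(p[0] - current[0], p[1] - current[1]) for p in presets]
    return presets[int(np.argmin(distances))]

# A tracked eyepoint slightly above the centre selects the upper preset.
presets = [(0.0, 0.0), (0.0, 50.0), (0.0, -50.0)]
assert select_target_eyepoint((3.0, 41.0), presets) == (0.0, 50.0)
```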
Optionally, determining N sets of target distortion correction data based on the N sets of distortion correction data corresponding to the target eyepoint position may include: determining N groups of distortion correction data corresponding to the target eyepoint position as N groups of target distortion correction data; or determining N sets of target distortion correction data by adopting an interpolation algorithm based on the distance difference between the target eye point position and the current eye point position and N sets of distortion correction data corresponding to the target eye point position.
In the embodiment of the present application, N sets of distortion correction data corresponding to the target eyepoint position may be directly determined as N sets of target distortion correction data. In order to further improve the accuracy of the distortion correction and improve the distortion correction effect, the embodiment of the application can also determine N groups of target distortion correction data by adopting an interpolation algorithm.
For example, for each distortion correction data in the N sets of target distortion correction data, an interpolation algorithm may be used to calculate corresponding data interpolated from the distortion correction data according to a distance difference between a target eye point position and a current eye point position, so as to finally obtain the N sets of target distortion correction data.
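The patent does not fix the interpolation scheme; one plausible reading, sketched here, blends the correction matrices of two preset eyepoint heights that bracket the current height, weighted by relative distance (the element-wise linear blend is an assumption):

```python
import numpy as np

def interpolate_correction(current_y: float,
                           y_a: float, mats_a: list[np.ndarray],
                           y_b: float, mats_b: list[np.ndarray]) -> list[np.ndarray]:
    """Linearly interpolate N correction matrices between two preset
    eyepoint heights y_a and y_b that bracket current_y."""
    t = (current_y - y_a) / (y_b - y_a)  # 0 at preset A, 1 at preset B
    return [(1.0 - t) * a + t * b for a, b in zip(mats_a, mats_b)]

# Halfway between the two presets, each contributes equally.
mats = interpolate_correction(25.0, 0.0, [np.zeros((2, 2))], 50.0, [np.eye(2)])
assert np.allclose(mats[0], 0.5 * np.eye(2))
```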
Optionally, in the embodiment of the present application, an optical simulation experiment may be performed at different preset eyepoint positions in advance, and a distortion correction data set corresponding to the different preset eyepoint positions is obtained by tracking the directions of the optical signals in the N groups of optical paths, so as to obtain a distortion correction database.
To further improve the accuracy of the data in the distortion correction database, and thereby the distortion correction effect, the image processing method provided in this embodiment may further include, before determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position and acquiring the N groups of distortion correction data corresponding to the target eyepoint position from the distortion correction database: acquiring N projected images of a template image and determining N template partitions of the template image; determining the N groups of distortion correction data corresponding to the target eyepoint position based on the N pixel matrices of the N template partitions and the N pixel matrices of the N projected images; and determining the N groups of distortion correction data corresponding to the target eyepoint position as a first distortion correction data set, and storing the correspondence between the target eyepoint position and the first distortion correction data set in the distortion correction database.
The N projected images are projected images of the template images acquired based on the target eyepoint positions when the template images are projected on N areas of the windshield through N groups of optical paths; one template partition corresponds to one set of optical paths. The template image may be a sample image determined in advance, and for example, a dot matrix image may be used as the template image.
For example, taking the head-up display in fig. 2 or fig. 3, this embodiment of the application may determine two template partitions of the template image, a first template partition and a second template partition, based on the routes of the first optical path and the second optical path. The first and second template partitions are input into the PGU; after the PGU optically processes them, the first template partition is projected onto the windshield through the first optical path to form virtual image one, and the second template partition is projected onto the windshield through the second optical path to form virtual image two. Virtual image one and virtual image two can then be captured by an acquisition device fixed at the target eyepoint position; the two captured images are the two projected images of the template image. A group of distortion correction data corresponding to the first optical path can then be determined from the pixel matrix of the first template partition and the pixel matrix of virtual image one, and a group of distortion correction data corresponding to the second optical path from the pixel matrix of the second template partition and the pixel matrix of virtual image two. For example, if the pixel matrix of the first template partition is a matrix A and the pixel matrix of virtual image one is a matrix B, the transformation relationship A = C × B holds between them; each element of the matrix C is obtained by matrix operations, and the matrix C can serve as the group of distortion correction data corresponding to the first optical path.
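Given the relation A = C × B, the elements of C can be recovered by a matrix operation such as the pseudo-inverse (a sketch; with noisy captures a regularized least-squares solve would likely be preferred, and the matrix shapes here are illustrative):

```python
import numpy as np

def solve_correction_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Solve A = C @ B for the correction matrix C, where A is the pixel
    matrix of a template partition and B is the pixel matrix of its
    captured virtual image."""
    return A @ np.linalg.pinv(B)

# Synthetic check: recover a known C from a simulated capture.
rng = np.random.default_rng(0)
B = rng.random((4, 6))        # captured virtual image (m x n)
C_true = rng.random((4, 4))   # the correction data we want to recover
A = C_true @ B                # template partition consistent with A = C x B
assert np.allclose(solve_correction_matrix(A, B) @ B, A)
```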
Similarly, in the embodiment of the present application, the template image may be projected at each preset eyepoint position in advance, and the corresponding projection image is acquired by the acquisition device, so that a plurality of distortion correction data sets corresponding to a plurality of preset eyepoint positions may be obtained.
S102, acquiring an output image, and determining N output partitions of the output image.
The output image may be an image output by the PGU in real time according to information such as the current vehicle speed and the vehicle's surroundings. This embodiment of the application can determine the N output partitions of the output image based on the routes of the N groups of optical paths; one output partition corresponds to one group of optical paths.
S103, distortion correction processing is carried out on the N output subareas based on the N groups of target distortion correction data, N predistortion subareas corresponding to the N output subareas are determined, and the N predistortion subareas are combined to obtain a predistortion image of the output image.
The embodiment of the application can correct the different distortions of virtual images projected onto the windshield at different distances through simple matrix operations. Optionally, the N groups of target distortion correction data may be N target distortion correction matrices. In this embodiment of the application, the pixel matrix of the first output partition is determined by taking the distortion correction reference point of the first output partition as the coordinate origin; distortion correction processing is then performed on the pixel matrix of the first output partition based on the first target distortion correction matrix, and the first predistortion partition corresponding to the first output partition is determined.
The first output partition is any one of the N output partitions, and the first target distortion correction matrix is the one of the N target distortion correction matrices that corresponds to the first optical path; the first optical path is the one of the N groups of optical paths that corresponds to the first output partition.
For example, if the first output partition contains m rows and n columns of pixel points, its pixel matrix may be represented as the matrix I in expression (1):

$$I = \begin{pmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,n} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m,1} & p_{m,2} & \cdots & p_{m,n} \end{pmatrix} \quad (1)$$

where p_{i,j} denotes the pixel value at row i and column j.
If the first target distortion correction matrix is the matrix C, the pixel matrix I of the first output partition may be corrected according to D = C × I to obtain the first predistortion partition, where the matrix D represents the pixel matrix of the first predistortion partition.
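Taking D = C × I literally, the correction and its cancellation can be sketched as follows (an illustration only: under the calibration relation A = C × B, the optical path behaves like multiplication by the inverse of C, which is simulated here; production systems more commonly implement predistortion as a per-pixel coordinate remap):

```python
import numpy as np

rng = np.random.default_rng(1)
I_part = rng.random((4, 6))                # pixel matrix I of the first output partition
C = np.eye(4) + 0.1 * rng.random((4, 4))   # first target distortion correction matrix

D = C @ I_part  # pixel matrix D of the first predistortion partition

# Projecting D through the optical path (simulated as the inverse of C)
# cancels the predistortion, so the driver observes the original partition.
assert np.allclose(np.linalg.inv(C) @ D, I_part)
```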
Optionally, the distortion correction reference point of the first output partition is located at the geometric center of the first output partition.
Because of factors such as the unequal curvature of the windshield, the degree of distortion may differ at different positions within the same imaging area. To reduce the influence of position within the imaging area on the distortion correction effect as much as possible, in this embodiment of the application the geometric center of each output partition may be determined as that partition's distortion correction reference point.
For example, as shown in fig. 4, the distortion correction reference point a is the geometric center of output partition A and may be determined as output partition A's distortion correction reference point; likewise, the distortion correction reference point b is the geometric center of output partition B and may be determined as output partition B's distortion correction reference point.
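A small sketch of taking the geometric center as the distortion correction reference point, i.e. expressing each partition's pixel coordinates relative to that center (the half-pixel convention for even dimensions is an assumption):

```python
import numpy as np

def centered_coordinates(h: int, w: int) -> tuple[np.ndarray, np.ndarray]:
    """Pixel coordinates of an h x w output partition with the coordinate
    origin at the partition's geometric center (its distortion correction
    reference point)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return ys - (h - 1) / 2.0, xs - (w - 1) / 2.0

ys, xs = centered_coordinates(3, 5)
assert ys[1, 2] == 0.0 and xs[1, 2] == 0.0  # the center pixel is the origin
```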
And S104, determining the pre-distorted image as a current image to be displayed, and projecting the current image to be displayed through N groups of optical paths.
Illustratively, as shown in FIG. 4, a schematic of an output image is provided. Taking the head-up display shown in fig. 2 or fig. 3 as an example, the output image may be divided into an output partition A and an output partition B as shown in fig. 4. The pixel matrix of output partition A may then be determined with the distortion correction reference point a of output partition A as the coordinate origin, and the pixel matrix of output partition B with the distortion correction reference point b of output partition B as the coordinate origin. Next, the pixel matrix of output partition A may be corrected with the target distortion correction matrix corresponding to output partition A to determine its predistortion partition, and the pixel matrix of output partition B with the target distortion correction matrix corresponding to output partition B to determine its predistortion partition. The two predistortion partitions may then be merged to obtain the predistorted image shown in fig. 5. As can be seen in fig. 5, the positions of the points in the predistorted image have changed relative to fig. 4. When the predistorted image is projected through the N groups of optical paths, the distortion introduced by the optical paths cancels the predistortion, so the image observed by the driver is the output image shown in fig. 4.
In summary, in the image processing method provided by this embodiment of the application, N groups of target distortion correction data are acquired first, along with the N output partitions of the output image. The N output partitions are then corrected based on the N groups of target distortion correction data to obtain N predistortion partitions, which are merged into a predistorted image of the output image. The predistorted image is determined as the current image to be displayed and projected through the N groups of optical paths. Because each group of target distortion correction data represents the degree of distortion one group of optical paths produces when projecting the image to be displayed, and each output partition corresponds to one group of optical paths, this amounts to first converting the output image into an inversely distorted image and then projecting it, so the distortion produced during projection is cancelled or at least weakened. The distortion produced on each optical path is corrected with its own target distortion correction data, realizing partition-wise correction of the image to be displayed. The embodiment of the application can therefore correct the different distortions of virtual images projected onto the windshield at different distances, improving the user's visual experience.
Optionally, as shown in fig. 6, an embodiment of the present application further provides an image processing method, where the method may include S601-S606:
s601, determining a target eye point position from at least one preset eye point position based on the current eye point position, and acquiring N sets of distortion correction data corresponding to the target eye point position from a distortion correction database.
S602, determining N groups of target distortion correction data based on the N groups of distortion correction data corresponding to the target eyepoint positions.
S603, acquiring an output image, and determining N output partitions of the output image.
And S604, determining pixel matrixes of the N output partitions by taking the distortion correction reference points of the N output partitions as coordinate origins.
S605, distortion correction processing is carried out on the pixel matrixes of the N output subareas based on the N target distortion correction matrixes, N pre-distortion subareas corresponding to the N output subareas are determined, and the N pre-distortion subareas are combined to obtain a pre-distortion image of the output image.
And S606, determining the pre-distortion image as a current image to be displayed, and projecting the current image to be displayed through N groups of optical paths.
As shown in fig. 7, an embodiment of the present application further provides an image processing apparatus, which may include: an acquisition module 11 and a processing module 12.
The acquiring module 11 executes S101 and S102 in the above method embodiment, and the processing module 12 executes S103 and S104 in the above method embodiment.
An obtaining module 11, configured to obtain N sets of target distortion correction data; a set of target distortion correction data is used for representing the distortion degree of a set of optical paths when the image to be displayed is projected; n is a positive integer greater than 1;
the obtaining module 11 is further configured to obtain an output image and determine N output partitions of the output image; one output partition corresponds to one set of optical paths;
the processing module 12 is configured to perform distortion correction processing on the N output partitions based on the N groups of target distortion correction data, determine N predistortion partitions corresponding to the N output partitions, and perform merging processing on the N predistortion partitions to obtain a predistortion image of the output image;
the processing module 12 is further configured to determine the pre-distorted image as a current image to be displayed, and project the current image to be displayed through the N groups of optical paths.
Optionally, in a possible implementation, the N groups of target distortion correction data may be N target distortion correction matrices, and the processing module 12 is specifically configured to:
determining a pixel matrix of the first output subarea by taking the distortion correction reference point of the first output subarea as a coordinate origin; the first output partition is any one of N output partitions;
carrying out distortion correction processing on the pixel matrix of the first output subarea based on the first target distortion correction matrix, and determining a first predistortion subarea corresponding to the first output subarea; the first target distortion correction matrix is a target distortion correction matrix corresponding to the first optical path in the N target distortion correction matrices; the first optical path is an optical path corresponding to the first output subarea in the N groups of optical paths.
Optionally, in another possible embodiment, the distortion correction reference point of the first output partition is located at a geometric center of the first output partition.
Optionally, in another possible implementation, the obtaining module 11 is specifically configured to:
determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position, and acquiring N groups of distortion correction data corresponding to the target eyepoint position from a distortion correction database; the distortion correction database comprises at least one distortion correction data set, one distortion correction data set corresponds to a preset eyepoint position, and one distortion correction data set comprises N groups of distortion correction data;
and determining N groups of target distortion correction data based on the N groups of distortion correction data corresponding to the target eyepoint position.
Optionally, in another possible implementation, the image processing apparatus may further include a storage module;
the obtaining module 11 is further configured to: the method comprises the steps that on the basis of a current eyepoint position, a target eyepoint position is determined from at least one preset eyepoint position, N projection images of a template image are obtained before N groups of distortion correction data corresponding to the target eyepoint position are obtained from a distortion correction database, and N template partitions of the template image are determined; the N projected images are projected images of the template images acquired based on the target eyepoint positions when the template images are projected on the N areas of the windshield through the N groups of optical paths; one template partition corresponds to one group of light paths;
the processing module 12 is further configured to determine N sets of distortion correction data corresponding to the target eyepoint position based on the N pixel matrices of the N template partitions and the N pixel matrices of the N projection images;
and the storage module is used for determining the N groups of distortion correction data corresponding to the target eyepoint position as a first distortion correction data set, and storing the corresponding relation between the target eyepoint position and the first distortion correction data set into a distortion correction database.
Optionally, in another possible implementation, the obtaining module 11 is further specifically configured to:
determining N groups of distortion correction data corresponding to the target eyepoint position as N groups of target distortion correction data;
or, based on the distance difference between the target eye point position and the current eye point position and the N sets of distortion correction data corresponding to the target eye point position, the N sets of target distortion correction data are determined by adopting an interpolation algorithm.
Optionally, in another possible embodiment, the N groups of optical paths include a first optical path and a second optical path; the optical elements in the head-up display comprise a first reflector, a second reflector and a third reflector, wherein the first reflector and the third reflector form a first optical path, and the first reflector, the second reflector and the third reflector form a second optical path.
Alternatively, the storage module may be further configured to store a program code of the image processing apparatus, and the like.
As shown in fig. 8, an embodiment of the present application further provides a head-up display, which includes a memory 41, a processor 42, a bus 43, and a communication interface 44; the memory 41 is used for storing computer execution instructions, and the processor 42 is connected with the memory 41 through a bus 43; when the heads-up display is operating, the processor 42 executes computer-executable instructions stored by the memory 41 to cause the heads-up display to perform the image processing method as provided in the embodiments described above.
In particular implementations, the processor 42 may include one or more central processing units (CPUs), such as CPU0 and CPU1 shown in FIG. 8. The head-up display may also include multiple processors 42, such as the two processors 42 shown in fig. 8. Each of these processors may be a single-core or a multi-core processor, where a processor here refers to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The memory 41 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disk read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 41 may be self-contained and coupled to the processor 42 via a bus 43. The memory 41 may also be integrated with the processor 42.
In a specific implementation, the memory 41 is used for storing data in the present application and computer-executable instructions corresponding to a software program for executing the present application. The processor 42 may perform various functions of the heads-up display by running or executing software programs stored in the memory 41, as well as invoking data stored in the memory 41.
The communication interface 44 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as a control system, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc. The communication interface 44 may include a receiving unit implementing a receiving function and a transmitting unit implementing a transmitting function.
The bus 43 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 43 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or only one type of bus.
As an example, in conjunction with fig. 7, the processing module in the image processing apparatus implements the same function as the processor in fig. 8, and the acquisition module in the image processing apparatus implements the same function as the receiving unit in fig. 8. When the image processing apparatus includes a storage module, the storage module implements the same function as the memory in fig. 8.
For the explanation of the related content in this embodiment, reference may be made to the above method embodiment, which is not described herein again.
Through the description of the above embodiments, those skilled in the art will clearly understand that the foregoing division into functional modules is merely an example given for convenience and brevity of description. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, device, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform the image processing method provided in the above embodiments.
The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a register, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, any suitable combination of the foregoing, or any other form of computer-readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium; of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). In the embodiments of the present application, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The above descriptions are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution readily conceivable within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, applied to a head-up display in which optical elements form N groups of optical paths for projecting an image to be displayed onto N areas of a windshield having an uneven curvature distribution, the image processing method comprising:
acquiring N sets of target distortion correction data; wherein one set of target distortion correction data is used for characterizing the degree of distortion generated by one group of optical paths when projecting the image to be displayed; N is a positive integer greater than 1; when the image to be displayed is projected onto the windshield through different optical paths, virtual images at different distances are formed, and the virtual images at different distances have different degrees of distortion;
acquiring an output image, and determining N output partitions of the output image; wherein one output partition corresponds to one group of optical paths;
performing distortion correction processing on the N output partitions based on the N sets of target distortion correction data, determining N pre-distortion partitions corresponding to the N output partitions, and merging the N pre-distortion partitions to obtain a pre-distortion image of the output image; and
determining the pre-distortion image as the current image to be displayed, and projecting the current image to be displayed through the N groups of optical paths.
2. The image processing method according to claim 1, wherein the N sets of target distortion correction data are N target distortion correction matrices, and performing distortion correction processing on the N output partitions based on the N sets of target distortion correction data and determining the N pre-distortion partitions corresponding to the N output partitions comprises:
determining a pixel matrix of a first output partition by taking a distortion correction reference point of the first output partition as the coordinate origin; wherein the first output partition is any one of the N output partitions; and
performing distortion correction processing on the pixel matrix of the first output partition based on a first target distortion correction matrix, and determining a first pre-distortion partition corresponding to the first output partition; wherein the first target distortion correction matrix is the target distortion correction matrix corresponding to a first optical path among the N target distortion correction matrices, and the first optical path is the optical path corresponding to the first output partition among the N groups of optical paths.
3. The image processing method according to claim 2, wherein the distortion correction reference point of the first output partition is located at the geometric center of the first output partition.
4. The image processing method according to claim 1, wherein acquiring the N sets of target distortion correction data comprises:
determining a target eyepoint position from at least one preset eyepoint position based on the current eyepoint position, and acquiring N sets of distortion correction data corresponding to the target eyepoint position from a distortion correction database; wherein the distortion correction database comprises at least one distortion correction data set, each distortion correction data set corresponds to one preset eyepoint position, and each distortion correction data set comprises N sets of distortion correction data; and
determining the N sets of target distortion correction data based on the N sets of distortion correction data corresponding to the target eyepoint position.
5. The image processing method according to claim 4, wherein before determining the target eyepoint position from the at least one preset eyepoint position based on the current eyepoint position and acquiring the N sets of distortion correction data corresponding to the target eyepoint position from the distortion correction database, the method further comprises:
acquiring N projection images of a template image, and determining N template partitions of the template image; wherein the N projection images are images of the template image acquired from the target eyepoint position when the template image is projected onto the N areas of the windshield through the N groups of optical paths, and one template partition corresponds to one group of optical paths;
determining the N sets of distortion correction data corresponding to the target eyepoint position based on the N pixel matrices of the N template partitions and the N pixel matrices of the N projection images; and
determining the N sets of distortion correction data corresponding to the target eyepoint position as a first distortion correction data set, and storing the correspondence between the target eyepoint position and the first distortion correction data set into the distortion correction database.
6. The image processing method according to claim 4, wherein determining the N sets of target distortion correction data based on the N sets of distortion correction data corresponding to the target eyepoint position comprises:
determining the N sets of distortion correction data corresponding to the target eyepoint position as the N sets of target distortion correction data;
or determining the N sets of target distortion correction data by applying an interpolation algorithm based on the distance difference between the target eyepoint position and the current eyepoint position and the N sets of distortion correction data corresponding to the target eyepoint position.
7. The image processing method according to any one of claims 1 to 6, wherein the N groups of optical paths include a first optical path and a second optical path; and the optical elements in the head-up display include a first mirror, a second mirror, and a third mirror, the first mirror and the third mirror forming the first optical path, and the first mirror, the second mirror, and the third mirror forming the second optical path.
8. An image processing apparatus, applied to a head-up display in which optical elements form N groups of optical paths for projecting an image to be displayed onto N areas of a windshield having an uneven curvature distribution, the image processing apparatus comprising:
an acquisition module, configured to acquire N sets of target distortion correction data; wherein one set of target distortion correction data is used for characterizing the degree of distortion generated by one group of optical paths when projecting the image to be displayed; N is a positive integer greater than 1; when the image to be displayed is projected onto the windshield through different optical paths, virtual images at different distances are formed, and the virtual images at different distances have different degrees of distortion;
wherein the acquisition module is further configured to acquire an output image and determine N output partitions of the output image, one output partition corresponding to one group of optical paths; and
a processing module, configured to perform distortion correction processing on the N output partitions based on the N sets of target distortion correction data, determine N pre-distortion partitions corresponding to the N output partitions, and merge the N pre-distortion partitions to obtain a pre-distortion image of the output image;
wherein the processing module is further configured to determine the pre-distortion image as the current image to be displayed, and to project the current image to be displayed through the N groups of optical paths.
9. A head-up display, comprising a memory, a processor, a bus, and a communication interface; wherein the memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; and
when the head-up display runs, the processor executes the computer-executable instructions stored in the memory, so that the head-up display performs the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform the image processing method according to any one of claims 1 to 7.
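As an illustrative aid to the claims above, the following minimal sketch shows one way the partition-correct-merge flow of claims 1 to 3 could be realized. It is not the patented implementation: it assumes the N sets of target distortion correction data take the form of dense per-pixel remap grids, that the N output partitions are horizontal bands of the output image, and it uses OpenCV's remap for the per-partition distortion correction; all function and variable names are hypothetical.

```python
import cv2
import numpy as np

def predistort(output_image, correction_grids):
    # output_image: (H, W, 3) uint8 frame to be displayed.
    # correction_grids: list of N (h, w, 2) float32 remap grids, one per
    #                   group of optical paths; grid[y, x] holds the source
    #                   (x, y) coordinate to sample within the partition.
    n = len(correction_grids)
    band = output_image.shape[0] // n      # horizontal-band partitioning
    pre_distortion_partitions = []
    for i, grid in enumerate(correction_grids):
        partition = output_image[i * band:(i + 1) * band]
        # Warp the partition with its path's grid; the grid encodes the
        # inverse of the optical distortion, so the projected virtual
        # image appears undistorted.
        warped = cv2.remap(partition, grid[..., 0], grid[..., 1],
                           interpolation=cv2.INTER_LINEAR)
        pre_distortion_partitions.append(warped)
    # Merge the N pre-distortion partitions into the pre-distortion image.
    return np.vstack(pre_distortion_partitions)
```

The merged result would then be handed to the projection unit as the current image to be displayed; the banding layout and the per-band grid resolution are simplifications of the partitioning scheme, which the claims leave open.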
CN202210838359.0A 2022-07-18 2022-07-18 Image processing method, image processing device, head-up display and storage medium Active CN114998157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210838359.0A CN114998157B (en) 2022-07-18 2022-07-18 Image processing method, image processing device, head-up display and storage medium

Publications (2)

Publication Number Publication Date
CN114998157A (en) 2022-09-02
CN114998157B (en) 2022-11-15

Family

ID=83021314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210838359.0A Active CN114998157B (en) 2022-07-18 2022-07-18 Image processing method, image processing device, head-up display and storage medium

Country Status (1)

Country Link
CN (1) CN114998157B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115665390A (en) * 2022-10-20 2023-01-31 中国第一汽车股份有限公司 Vehicle with front windshield projection, projection adjusting method and device, vehicle machine and medium
CN115578283B (en) * 2022-10-26 2023-06-20 北京灵犀微光科技有限公司 Distortion correction method and device for HUD imaging, terminal equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105954960A (en) * 2016-04-29 2016-09-21 广东美的制冷设备有限公司 Spherical surface projection display method, spherical surface projection display system and household electrical appliance
CN108600716A (en) * 2018-05-17 2018-09-28 京东方科技集团股份有限公司 Projection device and system, projecting method
CN109407316A (en) * 2018-11-13 2019-03-01 苏州车萝卜汽车电子科技有限公司 Augmented reality head-up-display system, automobile
CN111476104B (en) * 2020-03-17 2022-07-01 重庆邮电大学 AR-HUD image distortion correction method, device and system under dynamic eye position
CN114077053A (en) * 2020-08-21 2022-02-22 未来(北京)黑科技有限公司 Double-layer imaging head-up display device, head-up display system and traffic equipment

Also Published As

Publication number Publication date
CN114998157A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN114998157B (en) Image processing method, image processing device, head-up display and storage medium
CN109688392B (en) AR-HUD optical projection system, mapping relation calibration method and distortion correction method
CN107554425B (en) A kind of vehicle-mounted head-up display AR-HUD of augmented reality
CN111476104B (en) AR-HUD image distortion correction method, device and system under dynamic eye position
US20080088527A1 (en) Heads Up Display System
CN111540004A (en) Single-camera polar line correction method and device
JP2020042789A (en) Simulation data volume extension method, device and terminal
US20230206500A1 (en) Method and apparatus for calibrating extrinsic parameter of a camera
CN114820396B (en) Image processing method, device, equipment and storage medium
CN115525152A (en) Image processing method, system, device, electronic equipment and storage medium
KR20130057327A (en) Preprocessing apparatus in stereo matching system
JP2019100924A (en) Vehicle trajectory correction device
CN114993337B (en) Navigation animation display method and device, ARHUD and storage medium
KR102071720B1 (en) Method for matching radar target list and target of vision image
KR20130057328A (en) Preprocessing apparatus in stereo matching system
CN116415652A (en) Data generation method and device, readable storage medium and terminal equipment
CN116152347A (en) Vehicle-mounted camera mounting attitude angle calibration method and system
CN112666713B (en) Method for updating calibration data of head-up display
CN113788017A (en) Lane keeping control method, system, electronic device and storage medium
EP4343408A1 (en) Augmented reality-head up display imaging methods and apparatuses, devices, and storage media
CN115578283B (en) Distortion correction method and device for HUD imaging, terminal equipment and storage medium
CN113658262A (en) Camera external parameter calibration method, device, system and storage medium
CN115690194B (en) Vehicle-mounted XR equipment positioning method, device, equipment and storage medium
CN115937007B (en) Wind shear identification method and device, electronic equipment and medium
CN117162777B (en) Content presentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant