CN115578283A - Distortion correction method and device for HUD imaging, terminal equipment and storage medium

Info

Publication number: CN115578283A (application CN202211317063.0A)
Other versions: CN115578283B (granted)
Authority: CN (China)
Prior art keywords: image, input image, corrected, point coordinate, characteristic point
Inventors: 张亚斌, 赵鑫, 郑昱
Assignee: Journey Technology Ltd
Legal status: Granted; Active

Classifications

    • G06T5/80
    • G06T3/00 Geometric image transformation in the plane of the image
        • G06T3/20 Linear translation of a whole image or part thereof, e.g. panning
        • G06T3/40 Scaling the whole image or part thereof
            • G06T3/4084 Transform-based scaling, e.g. FFT domain scaling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
        • G06T2207/30 Subject of image; Context of image processing
            • G06T2207/30248 Vehicle exterior or interior
                • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • Y02T10/00 Road transport of goods or passengers (climate change mitigation technologies related to transportation)
        • Y02T10/10 Internal combustion engine [ICE] based vehicles
            • Y02T10/40 Engine management systems

Abstract

The application relates to the technical field of head-up display and provides a distortion correction method and device for HUD imaging, a terminal device and a storage medium. The distortion correction method comprises the following steps: acquiring, during driving of a vehicle, an image to be corrected for distortion and its corresponding input image, wherein the image to be corrected for distortion is the image obtained by projecting the corresponding input image through an optical element; partitioning a target area in the input image into blocks, and determining first feature point coordinates in the input image and second feature point coordinates in the image to be corrected for distortion, wherein each first feature point coordinate corresponds to a second feature point coordinate; and updating the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion. This scheme corrects the distortion of the HUD's projected image, so that the HUD presents a normal, undistorted image on the windshield.

Description

Distortion correction method and device for HUD imaging, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of head-up display, and particularly relates to a distortion correction method and device for HUD imaging, terminal equipment and a storage medium.
Background
A Head-Up Display (HUD) integrates vehicle-wide information and feeds it back to the driver in graphical form, effectively preventing the distraction caused by the driver looking down at a screen. However, because the windshield of a vehicle has a certain curvature, the reflected light entering the human eye is deformed, which easily distorts the HUD's projected image and degrades the visual effect.
Therefore, there is a strong need for a distortion correction method for HUD imaging that corrects the distortion of the HUD's projected image, so that the HUD presents a normal, undistorted image on the windshield.
Disclosure of Invention
The embodiments of the present application provide a distortion correction method and device for HUD imaging, a terminal device and a storage medium, which can correct the distortion of the HUD's projected image so that the HUD presents a normal, undistorted image on the windshield.
A first aspect of the embodiments of the present application provides a distortion correction method for HUD imaging, where the distortion correction method includes:
acquiring an image to be corrected for distortion and a corresponding input image during driving of a vehicle, wherein the image to be corrected for distortion is an image obtained by projecting the corresponding input image through an optical element;
partitioning a target area in the input image into blocks, and determining first feature point coordinates in the input image and second feature point coordinates in the image to be corrected for distortion, wherein each first feature point coordinate corresponds to a second feature point coordinate;
and updating the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion.
A second aspect of the embodiments of the present application provides a distortion correction device for HUD imaging, the distortion correction device comprising:
an acquisition module, configured to acquire an image to be corrected for distortion and a corresponding input image during driving of a vehicle, wherein the image to be corrected for distortion is an image obtained by projecting the corresponding input image through an optical element;
a feature determination module, configured to partition a target area in the input image into blocks and determine first feature point coordinates in the input image and second feature point coordinates in the image to be corrected for distortion, wherein each first feature point coordinate corresponds to a second feature point coordinate;
and a correction module, configured to update the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion.
A third aspect of an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method for correcting the distortion of HUD imaging according to the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the method for correcting distortion in HUD imaging according to the first aspect.
A fifth aspect of the embodiments of the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method for correcting distortion of HUD imaging according to the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
In the embodiments of the present application, during driving of the vehicle, the image to be corrected for distortion and the corresponding input image are acquired, the first feature point coordinates in the input image and the second feature point coordinates in the image to be corrected for distortion are respectively determined, and the input image is then updated block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion. This scheme corrects the distortion of the HUD's projected image, so that the HUD can present a normal, undistorted image on the windshield.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a distortion correction method for HUD imaging according to the first embodiment of the present application;
fig. 2 is a schematic flowchart of a distortion correction method for HUD imaging according to the second embodiment of the present application;
fig. 3 is a schematic structural diagram of a distortion correction device for HUD imaging according to the third embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used to distinguish between descriptions and are not to be understood as indicating or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless otherwise specifically stated.
In the prior art, a head-up display device on a vehicle uses the principle of optical reflection to project important driving information, such as vehicle speed and navigation, together with road information onto the front windshield in real time while the vehicle is driving, so that the driver can see this information without lowering their head, improving driving safety. However, because the windshield has a certain curvature, and the curvature may differ at different positions, the projected image of the head-up display device can become distorted. In the prior art, this distortion is usually handled by setting fixed distortion parameters before the vehicle leaves the factory, so as to prevent the projected image from being distorted. However, because windshields of different specifications have different curvatures, fixed distortion parameters cannot respond to the distortion that occurs in real time during driving.
To address this problem, the present application provides a distortion correction method for HUD imaging. By acquiring the image to be corrected for distortion and the corresponding input image during driving, first feature point coordinates in the input image and second feature point coordinates in the image to be corrected for distortion can be determined respectively. Because each first feature point coordinate corresponds to a second feature point coordinate, the input image can be updated block by block according to these coordinate pairs, so that the input image is changed into a correspondingly pre-distorted image and the projected image becomes an undistorted image, thereby correcting the image to be corrected for distortion. By performing real-time distortion correction with the image to be corrected and the corresponding input image acquired in real time during driving, this scheme corrects the distortion of the HUD's projected image and enables the HUD to present a normal, undistorted image on the windshield.
In order to explain the technical solution of the present application, the following description is given by way of specific examples.
It should be understood that the sequence numbers of the steps in the embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Referring to fig. 1, a schematic flowchart of a distortion correction method for HUD imaging according to the first embodiment of the present application is shown. As shown in fig. 1, the distortion correction method for HUD imaging may include the following steps:
step 101, in the vehicle driving process, acquiring an image to be corrected for distortion and a corresponding input image.
Note that the distortion correction method of the embodiments of the present application may be executed by the distortion correction device of the embodiments of the present application. The distortion correction device may be configured in any terminal device to execute the method; for example, it may be disposed in a vehicle-mounted terminal or an external terminal of the vehicle, which is not limited in the present application.
The image to be corrected for distortion may be a projected image that has become distorted on the windshield while the vehicle is driving. It should be noted that an image acquisition device may capture the projected image on the windshield in real time; if the distortion rate of a detected projected image exceeds a set value, the projected image is judged to be distorted, and the terminal device acquires this distorted projected image, i.e. the image to be corrected for distortion.
It should be understood that the image acquisition device may be a camera, a binocular camera, or the like.
The input image may be an image obtained by integrating information from a plurality of sensors of the vehicle. It should be noted that the input image is the image input source for HUD imaging; that is, every projected image on the windshield is obtained by projecting a corresponding input image through the optical element, and likewise the image to be corrected for distortion is the projection of its corresponding input image.
It should be understood that, because the image to be corrected for distortion is obtained by projecting the corresponding input image through the optical element, the two images contain the same display content at corresponding positions. For example, if the vehicle speed display is located in the lower right corner of the input image, it is also located in the lower right corner of the image to be corrected for distortion; the only difference is that the pixel sizes of the display content in the two images may differ.
Each frame of projected image captured in real time by the image acquisition device has a corresponding input image. For example, if the captured projected image of the current frame contains the vehicle speed display, the road condition display and the navigation display, the corresponding input image is the image input source of that projected image. The terminal device may obtain the input image corresponding to the current frame from the image input sources in either of two ways: according to the capture time of the current frame, it may select the image whose display time is the same as that capture time; or, counting the image input sources by the frame number of the current frame, it may take the image with the same frame number as the input image corresponding to the projected image of the current frame.
Because only distorted projected images are corrected, a corresponding input image does not need to be acquired for every projected image; only the input image corresponding to the image to be corrected for distortion needs to be acquired. This saves correction time and improves the real-time performance of the distortion correction.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In practical use, an appropriate method of obtaining the image to be corrected for distortion and the corresponding input image can be chosen according to actual needs and the specific application scenario, which is not limited in the embodiments of the present application.
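To make the two matching strategies above concrete, the following is a minimal Python sketch of both. It assumes the terminal device buffers the image input sources as (timestamp, frame_number, image) tuples; these structures and names are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: pairing a captured projection frame with its input-image
# source. The (timestamp, frame_number, image) buffer layout is an assumption.

def match_by_time(input_frames, capture_time):
    """Return the buffered input image whose display time is closest to the
    capture time of the current projection frame."""
    return min(input_frames, key=lambda f: abs(f[0] - capture_time))[2]

def match_by_frame_number(input_frames, frame_number):
    """Return the buffered input image whose frame number equals that of the
    current projection frame, or None if no such frame is buffered."""
    for timestamp, number, image in input_frames:
        if number == frame_number:
            return image
    return None
```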
Step 102, partitioning a target area in the input image into blocks, and determining first feature point coordinates in the input image and second feature point coordinates in the image to be corrected for distortion.
The target area in the input image is a partial display area of the input image. As one possible implementation, the target area may be the area within a 5 × 10 field angle rather than the entire input image, in order to increase the calculation speed of the correction.
The first feature point coordinates are the feature point pixel coordinates of each block in the input image. It should be noted that the first feature points are expressed in the pixel coordinate system of the input image; as one possible implementation, this coordinate system may take the upper left corner of the target area as its origin, with the positive x direction pointing right and the positive y direction pointing down.
It should be understood that the number of first feature point coordinates in each block may be set according to the actual situation, for example 4, which is not limited in the present application.
The second feature point coordinates are the pixel coordinates, in the image to be corrected for distortion, of the feature points corresponding to the first feature points. It should be noted that the second feature points are expressed in the pixel coordinate system of the image to be corrected for distortion. As one possible implementation, the display area in the image to be corrected for distortion that corresponds to the target area of the input image is called the target correction area, and the pixel coordinate system of the image to be corrected for distortion may take the upper left corner of the target correction area as its origin, with the positive x direction pointing right and the positive y direction pointing down.
As one possible implementation, the first and second feature point coordinates are determined as follows: after the target area of the input image is partitioned into a plurality of first block areas, the first feature point coordinates in each first block area are determined from the gray values of the pixels in those areas; a plurality of second block areas in the target correction area of the image to be corrected for distortion, corresponding one to one to the first block areas, are then determined from the first block areas; finally, the second feature point coordinates in each second block area are determined from the gray values of the pixels in the second block areas. That is, in a possible implementation of the embodiments of the present application, step 102 may include:
partitioning the target area in the input image to obtain a plurality of first block areas;
determining a plurality of corresponding second block areas in the image to be corrected for distortion according to the partitioning result of the input image;
determining the first feature point coordinates in each first block area according to the gray values of the pixels in the plurality of first block areas;
and determining the second feature point coordinates in each second block area according to the gray values of the pixels in the plurality of second block areas.
The partitioning result of the input image refers to the partitioning of the target area in the input image.
As an example, according to the gray values of the pixels in the plurality of first block areas, the pixels in each first block area with the largest gray change in any direction (usually the four corner points of the block area) are determined and taken as the first feature points of that block area, giving the first feature point coordinates in each first block area.
Likewise, according to the gray values of the pixels in the plurality of second block areas, the pixels in each second block area with the largest gray change in any direction (usually the four corner points of the block area) are determined and taken as the second feature points of that block area, giving the second feature point coordinates in each second block area.
It should be understood that the first feature points and second feature points derived in this way are in one-to-one correspondence.
As another possible implementation of determining the first and second feature point coordinates, the four corner points of a block area may be taken directly as its four feature points, and their pixel coordinates as the feature point coordinates of that block area.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In practical use, an appropriate number of feature point coordinates should be obtained according to actual needs and the specific application scenario, which is not limited in the embodiments of the present application.
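As a rough illustration of the partitioning and gray-value-based feature point selection described above, the following Python sketch uses OpenCV's corner detector as one possible notion of "largest gray change in any direction". The block grid, function names and parameter values are assumptions for illustration; the patent does not prescribe a specific detector.

```python
# Minimal sketch, assuming a grayscale uint8 target area and a uniform
# rows x cols block grid; names and parameters are illustrative only.
import cv2
import numpy as np

def split_into_blocks(gray, rows=3, cols=3):
    """Partition a grayscale target area into rows x cols block areas,
    returned as (x0, y0, x1, y1) rectangles."""
    h, w = gray.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return [(xs[j], ys[i], xs[j + 1], ys[i + 1])
            for i in range(rows) for j in range(cols)]

def block_feature_points(gray, block, max_corners=4):
    """Pick the pixels with the strongest gray-value change in a block
    (in practice these tend to be the block's corner points)."""
    x0, y0, x1, y1 = block
    roi = gray[y0:y1, x0:x1]
    corners = cv2.goodFeaturesToTrack(roi, max_corners, 0.01, 10)
    if corners is None:
        return np.empty((0, 2), np.float32)
    # Shift ROI-local coordinates back into image coordinates.
    return corners.reshape(-1, 2) + np.float32([x0, y0])
```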
Step 103, updating the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion.
Updating the input image block by block means updating the image displayed in each block area according to the first feature point coordinates and corresponding second feature point coordinates of that block area; updating every block area thus updates the whole input image. Note that after the block-by-block update is completed, the HUD projects the updated input image onto the windshield. Because the input image has changed, the image to be corrected for distortion also changes; in other words, the distorted image of the HUD is corrected by changing the image input source.
As one possible implementation, the input image and the projected image have a fixed transformation relationship, so the image to be corrected for distortion also has a fixed transformation relationship with the input image. Since the input image is undistorted, if it is changed into a pre-distorted image according to this transformation relationship, the projected image becomes an undistorted image identical to the original input image. The input image can therefore be updated to correct the image to be corrected for distortion by determining the transformation relationship between the two. That is, in a possible implementation of the embodiments of the present application, step 103 may include:
for each first block area, determining the projective transformation matrix from the input image to the image to be corrected for distortion of that block area, according to the first feature point coordinates in the first block area and the second feature point coordinates in the corresponding second block area;
and updating the input image block by block according to the projective transformation matrix corresponding to each first block area.
In this embodiment, the input image may be updated block by block using the projective transformation matrix corresponding to each first block area. For example, if the target area is partitioned into 9 first block areas, 9 projective transformation matrices are obtained by the above steps, and the 9 first block areas of the input image are updated with these 9 matrices. Each projective transformation matrix is calculated from the first feature point coordinates in the first block area and the second feature point coordinates in the corresponding second block area, according to the following formula:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = H_{3\times 3} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (x, y) is a first feature point coordinate, (x', y') is the corresponding second feature point coordinate, and $H_{3\times 3}$ is the projective transformation matrix.
It should be understood that the first feature point coordinates are pixel coordinates in the input image's pixel coordinate system, while the second feature point coordinates are pixel coordinates in the pixel coordinate system of the image to be corrected for distortion. Because the two sets of points are not in the same coordinate system, the matrix is difficult to calculate directly; the first and second feature point coordinates must therefore be brought into the same coordinate system before the projective transformation matrix is calculated.
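For illustration, the per-block projective transformation matrix can be estimated as in the following Python sketch, assuming four matched feature point pairs per block and that the second feature point coordinates have already been brought into the input image's coordinate system as described above. The function name is an assumption.

```python
# Minimal sketch: estimating the 3x3 projective transformation H of one block
# from four matched point pairs. Assumes both point sets share one coordinate
# system (i.e. the second feature points are already normalized).
import cv2
import numpy as np

def block_projective_matrix(first_pts, second_pts):
    """first_pts: (4, 2) first feature point coordinates (x, y);
    second_pts: (4, 2) corresponding second feature point coordinates (x', y')."""
    src = np.float32(first_pts)
    dst = np.float32(second_pts)
    # Four point pairs fix the eight degrees of freedom of H exactly.
    return cv2.getPerspectiveTransform(src, dst)
    # With more than four pairs, a least-squares fit could be used instead:
    # H, _ = cv2.findHomography(src, dst)
```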
In a possible implementation, the input image contains not only the target area but also a non-target area. The image in the target area may be updated with the method above. Because the non-target area lies outside the viewing angle, it may keep the original input image, or be updated with the same method as the target area, or its projective transformation matrices may be determined by an edge-area expansion method.
In a possible implementation where the projective transformation matrices of the non-target area are determined by the edge-area expansion method, the non-target area may be partitioned before the target area in the input image is partitioned, to obtain a plurality of third block areas.
Correspondingly, updating the input image block by block may include:
acquiring the projective transformation matrices corresponding to the plurality of third block areas;
and updating the input image block by block according to the projective transformation matrices corresponding to the third block areas and the projective transformation matrices corresponding to the first block areas.
In the embodiments of the present application, because the third block areas lie outside the viewing angle, they do not need to be corrected with high accuracy. On the premise that a basic correction effect is satisfied, the calculation time for this part can be saved by directly using the projective transformation matrix of an adjacent area as the matrix of the corresponding third block area, improving the real-time performance of the correction.
In a possible implementation, acquiring the projective transformation matrices corresponding to the plurality of third block areas means using the projective transformation matrices of adjacent areas as those of the third block areas. Specifically, there are two cases:
if a third block area is adjacent to any first block area, the projective transformation matrix corresponding to that first block area is determined as the matrix of the adjacent third block area;
and if a third block area is adjacent to another third block area and no adjacent first block area is detected, the projective transformation matrix corresponding to that third block area is determined as the matrix of the adjacent third block area.
In the embodiments of the present application, after the projective transformation matrices corresponding to the third block areas are acquired, the third block areas and the first block areas together make up the input image, so the input image can be updated block by block according to the matrices corresponding to the third block areas and those corresponding to the first block areas.
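One way to realize this neighbor-borrowing rule is sketched below in Python: first block areas seed the matrices, third block areas adjacent to them inherit those matrices, and remaining third block areas inherit from already-solved third block neighbors on later passes. The grid representation and names are assumptions.

```python
# Hedged sketch of the edge-area expansion: propagate per-block projective
# transformation matrices from first block areas outward to third block areas.
def propagate_matrices(grid, matrices, first_ids):
    """grid: 2D list of block ids; matrices: dict id -> 3x3 matrix, initially
    filled for first block areas only; first_ids: set of first block ids."""
    rows, cols = len(grid), len(grid[0])
    neighbors = ((-1, 0), (1, 0), (0, -1), (0, 1))
    changed = True
    while changed:  # repeat until no more block areas can be assigned a matrix
        changed = False
        for r in range(rows):
            for c in range(cols):
                bid = grid[r][c]
                if bid in matrices:
                    continue
                cand = [grid[r + dr][c + dc] for dr, dc in neighbors
                        if 0 <= r + dr < rows and 0 <= c + dc < cols]
                # Prefer an adjacent first block area; otherwise fall back to an
                # adjacent third block area whose matrix is already determined.
                donors = ([b for b in cand if b in first_ids]
                          or [b for b in cand if b in matrices])
                if donors:
                    matrices[bid] = matrices[donors[0]]
                    changed = True
    return matrices
```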
It should be noted that, in actual use, the numbers of first block areas and third block areas may be determined according to actual needs and the specific application scenario, which is not limited in the embodiments of the present application. For example, the number of first block areas may be 6, 9, etc., and the number of third block areas may be 4, 6, 8, etc.
In the embodiments of the present application, during driving of the vehicle, the image to be corrected for distortion and the corresponding input image are acquired, the first feature point coordinates in the input image and the second feature point coordinates in the image to be corrected for distortion are respectively determined, and the input image is updated block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion. This scheme corrects the distortion of the HUD's projected image and enables the HUD to present a normal, undistorted image on the windshield.
Referring to fig. 2, a schematic flowchart of a distortion correction method for HUD imaging according to the second embodiment of the present application is shown. As shown in fig. 2, the distortion correction method for HUD imaging may include the following steps:
step 201, in the vehicle driving process, acquiring an image to be corrected for distortion and a corresponding input image.
Step 202, partitioning a target area in an input image, and determining a first feature point coordinate in the input image and a second feature point coordinate in the distorted image to be corrected.
Steps 201 to 202 of this embodiment are the same as steps 101 to 102 of the previous embodiment, and may refer to each other, which is not described herein again.
Step 203, acquiring a reference point in the input image.
The reference point is a point whose coordinates are unchanged before and after the projection of the input image; that is, its coordinates in the pixel coordinate system of the input image and in the pixel coordinate system of the image to be corrected for distortion are the same. Note that the reference point may be any pixel, other than the origin, that satisfies this condition.
The reference point in the input image can be found by traversing all pixels of the input image and all pixels of the image to be corrected for distortion.
Step 204, normalizing each pixel in the image to be corrected for distortion according to the reference point, so as to transform the image to be corrected for distortion into the coordinate system of the input image.
In the embodiments of the present application, the pixel coordinates of the image are not uniform before and after projection; that is, basic information such as resolution differs between the two planes. An image transformation can therefore be applied to the image to be corrected for distortion to bring it into the coordinate system of the input image. Because the reference point has the same coordinates in both images, image transformation operations such as translation and scaling are performed on the image to be corrected for distortion, centered on the reference point; this normalizes each pixel of the image and transforms it into the coordinate system of the input image.
It should be understood that when the image to be corrected for distortion is transformed into the coordinate system of the input image, the pixels of the input image do not need to be normalized, and the input image and the image to be corrected for distortion are then in the same coordinate system.
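The normalization can be pictured as the small coordinate transform below: every pixel coordinate of the image to be corrected for distortion is scaled about the reference point and translated so that the reference point keeps the same coordinates as in the input image. The scale factors are assumed to come from the resolution ratio of the two images; all names are illustrative.

```python
# Minimal sketch of the reference-point normalization; scale = (sx, sy) is an
# assumed per-axis factor, e.g. the resolution ratio of input to distorted image.
import numpy as np

def normalize_points(points, ref_distorted, ref_input, scale):
    """points: (N, 2) pixel coordinates in the image to be corrected for
    distortion. Returns the same points in the input image's coordinate system."""
    pts = np.asarray(points, dtype=np.float64)
    ref_d = np.asarray(ref_distorted, dtype=np.float64)
    ref_i = np.asarray(ref_input, dtype=np.float64)
    # Scale about the reference point, then move it onto its input-image position.
    return (pts - ref_d) * np.asarray(scale, dtype=np.float64) + ref_i
```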
Step 205, acquiring updated coordinates of the second feature points in the coordinate system of the input image.
The updated coordinates may be the pixel coordinates of the normalized second feature points.
In the embodiments of the present application, after the second feature points have been determined, their updated coordinates after normalization can be determined from their original coordinates.
Step 206, updating the input image block by block according to each first feature point coordinate and the corresponding updated coordinates of the second feature point coordinates, so as to correct the image to be corrected for distortion.
In this embodiment, step 206 updates the input image block by block in the same way as step 103 of the previous embodiment, except that the updated coordinates of the second feature points are used; reference may be made to that description, which is not repeated here.
As one possible implementation, updating the input image block by block to correct the image to be corrected for distortion includes:
updating the input image block by block to obtain a pre-distorted image corresponding to the input image;
and projecting and displaying the pre-distorted image through the optical element to obtain a corrected image.
In the embodiments of the present application, a pre-distortion operation is first performed on the input image, changing it into a correspondingly distorted image; the pre-distorted image is then projected and displayed through the optical element. Because the projective transformation matrix between the input image and the projected image is unchanged, the pre-distorted image becomes an undistorted image after projection, i.e. the corrected image.
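A minimal Python sketch of the pre-distortion, assuming the per-block matrices H map input image coordinates to (normalized) distorted image coordinates; the block layout and names are assumptions:

```python
# Hedged sketch: build the pre-distorted image block by block. Each output
# pixel x inside a block is sampled from the input at H(x) (WARP_INVERSE_MAP),
# i.e. the block is warped by H^-1, so the optical path's forward H cancels it.
import cv2
import numpy as np

def predistort(input_img, blocks, matrices):
    """blocks: list of (x0, y0, x1, y1) block areas covering the image;
    matrices: matching list of 3x3 projective transformation matrices."""
    out = np.zeros_like(input_img)
    h, w = input_img.shape[:2]
    for (x0, y0, x1, y1), H in zip(blocks, matrices):
        warped = cv2.warpPerspective(
            input_img, H, (w, h),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        out[y0:y1, x0:x1] = warped[y0:y1, x0:x1]
    return out
```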
Compared with the first embodiment, in the second embodiment of the present application, before the input image is updated block by block according to each first feature point coordinate and the corresponding second feature point coordinate, each pixel in the image to be corrected for distortion is normalized by acquiring the reference point of the input image, so that the image to be corrected for distortion is transformed into the coordinate system of the input image.
Referring to fig. 3, a schematic structural diagram of a distortion correction device for HUD imaging according to the third embodiment of the present application is shown; for convenience of explanation, only the parts related to the embodiments of the present application are shown.
The distortion correction device for HUD imaging may specifically include the following modules:
the acquiring module 301 is configured to acquire an image to be corrected for distortion and a corresponding input image during vehicle driving, where the image to be corrected for distortion is an image obtained by projecting the corresponding input image through an optical element;
the feature determination module 302 is configured to block a target region in an input image, determine a first feature point coordinate in the input image, and determine a second feature point coordinate in a distorted image to be corrected, where the first feature point coordinate corresponds to the second feature point coordinate;
and the correcting module 303 is configured to update the input image in a partitioning manner according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the distorted image to be corrected.
In the embodiments of the present application, the feature determination module 302 may specifically include the following submodules:
a first partitioning submodule, configured to partition the target area in the input image to obtain a plurality of first block areas;
a second partitioning submodule, configured to determine a plurality of corresponding second block areas in the image to be corrected for distortion according to the partitioning result of the input image;
a first feature point determination submodule, configured to determine the first feature point coordinates in each first block area according to the gray values of the pixels in the plurality of first block areas;
and a second feature point determination submodule, configured to determine the second feature point coordinates in each second block area according to the gray values of the pixels in the plurality of second block areas.
In the embodiments of the present application, the correction module 303 may specifically include the following submodules:
a matrix determination submodule, configured to determine, for each first block area, a projective transformation matrix of the first block area from the input image to the image to be corrected for distortion according to the first feature point coordinates in the first block area and the second feature point coordinates in the corresponding second block area;
and a block update submodule, configured to update the input image block by block according to the projective transformation matrix corresponding to each first block area.
In the embodiments of the present application, the input image comprises a target area and a non-target area, and the distortion correction device for HUD imaging further includes:
a non-target partitioning module, configured to partition the non-target area to obtain a plurality of third block areas.
Correspondingly, the correction module may specifically include the following submodules:
a matrix acquisition submodule, configured to acquire the projective transformation matrices corresponding to the third block areas;
and an input update submodule, configured to update the input image block by block according to the projective transformation matrices corresponding to the third block areas and the projective transformation matrices corresponding to the first block areas.
In the embodiments of the present application, the matrix acquisition submodule may specifically include the following units:
a first determination unit, configured to determine, if a third block area is adjacent to any first block area, the projective transformation matrix corresponding to that first block area as the projective transformation matrix of the adjacent third block area;
and a second determination unit, configured to determine, if a third block area is adjacent to another third block area and no adjacent first block area is detected, the projective transformation matrix corresponding to that third block area as the projective transformation matrix of the adjacent third block area.
In the embodiments of the present application, the distortion correction device for HUD imaging may further include the following modules:
a reference point acquisition module, configured to acquire a reference point in the input image, where the reference point is a point whose coordinates are unchanged before and after the projection of the input image;
and a normalization module, configured to normalize each pixel in the image to be corrected for distortion according to the reference point, so as to transform the image to be corrected for distortion into the coordinate system of the input image.
Correspondingly, the correction module 303 may further include the following submodules:
an updated coordinate acquisition submodule, configured to acquire the updated coordinates of the second feature points in the coordinate system of the input image;
and an update submodule, configured to update the input image block by block according to each first feature point coordinate and the corresponding updated coordinates of the second feature points.
In the embodiments of the present application, the correction module 303 may further include the following submodules:
a pre-distortion submodule, configured to update the input image block by block to obtain a pre-distorted image corresponding to the input image;
and a projection display submodule, configured to project and display the pre-distorted image through the optical element to obtain a corrected image.
The HUD imaging distortion correction device provided in the embodiment of the present application may be applied to the foregoing method embodiments, and for details, reference is made to the description of the foregoing method embodiments, and details are not repeated here.
Fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 4, the terminal device 400 of this embodiment includes: at least one processor 410 (only one shown in fig. 4), a memory 420, and a computer program 421 stored in the memory 420 and executable on the at least one processor 410, wherein the processor 410 executes the computer program 421 to implement the steps in the above-described embodiment of the method for correcting the distortion of HUD imaging.
The terminal device 400 may be a computing device such as a desktop computer, a notebook, a palm computer, and a cloud server. The terminal device may include, but is not limited to, a processor 410, a memory 420. Those skilled in the art will appreciate that fig. 4 is merely an example of the terminal device 400, and does not constitute a limitation to the terminal device 400, and may include more or less components than those shown, or may combine some components, or different components, and may further include, for example, an input/output device, a network access device, and the like.
The processor 410 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 420 may, in some embodiments, be an internal storage unit of the terminal device 400, such as a hard disk or memory of the terminal device 400. In other embodiments, the memory 420 may be an external storage device of the terminal device 400, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the terminal device 400. Further, the memory 420 may include both an internal storage unit and an external storage device of the terminal device 400. The memory 420 is used to store an operating system, application programs, a boot loader, data and other programs, such as the program code of the computer program, and may also be used to temporarily store data that has been output or is to be output.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated module/unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be subject to appropriate additions or subtractions according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals.
When the computer program product runs on a terminal device, the terminal device, in executing it, implements the steps of the method embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A distortion correction method for HUD imaging, the distortion correction method comprising:
acquiring an image to be corrected for distortion and a corresponding input image during driving of a vehicle, wherein the image to be corrected for distortion is an image obtained by projecting the corresponding input image through an optical element;
partitioning a target area in the input image into blocks, and determining first feature point coordinates in the input image and second feature point coordinates in the image to be corrected for distortion, wherein each first feature point coordinate corresponds to a second feature point coordinate;
and updating the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the image to be corrected for distortion.
2. The distortion correction method according to claim 1, wherein the partitioning of the target area in the input image and the determining of the first feature point coordinates in the input image and the second feature point coordinates in the image to be corrected for distortion comprise:
partitioning the target area in the input image to obtain a plurality of first block areas;
determining a plurality of corresponding second block areas in the image to be corrected for distortion according to the partitioning result of the input image;
determining the first feature point coordinates in each first block area according to the gray values of the pixels in the plurality of first block areas;
and determining the second feature point coordinates in each second block area according to the gray values of the pixels in the plurality of second block areas.
3. The distortion correction method according to claim 2, wherein the updating of the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate comprises:
for each first block area, determining a projective transformation matrix of the first block area from the input image to the image to be corrected for distortion according to the first feature point coordinates in the first block area and the second feature point coordinates in the corresponding second block area;
and updating the input image block by block according to the projective transformation matrix corresponding to each first block area.
4. The distortion correction method according to claim 3, wherein the input image comprises a target area and a non-target area, and before the partitioning of the target area in the input image, the method further comprises:
partitioning the non-target area to obtain a plurality of third block areas;
correspondingly, the updating of the input image block by block comprises:
acquiring projective transformation matrices corresponding to the plurality of third block areas;
and updating the input image block by block according to the projective transformation matrices corresponding to the third block areas and the projective transformation matrices corresponding to the first block areas.
5. The distortion correction method according to claim 4, wherein the acquiring the projective transformation matrix corresponding to each third block area comprises:
if a third block area is adjacent to any first block area, determining the projective transformation matrix corresponding to that first block area as the projective transformation matrix corresponding to the adjacent third block area;
and if a third block area is not adjacent to any first block area but is adjacent to another third block area whose projective transformation matrix has been acquired, determining the projective transformation matrix corresponding to that third block area as the projective transformation matrix corresponding to the adjacent third block area.
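
One way to realize the neighbour rules of claims 4 and 5 is a breadth-first fill over the block grid: blocks with a computed matrix (the first block areas) seed the queue, and each unfilled neighbour inherits the matrix of the block it was reached from. This is a sketch of one consistent reading, not the only one.

from collections import deque

def propagate_matrices(rows, cols, matrices):
    # matrices: dict mapping (row, col) -> 3x3 matrix for the first block areas.
    # Returns a dict covering every grid cell, filled by adjacency.
    filled = dict(matrices)
    queue = deque(matrices.keys())
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in filled:
                filled[(nr, nc)] = filled[(r, c)]  # inherit the neighbour's matrix
                queue.append((nr, nc))
    return filled
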
6. The distortion correction method according to claim 1, wherein before the updating the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, the method further comprises:
acquiring a reference point in the input image, wherein the reference point is a point whose coordinates are unchanged before and after the input image is projected;
normalizing each pixel in the distorted image to be corrected according to the reference point, so as to transform the distorted image to be corrected into the coordinate system of the input image;
correspondingly, the updating the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate comprises:
acquiring the updated coordinates of the second feature points in the coordinate system of the input image;
and updating the input image block by block according to each first feature point coordinate and the corresponding updated second feature point coordinate.
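
Claim 6 does not spell out the normalization; assuming it amounts to a translation (with an optional uniform scale) that pins the reference point to the same coordinates in both images, a minimal sketch is:

import numpy as np

def to_input_coords(points, ref_distorted, ref_input, scale=1.0):
    # Map (x, y) points from the distorted image's coordinate system into the
    # input image's, under the assumed model of a translation about the fixed
    # reference point plus an optional uniform scale.
    pts = np.asarray(points, dtype=np.float64)
    return (pts - np.asarray(ref_distorted)) * scale + np.asarray(ref_input)
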
7. The distortion correction method according to claim 1, wherein the updating the input image block by block so as to correct the distorted image to be corrected comprises:
updating the input image block by block to obtain a pre-distorted image corresponding to the input image;
and projecting the pre-distorted image for display through the optical element to obtain a corrected image.
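
Claim 7's pre-distortion can be read as warping each block of the input image with the inverse of its input-to-distorted homography, so that the optics' distortion cancels on projection. A sketch with OpenCV follows; the block bounds x0, y0, x1, y1 are hypothetical parameters.

import numpy as np
import cv2

def predistort_block(input_img, H, x0, y0, x1, y1):
    # With WARP_INVERSE_MAP, warpPerspective treats H as the dst->src mapping,
    # i.e. predistorted(p) = input(H(p)); projecting the result through optics
    # that apply H then reproduces the original input geometry.
    H = np.asarray(H, dtype=np.float64)
    h, w = input_img.shape[:2]
    warped = cv2.warpPerspective(
        input_img, H, (w, h),
        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return warped[y0:y1, x0:x1]  # keep only this block's region
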
8. A distortion correction apparatus for HUD imaging, the distortion correction apparatus comprising:
an acquisition module, configured to acquire, during driving of a vehicle, a distorted image to be corrected and a corresponding input image, wherein the distorted image to be corrected is the image obtained by projecting the corresponding input image through an optical element;
a feature determination module, configured to partition a target area in the input image into blocks and determine first feature point coordinates in the input image and second feature point coordinates in the distorted image to be corrected, wherein each first feature point coordinate corresponds to a second feature point coordinate;
and a correction module, configured to update the input image block by block according to each first feature point coordinate and the corresponding second feature point coordinate, so as to correct the distorted image to be corrected.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202211317063.0A 2022-10-26 2022-10-26 Distortion correction method and device for HUD imaging, terminal equipment and storage medium Active CN115578283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211317063.0A CN115578283B (en) 2022-10-26 2022-10-26 Distortion correction method and device for HUD imaging, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115578283A true CN115578283A (en) 2023-01-06
CN115578283B CN115578283B (en) 2023-06-20

Family

ID=84586775

Country Status (1)

Country Link
CN (1) CN115578283B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381739A (en) * 2020-11-23 2021-02-19 天津经纬恒润科技有限公司 Imaging distortion correction method and device of AR-HUD system
WO2021208249A1 (en) * 2020-04-15 2021-10-21 上海摩象网络科技有限公司 Image processing method and device, and handheld camera
CN114998157A (en) * 2022-07-18 2022-09-02 江苏泽景汽车电子股份有限公司 Image processing method, image processing device, head-up display and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant