CN113313656A - Distortion correction method suitable for HUD upper, middle and lower eye boxes - Google Patents

Distortion correction method suitable for HUD upper, middle and lower eye boxes

Info

Publication number
CN113313656A
CN113313656A (application CN202110775998.2A)
Authority
CN
China
Prior art keywords
image
correction factor
HUD
eye box
horizontal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110775998.2A
Other languages
Chinese (zh)
Other versions
CN113313656B (en)
Inventor
张涛
郑天策
吕涛
何飞
杨立波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Zejing Automobile Electronic Co ltd
Original Assignee
Jiangsu Zejing Automobile Electronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Zejing Automobile Electronic Co ltd
Publication of CN113313656A
Application granted granted Critical
Publication of CN113313656B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30268Vehicle interior
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of image processing, and particularly discloses a distortion correction method for the upper, middle and lower eye boxes of a HUD (Head-Up Display), comprising the following steps: input the peripheral data of the whole vehicle, including the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, which carry key observation, restriction and installation information; establish a reverse model in optical analysis software and optimize the system; extract observation points from the vehicle peripheral data; calculate the horizontal and vertical correction factors of the upper, middle and lower eye boxes; acquire the reverse position of the large mirror and convert it into eye-box information; and build a reverse model of the HUD imaging system from the vehicle peripheral data. According to the object-image conjugate relation of an optical system, the virtual image of the HUD imaging system serves as the object of the reverse imaging system and the image source serves as its image; the field of view is configured in optical design software so that the footprints of all fields on the image plane of the reverse imaging system are uniformly distributed, and the distortion correction factors of the upper, middle and lower eye boxes are then calculated accordingly.

Description

Distortion correction method suitable for HUD upper, middle and lower eye boxes
Technical Field
The invention relates to the technical field of image processing, in particular to a method for correcting distortion of upper, middle and lower eye boxes of a HUD.
Background
As a driving-assistance device, the automotive windshield Head-Up Display (HUD) projects information onto the front windshield so that the driver need not look down at the instrument panel, thereby improving driving safety; with the development of the technology, HUDs have been widely adopted in the automotive field.
Because the design of an automotive head-up display system is constrained by many factors, such as the vehicle eyepoint position, the packaging position in the vehicle, the surface parameters of the windshield, and the processing requirements of the primary and secondary mirrors, the designed distortion in the upper, middle and lower eye boxes tends to be large. The existing approach is to perform distortion correction only for the central eye box, which cannot satisfy the driving experience of consumers of different heights.
Patent application No. 202010187422.X addresses the HUD dynamic distortion problem by acquiring virtual image data with an off-line HUD detection camera and correcting against it.
Existing HUD distortion correction schemes apply the central eye box correction to all eye boxes, which cannot accommodate the full range of eye positions; a full-eye-box correction depends on the surface consistency of the windshield imaging region and places very high demands on the arrangement and optimization of the imaging system.
In view of the above, the invention provides a distortion correction method for the upper, middle and lower eye boxes of a HUD to solve these problems.
Disclosure of Invention
The invention provides a distortion correction method suitable for the upper, middle and lower eye boxes of a HUD, solving the prior-art problems that distortion correction performed only for the central eye box must serve all eye boxes, that the imaging system design is complex, and that the visual imaging effect deviates excessively between eye boxes.
The technical scheme of the invention is realized as follows: a distortion correction method suitable for the upper, middle and lower eye boxes of a HUD, comprising the following steps:
S1: input the peripheral data of the whole vehicle; the data include the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, carrying key observation, restriction and installation information;
S2: establish a reverse model in optical analysis software and optimize the system; extract observation points from the vehicle peripheral data, and analyze key parameters such as the system field of view, the look-down angle and the look-left angle according to the HUD envelope position; take the analysis results as inputs for reverse modeling of the imaging system in optical analysis software; treat the virtual image as the object and the image source as the image in the reverse model, and optimize the imaging system until distortion, the modulation transfer function and other metrics fall within acceptable ranges; that is, build and optimize a reverse simulation model of the HUD imaging system in optical software from the vehicle peripheral data;
S3: calculate the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquire the reverse position of the large mirror and convert it into eye-box information; record the large-mirror tilt angles corresponding to the upper, middle and lower eye boxes in the imaging-system model and use them as judgment angles;
S5: process the original image data with the correction factors of the current eye box and push the result to the image source for display.
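The judgment-angle lookup of step S4 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the angle values, the tolerance-free nearest-match rule, and the function name are all assumptions.

```python
# Hypothetical sketch of the S4 eye-box judgment: map the large-mirror tilt
# angle to one of the three eye boxes using the judgment angles recorded in
# the imaging-system model. All angle values here are made-up placeholders.
JUDGMENT_ANGLES = {"upper": 12.0, "mid": 10.0, "lower": 8.0}  # degrees, assumed

def select_eye_box(mirror_angle_deg, judgment_angles=JUDGMENT_ANGLES):
    """Return the eye box whose recorded judgment angle is nearest the
    current mirror tilt angle."""
    return min(judgment_angles,
               key=lambda name: abs(mirror_angle_deg - judgment_angles[name]))
```

The selected eye box then decides which set of stored correction factors is applied in step S5.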
Preferably, step S3 specifically comprises the following steps:
S31: divide the image plane into different fields of view, configuring the fields so that the footprint points are uniformly distributed over the image plane;
S32: based on the object-image relationship and the field configuration, obtain the theoretical horizontal matrix A of the image-plane footprint points;
S33: acquire the actual horizontal matrix B of the image-plane footprint points for the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the acquisition can be implemented through a macro language or an optimization operand;
S34: calculate the intermediate horizontal correction factors C_upper, C_mid and C_lower of the upper, middle and lower eye boxes;
S35: expand the intermediate horizontal correction factors by interpolation according to the image-source resolution l × k to obtain the horizontal correction factors.
Preferably, in step S32, the image plane footprint point horizontal theoretical matrix a:
Figure BDA0003155306680000021
Preferably, in step S33, the actual horizontal matrix B of the image-plane footprint points is:
[Equation image: matrix B]
Preferably, in step S34, the intermediate horizontal correction factors C_upper, C_mid and C_lower of the upper, middle and lower eye boxes satisfy the following matrix relations:
A = C_upper · B_upper
A = C_mid · B_mid
A = C_lower · B_lower
Preferably, the intermediate horizontal correction factors C_upper, C_mid and C_lower are obtained by the least-squares method as:
C_upper = [c′_11 c′_12 … c′_1n]
C_mid = [c″_11 c″_12 … c″_1n]
C_lower = [c‴_11 c‴_12 … c‴_1n]
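The least-squares solve for one intermediate factor can be sketched with NumPy. The matrix shapes (C as a 1 × n row vector, B as an n × m matrix, A as 1 × m) are inferred from the relation A = C · B and should be treated as an assumption; the function name is illustrative.

```python
import numpy as np

def intermediate_factor(A, B):
    """Least-squares row vector C (1 x n) such that A ≈ C @ B.

    A: 1 x m theoretical horizontal matrix of footprint points.
    B: n x m actual horizontal matrix for one eye box.
    Solves B.T @ C.T = A.T in the least-squares sense, i.e. min ||C @ B - A||.
    """
    C_T, *_ = np.linalg.lstsq(B.T, A.T, rcond=None)
    return C_T.T
```

The same call, fed with B_upper, B_mid or B_lower, yields C_upper, C_mid or C_lower respectively.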
Preferably, in step S35, the horizontal correction factor is COOn the upper part、COIn、COLower partThe said horizontal correction factor COOn the upper part、COIn、COLower partThe calculation formula of (2) is as follows:
COon the upper part=[co′11 co′12 … co′1k]
COIn=[co″11 co″12 … co″1k]
COLower part=[co″′11 co″′12 … co″′1k]
Preferably, in the step S35, k is the number of vertical pixels of the actually used image source, and the vertical correction factor is obtained by the same method, and the vertical correction factor is DOOn the upper part、DOIn、DOLower partThe vertical correction factor DOOn the upper part、DOIn、DOLower partThe calculation formula of (2) is as follows:
DOon the upper part=[do′11 do′12 … do′1k]
DOIn=[do″11 do″12 … do″1k]
DOLower part=[do″′11 do″′12 … do″′1k]
And after the horizontal correction factor and the vertical correction factor are calculated, the calculation results are written into hardware as configuration parameters.
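The S35 expansion of an intermediate 1 × n factor to the image-source pixel count can be sketched with linear interpolation; the patent does not name the interpolation scheme, so the use of `np.interp` here is an assumption.

```python
import numpy as np

def expand_factor(c_row, k):
    """Linearly interpolate an intermediate 1 x n correction-factor row to
    length k, matching the pixel count of the image source actually used."""
    c_row = np.asarray(c_row, dtype=float).ravel()
    x_old = np.linspace(0.0, 1.0, c_row.size)  # normalized positions of the n samples
    x_new = np.linspace(0.0, 1.0, k)           # normalized positions of the k pixels
    return np.interp(x_new, x_old, c_row)
```

Applied to C_upper/C_mid/C_lower this yields CO_upper/CO_mid/CO_lower, and likewise for the vertical factors.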
Preferably, in the step S5, the original image matrix is set to have horizontal coordinates E, vertical coordinates F,
Figure BDA0003155306680000041
Preferably, in step S5, the eye-box position is obtained by judging the tilt position of the large mirror; the corresponding correction factors are applied to the original image in the MCU, and the result is pushed to the image source. The horizontal coordinate X and vertical coordinate Y of the displayed image pixels are corrected as follows:
[Equation images: correction formulas for X and Y]
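Since the patent's exact X/Y correction formulas appear only as equation images that are not reproduced here, the following is only a heavily hedged stand-in: it assumes elementwise scaling of the coordinate matrices by the selected eye box's factor rows, which may differ from the actual formulas.

```python
import numpy as np

def correct_image_coordinates(E, F, co_row, do_row):
    """Hedged sketch of the S5 coordinate correction: produce corrected
    display coordinates X, Y from the original coordinate matrices
    E (horizontal) and F (vertical).

    Assumed scheme: scale each column of E by the horizontal factor
    co_row (length = number of columns) and each row of F by the vertical
    factor do_row (length = number of rows).
    """
    E = np.asarray(E, dtype=float)
    F = np.asarray(F, dtype=float)
    X = E * np.asarray(co_row, dtype=float)[np.newaxis, :]  # per-column horizontal scaling
    Y = F * np.asarray(do_row, dtype=float)[:, np.newaxis]  # per-row vertical scaling
    return X, Y
```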
Compared with the prior art, the invention has the following beneficial effects:
1. The HUD imaging system is reverse-modeled from the vehicle peripheral data. According to the object-image conjugate relation of an optical system, the virtual image of the HUD imaging system serves as the object of the reverse imaging system and the image source as its image; the field of view is configured in optical design software so that the footprints of all fields on the image plane of the reverse imaging system are uniformly distributed, and the distortion correction factors of the upper, middle and lower eye boxes are then calculated accordingly.
2. When the HUD virtual image position is adjusted, the original picture data are processed with the distortion correction factors of the corresponding eye box and the processed data are pushed to the image source for real-time output, thereby correcting the distortion of the upper, middle and lower eye-box images.
Of course, it is not necessary for any one product that embodies the invention to achieve all of the above advantages simultaneously.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the distortion correction of the present invention;
FIG. 2 is a schematic diagram of the WHUD imaging system provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-2, the present invention provides the following technical solution: a distortion correction method suitable for the upper, middle and lower eye boxes of a HUD, comprising the following steps:
S1: input the peripheral data of the whole vehicle; the data include the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, carrying key observation, restriction and installation information;
S2: establish a reverse model in optical analysis software and optimize the system; extract observation points from the vehicle peripheral data, and analyze key parameters such as the system field of view, the look-down angle and the look-left angle according to the HUD envelope position; take the analysis results as inputs for reverse modeling of the imaging system in optical analysis software; treat the virtual image as the object and the image source as the image in the reverse model, and optimize the imaging system until distortion, the modulation transfer function and other metrics fall within acceptable ranges; that is, build and optimize a reverse simulation model of the HUD imaging system in optical software from the vehicle peripheral data;
S3: calculate the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquire the reverse position of the large mirror and convert it into eye-box information; record the large-mirror tilt angles corresponding to the upper, middle and lower eye boxes in the imaging-system model and use them as judgment angles;
S5: process the original image data with the correction factors of the current eye box and push the result to the image source for display.
Step S3 specifically comprises the following steps:
S31: divide the image plane into different fields of view, configuring the fields so that the footprint points are uniformly distributed over the image plane;
S32: based on the object-image relationship and the field configuration, obtain the theoretical horizontal matrix A of the image-plane footprint points;
S33: acquire the actual horizontal matrix B of the image-plane footprint points for the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the acquisition can be implemented through a macro language or an optimization operand;
S34: calculate the intermediate horizontal correction factors C_upper, C_mid and C_lower of the upper, middle and lower eye boxes;
S35: expand the intermediate horizontal correction factors by interpolation according to the image-source resolution l × k to obtain the horizontal correction factors.
Wherein, in the step S32, the image plane footprint point horizontal theoretical matrix a:
Figure BDA0003155306680000061
In step S33, the actual horizontal matrix B of the image-plane footprint points is:
[Equation image: matrix B]
In step S34, the intermediate horizontal correction factors C_upper, C_mid and C_lower of the upper, middle and lower eye boxes satisfy the following matrix relations:
A = C_upper · B_upper
A = C_mid · B_mid
A = C_lower · B_lower
The intermediate horizontal correction factors C_upper, C_mid and C_lower are obtained by the least-squares method as:
C_upper = [c′_11 c′_12 … c′_1n]
C_mid = [c″_11 c″_12 … c″_1n]
C_lower = [c‴_11 c‴_12 … c‴_1n]
In step S35, the horizontal correction factors CO_upper, CO_mid and CO_lower are calculated as:
CO_upper = [co′_11 co′_12 … co′_1k]
CO_mid = [co″_11 co″_12 … co″_1k]
CO_lower = [co‴_11 co‴_12 … co‴_1k]
In step S35, k is the number of vertical pixels of the image source actually used, and the vertical correction factors DO_upper, DO_mid and DO_lower are obtained by the same method:
DO_upper = [do′_11 do′_12 … do′_1k]
DO_mid = [do″_11 do″_12 … do″_1k]
DO_lower = [do‴_11 do‴_12 … do‴_1k]
After the horizontal and vertical correction factors are calculated, the results are written into hardware as configuration parameters.
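The configuration-parameter write mentioned here can be sketched as follows. The on-device byte layout, the row ordering, and the little-endian float32 format are assumptions: the patent only states that the results are written into hardware.

```python
import struct

def pack_factors(co_rows, do_rows):
    """Hedged sketch: serialize the correction-factor rows (e.g. CO_upper,
    CO_mid, CO_lower followed by DO_upper, DO_mid, DO_lower) as
    little-endian float32 for writing into hardware configuration memory.
    Layout and ordering are assumed, not taken from the patent."""
    blob = b""
    for row in list(co_rows) + list(do_rows):
        blob += struct.pack("<%df" % len(row), *row)  # one float32 per factor entry
    return blob
```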
In step S5, the original image matrix horizontal coordinate E, the vertical coordinate F,
Figure BDA0003155306680000071
In step S5, the eye-box position is obtained by judging the tilt position of the large mirror; the corresponding correction factors are applied to the original image in the MCU, and the result is pushed to the image source. The horizontal coordinate X and vertical coordinate Y of the displayed image pixels are corrected as follows:
[Equation images: correction formulas for X and Y]
the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A distortion correction method suitable for the upper, middle and lower eye boxes of a HUD, characterized by comprising the following steps:
S1: input the peripheral data of the whole vehicle; the data include the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, carrying key observation, restriction and installation information;
S2: establish a reverse model in optical analysis software and optimize the system; extract observation points from the vehicle peripheral data, and analyze key parameters such as the system field of view, the look-down angle and the look-left angle according to the HUD envelope position; take the analysis results as inputs for reverse modeling of the imaging system in optical analysis software; treat the virtual image as the object and the image source as the image in the reverse model, and optimize the imaging system until distortion, the modulation transfer function and other metrics fall within acceptable ranges; that is, build and optimize a reverse simulation model of the HUD imaging system in optical software from the vehicle peripheral data;
S3: calculate the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquire the reverse position of the large mirror and convert it into eye-box information; record the large-mirror tilt angles corresponding to the upper, middle and lower eye boxes in the imaging-system model and use them as judgment angles;
S5: process the original image data with the correction factors of the current eye box and push the result to the image source for display.
2. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 1, wherein step S3 specifically comprises the following steps:
S31: divide the image plane into different fields of view, configuring the fields so that the footprint points are uniformly distributed over the image plane;
S32: based on the object-image relationship and the field configuration, obtain the theoretical horizontal matrix A of the image-plane footprint points;
S33: acquire the actual horizontal matrix B of the image-plane footprint points for the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the acquisition can be implemented through a macro language or an optimization operand;
S34: calculate the intermediate horizontal correction factors C_upper, C_mid and C_lower of the upper, middle and lower eye boxes;
S35: expand the intermediate horizontal correction factors by interpolation according to the image-source resolution l × k to obtain the horizontal correction factors.
3. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 2, wherein in step S32 the theoretical horizontal matrix A of the image-plane footprint points is:
[Equation image: matrix A]
4. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 2, wherein in step S33 the actual horizontal matrix B of the image-plane footprint points is:
[Equation image: matrix B]
5. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 2, wherein in step S34 the intermediate horizontal correction factors C_upper, C_mid and C_lower satisfy the following matrix relations:
A = C_upper · B_upper
A = C_mid · B_mid
A = C_lower · B_lower
6. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 5, wherein the intermediate horizontal correction factors C_upper, C_mid and C_lower are obtained by the least-squares method as:
C_upper = [c′_11 c′_12 … c′_1n]
C_mid = [c″_11 c″_12 … c″_1n]
C_lower = [c‴_11 c‴_12 … c‴_1n].
7. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 6, wherein in step S35 the horizontal correction factors CO_upper, CO_mid and CO_lower are calculated as:
CO_upper = [co′_11 co′_12 … co′_1k]
CO_mid = [co″_11 co″_12 … co″_1k]
CO_lower = [co‴_11 co‴_12 … co‴_1k].
8. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 7, wherein in step S35 k is the number of vertical pixels of the image source actually used, and the vertical correction factors DO_upper, DO_mid and DO_lower are obtained by the same method:
DO_upper = [do′_11 do′_12 … do′_1k]
DO_mid = [do″_11 do″_12 … do″_1k]
DO_lower = [do‴_11 do‴_12 … do‴_1k]
After the horizontal and vertical correction factors are calculated, the results are written into hardware as configuration parameters.
9. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 8, wherein in step S5 the original image matrix has horizontal coordinates E and vertical coordinates F:
[Equation image: matrices E and F]
10. The distortion correction method suitable for the upper, middle and lower eye boxes of a HUD according to claim 9, wherein in step S5 the eye-box position is obtained by judging the tilt position of the large mirror, the corresponding correction factors are applied to the original image in the MCU, and the result is pushed to the image source; the horizontal coordinate X and vertical coordinate Y of the displayed image pixels are corrected as follows:
[Equation images: correction formulas for X and Y]
CN202110775998.2A 2020-11-18 2021-07-09 Distortion correction method suitable for HUD upper, middle and lower eye boxes Active CN113313656B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011292403X 2020-11-18
CN202011292403 2020-11-18

Publications (2)

Publication Number Publication Date
CN113313656A (en) 2021-08-27
CN113313656B CN113313656B (en) 2023-02-21

Family

ID=77381394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110775998.2A Active CN113313656B (en) 2020-11-18 2021-07-09 Distortion correction method suitable for HUD upper, middle and lower eye boxes

Country Status (1)

Country Link
CN (1) CN113313656B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108225734A (en) * 2018-01-05 2018-06-29 宁波均胜科技有限公司 A kind of error calibration system and its error calibrating method based on HUD systems
CN207751667U (en) * 2018-01-05 2018-08-21 宁波均胜科技有限公司 A kind of error calibration system for head-up-display system
CN109493321A (en) * 2018-10-16 2019-03-19 中国航空工业集团公司洛阳电光设备研究所 A kind of vehicle-mounted HUD visual system parallax calculation method
US20200117010A1 (en) * 2018-10-16 2020-04-16 Hyundai Motor Company Method for correcting image distortion in a hud system
CN111242866A (en) * 2020-01-13 2020-06-05 重庆邮电大学 Neural network interpolation method for AR-HUD virtual image distortion correction under observer dynamic eye position condition
CN111476104A (en) * 2020-03-17 2020-07-31 重庆邮电大学 AR-HUD image distortion correction method, device and system under dynamic eye position


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈璐玲 (Chen Luling): "某车载抬头显示光学系统设计研究" (Design and research of a vehicle-mounted head-up display optical system), 《中国新技术新产品》 (China New Technologies and New Products) *

Also Published As

Publication number Publication date
CN113313656B (en) 2023-02-21

Similar Documents

Publication Publication Date Title
CN109688392B (en) AR-HUD optical projection system, mapping relation calibration method and distortion correction method
CN107527324B (en) A kind of pattern distortion antidote of HUD
US10007853B2 (en) Image generation device for monitoring surroundings of vehicle
WO2020019487A1 (en) Mura compensation method and mura compensation system
CN109685913B (en) Augmented reality implementation method based on computer vision positioning
CN101572787B (en) Computer vision precision measurement based multi-projection visual automatic geometric correction and splicing method
CN111586384B (en) Projection image geometric correction method based on Bessel curved surface
CN106303477B (en) A kind of adaptive projector image bearing calibration and system
JP5387193B2 (en) Image processing system, image processing apparatus, and program
CN110636273A (en) Method and device for adjusting projection picture, readable storage medium and projector
CN109495729B (en) Projection picture correction method and system
CN108550157B (en) Non-shielding processing method for teaching blackboard writing
CN113160339A (en) Projector calibration method based on Samm's law
EP3985575A1 (en) Three-dimensional information processing method and apparatus
CN107749986A (en) Instructional video generation method, device, storage medium and computer equipment
CN112381739A (en) Imaging distortion correction method and device of AR-HUD system
CN107945151B (en) Repositioning image quality evaluation method based on similarity transformation
CN104732497A (en) Image defogging method, FPGA and defogging system including FPGA
CN102682431A (en) Wide-angle image correction method and system
CN113313656B (en) Distortion correction method suitable for HUD upper, middle and lower eye boxes
CN110942475B (en) Ultraviolet and visible light image fusion system and rapid image registration method
CN100498515C (en) Off-axis-mounted projector ball-screen projection non-linear distortion correction method
CN116862788A (en) CMS field checking method, system, device and storage medium
Zoido et al. Optimized methods for multi-projector display correction
CN112164377B (en) Self-adaption method for HUD image correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant