CN113313656A - Distortion correction method suitable for HUD upper, middle and lower eye boxes - Google Patents
- Publication number
- CN113313656A (application CN202110775998.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- correction factor
- hud
- eye box
- horizontal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/80
- G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization (G06F17/00 — Digital computing or data processing equipment or methods, specially adapted for specific functions; G06F17/10 — Complex mathematical operations)
- G06T7/11 — Region-based segmentation (G06T7/00 — Image analysis; G06T7/10 — Segmentation; edge detection)
- G06T2207/10004 — Still image; photographic image (G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/10 — Image acquisition modality)
- G06T2207/30268 — Vehicle interior (G06T2207/30 — Subject of image; context of image processing; G06T2207/30248 — Vehicle exterior or interior)
- Y02T10/40 — Engine management systems (Y02T10/00 — Road transport of goods or passengers; Y02T10/10 — Internal combustion engine [ICE] based vehicles)
Abstract
The invention relates to the technical field of image processing and discloses a distortion correction method for the upper, middle and lower eye boxes of a HUD (Head-Up Display), comprising the following steps: inputting the vehicle peripheral data, which comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel beam and cover key observation, limitation and installation information; establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data; calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes; acquiring the reverse position of the large mirror and converting it into eye-box information; and reverse-modeling the HUD imaging system from the vehicle peripheral data. According to the object-image conjugate relation of an optical system, the virtual image of the HUD imaging system serves as the object of the reverse imaging system and the image source serves as its image; the field configuration is set in optical design software so that the footprints corresponding to all fields on the image plane of the reverse imaging system are uniformly distributed, and the distortion correction factors of the upper, middle and lower eye boxes are then calculated accordingly.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method for correcting distortion of upper, middle and lower eye boxes of a HUD.
Background
As a driving assistance device, the automotive windshield head-up display (HUD) projects information onto the front windshield so that the driver does not need to look down at the instrument panel, which improves driving safety. With the development of the technology, the HUD has become widely used in the automotive field.
Because the design of an automotive head-up display system is constrained by many factors, such as the vehicle eyepoint position, the vehicle packaging position, the surface parameters of the windshield, and the machining requirements of the primary and secondary mirrors, the design values of distortion in the upper, middle and lower eye boxes tend to be large. The existing approach is to correct distortion only for the central eye box, which cannot satisfy the driving experience of consumers of different heights.
Addressing this problem, the patent with application number 202010187422.X aims to solve the HUD dynamic-distortion problem: it acquires virtual-image data through an offline HUD detection camera and corrects the image accordingly.
Existing HUD distortion correction schemes apply the central eye box's correction to all eye boxes and therefore cannot cover every eye box comprehensively; the full-eye-box correction effect depends on the surface consistency of the windshield imaging region and places very high requirements on the arrangement and optimization of the imaging system.
Based on the above, the invention designs a distortion correction method for the upper, middle and lower eye boxes of a HUD to solve these problems.
Disclosure of Invention
The invention provides a distortion correction method suitable for the upper, middle and lower eye boxes of a HUD, which solves problems of the prior art such as correcting distortion only for the central eye box while expecting it to fit all eye boxes, a complex imaging-system design, and excessive deviation of the visual imaging effect between eye boxes.
The technical scheme of the invention is realized as follows: a distortion correction method adapted to the upper, middle and lower eye boxes of a HUD, comprising the following steps:
S1: inputting the vehicle peripheral data; the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel beam, covering key observation, limitation and installation information;
S2: establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data, and deriving key parameters such as the system field of view, the look-down angle and the look-left angle from the HUD envelope position; using the results as input for reverse modeling of the imaging system in the optical analysis software, with the virtual image as the object and the image source as the image of the reverse model, and optimizing the imaging system until distortion, the modulation transfer function and other metrics fall within an acceptable range;
S3: calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquiring the reverse position of the large mirror and converting it into eye-box information; recording the large-mirror flip angles corresponding to the upper, middle and lower eye boxes in the imaging-system model and using them as judgment angles;
S5: processing the original image data with the correction factor of the active eye box and pushing the result to the image source for display.
Preferably, the step S3 specifically comprises the following steps:
S31: dividing the image plane into different fields of view; the field configuration must make the footprint points uniformly distributed on the image plane;
S32: obtaining the theoretical horizontal matrix A of the image-plane footprint points from the object-image relationship combined with the field configuration;
S33: acquiring the actual horizontal matrices B of the image-plane footprint points for the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the matrices can be acquired through a macro language or an optimization operand;
S34: calculating the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes;
S35: expanding the intermediate horizontal correction factors by interpolation according to the image-source resolution parameter l × k to obtain the horizontal correction factors.
Preferably, in step S32, the image plane footprint point horizontal theoretical matrix a:
preferably, in the step S33, the image plane footprint point actual horizontal matrix B:
Preferably, in step S34, the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes satisfy the following matrix relations:
A = C_upper · B_upper
A = C_middle · B_middle
A = C_lower · B_lower
Preferably, the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes are obtained by the least-squares method as:
C_upper = [c′_11 c′_12 … c′_1n]
C_middle = [c″_11 c″_12 … c″_1n]
C_lower = [c‴_11 c‴_12 … c‴_1n]
Preferably, in step S35, the horizontal correction factors are denoted CO_upper, CO_middle and CO_lower, and are calculated as:
CO_upper = [co′_11 co′_12 … co′_1k]
CO_middle = [co″_11 co″_12 … co″_1k]
CO_lower = [co‴_11 co‴_12 … co‴_1k]
Preferably, in step S35, k is the number of vertical pixels of the image source actually used; the vertical correction factors DO_upper, DO_middle and DO_lower are obtained by the same method as:
DO_upper = [do′_11 do′_12 … do′_1k]
DO_middle = [do″_11 do″_12 … do″_1k]
DO_lower = [do‴_11 do‴_12 … do‴_1k]
And after the horizontal correction factor and the vertical correction factor are calculated, the calculation results are written into hardware as configuration parameters.
Preferably, in step S5, let the original image matrix have horizontal coordinate matrix E and vertical coordinate matrix F.
Preferably, in step S5, the eye-box position is obtained by judging the flip position of the large mirror; the MCU corrects the original image with the corresponding correction factor and pushes the result to the image source. The horizontal coordinate X and the vertical coordinate Y of the pixels of the screen-display image are corrected as follows:
compared with the prior art, the invention has the beneficial effects that:
1. and carrying out reverse modeling on the HUD imaging system according to the peripheral data of the whole vehicle. According to the conjugate relation of an object image of an optical system, a virtual image of the HUD imaging system is used as an object of the reverse imaging system, an image source is used as an image of the reverse imaging system, visual field configuration is achieved through optical design software, footprints corresponding to all visual fields of an image surface of the reverse imaging system are distributed uniformly, and distortion correction factors of the upper eye box, the middle eye box and the lower eye box are obtained through calculation according to the method.
2. When the HUD virtual image position is adjusted, the corresponding eye box distortion correction factor is used for processing original picture data, and processed data are pushed to an image source to be output in real time, so that the purpose of correcting distortion of the upper eye box image, the middle eye box image and the lower eye box image is achieved.
Of course, it is not necessary for any one product that embodies the invention to achieve all of the above advantages simultaneously.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the present invention for distortion correction;
FIG. 2 is a schematic diagram of the WHUD imaging system provided by the present invention.
In the drawings, the components represented by the respective reference numerals are listed below:
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides a technical solution: a method for correcting distortion of an upper, middle and lower eye box for accommodating a HUD, comprising the steps of:
S1: inputting the vehicle peripheral data; the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel beam, covering key observation, limitation and installation information;
S2: establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data, and deriving key parameters such as the system field of view, the look-down angle and the look-left angle from the HUD envelope position; using the results as input for reverse modeling of the imaging system in the optical analysis software, with the virtual image as the object and the image source as the image of the reverse model, and optimizing the imaging system until distortion, the modulation transfer function and other metrics fall within an acceptable range;
S3: calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquiring the reverse position of the large mirror and converting it into eye-box information; recording the large-mirror flip angles corresponding to the upper, middle and lower eye boxes in the imaging-system model and using them as judgment angles;
S5: processing the original image data with the correction factor of the active eye box and pushing the result to the image source for display.
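The judgment-angle lookup of step S4 can be sketched as a nearest-angle match between the measured large-mirror flip angle and the angles recorded in the imaging-system model. This is a minimal illustration; the angle values and the `select_eyebox` helper are hypothetical, not taken from the patent.

```python
def select_eyebox(mirror_angle_deg, judgment_angles):
    """Return the eye box whose recorded mirror flip angle is closest
    to the currently measured large-mirror angle."""
    return min(judgment_angles,
               key=lambda box: abs(judgment_angles[box] - mirror_angle_deg))

# Hypothetical judgment angles recorded from the imaging-system model
judgment = {"upper": 24.0, "middle": 26.5, "lower": 29.0}
print(select_eyebox(26.3, judgment))  # middle
```

In a real system the MCU would perform this comparison against the angles written to hardware as configuration parameters, then look up the matching correction-factor set.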
Wherein the step S3 specifically comprises the following steps:
S31: dividing the image plane into different fields of view; the field configuration must make the footprint points uniformly distributed on the image plane;
S32: obtaining the theoretical horizontal matrix A of the image-plane footprint points from the object-image relationship combined with the field configuration;
S33: acquiring the actual horizontal matrices B of the image-plane footprint points for the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the matrices can be acquired through a macro language or an optimization operand;
S34: calculating the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes;
S35: expanding the intermediate horizontal correction factors by interpolation according to the image-source resolution parameter l × k to obtain the horizontal correction factors.
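The expansion of step S35 can be sketched as simple linear interpolation of the intermediate factor over a normalized pixel axis. This is a hedged illustration: the patent does not specify the interpolation scheme, and the sizes and factor values below are placeholders.

```python
import numpy as np

def expand_factor(c_mid, k):
    """Expand an intermediate correction factor (one value per sampled
    field point) to k image-source pixels by linear interpolation."""
    x_old = np.linspace(0.0, 1.0, len(c_mid))  # sampled field positions
    x_new = np.linspace(0.0, 1.0, k)           # pixel positions
    return np.interp(x_new, x_old, c_mid)

c_mid = np.array([1.00, 1.02, 1.05])  # 3 sampled field points (hypothetical)
co = expand_factor(c_mid, 5)          # expanded to a 5-pixel image source
print(co)  # endpoints preserved: co[0] = 1.00, co[-1] = 1.05
```

Endpoints are preserved by construction, so the corrected image still spans the full image-source width.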
Wherein, in the step S32, the image plane footprint point horizontal theoretical matrix a:
wherein, in the step S33, the image plane footprint point actual horizontal matrix B:
Wherein, in step S34, the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes satisfy the following matrix relations:
A = C_upper · B_upper
A = C_middle · B_middle
A = C_lower · B_lower
Wherein the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes are obtained by the least-squares method as:
C_upper = [c′_11 c′_12 … c′_1n]
C_middle = [c″_11 c″_12 … c″_1n]
C_lower = [c‴_11 c‴_12 … c‴_1n]
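The least-squares solve of A = C·B can be sketched with NumPy. The matrix shapes below are illustrative assumptions, since the dimensions of A and B are not reproduced in this text; the sketch only shows that a consistent system recovers the row-vector factor exactly.

```python
import numpy as np

# Solve A = C @ B for the row vector C in the least-squares sense.
# np.linalg.lstsq solves B.T @ x = A.T for x = C.T.
rng = np.random.default_rng(0)
B_upper = rng.normal(size=(4, 10))            # assumed 4 x 10 measured matrix
C_true = np.array([[0.90, 1.10, 0.95, 1.05]]) # hypothetical factor
A = C_true @ B_upper                          # theoretical matrix (1 x 10)

C_upper = np.linalg.lstsq(B_upper.T, A.T, rcond=None)[0].T
print(np.allclose(C_upper, C_true))  # True for this consistent system
```

With real footprint data the system is generally overdetermined and only approximately consistent, which is why the patent specifies the least-squares method rather than a direct inverse.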
Wherein, in step S35, the horizontal correction factors CO_upper, CO_middle and CO_lower are calculated as:
CO_upper = [co′_11 co′_12 … co′_1k]
CO_middle = [co″_11 co″_12 … co″_1k]
CO_lower = [co‴_11 co‴_12 … co‴_1k]
In step S35, k is the number of vertical pixels of the image source actually used, and the vertical correction factors DO_upper, DO_middle and DO_lower are obtained by the same method as:
DO_upper = [do′_11 do′_12 … do′_1k]
DO_middle = [do″_11 do″_12 … do″_1k]
DO_lower = [do‴_11 do‴_12 … do‴_1k]
And after the horizontal correction factor and the vertical correction factor are calculated, the calculation results are written into hardware as configuration parameters.
In step S5, let the original image matrix have horizontal coordinate matrix E and vertical coordinate matrix F.
In step S5, the eye-box position is obtained by judging the flip position of the large mirror; the MCU corrects the original image with the corresponding correction factor and pushes the result to the image source. The horizontal coordinate X and the vertical coordinate Y of the pixels of the screen-display image are corrected as follows:
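The explicit coordinate formula is elided in this text, so the sketch below assumes a simple per-pixel multiplicative form X = co·E, Y = do·F as one plausible reading; the factor values and the `correct_coords` helper are hypothetical.

```python
import numpy as np

def correct_coords(E, F, co, do):
    """Apply the active eye box's horizontal (co) and vertical (do)
    correction factors to original pixel coordinates E, F.
    Assumed multiplicative form; the patent's exact formula is elided."""
    return co * E, do * F

E = np.arange(4, dtype=float)        # original horizontal coordinates
F = np.arange(4, dtype=float)        # original vertical coordinates
co = np.array([1.0, 1.1, 0.9, 1.0])  # hypothetical horizontal factors
do = np.ones(4)                      # hypothetical vertical factors
X, Y = correct_coords(E, F, co, do)
print(X)  # [0.  1.1 1.8 3. ]
```

The corrected coordinates (X, Y) would then be used to resample the original image before it is pushed to the image source.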
the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A distortion correction method adapted to the upper, middle and lower eye boxes of a HUD, comprising the steps of:
S1: inputting the vehicle peripheral data; the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel beam, covering key observation, limitation and installation information;
S2: establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data, and deriving key parameters such as the system field of view, the look-down angle and the look-left angle from the HUD envelope position; using the results as input for reverse modeling of the imaging system in the optical analysis software, with the virtual image as the object and the image source as the image of the reverse model, and optimizing the imaging system until distortion, the modulation transfer function and other metrics fall within an acceptable range;
S3: calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquiring the reverse position of the large mirror and converting it into eye-box information; recording the large-mirror flip angles corresponding to the upper, middle and lower eye boxes in the imaging-system model and using them as judgment angles;
S5: processing the original image data with the correction factor of the active eye box and pushing the result to the image source for display.
2. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 1, wherein the step S3 comprises the following steps:
S31: dividing the image plane into different fields of view; the field configuration must make the footprint points uniformly distributed on the image plane;
S32: obtaining the theoretical horizontal matrix A of the image-plane footprint points from the object-image relationship combined with the field configuration;
S33: acquiring the actual horizontal matrices B of the image-plane footprint points for the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the matrices can be acquired through a macro language or an optimization operand;
S34: calculating the intermediate horizontal correction factors C_upper, C_middle and C_lower of the upper, middle and lower eye boxes;
S35: expanding the intermediate horizontal correction factors by interpolation according to the image-source resolution parameter l × k to obtain the horizontal correction factors.
5. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 2, wherein in step S34 the intermediate horizontal correction factors C_upper, C_middle and C_lower satisfy the following matrix relations:
A = C_upper · B_upper
A = C_middle · B_middle
A = C_lower · B_lower.
6. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 5, wherein the intermediate horizontal correction factors C_upper, C_middle and C_lower are obtained by the least-squares method as:
C_upper = [c′_11 c′_12 … c′_1n]
C_middle = [c″_11 c″_12 … c″_1n]
C_lower = [c‴_11 c‴_12 … c‴_1n].
7. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 6, wherein in step S35 the horizontal correction factors CO_upper, CO_middle and CO_lower are calculated as:
CO_upper = [co′_11 co′_12 … co′_1k]
CO_middle = [co″_11 co″_12 … co″_1k]
CO_lower = [co‴_11 co‴_12 … co‴_1k].
8. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 7, wherein in step S35 k is the number of vertical pixels of the image source actually used, and the vertical correction factors DO_upper, DO_middle and DO_lower are obtained by the same method as:
DO_upper = [do′_11 do′_12 … do′_1k]
DO_middle = [do″_11 do″_12 … do″_1k]
DO_lower = [do‴_11 do‴_12 … do‴_1k]
And after the horizontal correction factor and the vertical correction factor are calculated, the calculation results are written into hardware as configuration parameters.
10. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 9, wherein in step S5 the eye-box position is obtained by judging the flip position of the large mirror; the MCU corrects the original image with the corresponding correction factor and pushes the result to the image source; the horizontal coordinate X and the vertical coordinate Y of the pixels of the screen-display image are corrected as follows:
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011292403X | 2020-11-18 | ||
CN202011292403 | 2020-11-18 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113313656A true CN113313656A (en) | 2021-08-27 |
CN113313656B CN113313656B (en) | 2023-02-21 |
Family
ID=77381394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110775998.2A Active CN113313656B (en) | 2020-11-18 | 2021-07-09 | Distortion correction method suitable for HUD upper, middle and lower eye boxes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113313656B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108225734A (en) * | 2018-01-05 | 2018-06-29 | 宁波均胜科技有限公司 | A kind of error calibration system and its error calibrating method based on HUD systems |
CN207751667U (en) * | 2018-01-05 | 2018-08-21 | 宁波均胜科技有限公司 | A kind of error calibration system for head-up-display system |
CN109493321A (en) * | 2018-10-16 | 2019-03-19 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of vehicle-mounted HUD visual system parallax calculation method |
US20200117010A1 (en) * | 2018-10-16 | 2020-04-16 | Hyundai Motor Company | Method for correcting image distortion in a hud system |
CN111242866A (en) * | 2020-01-13 | 2020-06-05 | 重庆邮电大学 | Neural network interpolation method for AR-HUD virtual image distortion correction under observer dynamic eye position condition |
CN111476104A (en) * | 2020-03-17 | 2020-07-31 | 重庆邮电大学 | AR-HUD image distortion correction method, device and system under dynamic eye position |
2021
- 2021-07-09: application CN202110775998.2A granted as patent CN113313656B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108225734A (en) * | 2018-01-05 | 2018-06-29 | 宁波均胜科技有限公司 | A kind of error calibration system and its error calibrating method based on HUD systems |
CN207751667U (en) * | 2018-01-05 | 2018-08-21 | 宁波均胜科技有限公司 | A kind of error calibration system for head-up-display system |
CN109493321A (en) * | 2018-10-16 | 2019-03-19 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of vehicle-mounted HUD visual system parallax calculation method |
US20200117010A1 (en) * | 2018-10-16 | 2020-04-16 | Hyundai Motor Company | Method for correcting image distortion in a hud system |
CN111242866A (en) * | 2020-01-13 | 2020-06-05 | 重庆邮电大学 | Neural network interpolation method for AR-HUD virtual image distortion correction under observer dynamic eye position condition |
CN111476104A (en) * | 2020-03-17 | 2020-07-31 | 重庆邮电大学 | AR-HUD image distortion correction method, device and system under dynamic eye position |
Non-Patent Citations (1)
Title |
---|
Chen Luling: "Design Study of a Vehicle-Mounted Head-Up Display Optical System", China New Technologies and New Products *
Also Published As
Publication number | Publication date |
---|---|
CN113313656B (en) | 2023-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109688392B (en) | AR-HUD optical projection system, mapping relation calibration method and distortion correction method | |
CN107527324B (en) | A kind of pattern distortion antidote of HUD | |
US10007853B2 (en) | Image generation device for monitoring surroundings of vehicle | |
WO2020019487A1 (en) | Mura compensation method and mura compensation system | |
CN109685913B (en) | Augmented reality implementation method based on computer vision positioning | |
CN101572787B (en) | Computer vision precision measurement based multi-projection visual automatic geometric correction and splicing method | |
CN111586384B (en) | Projection image geometric correction method based on Bessel curved surface | |
CN106303477B (en) | A kind of adaptive projector image bearing calibration and system | |
JP5387193B2 (en) | Image processing system, image processing apparatus, and program | |
CN110636273A (en) | Method and device for adjusting projection picture, readable storage medium and projector | |
CN109495729B (en) | Projection picture correction method and system | |
CN108550157B (en) | Non-shielding processing method for teaching blackboard writing | |
CN113160339A (en) | Projector calibration method based on Samm's law | |
EP3985575A1 (en) | Three-dimensional information processing method and apparatus | |
CN107749986A (en) | Instructional video generation method, device, storage medium and computer equipment | |
CN112381739A (en) | Imaging distortion correction method and device of AR-HUD system | |
CN107945151B (en) | Repositioning image quality evaluation method based on similarity transformation | |
CN104732497A (en) | Image defogging method, FPGA and defogging system including FPGA | |
CN102682431A (en) | Wide-angle image correction method and system | |
CN113313656B (en) | Distortion correction method suitable for HUD upper, middle and lower eye boxes | |
CN110942475B (en) | Ultraviolet and visible light image fusion system and rapid image registration method | |
CN100498515C (en) | Off-axis-mounted projector ball-screen projection non-linear distortion correction method | |
CN116862788A (en) | CMS field checking method, system, device and storage medium | |
Zoido et al. | Optimized methods for multi-projector display correction | |
CN112164377B (en) | Self-adaption method for HUD image correction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |