CN113313656B - Distortion correction method suitable for HUD upper, middle and lower eye boxes - Google Patents

Distortion correction method suitable for HUD upper, middle and lower eye boxes

Info

Publication number
CN113313656B
Authority
CN
China
Prior art keywords: image, correction factor, HUD, eye box, horizontal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110775998.2A
Other languages
Chinese (zh)
Other versions
CN113313656A
Inventor
张涛
郑天策
吕涛
何飞
杨立波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Zejing Automobile Electronic Co., Ltd.
Original Assignee
Jiangsu Zejing Automobile Electronic Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Zejing Automobile Electronic Co., Ltd.
Publication of CN113313656A
Application granted
Publication of CN113313656B
Current legal status: Active

Classifications

    • G06T 5/80: Geometric correction (under G06T 5/00, Image enhancement or restoration; G06T, Image data processing or generation, in general; G06, Computing; G, Physics)
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization (under G06F 17/10, Complex mathematical operations; G06F, Electric digital data processing)
    • G06T 7/11: Region-based segmentation (under G06T 7/10, Segmentation; edge detection; G06T 7/00, Image analysis)
    • G06T 2207/10004: Still image; photographic image (indexing scheme, image acquisition modality)
    • G06T 2207/30268: Vehicle interior (indexing scheme, subject of image; under G06T 2207/30248, Vehicle exterior or interior)
    • Y02T 10/40: Engine management systems (Y02T, climate change mitigation technologies related to transportation; under Y02T 10/10, internal combustion engine [ICE] based vehicles)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)

Abstract

The invention relates to the technical field of image processing and discloses a distortion correction method for the upper, middle and lower eye boxes of a HUD (head-up display), comprising the following steps: inputting the peripheral data of the whole vehicle, where the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, and carry key observation, constraint and installation information; establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data; calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes; and acquiring the flip position of the large mirror and converting it into eye-box information. The HUD imaging system is reverse-modeled from the vehicle peripheral data: according to the object-image conjugate relation of an optical system, the virtual image of the HUD imaging system is taken as the object of the reverse imaging system and the image source as its image; the field of view is configured in optical design software so that the footprint points of all fields on the image plane of the reverse imaging system are uniformly distributed, and the distortion correction factors of the upper, middle and lower eye boxes are then calculated by this method.

Description

Distortion correction method suitable for HUD upper, middle and lower eye boxes
Technical Field
The invention relates to the technical field of image processing, and in particular to a distortion correction method for the upper, middle and lower eye boxes of a HUD.
Background
As a driving-assistance device, the automotive windshield head-up display (HUD) projects information onto the front windshield of the vehicle so that the driver does not have to look down at the instruments, which improves driving safety; with the development of the technology, the HUD has been widely adopted in the automotive field.
Because the design of an automotive head-up display system is constrained by many factors, such as the whole-vehicle eyepoint position, the packaging position in the vehicle, the surface-profile parameters of the windshield, and the machining requirements of the primary and secondary mirrors, the designed distortion in the upper, middle and lower eye boxes tends to be large. The existing approach applies distortion correction only at the center eye box and cannot satisfy the driving experience of consumers of different heights.
Addressing this problem, the patent application No. 202010187422.X aims to solve the HUD dynamic distortion problem by acquiring virtual-image data with an off-line HUD inspection camera and correcting it.
Existing HUD distortion correction schemes adapt the center-eye-box correction to all eye boxes and cannot take the full eye-box range into account; a full-eye-box correction, in turn, depends on the surface consistency of the windshield imaging region and places very high demands on the layout and optimization of the imaging system.
Based on the above, the invention provides a distortion correction method for the upper, middle and lower eye boxes of a HUD to solve these problems.
Disclosure of Invention
The invention provides a distortion correction method suitable for the upper, middle and lower eye boxes of a HUD, which solves prior-art problems such as adapting all eye boxes with a correction made only for the center eye box, overly complex imaging-system design, and excessive deviation of the visual imaging effect between eye boxes.
The technical scheme of the invention is realized as follows. A distortion correction method for the upper, middle and lower eye boxes of a HUD comprises the following steps:
S1: inputting the peripheral data of the whole vehicle; the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, and carry key observation, constraint and installation information;
S2: establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data, and deriving key parameters such as the system field of view, the look-down angle and the look-left angle from the HUD envelope position; using the analysis results as the input for reverse modeling of the imaging system in the optical analysis software; taking the virtual image as the object and the image source as the image of the reverse model, then optimizing the imaging system until distortion, the modulation transfer function and other metrics fall within an acceptable range; in other words, establishing a reverse simulation model of the HUD imaging system in the optical software from the vehicle peripheral data and optimizing the imaging system;
S3: calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquiring the flip position of the large mirror and converting it into eye-box information; recording the large-mirror flip angles corresponding to the upper, middle and lower eye boxes from the imaging-system model and using them as judgment angles (see the sketch after this list);
S5: processing the original image data with the correction factors of the current eye box and pushing the result to the image source for display.
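As an illustrative sketch of the S4 judgment step (hypothetical Python, not part of the patent; the angle values and the eye-box labels are placeholder assumptions):

```python
# Large-mirror flip angles recorded from the imaging-system model for the
# upper, middle and lower eye boxes (S4). The numbers are placeholders.
JUDGMENT_ANGLES = {"up": 10.2, "mid": 11.5, "down": 12.8}  # degrees, assumed

def select_eye_box(mirror_angle: float) -> str:
    """Return the eye box whose judgment angle is closest to the measured
    flip angle of the large mirror, so S5 can pick the matching factors."""
    return min(JUDGMENT_ANGLES,
               key=lambda box: abs(JUDGMENT_ANGLES[box] - mirror_angle))

# Example: a measured mirror angle of 11.4 degrees selects the middle eye box.
assert select_eye_box(11.4) == "mid"
```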
Preferably, step S3 specifically comprises the following steps:
S31: dividing the image plane according to different fields of view, where the field configuration must make the footprint points uniformly distributed over the image plane;
S32: obtaining the theoretical horizontal matrix A of the image-plane footprint points from the object-image relationship combined with the field configuration;
S33: acquiring the actual horizontal matrix B of the image-plane footprint points for each of the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the acquisition can be implemented through a macro language or optimization operands;
S34: calculating the intermediate horizontal correction factors C_up, C_mid and C_down of the upper, middle and lower eye boxes;
S35: expanding the intermediate horizontal correction factors by interpolation according to the image-source resolution l × k to obtain the horizontal correction factors.
Preferably, in the step S32, the image plane footprint point horizontal theoretical matrix a:
Figure BDA0003155306680000021
preferably, in step S33, the image plane footprint point actual horizontal matrix B:
Figure BDA0003155306680000031
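Since the matrices themselves survive only as images, the sketch below illustrates what S31 through S33 produce under stated assumptions (a single row of n field points; synthetic jitter stands in for the ray-traced footprint coordinates that would be exported via a macro language or optimization operands):

```python
import numpy as np

n = 8              # number of field sampling points per row (assumed)
half_width = 60.0  # image-plane half-width in mm (assumed)

# S32: theoretical horizontal footprint positions for a uniform field grid.
A = np.linspace(-half_width, half_width, n)

# S33: the actual positions come from the reverse optical model; 2% synthetic
# distortion stands in for real ray-trace output here.
rng = np.random.default_rng(0)
B_up = A * (1.0 + 0.02 * rng.standard_normal(n))  # upper eye box (synthetic)
```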
Preferably, in step S34, the intermediate horizontal correction factors C_up, C_mid and C_down of the upper, middle and lower eye boxes satisfy the following matrix relations:
A = C_up · B_up
A = C_mid · B_mid
A = C_down · B_down
Preferably, the intermediate horizontal correction factors C_up, C_mid and C_down are obtained by the least-squares method as:
C_up = [c′_11  c′_12  …  c′_1n]
C_mid = [c″_11  c″_12  …  c″_1n]
C_down = [c‴_11  c‴_12  …  c‴_1n]
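A minimal sketch of the S34 least-squares fit, assuming A and B are sampled at the same n field points (the patent's exact matrix dimensions are in the unreproduced equation images, so both a 1-D and a general matrix case are covered):

```python
import numpy as np

def intermediate_factor(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Least-squares solution of A = C · B for the intermediate horizontal
    correction factor C (S34). For 1-D inputs the fit is element-wise; for
    matrix inputs np.linalg.lstsq solves the transposed system B^T C^T = A^T."""
    if A.ndim == 1:
        return A / B                                    # per-field-point factor
    return np.linalg.lstsq(B.T, A.T, rcond=None)[0].T   # general matrix case

# Continuing the synthetic data from the previous sketch (no zero entries):
A = np.linspace(-60.0, 60.0, 8)
C_up = intermediate_factor(A, A * 1.02)  # likewise C_mid and C_down
```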
Preferably, in step S35, the horizontal correction factor is CO On the upper part 、CO In 、CO Lower part The said horizontal correction factor CO On the upper part 、CO In 、CO Lower part The calculation formula of (2) is as follows:
CO on the upper part =[co′ 11 co′ 12 … co′ 1k ]
CO In =[co″ 11 co″ 12 … co″ 1k ]
CO Lower part =[co″′ 11 co″′ 12 … co″′ 1k ]
Preferably, in the step S35, k is the number of vertical pixels of the actually used image source, and the vertical correction factor is obtained by the same method, and the vertical correction factor is set to DO On the upper part 、DO In (1) 、DO Lower part The vertical correction factor DO On the upper part 、DO In 、DO Lower part The calculation formula of (2) is as follows:
DO on the upper part =[do′ 11 do′ 12 … do′ 1k ]
DO In =[do″ 11 do″ 12 … do″ 1k ]
DO Lower part =[do″′ 11 do″′ 12 … do″′ 1k ]
And after the horizontal correction factor and the vertical correction factor are calculated, the calculation results are written into hardware as configuration parameters.
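A sketch of the S35 interpolation expansion, assuming linear interpolation over normalized positions (the patent does not specify the interpolation kind):

```python
import numpy as np

def expand_factor(c_intermediate: np.ndarray, k: int) -> np.ndarray:
    """S35: expand an intermediate correction factor with n entries to the
    k pixels of the actually used image source by linear interpolation."""
    n = len(c_intermediate)
    src = np.linspace(0.0, 1.0, n)  # normalized positions of the n field points
    dst = np.linspace(0.0, 1.0, k)  # normalized positions of the k pixels
    return np.interp(dst, src, c_intermediate)

# Example: expand an 8-entry intermediate factor to a 1080-pixel factor; the
# result would then be written to hardware as a configuration parameter.
CO_up = expand_factor(np.full(8, 1.02), k=1080)
assert CO_up.shape == (1080,)
```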
Preferably, in the step S5, the horizontal coordinate E, the vertical coordinate F,
Figure BDA0003155306680000041
preferably, in the step S5, the position of the eye box is obtained by judging the turning position of the large reflector, the correction factor is used in the MCU to correct the original image, and then the result is pushed to the image source; the horizontal coordinate X and the vertical coordinate Y of the pixel of the image displayed on the screen are corrected in the following way:
Figure BDA0003155306680000042
Figure BDA0003155306680000043
Figure BDA0003155306680000044
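Because the pixel-mapping equations above survive only as images, the following sketch is a hedged stand-in: it assumes a multiplicative per-column/per-row form for the correction (an assumption about the mapping's shape, not the patent's actual formula) and uses nearest-neighbour resampling:

```python
import numpy as np

def apply_factors(frame: np.ndarray, co: np.ndarray, do: np.ndarray) -> np.ndarray:
    """Warp the original image with horizontal (co, one entry per column) and
    vertical (do, one entry per row) correction factors before it is pushed
    to the image source (S5). Nearest-neighbour sampling keeps the sketch short."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Assumed form: displayed coordinate = factor * original coordinate,
    # inverted here to find the source pixel for each destination pixel.
    src_x = np.clip(np.round(xs / co[np.newaxis, :]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys / do[:, np.newaxis]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# Example with placeholder factors on a blank 1080 x 1920 frame:
frame = np.zeros((1080, 1920), dtype=np.uint8)
out = apply_factors(frame, co=np.full(1920, 1.02), do=np.full(1080, 0.98))
```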
Compared with the prior art, the invention has the following beneficial effects:
1. The HUD imaging system is reverse-modeled from the peripheral data of the whole vehicle. According to the object-image conjugate relation of an optical system, the virtual image of the HUD imaging system is taken as the object of the reverse imaging system and the image source as its image; the field of view is configured in optical design software so that the footprint points of all fields on the image plane of the reverse imaging system are uniformly distributed, and the distortion correction factors of the upper, middle and lower eye boxes are then calculated by the method described above.
2. When the HUD virtual-image position is adjusted, the distortion correction factor of the corresponding eye box is used to process the original picture data, and the processed data are pushed to the image source for real-time output, thereby correcting the distortion of the upper, middle and lower eye-box images.
Of course, it is not necessary for any one product that embodies the invention to achieve all of the above advantages simultaneously.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and a person skilled in the art could derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the distortion correction of the invention;
FIG. 2 is a schematic diagram of the WHUD imaging system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-2, the invention provides the following technical solution. A distortion correction method for the upper, middle and lower eye boxes of a HUD comprises the following steps:
S1: inputting the peripheral data of the whole vehicle; the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, and carry key observation, constraint and installation information;
S2: establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data, and deriving key parameters such as the system field of view, the look-down angle and the look-left angle from the HUD envelope position; using the analysis results as the input for reverse modeling of the imaging system in the optical analysis software; taking the virtual image as the object and the image source as the image of the reverse model, then optimizing the imaging system until distortion, the modulation transfer function and other metrics fall within an acceptable range; in other words, establishing a reverse simulation model of the HUD imaging system in the optical software from the vehicle peripheral data and optimizing the imaging system;
S3: calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquiring the flip position of the large mirror and converting it into eye-box information; recording the large-mirror flip angles corresponding to the upper, middle and lower eye boxes from the imaging-system model and using them as judgment angles;
S5: processing the original image data with the correction factors of the current eye box and pushing the result to the image source for display; an end-to-end sketch of this runtime flow is given at the end of this section.
Step S3 specifically comprises the following steps:
S31: dividing the image plane according to different fields of view, where the field configuration must make the footprint points uniformly distributed over the image plane;
S32: obtaining the theoretical horizontal matrix A of the image-plane footprint points from the object-image relationship combined with the field configuration;
S33: acquiring the actual horizontal matrix B of the image-plane footprint points for each of the upper, middle and lower eye boxes; autofocus must be realized in a multi-configuration setup, and the acquisition can be implemented through a macro language or optimization operands;
S34: calculating the intermediate horizontal correction factors C_up, C_mid and C_down of the upper, middle and lower eye boxes;
S35: expanding the intermediate horizontal correction factors by interpolation according to the image-source resolution l × k to obtain the horizontal correction factors.
Wherein, in the step S32, the image surface footprint point horizontal theoretical matrix a:
Figure BDA0003155306680000061
wherein, in the step S33, the image plane footprint point actual horizontal matrix B:
Figure BDA0003155306680000062
In step S34, the intermediate horizontal correction factors C_up, C_mid and C_down of the upper, middle and lower eye boxes satisfy the following matrix relations:
A = C_up · B_up
A = C_mid · B_mid
A = C_down · B_down
The intermediate horizontal correction factors C_up, C_mid and C_down are obtained by the least-squares method as:
C_up = [c′_11  c′_12  …  c′_1n]
C_mid = [c″_11  c″_12  …  c″_1n]
C_down = [c‴_11  c‴_12  …  c‴_1n]
In step S35, the horizontal correction factors are denoted CO_up, CO_mid and CO_down and take the form:
CO_up = [co′_11  co′_12  …  co′_1k]
CO_mid = [co″_11  co″_12  …  co″_1k]
CO_down = [co‴_11  co‴_12  …  co‴_1k]
In step S35, k is the number of vertical pixels of the actually used image source; the vertical correction factors DO_up, DO_mid and DO_down are obtained in the same way and take the form:
DO_up = [do′_11  do′_12  …  do′_1k]
DO_mid = [do″_11  do″_12  …  do″_1k]
DO_down = [do‴_11  do‴_12  …  do‴_1k]
After the horizontal and vertical correction factors are calculated, the results are written into the hardware as configuration parameters.
In step S5, the horizontal coordinate E and the vertical coordinate F of the original image matrix are defined by a matrix equation that appears in the original only as an image and is not reproduced here.
In step S5, the eye-box position is determined by judging the flip position of the large mirror; the correction factors are applied to the original image in the MCU, and the result is then pushed to the image source. The horizontal coordinate X and the vertical coordinate Y of the pixels of the displayed image are corrected according to three equations that appear in the original only as images and are not reproduced here.
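Tying the pieces together, a hypothetical per-frame flow on the MCU side might look like the sketch below (it reuses select_eye_box and apply_factors from the earlier sketches; the factor table stands in for the configuration parameters written to hardware):

```python
import numpy as np

def on_new_frame(frame: np.ndarray, mirror_angle: float,
                 factor_table: dict) -> np.ndarray:
    """Per-frame correction (S4-S5): judge the current eye box from the
    large-mirror flip angle, look up its stored horizontal and vertical
    factors, and correct the original image for display."""
    box = select_eye_box(mirror_angle)   # S4: nearest judgment angle
    co, do = factor_table[box]           # factors read back from hardware
    return apply_factors(frame, co, do)  # S5: result goes to the image source

# Example factor table with placeholder values for a 1080 x 1920 image source:
factor_table = {box: (np.full(1920, 1.0), np.full(1080, 1.0))
                for box in ("up", "mid", "down")}
corrected = on_new_frame(np.zeros((1080, 1920), np.uint8), 11.4, factor_table)
```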
the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A distortion correction method for the upper, middle and lower eye boxes of a HUD, comprising the following steps:
S1: inputting the peripheral data of the whole vehicle; the data comprise the eyepoint data, the front windshield, the front-wall sheet metal and the instrument-panel cross beam, and carry key observation, constraint and installation information;
S2: establishing a reverse model in optical analysis software and optimizing the system; extracting the observation points from the vehicle peripheral data, and deriving the key parameters of the system field of view, the look-down angle and the look-left angle from the HUD envelope position; using the analysis results as the input for reverse modeling of the imaging system in the optical analysis software; taking the virtual image as the object and the image source as the image of the reverse model, then optimizing the imaging system until distortion and the modulation transfer function fall within an acceptable range; establishing a reverse simulation model of the HUD imaging system in the optical software from the vehicle peripheral data, optimizing the imaging system, and making the footprint points corresponding to each field of view uniformly distributed on the image plane of the reverse imaging system;
S3: calculating the horizontal and vertical correction factors of the upper, middle and lower eye boxes;
S4: acquiring the flip position of the large mirror and converting it into eye-box information; recording the large-mirror flip angles corresponding to the upper, middle and lower eye boxes from the imaging-system model and using them as judgment angles;
S5: processing the original image data with the eye-box correction factors and pushing the result to the image source for display;
wherein step S3 specifically comprises the following steps:
S31: dividing the image plane according to different fields of view, where the field configuration must make the footprint points uniformly distributed over the image plane;
S32: obtaining the theoretical horizontal matrix A of the image-plane footprint points from the object-image relationship combined with the field configuration;
S33: acquiring the actual horizontal matrix B of the image-plane footprint points for each of the upper, middle and lower eye boxes; realizing autofocus in a multi-configuration setup, wherein the acquisition is implemented through a macro language or optimization operands;
S34: calculating the intermediate horizontal correction factors C_up, C_mid and C_down of the upper, middle and lower eye boxes;
S35: expanding the intermediate horizontal correction factors by interpolation according to the image-source resolution l × k to obtain the horizontal correction factors.
2. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 1, wherein in step S32 the theoretical horizontal matrix A of the image-plane footprint points is given by a matrix equation that appears in the original only as an image and is not reproduced here.
3. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 1, wherein in step S33 the actual horizontal matrix B of the image-plane footprint points is given by a matrix equation that appears in the original only as an image and is not reproduced here.
4. the method for accommodating upper, middle and lower eye box distortion correction of a HUD according to claim 1, wherein in said step S34, said upper, middle and lower eye box intermediate level correction factor C On the upper part 、C In 、C Lower part The following matrix requirements apply:
A=C upper part of ·B On the upper part
A=C In (1) ·B In (1)
A=C Lower part ·B Lower part
5. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 4, wherein the intermediate horizontal correction factors C_up, C_mid and C_down are obtained by the least-squares method as:
C_up = [c′_11  c′_12  …  c′_1n]
C_mid = [c″_11  c″_12  …  c″_1n]
C_down = [c‴_11  c‴_12  …  c‴_1n].
6. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 5, wherein in step S35 the horizontal correction factors are denoted CO_up, CO_mid and CO_down and take the form:
CO_up = [co′_11  co′_12  …  co′_1k]
CO_mid = [co″_11  co″_12  …  co″_1k]
CO_down = [co‴_11  co‴_12  …  co‴_1k].
7. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 6, wherein in step S35 k is the number of vertical pixels of the actually used image source, and the vertical correction factors DO_up, DO_mid and DO_down are obtained in the same way and take the form:
DO_up = [do′_11  do′_12  …  do′_1k]
DO_mid = [do″_11  do″_12  …  do″_1k]
DO_down = [do‴_11  do‴_12  …  do‴_1k]
and after the horizontal and vertical correction factors are calculated, the results are written into the hardware as configuration parameters.
8. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 7, wherein in step S5 the horizontal coordinate E and the vertical coordinate F of the original image matrix are defined by a matrix equation that appears in the original only as an image and is not reproduced here.
9. The distortion correction method for the upper, middle and lower eye boxes of a HUD according to claim 8, wherein in step S5 the eye-box position is determined by judging the flip position of the large mirror, the correction factors are applied to the original image in the MCU, and the result is then pushed to the image source; the horizontal coordinate X and the vertical coordinate Y of the pixels of the displayed image are corrected according to three equations that appear in the original only as images and are not reproduced here.
CN202110775998.2A (priority date 2020-11-18, filed 2021-07-09): Distortion correction method suitable for HUD upper, middle and lower eye boxes; granted as CN113313656B, status Active.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011292403X 2020-11-18
CN202011292403 2020-11-18

Publications (2)

Publication Number Publication Date
CN113313656A CN113313656A (en) 2021-08-27
CN113313656B true CN113313656B (en) 2023-02-21

Family

ID=77381394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110775998.2A Active CN113313656B (en) 2020-11-18 2021-07-09 Distortion correction method suitable for HUD upper, middle and lower eye boxes

Country Status (1)

Country Link
CN (1) CN113313656B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207751667U (en) * 2018-01-05 2018-08-21 宁波均胜科技有限公司 A kind of error calibration system for head-up-display system
KR102518661B1 (en) * 2018-10-16 2023-04-06 현대자동차주식회사 Method for correction image distortion of hud system
CN109493321B (en) * 2018-10-16 2021-11-12 中国航空工业集团公司洛阳电光设备研究所 Parallax calculation method for vehicle-mounted HUD visual optical system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108225734A (en) * 2018-01-05 2018-06-29 宁波均胜科技有限公司 A kind of error calibration system and its error calibrating method based on HUD systems
CN111242866A (en) * 2020-01-13 2020-06-05 重庆邮电大学 Neural network interpolation method for AR-HUD virtual image distortion correction under observer dynamic eye position condition
CN111476104A (en) * 2020-03-17 2020-07-31 重庆邮电大学 AR-HUD image distortion correction method, device and system under dynamic eye position

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
某车载抬头显示光学系统设计研究 [Design and research of a vehicle-mounted head-up display optical system]; 陈璐玲 (Chen Luling); 《中国新技术新产品》 (China New Technologies and Products); 2020-07-31, No. 7, pp. 19-21 *

Also Published As

Publication number Publication date
CN113313656A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN109688392B (en) AR-HUD optical projection system, mapping relation calibration method and distortion correction method
CN107527324B (en) A kind of pattern distortion antidote of HUD
WO2020019487A1 (en) Mura compensation method and mura compensation system
US10007853B2 (en) Image generation device for monitoring surroundings of vehicle
CN106898286B (en) Mura defect repairing method and device based on designated position
US20160267710A1 (en) Image Rendering Method and Apparatus
WO2018201652A1 (en) Real-time virtual reality acceleration method and device
CN101572787B (en) Computer vision precision measurement based multi-projection visual automatic geometric correction and splicing method
CN111586384B (en) Projection image geometric correction method based on Bessel curved surface
CN102298894A (en) display device and contrast enhancement method thereof
CN109495729B (en) Projection picture correction method and system
CN110636273A (en) Method and device for adjusting projection picture, readable storage medium and projector
CN108550157B (en) Non-shielding processing method for teaching blackboard writing
CN111192552A (en) Multi-channel LED spherical screen geometric correction method
CN107749986A (en) Instructional video generation method, device, storage medium and computer equipment
CN104732497A (en) Image defogging method, FPGA and defogging system including FPGA
CN104461441A (en) Rendering method, rendering device and display device
TWI443604B (en) Image correction method and image correction apparatus
CN113313656B (en) Distortion correction method suitable for HUD upper, middle and lower eye boxes
CN102682431A (en) Wide-angle image correction method and system
JP7081265B2 (en) Image processing equipment
CN110211543A (en) Local backlight adjusting method and device, virtual reality system
Zoido et al. Optimized methods for multi-projector display correction
CN102469249A (en) Image correction method and image correction device
CN112164377B (en) Self-adaption method for HUD image correction

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant