CN114863074A - Three-dimensional intraocular holographic automobile head-up display method combining laser radar data - Google Patents
- Publication number: CN114863074A (application CN202210608286.6A)
- Authority
- CN
- China
- Prior art keywords: dimensional; distribution; laser; point cloud; target
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B27/0103—Head-up displays characterised by optical features comprising holographic elements
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03H—HOLOGRAPHIC PROCESSES OR APPARATUS
- G03H1/00—Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
- G03H1/04—Processes or apparatus for producing holograms
- G03H1/16—Processes or apparatus for producing holograms using Fourier transform
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a three-dimensional intraocular holographic automobile head-up display method that combines laser radar (lidar) data, comprising the following steps. S1: scan the target environment with a laser scanner to obtain radar data, and post-process the data with RiSCAN Pro software to generate point cloud data. The point cloud consists of many spatial sampling points on the object surface; each sampling point carries echo-signal information that reflects the surface properties of the object. An open-source Python library classifies the sampling points in the point cloud, yielding a surface-attribute classification of the scanned object; the classified point cloud is then image-processed into a pixel intensity map, forming a target plane that contains the laser echo-signal information for every position on the object. S2: generate and optimize a digital hologram of the three-dimensional object with the GS iterative phase retrieval algorithm. S3: display the target object holographically with the intraocular holographic projection device.
Description
Technical Field
The invention relates to the fields of holographic projection, augmented reality and laser radar, and in particular to a three-dimensional intraocular holographic projection device based on digital holography, together with an automobile head-up display method that innovatively fuses laser radar data with augmented reality (AR) technology.
Background
With the advent of new vehicles equipped with sensors, cameras and intelligent driving-assistance systems, head-up displays (HUDs), once confined to military and civil aviation, have entered the field of personal transportation. In 1988 the first automotive head-up display went into production; it consisted of only a single display projected onto the windshield. In recent years automotive HUDs have developed rapidly: driving-assistance information such as speed, navigation and fuel consumption can be projected into the driver's line of sight. This safety function reduces the distraction caused by the driver repeatedly glancing down at the instrument panel, navigator or other in-car equipment, and helps the driver respond faster under complex road conditions. However, limited by conventional optical systems and inefficient digital processing, today's HUDs project only a small image onto the windshield directly above the steering wheel and display limited information; the virtual objects or information symbols they show cannot stand in for real objects as prompts for the driver. Moreover, compared with an aircraft HUD, the image displayed by an automotive HUD is focused close to the windshield. This mismatch between the focal depth of the displayed image and the real position of the object has adverse consequences: even though the image lies in the driver's line of sight, the eyes must continually re-accommodate between the display and the actual road. Forcing the driver to focus the pupils at a short focal distance of only a few metres disperses attention, and it is almost impossible to watch the HUD's virtual image and the road ahead at the same time. These challenges limit the further development of the automotive head-up display.
Augmented reality (AR) is a human-computer interaction technology developed from virtual reality (VR): it overlays virtual imagery on the real world to provide a mixed, virtual-plus-real scene. Combining AR with the HUD is an important milestone on the way to the intelligent, information-rich automobile of the future. The main challenges for an AR HUD are multi-focal display, achieving as large a viewing area as possible without restricting the field of view, and minimal interference with driving behaviour. The HUD should project digital symbols or virtual images into the driver's field of view while disturbing driving as little as possible. These characteristics are hard to obtain with conventional HUD systems because of their high power consumption and large package size; moreover, since the imaging position of a conventional HUD is fixed, keeping the exit pupil small is difficult. In these respects holographic AR HUDs have significant advantages over conventional HUD techniques. Unlike conventional displays based on direct amplitude modulation of light, such as televisions, holograms produce images by interference of light waves: when a hologram is displayed on a spatial light modulator (SLM) and illuminated by a coherent beam, the incident light is diffracted and focused to form a visible image. Applying holographic projection and augmented reality in the automotive HUD fuses the real world with virtual imagery, improves the driving experience, reduces the driver's cognitive load and improves hazard recognition.
Fig. 1 is the optical layout of a prior-art scanning-laser automotive head-up display, see reference [1]. The projector consists of a laser and a micro-electro-mechanical-system (MEMS) scanner; the relay optics are represented in the figure as an equivalent single lens. Light from the projector strikes an exit pupil expander, which widens the numerical aperture of the incident beam, so the cone of light leaving the expander is much wider than the incident scanning beam. The exit pupil expander also serves as an intermediate image plane. The viewer's eye receives light from the eye box and images it onto the retina. The relay optics and the driver's eye thus together form an imaging system that re-images the intermediate image on the exit pupil expander onto the retina; the optical magnification of the whole system is about 4x.
The scanning-laser HUD has certain advantages. The high coherence of laser light gives a higher light-collection efficiency than other types of light source, producing a brighter display at a given optical power. Because laser light is emitted highly polarized, proper alignment of the laser directly yields the polarization best reflected from the windshield. In addition, lasers provide a maximum color gamut covering nearly all colors any other display technology can show, including colors of very high saturation, which makes critical HUD information stand out from the road scene. However, because this HUD still uses a conventional relay optical system, its optical elements are numerous and bulky and are not easily packaged; and the displayed image has a single dimension: only two-dimensional images or digital symbols can be shown, and three-dimensional imaging is impossible.
Fig. 2 is the optical layout of a prior-art automotive head-up display employing a free-form three-mirror reflection system, see reference [2]. The system consists of the windshield, a free-form mirror and a plane mirror; the free-form mirror corrects wavefront aberration and the plane mirror folds the optical path. Light from the image source is reflected via the plane mirror, the free-form mirror and the windshield into the eye, where the eye's optics form a virtual image in front of the windshield.
With fewer optical elements, the free-form three-mirror reflection system is easier to package than the conventional relay optical system, but it still has drawbacks. The windshield itself acts as a free-form surface, so the asymmetric structure of the windshield and the system introduces larger aberrations, increasing the difficulty of optical design. In addition, this head-up display can present only two-dimensional images, and because the windshield takes part in imaging, the imaging position is essentially fixed.
References
[1] Freeman M O, Schenk H, Piyawattanametha W. MEMS scanned laser head-up display [J]. Proceedings of SPIE - The International Society for Optical Engineering, 2011, 7930(1): 79300G.
[2] Wei S L, Fan Z C, Zhu Z B, et al. Design of a head-up display based on freeform reflective systems for automotive applications [J]. Applied Optics, 2019, 58(7): 1675.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a three-dimensional intraocular holographic automobile head-up display method that incorporates laser radar data. Lidar is an active sensor that illuminates the surroundings with pulsed or phase-modulated light and measures range accurately by processing the backscattered laser waveform. Lidar sensors are commonly used for target detection, classification, tracking, intent prediction and depth-layer analysis. Lidar is an indispensable technology: mounted on a moving platform, it can generate a detailed three-dimensional map of the surveyed area. Lidar sensing can exploit images and three-dimensional point clouds simultaneously, and supports accurate moving-target detection and grid-based detection for localization and mapping. A lidar system complements camera-based or radio-radar sensing and can improve the accuracy and safety of autonomous vehicles.
The invention merges a three-dimensional virtual projected image into the real physical world by combining laser scanning with digital holography. The intraocular holographic projection casts the three-dimensional virtual image directly into the driver's eyes, where the eye's own optics image it at different distances in front of the driver's line of sight. This arrangement provides a virtual-image depth of focus matched to the real position of the object, giving the driver the opportunity to view the holographic projection at the same distance as the real object without the real driving scene being cut off, with genuine virtual-real fusion and real-time interaction; it is a strong driving force for the development of future HUDs.
The purpose of the invention is realized by the following technical scheme:
a three-dimensional intraocular holographic automobile head-up display method combining laser radar data comprises the following steps:
s1, scanning a target environment through a laser scanner to obtain radar data, performing post-processing through RiSCANPro software to generate point cloud data, wherein the point cloud data is composed of a plurality of spatial sampling points on the surface of an object, echo signal information of each spatial sampling point is included in the point cloud data, the surface attribute of the object can be reflected through the echo signal information, an open source Python library is used for classifying each spatial sampling point in the point cloud data, so that the object surface attribute classification of the scanned object is realized, the classified point cloud data is subjected to image processing to obtain a pixel intensity map, a target plane is further formed, and the target plane comprises laser echo signal information of each position of the object;
s2, generating and optimizing a digital hologram of the three-dimensional object through a GS iterative phase retrieval algorithm; the method comprises the following specific steps:
(201) combining the amplitude distribution of the echo signals in the measured laser echo signal information with random phase distribution through a GS iterative phase retrieval algorithm to obtain initial complex value wave front distribution of a target plane;
(202) carrying out Fourier transform on the initial complex wave-front distribution of the target plane to obtain complex wave-front distribution on a diffraction plane, replacing amplitude distribution on the diffraction plane with measured amplitude distribution, and further updating the complex wave-front distribution on the diffraction plane;
(203) performing inverse Fourier transform on the complex wave front distribution in the updated diffraction plane to obtain the complex wave front distribution on the target plane;
(204) replacing the amplitude distribution on the target plane with the measured amplitude distribution to obtain the updated complex value wave front distribution on the target plane, namely the initial complex value wave front distribution on the target plane in the next iteration;
(205) repeating the iterative process from the step (201) to the step (204), recovering complex wave front distribution in the target plane and the diffraction plane, and storing and recording the complex wave front distribution in a digital hologram form to obtain a digital hologram of the three-dimensional object;
in the above step, the complex wavefront distribution includes an amplitude distribution and a phase distribution;
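The iterative loop in steps (201) to (204) can be sketched in NumPy as follows. This is an illustrative sketch, not the patent's implementation: the array shapes, the toy target pattern and the uniform diffraction-plane amplitude are assumptions made for demonstration.

```python
import numpy as np

def gs_phase_retrieval(target_amp, diffraction_amp, n_iter=500):
    """Gerchberg-Saxton iterative phase retrieval (illustrative sketch).

    target_amp      -- measured amplitude on the target plane (|u0|)
    diffraction_amp -- measured amplitude on the diffraction plane (|U0|)
    Returns the retrieved phase on the diffraction plane (the phase hologram).
    """
    rng = np.random.default_rng(0)
    # (201) combine the measured target amplitude with a random phase
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    field = target_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        # (202) propagate to the diffraction plane, enforce the measured |U0|
        U = np.fft.fft2(field)
        U = diffraction_amp * np.exp(1j * np.angle(U))
        # (203) propagate back to the target plane
        u = np.fft.ifft2(U)
        # (204) enforce the measured target amplitude, keep the retrieved phase
        field = target_amp * np.exp(1j * np.angle(u))
    # (205) the retrieved diffraction-plane phase is stored as the hologram
    return np.angle(np.fft.fft2(field))

# a toy target: a bright square on a dark background (hypothetical data)
amp = np.zeros((32, 32))
amp[8:24, 8:24] = 1.0
hologram = gs_phase_retrieval(amp, np.ones((32, 32)), n_iter=50)
```

The only free choice is the random initial phase; amplitudes are always overwritten by measurements, so each iteration refines the phase estimate alone.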
and S3, display the target object holographically with the intraocular holographic projection device.
Further, in step S1, each point in the point cloud data includes the three-dimensional coordinates, color, reflection intensity and echo-frequency information of the object.
Further, in step S1, if the target environment to be sampled consists of more than 10 kinds of materials, several lasers of different wavelengths are used; the contrast between different laser bands improves the accuracy of laser scanning and post-processing.
Further, in step S2, the GS iterative phase retrieval algorithm assigns a random phase to each pixel of the target plane and runs a fast Fourier transform, generating the phase parameters of the image on the diffraction plane; the iterative process is repeated many times, the algorithm assigning the retrieved phase to each pixel of the target plane in every iteration and obtaining a new set of phase parameters, and the iteration terminates after 500 rounds.
Further, the random phase distribution is provided by a GS iterative phase retrieval algorithm.
Further, the intraocular holographic projection device comprises, in sequence, a laser, a lens group, a polarizer, a half-wave plate, a non-polarizing beam splitter and a reflective spatial light modulator. The beam emitted by the laser is expanded and collimated by the lens group; the expanded, collimated beam then passes through the polarizer and the half-wave plate in turn, which weaken the zero-order light. The non-polarizing beam splitter separates the input and output beams; the reflective spatial light modulator is connected to a computer through a high-definition multimedia interface (HDMI), over which the computer transmits the digital hologram of the three-dimensional object to the modulator. A concave lens at the output of the non-polarizing beam splitter enlarges the focusing range and the field of view. The concave lens delivers the image to the eye, whose autofocus accommodation displays it at different distances within the driver's field of view, achieving digital holographic projection of a three-dimensional object in the driver's field of view.
Further, the lens group comprises an aspheric lens L1 and an aspheric lens L2 arranged in sequence, with focal lengths f1 = 3.30 mm for L1 and f2 = 100 mm for L2.
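Assuming the two aspheric lenses are arranged as a telescope for beam expansion and collimation (a common layout, though the text does not state it explicitly), the beam-expansion ratio follows directly from the focal lengths given above:

```latex
M \;=\; \frac{f_2}{f_1} \;=\; \frac{100\ \mathrm{mm}}{3.30\ \mathrm{mm}} \;\approx\; 30.3
```

That is, under this assumption the collimated output beam is roughly thirty times wider than the raw laser beam.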
The invention also provides an application of the intraocular holographic projection device: using the intraocular holographic projection technique, multi-focus display is achieved, so the digital hologram of the three-dimensional object can be projected at different distances in front of the driver's line of sight, realizing a three-dimensional floating AR view.
In summary, the invention collects laser radar data with a three-dimensional terrestrial laser scanner, digitally processes the echo signals with the GS iterative phase retrieval algorithm to generate a digital hologram of the three-dimensional object, and finally projects that hologram with a three-dimensional intraocular holographic projection device containing a high-resolution spatial light modulator. Operating in the intraocular holographic mode, the device achieves multi-focal-length projection of the three-dimensional object, creating an augmented reality visual effect.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1. Laser radar data are collected with a three-dimensional terrestrial laser scanner and post-processed; the echo signals are then digitally processed with the GS iterative phase retrieval algorithm to generate a digital hologram of the three-dimensional object for three-dimensional lidar projection; the hologram is transmitted over a high-definition multimedia interface (HDMI) to a reflective spatial light modulator (SLM); and finally the three-dimensional intraocular holographic projection device displays the hologram. Using the intraocular holographic projection technique, the device achieves multi-focus display: the digital hologram of a three-dimensional object can be projected at different distances in front of the driver's line of sight, realizing a three-dimensional floating AR view. A HUD built on holographic projection also has a small package size and is more flexible and convenient. In addition, the lidar point cloud brings real scan data from public roads: the data can be integrated into the urban environment, and road obstacles outside the driver's line of sight can be identified and projected in real time, markedly enhancing the driver's environmental perception and providing a smarter, safer driving experience.
2. Unlike a conventional HUD that projects onto the windshield and generally displays two-dimensional images or digital symbols, the lidar digital holographic projection device achieves intraocular holographic projection of three-dimensional images, giving the driver the option of projecting multi-dimensional objects and markedly enhancing environmental perception. The holographic technique also overcomes the bulk and packaging difficulty of conventional optical systems, offering greater convenience and flexibility in practice. The invention thus solves the technical problems of the conventional HUD: fixed imaging position, limited information content, single display dimension and large package size.
3. The image displayed by a conventional HUD is typically focused close to the windshield, forcing the driver to focus the pupils at a short focal distance of only a few metres, which is distracting. Using intraocular holographic projection, the three-dimensional intraocular holographic projection device can display floating three-dimensional projections at multiple distances in front of the driver's line of sight, realizing an augmented reality (AR) effect. Compared with a conventional HUD, the AR HUD is greatly improved in navigation, risk avoidance, position updating and spontaneous interaction with physical objects.
4. Adding lidar data to the HUD raises the current safety and security level of the transportation sector. The lidar point cloud can identify road obstacles outside the driver's line of sight and project them in real time. Innovatively fusing the scanned and processed lidar data with the holographic HUD in AR mode warns the driver of road obstacles in real time and enhances the driver's obstacle-recognition capability.
Drawings
FIG. 1 is a schematic diagram of a prior art scanning laser HUD system.
Fig. 2 is a schematic diagram of a free-form three-mirror reflective HUD system of the prior art.
FIG. 3 is a schematic flow chart of a technical scheme of a laser radar holographic AR automobile head-up display method of the invention.
Fig. 4 is a schematic diagram illustrating a state in which a target object in a target environment is scanned by an in-vehicle lidar according to an embodiment of the present invention.
Fig. 5 is a theoretical schematic diagram of the GS iterative phase retrieval algorithm for generating digital holograms according to the present invention.
Fig. 6 is a theoretical schematic diagram of the GS iterative phase retrieval algorithm for realizing the digital holographic reconstruction of the three-dimensional object according to the present invention.
Fig. 7 is a schematic structural diagram of an intraocular holographic projection apparatus in an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 3, the present embodiment provides a three-dimensional intraocular holographic automobile head-up display method combining lidar data, including:
s1, as shown in fig. 4, scanning a target environment by using a RIEGL three-dimensional laser scanner (f is 1550mm, the divergence angle is 0.35mrad, and the measurement range is 600m) to obtain radar data, performing post-processing by using RiSCANPro software to generate point cloud data, wherein the point cloud is a sampling point set which characterizes the target spatial distribution and the target surface spectral characteristics under the same spatial reference system, is a model geometric description formed by a series of spatial sampling points on the surface of an object model, and is also a universal representation form of three-dimensional laser scanning data. On one hand, the point cloud data describes three-dimensional coordinate information and topology and statistical attribute information inverted from the coordinate information; on the other hand, the physical property characteristics of the surface of the object, such as material and roughness, can be reflected through the echo signal information or the spectral property. Sampling points in the point cloud data are classified by using an open source Python library, so that the classification of the space geometric characteristics of the scanned object and the physical attributes of the surface of the object is realized, the classified point cloud data is subjected to image processing to obtain a pixel intensity map, and a target plane is formed and comprises laser echo signal information of each position of the object.
If the composition of the sampled target environment is complex, with more than 10 types of object materials, several lasers of different wavelengths can be used; because different object materials have different reflectivities, the contrast between different lidar bands improves the precision of laser scanning and subsequent processing.
S2, generating and optimizing a digital hologram of the three-dimensional object through a GS iterative phase retrieval algorithm;
the Gerchberg-Saxton algorithm (GS for short) is the first optical wavefront phase retrieval iterative algorithm, proposed by Gerchberg and Saxton; it forms the template for iterative phase retrieval algorithms in optics, and its principle is shown in FIG. 5. The GS iterative phase retrieval algorithm typically requires intensity measurements at two locations: |u₀|² is the intensity of the target plane, and |U₀|² is the intensity of the diffraction plane. The complex-valued wavefronts of the target plane and the diffraction plane are related by a Fourier transform, where a complex-valued wavefront distribution includes both amplitude and phase information;
the GS iterative phase retrieval algorithm starts from a target plane, wherein the target plane comprises laser echo signal information of each position of an object, and the specific process is as follows:
(201) measuring the amplitude distribution u₀ of the echo signal in the laser echo signal information and combining it with a random phase distribution exp(iφ) to obtain the initial complex-valued wavefront distribution u₀exp(iφ) of the target plane;
(202) performing a Fourier transform on the initial complex-valued wavefront distribution of the target plane to obtain the complex-valued wavefront distribution Uexp(iΦ) on the diffraction plane; the amplitude distribution U on the diffraction plane is replaced by the measured amplitude distribution U₀, updating the complex-valued wavefront distribution on the diffraction plane to U₀exp(iΦ);
(203) performing an inverse Fourier transform on the updated complex-valued wavefront distribution U₀exp(iΦ) in the diffraction plane to obtain the complex-valued wavefront distribution uexp(iφ′) on the target plane;
(204) replacing the amplitude distribution u on the target plane by the measured amplitude distribution u₀ to obtain the updated complex-valued wavefront distribution u₀exp(iφ′) on the target plane, which serves as the initial complex-valued wavefront distribution of the target plane in the next iteration;
(205) repeating the iterative process of steps (201) to (204) to recover the complex-valued wavefront distributions in the target plane and the diffraction plane, which are stored and recorded in the form of a digital hologram, yielding the digital hologram of the three-dimensional object;
in this embodiment, a GS iterative phase retrieval algorithm is used to generate and optimize the digital hologram. The target plane (i.e., the acquired and post-processed lidar data) is represented as a pixel intensity map, and the GS iterative phase retrieval algorithm uses the square root of the target-plane pixel intensity map as the modulus and performs phase retrieval iteratively. The algorithm assigns a random phase to each pixel in the target plane and runs a fast Fourier transform, thereby generating the phase parameters of the reconstructed scene on the diffraction plane. This iterative process is repeated many times: in each iteration the algorithm assigns the retrieved phase to each pixel of the target plane and obtains progressively refined reconstruction phase parameters, terminating after 500 iterations.
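Steps (201)-(204) and the iteration loop above can be condensed into a short NumPy sketch. This is a minimal illustration of the GS scheme, not the patent's MATLAB implementation; in particular, the diffraction-plane amplitude U0 below is derived from the target itself purely for demonstration, and the iteration count is reduced from 500 to keep the example fast:

```python
import numpy as np

def gerchberg_saxton(target_amp, diffraction_amp, iterations=500, seed=0):
    """Minimal GS iterative phase retrieval following steps (201)-(204).
    target_amp is u0 (square root of the pixel intensity map),
    diffraction_amp is U0. Returns the retrieved diffraction-plane phase."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, target_amp.shape)   # (201) random phase
    field = target_amp * np.exp(1j * phi)               # (201) u0*exp(i*phi)
    for _ in range(iterations):
        F = np.fft.fft2(field)                          # (202) to diffraction plane
        F = diffraction_amp * np.exp(1j * np.angle(F))  # (202) replace amplitude with U0
        f = np.fft.ifft2(F)                             # (203) back to target plane
        field = target_amp * np.exp(1j * np.angle(f))   # (204) replace amplitude with u0
    return np.angle(np.fft.fft2(field))                 # (205) retrieved hologram phase

# Example on a random 64x64 intensity map standing in for the lidar-derived target plane.
intensity = np.random.default_rng(1).uniform(0.0, 1.0, (64, 64))
u0 = np.sqrt(intensity)                 # modulus = square root of pixel intensities
U0 = np.abs(np.fft.fft2(u0))            # assumed diffraction-plane amplitude
cgh_phase = gerchberg_saxton(u0, U0, iterations=50)
```

The returned phase map is what would be encoded as the computer-generated hologram and written to the spatial light modulator in step S3.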
The iterative phase retrieval algorithm can also use a series of intensity measurements and phase iterations to achieve a complete wavefront reconstruction of the three-dimensional object. As shown in FIG. 6, the three-dimensional object is partitioned into a series of two-dimensional target planes Z₁, Z₂, Z₃, Z₄; the distance between the target planes is arbitrary, and for a dense three-dimensional object the separation between the two-dimensional target planes can be made infinitesimally small. The propagation of light between two-dimensional target planes is calculated by the angular spectrum method, and after passing through the last two-dimensional target plane the light wave propagates to an intensity detector located at a detection plane. The intensity detector can be moved so that two or more intensity distributions are measured at different distances from the three-dimensional object (i.e., at different detection planes); FIG. 6 shows intensity measurements at two detection planes (H₁, H₂) at different distances from the three-dimensional object. The light wave is propagated back and forth between the detection planes, and the complete wavefront of the three-dimensional object can be recovered through the iterative phase retrieval algorithm to generate a digital hologram of the three-dimensional object.
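The angular spectrum propagation used between the two-dimensional target planes can be sketched as follows. Grid size, pixel pitch, and the plane-wave input are assumptions chosen for illustration, not parameters from the patent:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method.
    field: 2D complex array; wavelength, dx (pixel pitch), z in meters."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)            # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)            # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    H = np.exp(1j * kz * z)                          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A unit-amplitude plane wave should propagate unchanged apart from a phase shift.
field = np.ones((64, 64), dtype=complex)
out = angular_spectrum_propagate(field, 632.8e-9, 8e-6, 0.01)
```

Chaining calls of this function through the planes Z₁…Z₄ and on to the detection planes reproduces the forward model that the multi-plane phase retrieval iterates over.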
And S3, carrying out holographic projection display on the target object by using the intraocular holographic projection device.
To combine the lidar data with digital holography, in this embodiment the lidar data obtained through acquisition and post-processing is represented as a pixel intensity map serving as the target plane; the target plane is then processed digitally with the iterative phase retrieval optimization algorithm to generate a digital hologram, and finally the digital hologram of the three-dimensional object is displayed by three-dimensional holographic projection using the intraocular holographic projection device. The invention projects a virtual image directly to the human eye in an intraocular holographic projection mode; the eye's automatic focusing allows the image to be displayed at different distances within the driver's field of view, so the technology provides a personalized method for projecting a three-dimensional floating holographic projection at an ideal distance within the driver's field of view. Moreover, because the post-processed lidar data has different projection display capabilities at different distances, the reconstructed image has strong three-dimensionality and visual effect, which increases visual depth, reconstructs the augmented reality (AR) experience, and enables three-dimensional floating AR holographic projection.
Specifically, the post-processed lidar data is imported into MATLAB, and MATLAB code is run to obtain the computer-generated hologram (CGH) of the three-dimensional object; the computer transmits the digital hologram of the three-dimensional object to the reflection-type spatial light modulator through a high-definition multimedia interface (HDMI), and finally the digital hologram is displayed by holographic projection using the three-dimensional intraocular holographic projection device. Fig. 7 shows the intraocular holographic projection apparatus used in an embodiment of the present invention, which consists, in order, of a laser, a lens group, a polarizer, a half-wave plate, a non-polarizing beam splitter, and a reflection-type spatial light modulator; the positional relationship between the optical elements is as shown in fig. 7.
First, the laser beam emitted by an unpolarized HeNe laser (λ = 632.8 nm, 5 mW) passes through a lens group consisting of two lenses: an aspheric lens L1 (f = 3.30 mm) and an aspheric lens L2 (f = 100 mm), which respectively expand and collimate the laser beam. The expanded and collimated beam then passes through the polarizer and the half-wave plate in sequence, which weaken the zero-order light. A non-polarizing beam splitter is then used to separate the input and output beams. The object to be projected is processed digitally using laser radar scanning and the iterative phase retrieval algorithm to generate a digital hologram, and the hologram is created on the reflection-type spatial light modulator through the high-definition multimedia interface (HDMI). In addition, a concave lens L3 is used at the output of the non-polarizing beam splitter to increase the focus range and field of view of the reflection-type spatial light modulator. When the hologram on the reflection-type spatial light modulator is focused at infinity, it is guided into the human eye through the concave lens; the eye's automatic focusing can display images at different distances within the driver's field of view, so the three-dimensional floating AR holographic projection can be projected at an ideal distance within the driver's field of view.
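For reference, the beam-expansion ratio implied by the two-lens group is f2/f1 ≈ 30×, a standard Keplerian-telescope relation; the input beam diameter below is an assumed typical HeNe value, not a figure stated in the patent:

```python
# Beam expander formed by aspheric lenses L1 and L2 in Fig. 7.
f1_mm = 3.30     # focal length of L1, from the embodiment
f2_mm = 100.0    # focal length of L2, from the embodiment
magnification = f2_mm / f1_mm             # telescope expansion ratio, ~30.3x
input_diameter_mm = 0.8                   # assumed raw HeNe beam diameter
expanded_diameter_mm = input_diameter_mm * magnification
```

A larger collimated beam at the spatial light modulator fills more of its aperture, which is why the long-focal-length L2 is paired with the very short-focal-length L1.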
Finally, it should be pointed out that: the above examples are merely illustrative of the computational process of the present invention and are not limiting thereof. Although the present invention has been described in detail with reference to the foregoing examples, it should be understood by those skilled in the art that the calculation processes described in the foregoing examples can be modified or equivalent substitutions for some of the parameters may be made without departing from the spirit and scope of the calculation method of the present invention.
The present invention is not limited to the above-described embodiments. The foregoing description of the specific embodiments is intended to describe and illustrate the technical solutions of the present invention, and the above specific embodiments are merely illustrative and not restrictive. Those skilled in the art can make many changes and modifications to the invention without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (8)
1. A three-dimensional intraocular holographic automobile head-up display method combined with laser radar data is characterized by comprising the following steps:
s1, scanning a target environment with a laser scanner to obtain radar data, and performing post-processing with RiSCAN Pro software to generate point cloud data, wherein the point cloud data consists of a plurality of spatial sampling points on the surface of an object and includes echo signal information for each spatial sampling point, from which the surface attributes of the object can be inferred; an open source Python library is used to classify each spatial sampling point in the point cloud data, thereby classifying the surface attributes of the scanned object; the classified point cloud data is processed into a pixel intensity map, forming a target plane that contains the laser echo signal information of each position of the object;
s2, generating and optimizing a digital hologram of the three-dimensional object through a GS iterative phase retrieval algorithm; the method comprises the following specific steps:
(201) combining the amplitude distribution of the echo signals in the measured laser echo signal information with random phase distribution through a GS iterative phase retrieval algorithm to obtain initial complex value wave front distribution of a target plane;
(202) carrying out Fourier transform on the initial complex wave front distribution of the target plane to obtain complex wave front distribution on a diffraction plane, replacing amplitude distribution on the diffraction plane with measured amplitude distribution, and further updating the complex wave front distribution on the diffraction plane;
(203) performing inverse Fourier transform on the complex wave front distribution in the updated diffraction plane to obtain the complex wave front distribution on the target plane;
(204) replacing the amplitude distribution on the target plane with the measured amplitude distribution to obtain the updated complex-value wave-front distribution on the target plane, namely the initial complex-value wave-front distribution on the target plane in the next iteration;
(205) repeating the iterative process from the step (201) to the step (204), recovering complex wave front distribution in the target plane and the diffraction plane, and storing and recording the complex wave front distribution in a digital hologram form to obtain a digital hologram of the three-dimensional object;
in the above step, the complex wavefront distribution includes an amplitude distribution and a phase distribution;
and S3, carrying out holographic projection display on the target object by using the intraocular holographic projection device.
2. The three-dimensional intraocular holographic automobile head-up display method combined with lidar data of claim 1, wherein in step S1, each point in the point cloud data set comprises the three-dimensional coordinates, color, reflection intensity and echo frequency information of the object.
3. The three-dimensional intraocular holographic automobile head-up display method combined with lidar data of claim 1, wherein in step S1, if the sampled target environment is composed of more than 10 kinds of object materials, several lasers with different wavelengths are used, and the contrast between different laser bands is used to improve the precision of laser scanning and post-processing.
4. The three-dimensional intraocular holographic automobile head-up display method combined with lidar data of claim 1, wherein in step S2, the GS iterative phase retrieval algorithm assigns a random phase to each pixel in the target plane and runs a fast Fourier transform, thereby generating the phase parameters of the reconstructed scene on the diffraction plane; the iterative process is repeated a plurality of times, the algorithm assigning the retrieved phase to each pixel of the target plane in each iteration and obtaining progressively refined reconstruction phase parameters, with the iteration ending after 500 times.
5. The method of claim 1, wherein the random phase distribution is provided by a GS iterative phase retrieval algorithm.
6. The method of claim 1, wherein the intraocular holographic projection device comprises a laser, a lens group, a polarizer, a half-wave plate, a non-polarizing beam splitter and a reflection-type spatial light modulator; the laser beam emitted by the laser passes through the lens group, which expands and collimates it, and the expanded and collimated beam passes through the polarizer and the half-wave plate in sequence to weaken the zero-order light; the input and output beams are then separated by the non-polarizing beam splitter; the reflection-type spatial light modulator is connected to a computer through an HDMI (high-definition multimedia interface), and the computer transmits the digital hologram of the three-dimensional object to the reflection-type spatial light modulator through the HDMI; a concave lens is arranged at the output end of the non-polarizing beam splitter to enlarge the focusing range and the field of view; the concave lens outputs the image to the human eye, whose automatic focusing displays the image at different distances within the driver's field of view, achieving digital holographic projection of a three-dimensional object within the driver's field of view.
7. The method as claimed in claim 6, wherein the lens group comprises an aspherical lens L1 and an aspherical lens L2 arranged in sequence, the focal length of the aspherical lens L1 being f₁ = 3.30 mm and the focal length of the aspherical lens L2 being f₂ = 100 mm.
8. Application of an intraocular holographic projection device, characterized in that multifocal display is realized by means of the intraocular holographic projection technology: digital holograms of three-dimensional objects can be projected and displayed at different distances in front of the driver's line of sight, realizing a three-dimensional floating AR view.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210608286.6A | 2022-05-31 | 2022-05-31 | Three-dimensional intraocular holographic automobile head-up display method combining laser radar data |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114863074A | 2022-08-05 |
Family ID: 82640633
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |