CN113406874B - System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator - Google Patents

System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator

Info

Publication number
CN113406874B
CN113406874B (application CN202110678904.XA)
Authority
CN
China
Prior art keywords
color
point cloud
light modulator
spatial light
naked eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110678904.XA
Other languages
Chinese (zh)
Other versions
CN113406874A (en)
Inventor
国中元
缪佳奇
黄隆钤
戴子博
王彦哲
陈星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110678904.XA priority Critical patent/CN113406874B/en
Publication of CN113406874A publication Critical patent/CN113406874A/en
Application granted granted Critical
Publication of CN113406874B publication Critical patent/CN113406874B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/22 Processes or apparatus for obtaining an optical image from holograms
    • G03H 1/2202 Reconstruction geometries or arrangements
    • G03H 1/2205 Reconstruction geometries or arrangements using downstream optical component
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/0005 Adaptation of holography to specific applications
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/04 Processes or apparatus for producing holograms
    • G03H 1/08 Synthesising holograms, i.e. holograms synthesized from objects or objects from holograms
    • G03H 1/0866 Digital holographic imaging, i.e. synthesizing holobjects from holograms
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/04 Processes or apparatus for producing holograms
    • G03H 1/08 Synthesising holograms, i.e. holograms synthesized from objects or objects from holograms
    • G03H 1/0891 Processes or apparatus adapted to convert digital holographic data into a hologram
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/0005 Adaptation of holography to specific applications
    • G03H 2001/0088 Adaptation of holography to specific applications for video-holography, i.e. integrating hologram acquisition, transmission and display

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Holography (AREA)

Abstract

The invention discloses a system and a method for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator. A 3D color point cloud model is built and divided into RGB three-channel submodels; the submodels are de-occluded and down-sampled, their coordinate scales are scaled, and their distribution and spacing are controlled; the diffraction integrals from each point to the SLM plane are calculated and superposed to obtain a kinoform. The brightness of the three collimated lasers is adjusted to the lowest suitable level; the three beams of different colors pass through polarizers into the wedge-shaped beam splitters, are converged and combined by the beam splitters, pass through a further polarizer onto the spatial light modulator loaded with the kinoform, and, after reflection, pass through the lens group into the eye for naked-eye 3D display. The invention realizes a wide-viewing-angle, high-definition naked-eye 3D display system for a color three-dimensional point cloud using a single spatial light modulator, solves the problems of extraneous-image aliasing and poor naked-eye viewing in single-SLM color imaging, and provides an effective route to the miniaturization of color three-dimensional naked-eye display equipment.

Description

System and method for realizing color three-dimensional point cloud naked eye display by single spatial light modulator
Technical Field
The invention relates to a 3D display system and a method in the technical field of holographic display, in particular to a system and a method for realizing color three-dimensional point cloud naked eye 3D display by using a single spatial light modulator.
Background
Color naked-eye 3D display presents stereoscopic visual information that conveys object depth. Holographic display can completely record and reconstruct the wavefront of a three-dimensional object, provide all the depth cues required by the human visual system, and reproduce scenes of the objective world more faithfully; color, large-field-angle naked-eye 3D holographic display is therefore one of the current research hotspots. Color naked-eye 3D display technologies fall into two general categories. The first uses three SLMs: red, green and blue light each illuminate one SLM to reconstruct the 3D image field on the reconstruction plane; this scheme requires a complex optical path design to ensure that the reconstructed images of the three RGB channels are accurately combined, the system cost is high, and the overall volume increases greatly. The second reconstructs the 3D image field with a single SLM. Current color display methods based on a single spatial light modulator mainly comprise the time-division multiplexing method, the space-division method and the spatial superposition method.
The time-division multiplexing method: the single SLM is illuminated by red, green and blue light in turn, and the kinoforms of the RGB (red, green, blue) channels are displayed on the SLM in the same periodic sequence, realizing color projection. This method places high demands on system synchronism and requires a spatial light modulator with a high frame rate; once the rate is high enough, the human eye perceives a time-synthesized color image through the integration effect. In principle, each color component loses energy along the time axis, so its imaging quality suffers to some extent. The working time of the RGB light sources and the loading of the corresponding RGB component holograms must be controlled precisely, which places high demands on the response speed of the hologram-loading hardware.
The space-division method: the single SLM plane is divided into three regions loaded with the RGB three-channel kinoforms respectively, and the three color beams illuminate the three regions respectively. A beam-shaping system is needed to match the illumination wavefront to the divided regions, which increases complexity, and the pixel utilization of the single modulator is low.
The spatial superposition method: the RGB three-channel images are encoded in the same kinoform, so the pixel utilization is high, no time-sequencing system is needed, and the single-SLM color projection system is simple in structure. However, when one of the three colors illuminates the SLM alone, it reproduces not only the image of its own color channel but also the images of the other color channels; these images overlap on the image plane and cause serious noise and extraneous-image problems, so pinhole filtering must be applied to remove the extraneous signals without blocking the effective signal. The spatial superposition method has a simpler optical path and system than the other methods and offers great potential for miniaturizing color stereoscopic naked-eye display equipment. At present there is little research on reducing image-plane noise and eliminating extraneous images in the spatial superposition method, so its advantages have not yet been fully exploited.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a single-spatial-light-modulator naked-eye display system and method for a color three-dimensional point cloud that has low adjustment complexity and can be observed directly by the naked eye, and that overcomes the small viewing angle, large computation load and slow speed of kinoform generation.
The technical scheme adopted by the invention is as follows:
First, a naked-eye display system for a color three-dimensional point cloud using a single spatial light modulator, i.e. a single-SLM naked-eye 3D display system for a color three-dimensional point cloud:
The system comprises a spatial light modulator, collimated lasers, polarizers, wedge-shaped beam splitters and a lens group. The three collimated lasers emit beams of different colors; each beam passes through its own polarizer and enters the wedge-shaped beam splitters, which converge and combine the beams; the combined beam passes through a further polarizer onto the spatial light modulator, is reflected by it, and then passes through the lens group into the human eye for imaging.
The three collimated lasers are a green collimated laser, a red collimated laser and a blue collimated laser respectively; in a specific implementation, they are a collimated 520 nm (green) solid-state laser, a collimated 635 nm (red) semiconductor laser and a collimated 450 nm (blue) semiconductor laser.
The green beam emitted by the green collimated laser passes through the first polarizer and is transmitted by the first wedge-shaped beam splitter; the red beam emitted by the red collimated laser passes through the second polarizer and is reflected by the first wedge-shaped beam splitter; the blue beam emitted by the blue collimated laser passes through the third polarizer and is reflected by the second wedge-shaped beam splitter. The green beam transmitted by the first wedge-shaped beam splitter and the red beam reflected by it both pass through the second wedge-shaped beam splitter by transmission and, together with the blue beam reflected by the second wedge-shaped beam splitter, pass through the fourth polarizer onto the spatial light modulator.
The spatial light modulator is of a reflective type, and the modulation mode is phase modulation.
The lens group is formed by coaxially arranging a plurality of lenses.
Secondly, a method for realizing naked eye 3D display of color three-dimensional point cloud by using a single spatial light modulator comprises the following steps:
1) the first step is as follows:
A 3D color point cloud model of the object to be imaged is built and divided into three submodels, one for each of the RGB channels. The three submodels are de-occluded and down-sampled and their coordinate scales are scaled; as shown in fig. 1, the submodel of the red channel R and the submodel of the blue channel B are placed on either side of the submodel of the green channel G, the three submodels being offset from one another by a fixed interval along the horizontal direction; the three RGB submodels of the 3D color point cloud model are kept at this fixed interval so that they are completely separated from one another.
As shown in fig. 1, each of the three submodels emits light along the horizontal plane towards the SLM plane, which represents the spatial light modulator; the diffraction integrals from each point of the 3D color point cloud model to the pixels of the SLM plane are calculated, and the kinoform is obtained from these diffraction integrals.
the SLM plane is a liquid crystal plane of the spatial light modulator.
In a specific implementation, three submodels are used to produce a point sequence with coordinate format XYZRGB.
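By way of illustration only, a minimal Python sketch of this split into channel submodels is given below; the file name, column layout and the rule of keeping only points with a non-zero channel weight are assumptions, not details taken from the patent.

```python
import numpy as np

# Illustrative only: each row is x, y, z, r, g, b (coordinates in metres, colour weights in [0, 1]);
# the file name and column layout are assumptions.
points = np.loadtxt("lotus_xyzrgb.txt")

def channel_submodel(points, channel):
    """Return the xyz coordinates and colour weights of the points that light up one channel."""
    col = {"R": 3, "G": 4, "B": 5}[channel]
    mask = points[:, col] > 0          # keep only points with a non-zero weight in this channel
    return points[mask, :3].copy(), points[mask, col].copy()

xyz_r, w_r = channel_submodel(points, "R")
xyz_g, w_g = channel_submodel(points, "G")
xyz_b, w_b = channel_submodel(points, "B")
```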
2) The second step is that:
and arranging light paths according to the holographic three-dimensional display device, adjusting polaroids at the front ends of the three parallel light pipe lasers to reduce the brightness of light beams passing through the polaroids to be the lowest and proper, loading a kinoform on the spatial light modulator, adjusting the angle of a beam splitter to superpose target RGB channel components, and observing a color 3D image with a certain depth by naked eyes through a lens group.
The target RGB channel components are defined as follows: in single-SLM spatial-superposition color imaging, diffraction from the spatial light modulator produces 9 images; 3 of them are of equal size, are respectively red, green and blue, are sharp at the same position, and yield a complete color image when superposed. These 3 images are referred to as the target RGB channel components.
In step 1), occlusion removal simulates the opaque objects observed in reality: only the front side of the 3D color point cloud model that the human eye sees from the observation position in fig. 3 is kept, and the occluded back point cloud is removed. Specifically, the XY plane of the submodel, i.e. the plane perpendicular to the optical axis along which the light is emitted, is rasterized; within each grid cell only the point closest to the SLM plane is retained and the remaining points are removed as redundant, thereby removing the redundant points in an occlusion relation.
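A minimal sketch of this grid-based occlusion removal, assuming a numpy array of xyz coordinates and an arbitrary cell size (the patent specifies only that the point closest to the SLM plane is kept in each cell):

```python
import numpy as np

def remove_occluded(xyz, cell=0.1e-3):
    """Rasterise the XY plane into square cells of side `cell` (metres) and keep, in each cell,
    only the point closest to the SLM plane; this also down-samples to one point per cell."""
    ix = np.floor(xyz[:, 0] / cell).astype(np.int64)
    iy = np.floor(xyz[:, 1] / cell).astype(np.int64)
    keep = {}
    for i, (cx, cy) in enumerate(zip(ix, iy)):
        key = (cx, cy)
        # smaller z is taken as nearer the SLM plane / observer (sign convention assumed)
        if key not in keep or xyz[i, 2] < xyz[keep[key], 2]:
            keep[key] = i
    return xyz[sorted(keep.values())]
```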
In the step 1), the point cloud density is reduced by down-sampling, so that the subsequent calculation is faster.
In step 1), scaling the coordinate scales of the three submodels means scaling the point clouds; the three submodels, which are of equal size, are processed as follows:
a minimum bounding box, which is a cube, is established around the submodel, and the side length Ltarget of this cube is scaled to less than one third of the image plane size L, where the image plane size L is calculated as:
L=λz/pix
where λ represents the wavelength of the blue light beam, z represents the distance of the submodel for the green channel G along the optical axis to the SLM plane, and pix represents the single pixel side length in the spatial light modulator.
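A short sketch of this scaling rule; the bounding-box handling and centering are assumed implementation details, while the image-plane formula L = λz/pix is the one given above:

```python
import numpy as np

def scale_submodel(xyz, wavelength_blue, z, pix, target_size):
    """Scale a submodel so that its cubic bounding box of side Ltarget = target_size
    stays below one third of the image-plane size L = lambda * z / pix."""
    L = wavelength_blue * z / pix                       # image-plane size
    assert target_size < L / 3, "Ltarget must be below one third of the image-plane size"
    side = (xyz.max(axis=0) - xyz.min(axis=0)).max()    # current bounding-cube side length
    return (xyz - xyz.mean(axis=0)) * (target_size / side)
```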
The three submodels are offset from one another by a fixed interval along the horizontal direction; specifically, the angle between the lines joining the submodel of the red channel R and the submodel of the green channel G to the center of the SLM plane, and likewise the angle between the lines joining the submodel of the blue channel B and the submodel of the green channel G to the center of the SLM plane, are both arcsin(dx/z), so that the target RGB channel components superpose accurately at reconstruction and extraneous images do not alias; pinhole filtering is therefore not needed at reconstruction.
The line joining a submodel to the center of the SLM plane is, specifically, the line from the mean of all point coordinates in that submodel's point cloud to the center of the SLM plane.
Thus, the three channel components of RGB are separated and kept at a certain interval dx in the imaging plane, which is the plane perpendicular to the optical axis where the three submodels are located.
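The horizontal offset and the corresponding illumination angles arcsin(dx/z) can be sketched as follows; which side carries the red and which the blue submodel is an assumption:

```python
import numpy as np

def offset_rgb(xyz_r, xyz_g, xyz_b, dx):
    """Place the R and B submodels on either side of the G submodel along the horizontal x axis."""
    return xyz_r + np.array([dx, 0.0, 0.0]), xyz_g, xyz_b - np.array([dx, 0.0, 0.0])

def illumination_angle(dx, z):
    """Angle arcsin(dx / z), in radians, between the red (or blue) beam and the green beam
    needed so that the target RGB components overlap at reconstruction."""
    return np.arcsin(dx / z)

# With the embodiment's values dx = 7.5 mm and z = 400 mm the angle is about 0.0188 rad (~1.07 deg).
print(illumination_angle(7.5e-3, 0.4))
```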
The diffraction integrals from the 3D color point cloud model to every pixel of the SLM plane are calculated, and the kinoform is obtained from them as follows: the light of every point of the three submodels is diffracted to the same pixel of the SLM plane and the resulting complex amplitudes are added (the diffraction integral is this complex amplitude); the phase of the sum at each pixel of the SLM plane is then extracted to form the kinoform.
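A minimal point-source sketch of this kinoform calculation; the SLM resolution, the 1/r amplitude weighting and the per-point loop are assumptions, while the per-pixel summation of complex amplitudes over all three submodels and the final phase extraction follow the description above:

```python
import numpy as np

def kinoform(channels, pix, nx=1920, ny=1080):
    """channels: list of (points, weights, wavelength) for the R, G and B submodels.
    The spherical wave of every point is added into the same complex field on the SLM plane,
    and only the phase of the sum is kept (the kinoform)."""
    x = (np.arange(nx) - nx / 2) * pix
    y = (np.arange(ny) - ny / 2) * pix
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=np.complex128)
    for points, weights, wavelength in channels:
        k = 2.0 * np.pi / wavelength
        for (px, py, pz), a in zip(points, weights):
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)   # point-to-pixel distance
            field += a * np.exp(1j * k * r) / r                    # spherical-wave contribution
    return np.angle(field)                                          # phase-only hologram
```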
Since the subsequent imaging is produced by modulating this single kinoform on the same SLM plane, the naked-eye viewing angle is the same for all channels and fewer elements are needed, which reduces the adjustment difficulty. GPU acceleration programmed with CUDA increases the computation speed by about a factor of 10, averaging roughly 4 ms per point.
In step 2), the polarizers in front of the three collimated lasers are adjusted so that the brightness of the beams transmitted through them is at its lowest. Specifically:
the polarizers in front of the three collimated lasers are first used to reduce the brightness of the transmitted beams to the minimum, so as to protect the eyes; the polarizers are then fine-tuned so that the color of the beam leaving the second wedge-shaped beam splitter matches the color of the original object to be imaged, i.e. the RGB color balance is correct.
The lens group projects the constructed object and hologram to a spatial position set when calculating the kinoform.
As shown in fig. 3, a virtual lens image and a real holographic image are formed between the spatial light modulator and the lens group, the virtual lens image is close to the spatial light modulator, and the real holographic image is close to the lens group.
The human eye looks through the lens group and observes the lens virtual image; the size of the lens virtual image observed by the eye is
Sv = (v / u) · Sr
where Sv denotes the size of the lens virtual image, Sr the size of the holographic real image, v the optical path from the lens virtual image to the center of the lens group, and u the optical path from the holographic real image to the center of the lens group. The lens group thus magnifies the holographic real image and increases the viewing angle.
A flat plate is placed at the imaging position of the holographic real image, and the image is observed by the human eye on the same side as the spatial light modulator; the holographic reconstruction contains a real-image part and a virtual-image part, and the beam leaving the spatial light modulator strikes the flat plate, where the image is formed by diffuse reflection.
The invention uses occlusion removal and down-sampling to discard the parts of the model that are hidden at the back when viewed from the observation position; this matches the opaque objects observed in reality, makes the imaging more realistic and reduces the amount of computation. It solves the problems of the spatial superposition algorithm for single-SLM color imaging, which needs a pinhole, is complex to adjust, produces circular diffraction spots in the image and gives a poor naked-eye viewing effect. The algorithm adopted by the invention directly scales and shifts the point cloud coordinates of the RGB submodels, which guarantees that the 3 images of the same color produced by a monochromatic source through SLM diffraction do not overlap, and, through adjustment of the light source angles, that the target RGB channel components superpose at the observation position with the same naked-eye viewing angle for all RGB channel components. The algorithm reduces the number of elements used, greatly reduces the difficulty of optical path adjustment and improves the naked-eye viewing effect. It is a potential solution for wide-viewing-angle holographic head-mounted reconstruction.
The system projects a three-dimensional object, and a kinoform is loaded to a spatial light modulator to diffract a three-dimensional real image of the three-dimensional object; the short-focus large-aperture lens group is used for concentrating the diffracted waves in an observation distance area, the lens group converts the holographic real image into a lens virtual image for observation, and the visible visual angle of the three-dimensional image is increased.
According to the invention, a color point cloud kinoform is produced by modeling, the point cloud is de-occluded and down-sampled, and a new spatial superposition method for the diffraction integrals of the color model is proposed, so that the superposed RGB imaging does not depend on pinhole filtering; this greatly reduces the adjustment complexity of single-SLM color imaging, the kinoform calculation is accelerated with a GPU, and the color three-dimensional point cloud model can be observed by the naked eye.
The principle by which the invention realizes dynamic holographic three-dimensional display is as follows: a color point cloud model is produced; the point cloud is de-occluded and down-sampled; the kinoform is obtained by GPU-accelerated numerical evaluation of the diffraction integrals; the new spatial superposition method allows the color three-dimensional point cloud model to be observed conveniently by the naked eye; the point cloud is then rotated repeatedly by a fixed angular step and a kinoform is produced for each pose, giving a kinoform sequence; switching this sequence rapidly on the SLM yields dynamic display.
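A sketch of this kinoform-sequence idea, reusing the kinoform() sketch given earlier; the rotation axis, angular step and frame count are arbitrary assumptions:

```python
import numpy as np

def rotate_y(xyz, angle):
    """Rotate a point cloud about the vertical y axis by `angle` radians (axis choice assumed)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[ c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return xyz @ R.T

def kinoform_sequence(channels, pix, step_deg=5.0, frames=72):
    """One kinoform per rotated pose of the point cloud; switching the frames rapidly on the
    SLM gives the dynamic display (step and frame count are example values only)."""
    seq = []
    for i in range(frames):
        a = np.deg2rad(i * step_deg)
        rotated = [(rotate_y(pts, a), w, lam) for pts, w, lam in channels]
        seq.append(kinoform(rotated, pix))   # kinoform() as defined in the earlier sketch
    return seq
```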
Due to the application of the technical scheme, compared with the prior art, the invention has the following advantages:
1) In most prior art, stereoscopic imaging is approximated by two surfaces at different distances, one sharp and one blurred; the technical scheme of the invention uses color point cloud data as the three-dimensional model, in which every point has its own depth, so the model can be regarded as a set of points on many planes at different distances, which is finer and closer to a real scene;
2) the prior art scatters a high-brightness real image off a rough surface such as a light screen or ground glass so that the eye can observe it; normal viewing brightness then requires high laser power, most of the light energy is scattered and wasted, and the imaging environment must be kept free of stray light. The invention provides naked-eye observation with high light-energy utilization and finer imaging; a low-power laser is sufficient, and ambient stray light has little influence on naked-eye observation;
3) the invention innovatively colors a monochromatic point cloud model to produce a color point cloud directly, integrates existing techniques such as occlusion removal and down-sampling, and shortens the diffraction integral computation time by a factor of 9.1 using CUDA-programmed GPU (graphics processing unit) acceleration;
4) in existing spatial-light-modulator imaging methods, kinoforms made by the superposition method commonly use a double-phase grating, a spherical-phase grating and a blazed grating so that the spherical convergence points of the three images formed by a monochromatic source have different offsets, and a pinhole is then used in the optical path to filter out the extraneous images, keeping the three target RGB components among the nine images of different colors; the pinhole easily filters incompletely and leaves extraneous-image noise on the image plane. The invention pre-processes the point cloud coordinates by scaling and shifting, which guarantees that the three images of the same color produced by a monochromatic source do not overlap; the program code is therefore simpler, fewer components are needed, and the optical path is easier to adjust;
5) in existing spatial-light-modulator imaging methods, kinoforms made by the superposition method use double-phase and spherical-phase encoding, and circular diffraction spots are visible to the naked eye, seriously degrading the imaging; the invention gives high imaging quality without these defects.
Drawings
FIG. 1 is a schematic diagram of a 3D color point cloud model according to the present invention;
FIG. 2 is a schematic view of an optical path system apparatus of the present invention;
FIG. 3 is a schematic diagram of spatial position distribution of a holographic real image, a lens virtual image and a human eye position when a 3D holographic image is observed by naked eyes;
FIG. 4 shows a lotus point cloud model;
FIG. 5 is the color lotus point cloud, in which the part of the back that is occluded at the current viewing angle has been removed and each point is given a color according to its distance from the viewpoint;
FIG. 6 is an image of a color lotus observed on a light screen;
FIG. 7 is a color lotus diagram observed by naked eyes through a lens set;
FIG. 8 is an example of imaging with a single SLM color imaging using a conventional spatial stacking algorithm with extraneous image aliasing;
FIG. 9 is an example of a single SLM color imaging using a conventional spatial superposition algorithm with annular diffraction spots;
in the figure: the device comprises a spatial light modulator (10), parallel light pipe lasers (1, 2 and 3), polarizing plates (4, 5, 6 and 9), wedge-shaped beam splitters (7 and 8) and a lens group (11).
Detailed Description
The invention is further described with reference to the accompanying drawings and the detailed description.
As shown in fig. 2, the optical path comprises the spatial light modulator 10, the collimated lasers 1, 2, 3, the polarizers 4, 5, 6, 9, the wedge-shaped beam splitters 7, 8 and the lens group 11; the three collimated lasers 1, 2 and 3 emit beams of different colors, which pass through the polarizers 4, 5 and 6 and enter the wedge-shaped beam splitters 7 and 8 respectively, are converged and combined by the wedge-shaped beam splitters 7 and 8, pass through the polarizer 9 onto the spatial light modulator 10, are reflected by the spatial light modulator 10, and then pass through the lens group 11 into the human eye for imaging.
The three collimated lasers 1, 2 and 3 are collimated lasers of the three colors R, G and B, namely a green collimated laser 1, a red collimated laser 2 and a blue collimated laser 3. The green beam emitted by the green collimated laser 1 passes through the first polarizer 4 and is transmitted by the first wedge-shaped beam splitter 7; the red beam emitted by the red collimated laser 2 passes through the second polarizer 5 and is reflected by the first wedge-shaped beam splitter 7; the blue beam emitted by the blue collimated laser 3 passes through the third polarizer 6 and is reflected by the second wedge-shaped beam splitter 8. The green beam transmitted by the first wedge-shaped beam splitter 7 and the red beam reflected by it are both transmitted by the second wedge-shaped beam splitter 8 and, together with the blue beam reflected by the second wedge-shaped beam splitter 8, pass through the fourth polarizer 9 onto the spatial light modulator 10.
In a specific implementation, the green collimated laser 1, the first polarizer 4, the first wedge-shaped beam splitter 7, the second wedge-shaped beam splitter 8, the fourth polarizer 9 and the spatial light modulator 10 are all arranged along the same straight main optical axis.
The optical axes of the red collimated laser 2 and the second polarizer 5 are set at a deflection angle to the main optical axis, as are the optical axes of the blue collimated laser 3 and the third polarizer 6, so that the three collimated beams, green, red and blue, converge at the spatial light modulator 10.
The position where the green beam is transmitted to the first wedge-shaped beam splitter 7 and the position where the red beam is reflected to the first wedge-shaped beam splitter 7 are not overlapped on the first wedge-shaped beam splitter 7, the position where the green beam is transmitted to the second wedge-shaped beam splitter 8, the position where the red beam is transmitted to the second wedge-shaped beam splitter 8 and the position where the blue beam is reflected to the second wedge-shaped beam splitter 8 are not overlapped on the second wedge-shaped beam splitter 8.
The spatial light modulator 10 is of the reflective type, and the modulation mode is phase modulation.
The lens group 11 is a short-focus, large-aperture lens group formed by several coaxially arranged lenses, which increases the viewing angle and enhances the display effect. Short focus here means a focal length within 10 mm; large aperture means a lens diameter greater than 30 mm.
The three collimated lasers 1, 2 and 3 are all integrated semiconductor lasers and serve as the light sources.
The embodiment of the invention and the implementation process thereof are as follows:
1) the first step is as follows:
A 3D color point cloud model of the object to be imaged is built by modeling. As shown in fig. 4, the original point cloud is a lotus flower, obtained from a 360-degree scan of the lotus model, with xyz coordinates for 81000 points.
1.1) dividing a 3D color point cloud model of lotus into three submodels of RGB three channels;
in the aspect of point cloud coloring, an additional three coordinates RGB are adopted to represent the color component of each point. The proportion of white lotus flowers, namely all points RGB is 1: 1: 1. or each point may be assigned a different color depending on the distance to the viewpoint.
1.2) The three submodels are then de-occluded and down-sampled in turn, taking the direction perpendicular to the XY plane as the line of sight to simulate the occlusion relations of a real scene.
Occlusion removal simulates the opaque objects observed in reality: only the front side of the 3D color point cloud model seen by the human eye at the "observation position" in fig. 3 is kept, and the occluded back point cloud is removed. Specifically, the XY plane of the submodel, i.e. the plane perpendicular to the optical axis along which the light is emitted, is rasterized; in each grid cell only the point closest to the SLM plane, i.e. closest to the observer, is retained, and the remaining points are removed as redundant occluded points. The cell size is user-defined, and keeping only one point per cell also achieves the down-sampling. The result is shown in fig. 5.
And the down-sampling is to reduce the density of the point cloud, so that the subsequent calculation is faster.
1.3) The coordinate scales of the three submodels are then scaled, i.e. the point clouds are scaled; the three submodels, which are of equal size, are processed as follows: a minimum bounding box, which is a cube, is established around the submodel, and the side length Ltarget of this cube is scaled to less than one third of the image plane size L, where the image plane size L is calculated as:
L=λz/pix
where λ represents the wavelength of the blue light beam, z represents the distance of the submodel for the green channel G along the optical axis to the SLM plane, and pix represents the single pixel side length in the spatial light modulator.
1.4) as shown in FIG. 1, controlling the submodel of the red channel R and the submodel of the blue channel B to be respectively positioned at two sides of the submodel of the green channel G, and staggering the three submodels at fixed intervals along the horizontal direction; the three submodels of the RGB channels of the 3D color point cloud model are kept at fixed intervals so that they are completely spaced apart from each other.
The three submodels are offset from one another by a fixed interval along the horizontal direction; specifically, the angle between the lines joining the submodel of the red channel R and the submodel of the green channel G to the center of the SLM plane, and likewise the angle for the submodel of the blue channel B and the submodel of the green channel G, are both arcsin(dx/z), so that the target RGB components superpose accurately at reconstruction and extraneous images do not alias; pinhole filtering is therefore not needed at reconstruction.
With an imaging distance z = 400 mm, the calculated L is 22.5 mm, so the target image size should be smaller than 22.5/3 = 7.5 mm; the target size Ltarget is set to 5 mm, and the RGB point clouds keep an interval dx = 7.5 mm between their positions in space.
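These values can be checked quickly; the 8 µm pixel pitch used below is an assumption inferred from L = λz/pix = 22.5 mm with λ = 450 nm and z = 400 mm, since the patent does not state the pitch explicitly:

```python
wavelength = 450e-9   # blue wavelength, m
z = 0.4               # imaging distance, m
pix = 8e-6            # SLM pixel pitch, m (assumed; it reproduces L = 22.5 mm but is not stated)
L = wavelength * z / pix
print(L * 1e3)        # 22.5 -> image-plane size in mm
print(L * 1e3 / 3)    # 7.5  -> upper bound in mm; Ltarget = 5 mm and dx = 7.5 mm satisfy the layout
```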
The line joining a submodel to the center of the SLM plane is, specifically, the line from the mean of all point coordinates in that submodel's point cloud to the center of the SLM plane.
This separates the three channel components RGB and keeps a certain separation dx in the imaging plane, which is the plane perpendicular to the optical axis where the three submodels lie.
In a specific implementation, the three submodels are combined into a point sequence in XYZRGB coordinate format; Table 1 shows a segment of such a sequence.
TABLE 1 XYZRGB point cloud sequence segment (numerical values not reproduced here)
1.5) as shown in fig. 1, the three submodels respectively emit light beams along a horizontal plane to irradiate onto the SLM plane representing the spatial light modulator 10, calculate the diffraction integral of the 3D color point cloud model to each pixel of the SLM plane representing the spatial light modulator 10, and obtain a kinoform by using the diffraction integral.
Specifically: the light of every point of the three submodels is diffracted to the same pixel of the SLM plane and the complex amplitudes are added (the diffraction integral is this complex amplitude); the phase of the sum at each pixel of the SLM plane is then extracted to form the kinoform.
The kinoform computation can use GPU acceleration programmed through CUDA; specifically, ordinary matrices are converted to gpuArray in matlab, i.e. matlab's built-in CUDA acceleration is invoked. After acceleration, the diffraction integral calculation for each point takes 1/9.1 of the CPU time, averaging 32 ms per point.
TABLE 2 Computation time before and after GPU acceleration, de-occlusion and down-sampling

Device \ number of points | 81000 points (original point cloud) | 28724 points (after de-occlusion and down-sampling)
CPU | 23740 s | 7863 s
GPU | 2599 s | 835 s
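The patent uses matlab's built-in gpuArray/CUDA path; purely as an analogue, the CPU sketch given earlier could be moved to the GPU in Python with CuPy, a numpy-like GPU array library. The library choice is an assumption, and the actual speed-up depends on hardware:

```python
import cupy as cp   # GPU array library with a numpy-like API

def kinoform_gpu(channels, pix, nx=1920, ny=1080):
    """Same point-source superposition as the CPU sketch, with the per-pixel work on the GPU."""
    x = (cp.arange(nx) - nx / 2) * pix
    y = (cp.arange(ny) - ny / 2) * pix
    X, Y = cp.meshgrid(x, y)
    field = cp.zeros((ny, nx), dtype=cp.complex128)
    for points, weights, wavelength in channels:
        k = 2.0 * cp.pi / wavelength
        for (px, py, pz), a in zip(points, weights):
            r = cp.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += a * cp.exp(1j * k * r) / r
    return cp.asnumpy(cp.angle(field))   # copy the phase-only hologram back to host memory
```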
2) The second step is that:
the light path is arranged according to the holographic three-dimensional display device, and the beam splitters 7 and 8 are adjusted to enable the red light of the parallel light pipe laser 2, the blue light of the parallel light pipe laser 3 and the green light of the parallel light pipe laser 1 to form arcsin (dx/z) angles, so that RGB three-channel components of a holographic image obtained by subsequent SLM diffraction can be conveniently superposed on a spatial position.
The kinoform prepared in the first step is loaded onto the SLM and the RGB lasers are switched on; the polarizers are adjusted so that the beams are at their brightest, diffuse-reflection imaging is observed with a light screen placed in front of the convex lens, and the upper and lower screw knobs of the two beam splitters are adjusted, thereby adjusting the beam splitter angles, so that the red and blue images coincide completely with the green image at z = 400 mm. The imaging result observed on the light screen at z = 400 mm is shown in fig. 6.
The polarizers in front of the three lasers 1, 2 and 3 are then adjusted to reduce the brightness of the transmitted beams to the minimum, the kinoform is loaded on the spatial light modulator 10, and a 3D image with depth is observed by the naked eye through the lens group 11.
In step 2), the polarizers are adjusted so that the brightness of the RGB beams is at its lowest, and naked-eye 3D observation is carried out while the beams are barely visible under normal indoor ambient light. The convex lens is placed on the extension line of the light-screen imaging position:
firstly, the polaroids in front of the three parallel light pipe lasers 1, 2 and 3 are used for adjusting the brightness of light beams emitted by the three parallel light pipe lasers 1, 2 and 3 after penetrating through the polaroids to be the lowest to prevent eyes from being damaged, and then the polaroids in front of the three parallel light pipe lasers 1, 2 and 3 are finely adjusted, so that the color of the light beam emitted from the second wedge-shaped beam splitter 8 is consistent with the color matching of an original object to be imaged, and the RGB color is normal.
The lens group projects the constructed object and hologram to the spatial position set when calculating the kinoform, i.e. z is 400 mm.
As shown in fig. 3, a lens virtual image and a holographic real image are formed between the spatial light modulator 10 and the lens group 11; the lens virtual image is close to the spatial light modulator 10 and the holographic real image is close to the lens group 11.
A light screen is placed at the imaging position of the holographic real image at z = 400 mm; the holographic real image is seen on the screen on the same side as the spatial light modulator 10, and the screw knobs of the beam splitters 7 and 8 are fine-tuned so that the target RGB channel components superpose. The real image is formed where the beam from the spatial light modulator 10 strikes the screen and is observed by the eye through diffuse reflection from the screen.
The human eye looks through the lens group 11 and observes the lens virtual image; the size of the lens virtual image observed by the eye is
Sv = (v / u) · Sr
where Sv denotes the size of the lens virtual image, Sr the size of the holographic real image, v the optical path from the lens virtual image to the center of the lens group, and u the optical path from the holographic real image to the center of the lens group. The lens group magnifies the holographic real image and increases the viewing angle; as shown in fig. 3, the angle subtended at the eye by the two ends of the lens virtual image is larger than the angle subtended by the two ends of the holographic real image, i.e. the lens group enlarges the image and increases the viewing angle of the image at the eye.
The eye is moved close to the convex lens and shifted slightly up, down, left and right until the color three-dimensional holographic lotus is found; the screws of the beam splitters 7 and 8 are fine-tuned again so that the RGB three-channel components coincide completely. A camera placed at the observer's eye position captured the images shown in fig. 6 and fig. 7.
In conclusion, the color point cloud naked eye 3D display system and method creatively use the color point cloud stereo model to realize true 3D naked eye observation of the color model, and solve the problems of complex adjustment and poor naked eye imaging effect of the traditional single SLM space superposition method. The traditional spatial superposition algorithm uses a combination of two phases and a spherical phase to form a diffraction convergent point, a blazed grating is superposed on a kinoform to enable the spherical convergent point of three images formed by a monochromatic light source to have different offsets, three target RGB channel components in nine images obtained by SLM diffraction are reserved at the convergent point by using a small hole, and the three components are finely adjusted to be completely superposed. The algorithm adopted by the invention directly scales and shifts the point cloud coordinates of the RGB sub-model, so that three images with the same color obtained by a monochromatic light source through SLM diffraction are not overlapped, and the target RGB channel components are overlapped at an observation position through light source angle adjustment, thereby ensuring that the naked eye observation visual angles of the RGB channel components are the same. The algorithm adopted by the invention reduces the number of used elements, greatly reduces the difficulty of light path adjustment and improves the naked eye observation effect.
FIG. 8 shows that, in single-SLM color imaging with the conventional spatial superposition algorithm, incomplete pinhole filtering leaves strong speckle noise; fig. 9 shows that, with the conventional algorithm, the lower left corner of the picture contains circular diffraction spots caused by the spherical phase. The method of the present invention avoids these problems.
Therefore, the technical scheme of the invention realizes a wide-viewing-angle, high-definition naked-eye 3D display system for a color point cloud using a single spatial light modulator. The optical system is currently oriented towards commercial VR headsets. Within the range allowed by aberrations the display system is compact; the problem of extraneous-image aliasing in single-SLM color imaging is solved, the naked-eye imaging quality is improved and the element adjustment complexity is reduced compared with conventional methods, providing an effective route to the miniaturization of color stereoscopic naked-eye display equipment.

Claims (8)

1. A method for realizing naked-eye 3D display of a color three-dimensional point cloud with a single spatial light modulator, characterized in that: the display method uses a display system comprising a spatial light modulator (10), collimated lasers (1, 2, 3), polarizers (4, 5, 6, 9), wedge-shaped beam splitters (7, 8) and a lens group (11); the three collimated lasers (1, 2, 3) emit beams of different colors, which pass through the polarizers (4, 5, 6) and enter the wedge-shaped beam splitters (7, 8) respectively, are converged and combined by the wedge-shaped beam splitters (7, 8), pass through one polarizer (9) onto the spatial light modulator (10), are reflected by the spatial light modulator (10), and then pass through the lens group (11) into the human eye for imaging;
the display method comprises the following steps:
step 1):
manufacturing a 3D color point cloud model of the object to be imaged by modeling, the model being divided into three submodels of the three RGB (red, green, blue) channels; performing occlusion removal and down-sampling on the three submodels, scaling their coordinate scales, controlling the submodel of the red channel R and the submodel of the blue channel B to lie on either side of the submodel of the green channel G, and offsetting the three submodels from one another by a fixed interval along the horizontal direction;
the three sub-models respectively emit light beams along a horizontal plane to irradiate the SLM plane, the diffraction integral from each point in the 3D color point cloud model to a pixel on the SLM plane is calculated, and a kinoform is obtained by utilizing the diffraction integral;
step 2):
arranging the optical path according to the display system, adjusting the polarizers in front of the three collimated lasers (1, 2, 3) so that the brightness of the beams transmitted through them is reduced to the minimum, loading the kinoform on the spatial light modulator (10), adjusting the angles of the wedge-shaped beam splitters (7, 8) so that the target RGB channel components superpose, and observing the color 3D image with the naked eye through the lens group (11);
the three collimated lasers (1, 2, 3) are a green collimated laser (1), a red collimated laser (2) and a blue collimated laser (3) respectively; the green beam emitted by the green collimated laser (1) passes through the first polarizer (4) and is transmitted by the first wedge-shaped beam splitter (7), the red beam emitted by the red collimated laser (2) passes through the second polarizer (5) and is reflected by the first wedge-shaped beam splitter (7), and the blue beam emitted by the blue collimated laser (3) passes through the third polarizer (6) and is reflected by the second wedge-shaped beam splitter (8); the green beam transmitted by the first wedge-shaped beam splitter (7) and the red beam reflected by it are both transmitted by the second wedge-shaped beam splitter (8) and, together with the blue beam reflected by the second wedge-shaped beam splitter (8), pass through the fourth polarizer (9) onto the spatial light modulator (10).
2. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein the method comprises the following steps: the spatial light modulator (10) is of a reflective type, and the modulation mode is phase modulation.
3. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein the method comprises the following steps: the lens group (11) is formed by coaxially arranging a plurality of lenses.
4. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein: in step 1), the occlusion removal specifically comprises: rasterizing the XY plane of the submodel, keeping within each grid cell only the point closest to the SLM plane, and removing the remaining points as redundant.
5. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein the method comprises the following steps: in the step 1), the coordinate scales of the three submodels are scaled, and the following processing is performed on the three submodels with the same size:
establishing a minimum bounding box outside the sub-model, wherein the minimum bounding box is a cube, scaling the side length size Ltarget of the cube to be less than one third of the image plane size L, and calculating the image plane size L as follows:
L=λz/pix
where λ represents the wavelength of the blue light beam, z represents the distance of the submodel for the green channel G along the optical axis to the SLM plane, and pix represents the single pixel side length in the spatial light modulator.
6. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein the method comprises the following steps: the three submodels are staggered at fixed intervals along the horizontal direction, and specifically comprise: the included angle between the sub-model of the red channel R and the sub-model of the green channel G and the connecting line between the sub-model of the blue channel B and the sub-model of the green channel G and the connecting line between the sub-model of the SLM and the center of the SLM are arcsin (dx/z), z represents the distance from the sub-model of the green channel G to the SLM plane along the optical axis, and dx represents a certain interval kept in the imaging plane after the three channel components of RGB are separated.
7. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein the method comprises the following steps: calculating the diffraction integral of the 3D color point cloud model to each pixel on the SLM plane, and obtaining a kinoform by using the diffraction integral, wherein the method specifically comprises the following steps: and diffracting the light of each point of the three submodels to the complex amplitude of the same pixel of the SLM plane for addition, and extracting the phase of the result after the addition of each pixel of the SLM plane to form a kinoform.
8. The method for realizing naked eye 3D display of the color three-dimensional point cloud by the single spatial light modulator according to claim 1, wherein: in step 2), adjusting the polarizers in front of the three collimated lasers (1, 2, 3) so that the brightness of the beams transmitted through them is at its lowest specifically comprises: first using the polarizers in front of the three collimated lasers (1, 2, 3) to reduce the brightness of the transmitted beams to the minimum, and then fine-tuning these polarizers so that the color of the beam leaving the second wedge-shaped beam splitter (8) matches the color of the original object to be imaged and the RGB color balance is correct.
CN202110678904.XA 2021-06-18 2021-06-18 System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator Expired - Fee Related CN113406874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110678904.XA CN113406874B (en) 2021-06-18 2021-06-18 System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110678904.XA CN113406874B (en) 2021-06-18 2021-06-18 System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator

Publications (2)

Publication Number Publication Date
CN113406874A CN113406874A (en) 2021-09-17
CN113406874B true CN113406874B (en) 2022-07-01

Family

ID=77681403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110678904.XA Expired - Fee Related CN113406874B (en) 2021-06-18 2021-06-18 System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator

Country Status (1)

Country Link
CN (1) CN113406874B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115167087B (en) * 2022-05-27 2023-12-26 四川大学 Spherical holographic reconstruction quality improvement method based on random gradient descent optimization algorithm

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4860253A (en) * 1987-06-19 1989-08-22 Hughes Aircraft Company Associative memory system with spatial light modulator and feedback for adjustable thresholding and enhancement
US6559948B1 (en) * 1999-06-30 2003-05-06 Raytheon Company Method for locating a structure using holograms
JP2004086966A (en) * 2002-08-26 2004-03-18 Optware:Kk Optical information reproducing device and optical information recording/reproducing device
TWI457605B (en) * 2011-12-16 2014-10-21 Delta Electronics Inc Stereoscopic display apparatus
CN105044962B (en) * 2015-07-13 2018-06-26 上海理工大学 A kind of preparation method of color hologram polymer dispersed liquid crystals grating
CN105866962A (en) * 2016-03-28 2016-08-17 郎宁 Naked-eye 3D laser holographic display device
CN211956129U (en) * 2020-04-14 2020-11-17 浙江大学 High-brightness multifunctional color holographic projection experimental instrument
JP2020187762A (en) * 2020-07-02 2020-11-19 レイア、インコーポレイテッドLeia Inc. Vehicle monitoring system
CN112596242A (en) * 2020-12-22 2021-04-02 上海趣立信息科技有限公司 Color holographic near-to-eye display method and system based on spatial light modulator time division multiplexing
CN215006257U (en) * 2021-06-18 2021-12-03 浙江大学 Device for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator

Also Published As

Publication number Publication date
CN113406874A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
US7738151B2 (en) Holographic projector
US5379133A (en) Synthetic aperture based real time holographic imaging
JP4133832B2 (en) Color video holography playback device
CN108803295B (en) Method for manufacturing large-field-of-view hologram, display system and dot matrix light source
US20050122549A1 (en) Computer assisted hologram forming method and apparatus
CN105700320A (en) Holographic three-dimensional display method and device based on spatial light modulator
TWI390369B (en) Method and device for reducing speckle
CN107390379B (en) Near-to-eye holographic three-dimensional display system and display method
JP2005508016A (en) Projecting 3D images
CN105954993B (en) A kind of color hologram 3 D displaying method divided based on space and its system
Kim et al. 3D display technology
GB2363273A (en) Computation time reduction for three dimensional displays
CN113885209B (en) Holographic AR three-dimensional display method, module and near-to-eye display system
US20070081207A1 (en) Method and arrangement for combining holograms with computer graphics
JP2001075464A (en) Device and method for forming synthetic hologram
US5430560A (en) Three-dimensional image display device
US7277209B1 (en) Computer-assisted holographic method and device
CN108762033A (en) It imaging method and optical system and its storage medium, chip and combines
CN112882228A (en) Color holographic near-eye AR display system based on white light illumination and color holographic calculation method
CN207541416U (en) A kind of calculating hologram three-dimensional display device that can represent reproduction image hiding relation
CN113406874B (en) System and method for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator
CN215006257U (en) Device for realizing color three-dimensional point cloud naked eye display by using single spatial light modulator
CN207541417U (en) It is a kind of to reduce the calculating hologram three-dimensional display device for reproducing waste information
Xia et al. 31‐1: Invited Paper: Eyeglasses‐Style Maxwellian‐View Near‐eye Display with Lens‐Array‐Based Holographic Optical Element
CN111830811A (en) High-definition three-dimensional holographic display method based on diffraction field superposition and implementation device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220701