WO2018076661A1 - A three-dimensional display device - Google Patents

A three-dimensional display device

Info

Publication number
WO2018076661A1
Authority
WO
WIPO (PCT)
Prior art keywords
lens
light
visible
nano
nanostructure
Application number
PCT/CN2017/083805
Other languages
English (en)
French (fr)
Inventor
乔文
浦东林
朱鸣
周小红
黄文彬
赵改娜
朱鹏飞
陈林森
Original Assignee
苏州苏大维格光电科技股份有限公司
苏州大学
Application filed by 苏州苏大维格光电科技股份有限公司 and 苏州大学
Publication of WO2018076661A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type

Definitions

  • the invention belongs to the field of three-dimensional image display and in particular relates to a three-dimensional display device, including a head-mounted three-dimensional display device.
  • Virtual reality is an interactive 3D dynamic visual simulation of multi-source information fusion that immerses users in the environment.
  • Virtual reality is a combination of multiple technologies.
  • the wide-angle (wide-view) stereoscopic display technology is a prerequisite for the user's immersive experience, and is also a technical difficulty of virtual reality.
  • binocular vision is the basis of stereoscopic display: different images are acquired by the left and right eyes, and the brain fuses them into a stereoscopic image.
  • comfortable viewing also requires that the image obtained by each eye is clear, and that the monocular focus is adjusted onto the plane of the binocular convergence point.
  • Augmented Reality (AR) technology is a new technology that "seamlessly" integrates real-world and virtual-world information: entity information that is difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, etc.) is simulated by computers and other technology and then superimposed, applying the virtual information to the real world where it is perceived by the human senses, thus achieving a sensory experience beyond reality.
  • the real environment and virtual objects are superimposed in real time on the same picture or space.
  • one of the difficulties characteristic of an AR system is displaying virtual objects positioned correctly in three-dimensional space.
  • a display device based on a light diffraction device and a planar waveguide is disclosed in US Patent Application No. US20150016777A1.
  • the display device includes a multilayer waveguide structure and a diffraction device array, combined with fast scanning of an optical fiber, which converts the illumination light into an exiting light field and forms a single imaged point on the retina of the human eye.
  • the waveguide outputs light fields with different exit angles and different diffusion angles, which form a high-speed scanning spot on the retina and constitute a 3D image; it is applied to the field of augmented reality to realize the fusion of virtual and real scenes.
  • the parallel or divergent light field formed by the light diffraction device of the patent does not conform to the imaging habit of the human eye, and is liable to cause dizziness.
  • this simple scanning three-dimensional display scheme can hardly meet the processing and output requirements of the huge amount of information in current three-dimensional display.
  • a phase plate for a three-dimensional display or optical switch is disclosed in US Patent No. 8,014,050 B2.
  • the described phase plate comprises a bulk diffraction grating structure and a photosensitive material.
  • the diffraction efficiency and phase delay of a single pixel unit can be controlled by the electrode array, thereby achieving rapid regulation of the phase of the light field.
  • a method of realizing phase regulation by an electrode array is constrained by the difficulty of miniaturizing a single pixel, and its display effect can hardly meet current consumer demand for display fineness and comfort.
  • Chinese patent 201610034105.8 discloses a projection type naked eye three-dimensional display device.
  • the display device projects the multi-view image signal to the directional projection screen through the projection device, and the incident image signal is phase-modulated to form a convergence viewpoint in the visible window to obtain a 3D scene.
  • the method has the advantages of high brightness, good 3D effect, and the like, and can realize large-area naked-eye 3D display, and can be applied to display terminals such as televisions and advertisement machines.
  • however, this method cannot be applied to the fields of virtual reality and augmented reality to realize 3D scene reproduction on a wearable device.
  • the present invention aims to provide a directional nanostructure functional lens based on the holographic principle which, under the illumination of a specific light source, can realize a head-mounted 3D display scheme and a display device without visual fatigue.
  • a three-dimensional display device comprises an image generating device and a visible lens; the visible lens comprises at least one layer of visible lens units, each provided with nanostructures; the nanostructures on the visible lens match the image output by the image generating device to form a virtual scene in the window, or superimpose the virtual scene with the real scene to obtain the fusion of real-world and virtual-world information.
  • the visible lens is composed of one layer of visible lens units, or is superposed from two, three, four, or more than four layers of visible lens units.
  • each visible lens unit includes a lens substrate and a nanostructure.
  • the lens substrate comprises a waveguide structure.
  • the visible lens is a single unit having one, two, or more than two independent viewing areas, or two left and right independent visible lenses respectively corresponding to two eyeballs.
  • the above visible lens forms one or more than two viewpoints in the window.
  • the viewpoint design interval is smaller than the window range.
  • the upper and middle viewpoints of the visible lens correspond to a distant scene
  • the lower viewpoint corresponds to a near scene.
  • the nanostructures are nanoscale sized nanogratings, also known as nanograting structures.
  • the nanostructure is a channel structure, a relief structure, or a hollow structure, and the shape thereof is one or more of a rectangle, a circle, a diamond, and a hexagon.
  • the distribution of the nanostructures is based on the principle that the image generating device forms a viewpoint through different spatial positions in the window via the nanostructure.
  • the image generating device includes a projection device that cooperates with at least one layer of the visible lens unit to effect virtual image display within the window.
  • the projection device is coupled to the nanostructure via a waveguide structure.
  • the above visible lens can include two independent left and right visible areas, and the nanostructure distributions of the left and right independent viewing areas are symmetrical.
  • for each viewpoint, a corresponding set of nanostructures is disposed, with density and distribution arranged according to one of two conditions: the density and distribution of the nanostructures corresponding to a single viewpoint are independent of the visual-axis angle, and the nanostructures of the respective viewpoints are nested with each other and uniformly arranged; or, the density and distribution of the nanostructures corresponding to a single viewpoint are related to the visual-axis angle, in which case the arrangement density of the nanostructures near the visual axis of the corresponding viewpoint is greater than the arrangement density away from the visual-axis region, that is, the nanostructures of the respective viewpoints are nested with each other and non-uniformly arranged.
  • the nanostructure density distribution curve corresponding to a single viewpoint is a triangular, square-wave, trapezoidal, or sinusoidal function.
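The two arrangement conditions above can be sketched numerically. The following is an illustrative sketch (not taken from the patent) of an arrangement-density weight for one viewpoint's nano-grating pixels as a function of angular distance from that viewpoint's visual axis, using the profile shapes listed; the 20° half-width and the 0.2 density floor are assumptions chosen only for illustration.

```python
import math

def density_weight(angle_deg, profile="sinusoidal", half_width_deg=20.0,
                   floor=0.2):
    """Illustrative arrangement-density weight (0..1) for the nano-grating
    pixels of a single viewpoint, as a function of angular distance from
    that viewpoint's visual axis.  Density peaks on the axis and falls
    toward `floor` away from it, echoing the non-uniform condition in the
    text.  Profile names map to the listed distribution-curve shapes."""
    x = min(abs(angle_deg) / half_width_deg, 1.0)    # normalized distance
    if profile == "sinusoidal":
        w = 0.5 * (1.0 + math.cos(math.pi * x))      # smooth cosine roll-off
    elif profile == "triangular":
        w = 1.0 - x                                   # linear roll-off
    elif profile == "square":
        w = 1.0 if x < 0.5 else 0.0                   # dense core, sparse rim
    elif profile == "trapezoidal":
        w = min(1.0, max(0.0, 2.0 * (1.0 - x)))       # flat top, linear edge
    else:
        raise ValueError(profile)
    return floor + (1.0 - floor) * w                  # never fully zero

# Full density on the visual axis, floor density 20 deg off-axis.
print(density_weight(0.0), density_weight(20.0))
```

The uniform condition in the text corresponds to simply ignoring the angle, i.e., a constant weight for every pixel of the viewpoint.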
  • Figure 1 is a diagram of the structure of the human eye.
  • Figure 2 is a view of the distribution of cone cells.
  • FIG. 3 is a structural view, in the XY plane, of the nano-grating inside a pixel on the directional light guiding film.
  • FIG. 4 is a structural view, in the XZ plane, of the same pixel-internal nano-grating on the directional light guiding film.
  • FIG. 5 is a schematic diagram of a plurality of nano-grating pixel structures.
  • Figure 6 is the nanostructure distribution of a directional light guiding film that converges light to a single viewpoint.
  • FIG. 7 is a schematic diagram of constructing a single-view new wavefront using a nanostructure directional functional film.
  • FIG. 9 is a diagram of a reality-enhanced display scheme based on projection in an embodiment of the present invention.
  • FIG. 10 is a diagram of another reality-enhanced display scheme according to an embodiment of the present invention.
  • FIG. 11 is a diagram of another reality-enhanced display scheme in accordance with an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of a virtual reality display scheme according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of a multi-view display scheme according to an embodiment of the present invention.
  • FIGS. 15(a)-(b) are diagrams showing a multi-depth display scheme realized by a multifocal nanostructure directional functional lens according to an embodiment of the present invention.
  • Figure 16 is a nanostructure distribution of a multifocal nanostructure directional functional lens.
  • Figure 17 is a schematic diagram of multiple scene depth segmentation of a virtual scene.
  • FIGS. 18(a)-(b) are diagrams showing a multi-depth display scheme realized by light field scanning in an embodiment of the present invention.
  • FIGS. 19 and 20 are schematic diagrams showing a three-dimensional display device with a multi-layer visible lens unit based on frequency division control according to an embodiment of the present invention.
  • FIGS. 21(a) and 21(b) are schematic diagrams showing a three-dimensional display device with a multi-layer visible lens unit based on frequency division control according to an embodiment of the present invention.
  • FIGS. 22(a)-(b) are schematic diagrams showing an example of a frequency division control circuit according to an embodiment of the present invention.
  • FIGS. 23(a)-(c) are schematic diagrams showing a head-mounted three-dimensional display according to an embodiment of the present invention.
  • Figure 24 is a schematic diagram of a head-mounted three-dimensional display scheme.
  • FIG. 25 is a schematic diagram of a head-mounted three-dimensional reality-enhanced display scheme.
  • FIGS. 26 and 27 are schematic views of a three-dimensional display device based on a multi-layer visible lens unit.
  • Figure 28 is a schematic view showing the structure of a head mounted three-dimensional display device of the present invention.
  • Figure 29 is an example circuit control schematic of the three-dimensional display device of the present invention.
  • Figure 30 is an example circuit control schematic of the reality-enhanced three-dimensional display device of the present invention.
  • FIGS. 31 to 40 are schematic views of various application scenarios of the present invention.
  • Figure 1 is a human eye structure diagram.
  • the human eye approximates a sphere; the eyeball includes the iris 101, the cornea 102, the lens 103, the retina 104, and the macula 105. The axis of the eye's line of sight is called the visual axis 11.
  • the tissue in which the eyeball 1 has an optical imaging function is the cornea 102 and the lens 103.
  • the retina 104 is located at the back end of the eye and is the first stop for visually transmitted neural information.
  • the cone cells on the retina 104 are the primary photoreceptor neurons and are concentrated around the point where the visual axis 11 meets the retina.
  • Figure 2 shows the distribution of cone cells. As can be seen from the figure, the distribution of cone cells is extremely uneven: they are densely packed in the central recess (fovea) of the macula 105 and sparsely distributed elsewhere on the retina 104. The fovea is therefore the most sensitive area, with a diameter of about 1 to 3 mm.
  • the field of view of the human eye can reach 150°, but objects can be clearly observed at the same time only within 6°-8° around the visual axis.
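As a sanity check (not part of the patent text), the quoted fovea diameter and sharp-vision angle can be related through a simple pinhole model of the eye; the 17 mm nodal-point-to-retina distance used below is a common textbook approximation assumed here.

```python
import math

EYE_NODAL_DIST_MM = 17.0   # nodal-point-to-retina distance (assumed value)

def retinal_patch_deg(diameter_mm):
    """Visual angle subtended by a retinal patch of the given diameter,
    using a pinhole model of the eye (approximate)."""
    return math.degrees(2 * math.atan(diameter_mm / 2 / EYE_NODAL_DIST_MM))

# A 2-3 mm fovea subtends roughly 7-10 deg of visual field, the same
# order as the 6-8 deg sharp-vision range quoted above.
print(round(retinal_patch_deg(2.0), 1), round(retinal_patch_deg(3.0), 1))
```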
  • the invention fully considers the distribution characteristics of the cone cells, designs the pixel distribution of each viewing angle, and achieves the optimization of the visual experience.
  • each nano-grating structure is regarded as one pixel; the orientation of the nano-grating structure determines its light-field angle modulation characteristic, and its period determines its spectral filtering characteristic.
  • the period (spatial frequency) and orientation of the nano-grating structure vary continuously between sub-pixels, enabling control and transformation of the light field. Therefore, after fabricating a plurality of nano-grating structures with different set orientation angles and periods on the surface of the head-mounted visual device, it is theoretically possible to obtain enough distinct viewpoints; with control of color and gray scale, naked-eye 3D display under multiple viewing angles can be realized.
  • the present invention provides a head-mounted three-dimensional display device comprising an image generating device and a visible lens corresponding to an eye; the visible lens comprises at least one layer of nanostructure functional film provided with nanostructures having a converging imaging function, so that the visible lens becomes a directional functional lens having a directivity function; the nanostructure on the directional functional lens matches the image output by the image generating device, and a converging wavefront is projected in front of the window to form a virtual scene, or the converging wavefront is superimposed with the wavefront formed by the real scene to obtain the fusion of real-world and virtual-world information.
  • the visible lens may be an integral lens or two visible lenses respectively corresponding to the two eyes.
  • it may be constructed with three or more visible lenses as needed.
  • the invention converges the viewing angle image in the space in front of the eyeball to form a virtual scene, which is consistent with the principle of imaging the real scene in the human eye, so the visual fatigue degree of long-time viewing is greatly reduced compared with the traditional three-dimensional display technology.
  • the directional functional lens (ie, the view lens) forms a plurality of viewpoints in the eye window of the human eye, so that the single eye can see two or more views of the angle of view, achieving monocular parallax effect, and continuous dynamic parallax;
  • the nanostructure is a nanoscale grating; each nano-grating is a nano-grating pixel, and each view image is formed by a plurality of nano-grating pixels.
  • the nano-grating pixel distribution corresponding to a single view image can be optimized according to the visual-axis angle of the eye in practical applications: the arrangement density of the nano-grating pixels near the visual axis of the corresponding viewing angle is greater than the arrangement density away from the visual-axis region; the nano-grating pixels corresponding to the respective view images are nested with each other and non-uniformly arranged on the surface of the directional functional lens. The advantage is that the structure of the human eye is fully exploited: higher image quality can be achieved with fewer nano-grating pixels.
  • the invention adopts a nano-grating structure based on diffraction effect to construct a new light field, and a single nano-grating structure interacts with light to change its phase.
  • FIG. 3 and FIG. 4 are structural diagrams, in the XY plane and the XZ plane respectively, of a diffraction grating pixel 201 whose structural scale is at the nanometer level (i.e., a nano-grating, its structure also referred to as a nano-grating structure). According to the grating equation, the period and orientation angle of the diffraction grating pixel 201 satisfy the following relationship (taking the incident ray in the xz plane):

        sin θ1 · cos φ1 = n · sin θ − (λ/Λ) · cos φ
        sin θ1 · sin φ1 = −(λ/Λ) · sin φ

    where θ1 and φ1 denote, in turn, the diffraction angle of the diffracted light 202 (the angle between the diffracted ray and the positive z-axis) and the azimuth of the diffracted light 202 (the angle between the diffracted ray and the positive x-axis); θ and λ denote, in turn, the incident angle of the light (the angle between the incident ray and the positive z-axis) and the wavelength; Λ and φ denote, in turn, the period and orientation angle of the nano-diffraction grating 201 (the angle between the groove direction and the y-axis); and n denotes the refractive index of the light wave in the medium.
  • the period and orientation angle of the desired nano-grating can be calculated from the above two formulas. For example, for red light of 650 nm wavelength incident at an angle of 60°, with a diffraction angle of 10° and a diffraction azimuth of 45°, calculation gives a nano-diffraction grating period of 550 nm and an orientation angle of -5.96°.
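The worked example above can be reproduced from the vector form of the grating equation. The sketch below assumes the light reaches the grating inside a medium of refractive index n = 1.5 with its azimuth in the xz plane and diffracts into air; these assumptions are not stated explicitly in the text, but they recover the quoted numbers.

```python
import math

def nanograting_design(wavelength_nm, n_in, theta_in_deg,
                       theta_out_deg, phi_out_deg):
    """Solve the vector grating equation k_out = k_in + G for the first
    diffraction order.  Conventions as in the text: theta is measured
    from the +z axis, azimuth from the +x axis, and the orientation angle
    is the groove direction measured from the +y axis.  The incident ray
    is taken in the xz plane (azimuth 0)."""
    ti = math.radians(theta_in_deg)
    to, po = math.radians(theta_out_deg), math.radians(phi_out_deg)
    # In-plane wave-vector components in units of 2*pi/wavelength:
    # inside the medium (index n_in) and in air (index 1) respectively.
    kin = (n_in * math.sin(ti), 0.0)
    kout = (math.sin(to) * math.cos(po), math.sin(to) * math.sin(po))
    gx, gy = kout[0] - kin[0], kout[1] - kin[1]       # grating vector G
    period_nm = wavelength_nm / math.hypot(gx, gy)    # |G| = lambda / period
    # Grooves are perpendicular to G, so the groove angle from +y equals
    # the angle of G from +x (mod 180).  Wrap into (-90, 90].
    orientation_deg = math.degrees(math.atan2(gy, gx))
    while orientation_deg > 90.0:
        orientation_deg -= 180.0
    while orientation_deg <= -90.0:
        orientation_deg += 180.0
    return period_nm, orientation_deg

# Example from the text: 650 nm red light, 60 deg incidence,
# 10 deg diffraction angle, 45 deg diffraction azimuth.
period, phi = nanograting_design(650.0, 1.5, 60.0, 10.0, 45.0)
print(round(period), round(phi, 2))   # 550 -5.96
```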
  • the structure of the nanograting is a channel structure, a relief structure, or a hollow structure, and the shape thereof is one or more of a rectangle, a circle, a diamond, and a hexagon.
  • other shape structures that satisfy the foregoing requirements may also be used.
  • the image generating device may employ a projection device (in view of the application requirements of a head-mounted three-dimensional display device, a pico projector with as small a volume as possible is used); when projecting, the projection device follows a multi-depth display scheme, realizing the three-dimensional display effect for a single eye by a time-sequential scanning method; that is, the projected image is segmented by depth, and the segmented images are projected to different distances in front of the human eye to form virtual images whose distance and size match the real scene, thereby forming a 3D image with multiple depths of field.
  • Three-dimensional display technology with multiple depth of field achieves the three-dimensional display effect of a single eye.
  • the human eye's ability to resolve near-distance objects is stronger than that of distant objects.
  • close-range objects can be finely segmented, and distant objects can be roughly segmented.
  • the segmented image is projected to different distances in front of the human eye to form a virtual image whose distance and size match the real scene, and a multi-depth three-dimensional image is formed.
  • the observer is thus immersed while the virtual content is effectively integrated with the real scene.
  • when the observer adjusts the eyes to focus on a scene at a single distance or at adjacent distances, scenes on non-focus planes become blurred images.
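One simple way to obtain the "fine near, coarse far" segmentation described above is to place the projection planes uniformly in diopters (reciprocal distance), since the eye's accommodation is roughly linear in diopters. This is an illustrative assumption, not the patent's stated method; the plane count and depth range below are also assumed.

```python
def depth_planes(near_m=0.25, far_m=10.0, n_planes=6):
    """Split the depth range [near_m, far_m] into projection-plane
    distances uniformly spaced in diopters (1/distance).  This
    automatically segments near scenery finely and distant scenery
    coarsely, matching the segmentation principle in the text."""
    d_near, d_far = 1.0 / near_m, 1.0 / far_m          # diopters
    step = (d_near - d_far) / (n_planes - 1)
    return [1.0 / (d_far + i * step) for i in range(n_planes)]

planes = depth_planes()
# Plane distances in meters, densest near the eye:
print([round(p, 2) for p in planes])   # [10.0, 1.14, 0.6, 0.41, 0.31, 0.25]
```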
  • the distribution of the nanograting structure on the nanostructured functional film is based on the principle that the image generating device converges at different spatial locations in the space in front of the eyeball through the nanograting structure on the nanostructured functional film.
  • the left and right spaces in front of the eyeball correspond respectively to corresponding regions of the visible lens; the viewpoints converged in the space in front of the eyeball thus form a virtual image whose distance and size match the real scene, constituting a light-field three-dimensional image with different depths of field; by adjusting the eyes to focus on near or distant scenery, a clear 3D display is obtained, that is, the eyes can select virtual scenes of different depths of field to focus on separately.
  • the visible lens is a single unitary body or two left and right independent visible lenses respectively corresponding to two eyeballs; according to binocular parallax characteristics, on a single integral visible lens or on two left and right visible lenses Matching the distribution and position of the nano-grating corresponding to the corresponding viewpoints of the left and right eyes, and matching the corresponding output view information, thereby obtaining a three-dimensional display experience conforming to natural habits.
  • the image generating device is a pico projector that projects at a large angle onto the nanostructure functional film, from either the rear side or the front side of the visible lens, realizing directional light illumination of the nanostructure functional film on the visible lens; the illumination emitted by the pico projector can be a point light source, a line light source, or a surface light source, and the emitted light intensity can vary with time or space; the pico projector realizes light-field grayscale, i.e., amplitude, modulation, which matches the phase modulation of the light field by the nanostructure functional film, finally projecting a converging wavefront in front of the human eye so that the human eye sees a realistic virtual three-dimensional image.
  • the converging wavefront may be further superimposed with a wavefront formed by a real scene: for example, the propagating light field of the real scene is transmitted through the translucent nanostructure functional film and superimposed with the converging wavefront of the virtual scene; or the real-world scene is captured in real time through a digital lens, merged with the virtual information, and projected in front of the human eye, obtaining the fusion of real-world and virtual-world information.
  • the visible lens is sequentially laminated by the optical coupling waveguide integrated device, the lens substrate and the nano-structure functional film, or directly embedded or laminated on the optical coupling waveguide integrated device.
  • the micro-projector is optically coupled with the optical coupling waveguide integrated device to realize directional light illumination of the nanostructure functional film on the visible lens
  • the pico projector projects a point-scanned or line-scanned image, or a surface-projected image, and the emitted light intensity can vary with time or space;
  • the pico projector realizes light-field grayscale (amplitude) modulation, which matches the phase modulation of the light field by the visible lens, finally projecting a converging wavefront in front of the human eye so that the human eye sees a realistic virtual three-dimensional image; or the converging wavefront is further superimposed with the wavefront formed by the real scene, obtaining the fusion of real-world and virtual-world information.
  • the visible lens is composed of a layer of visible lens units, or is stacked by two, three, four, or more than four layers of visible lens units;
  • Each of the visible lens units includes a layer of the optical waveguide device, the lens substrate and the nanostructure functional film;
  • the nanostructure functional film is embedded in or attached to one side of the lens substrate, with the optical waveguide device disposed on the other surface of the lens substrate; or the lens substrate and the nanostructure functional film are embedded together on the optical waveguide device; the length, width, and thickness dimensions of the optical waveguide device are greater than the corresponding dimensions of the lens substrate and the nanostructure functional film.
  • Each optical waveguide device of a visible lens unit is optically coupled to a light coupling device.
  • the image generating device is a pico projector, the number of the pico projectors is consistent with the number of light coupling devices, and one-to-one optical connection; or the pico projector is one, and all the light coupling devices are disposed on the visible lens.
  • an optical switching device is disposed between the light coupling device and the pico projector, and a light coupling device is switched by the optical switching device to optically connect with the pico projector;
  • the pico projector is coupled into the optical waveguide device on the visible lens by a light coupling device; under total reflection the light propagates in the visible lens, which comprises a set of pixel-type nano-grating structures disposed on the nanostructure functional film; these diffract the light so that part of it escapes from the light-emitting surface of the visible lens.
  • the angle of the emitted light is related to the period and orientation of the nano-structure.
  • the efficiency of the emitted light is related to the pixel size and depth of the nano-structure. After passing through the nano-grating structure, the light forms a converging viewpoint on the light-emitting surface of the visible lens.
  • the pico projector can image by point scanning or line scanning, and the intensity of the emitted light can vary with time or space; the pico projector realizes light-field grayscale, i.e., amplitude, modulation by scanning, which matches the phase modulation of the light field by the visible lens, finally projecting a converging wavefront in front of the human eye so that the human eye sees a realistic virtual three-dimensional image; or the converging wavefront is further superimposed with the wavefront formed by the real scene, obtaining the fusion of real-world and virtual-world information.
  • the visible lens is composed of a layer of visible lens elements, or is laminated by two, three, four, or more than four layers of visible lens elements;
  • Each visible lens unit includes an optical waveguide device, a lens substrate, and a nanostructure functional film laminated in sequence;
  • a nanostructure functional film is embedded on one side of the lens substrate, and then an optical waveguide device is disposed on the other surface of the lens substrate, or the lens substrate and the nanostructure functional film are embedded together on the optical waveguide device, and the length and width dimensions of the optical waveguide device are And the thickness dimension is greater than the corresponding size of the lens substrate and the nanostructure functional film;
  • Each optical waveguide device of a visible lens unit is optically coupled to a light coupling device.
  • the image generating device includes a light source and a spatial light modulator
  • the number of the light sources is consistent with the number of light-coupled devices, and is optically connected one-to-one;
  • the light source includes a point source or a line source together with a light collimating device, the point or line source being optically connected to the light coupling device through the light collimating device; or there is one light source, all the light coupling devices are disposed on the same side of the visible lens, an optical switching device is disposed between the light coupling devices and the light source, and the optical switching device switches one light coupling device at a time into optical connection with the light source; here too the light source includes a point or line source and a light collimating device, the point or line source being optically connected to the optical switching device through the light collimating device;
  • the spatial light modulator is disposed between the visible lens and the human eye;
  • the light source is coupled into the optical waveguide device through the light collimating device and the light coupling device, that is, introduced into the visible lens; under total reflection the light propagates in the nanostructure functional film, which comprises a set of pixel-type nano-grating structures; these diffract the light and cause part of it to escape from the light-emitting surface of the nanostructure functional film.
  • the angle of the emitted light is related to the period and orientation of the nano-grating structure.
  • the efficiency of the emitted light is related to the pixel size and depth of the nano-grating structure.
  • the light of the light source passes through the nano-grating structures to form one or more converging viewpoints at the light-emitting surface of the visible lens; the spatial light modulator, placed between the visible lens and the human eye, performs light-field grayscale, i.e., amplitude, modulation, which matches the phase modulation of the light field by the nanostructure functional film, ultimately projecting a converging wavefront in front of the human eye so that the human eye sees a realistic virtual three-dimensional image; or the converging wavefront is further superimposed with the wavefront formed by the real scene, obtaining the fusion of real-world and virtual-world information.
  • the three-dimensional display based on the multi-layer nanostructure functional film light field lens is realized by the method of frequency division illumination.
  • the multilayer nanostructure functional film is closely laminated to form a light field lens.
  • the illumination source is controlled by frequency division, and the nano-grating structures designed into each layer of nanostructure functional film control the output light field in the light-emitting space, so that the outgoing light field changes layer by layer in sequence.
  • the light source is switched alternately to each layer of directional light guiding film by the optical switching device, realizing alternating illumination of the directional light guiding films of the respective layers.
  • the nano-grating structures on the nanostructure functional film are arranged according to the following conditions. When the eyes view objects at different positions and distances, the eyeballs rotate and the visual axes rotate; the nano-grating structures therefore respectively form at least two viewing angles, or a plurality of continuous viewing angles forming a continuous window, for viewing the virtual scene as the eye rotates. The nano-grating structures fabricated on the nanostructure functional film correspond one-to-one to at least two viewing angles of horizontal eyeball rotation and to at least two viewing angles of vertical eyeball rotation; in accordance with human visual habits, the upper and middle viewing angles of the visible lens correspond to distant scenes and the lower viewing angle to near scenes. Alternatively, the designed viewing angle is smaller than the pupil of the human eye, ensuring that a single eye can see two or more viewing-angle images and realizing the monocular parallax effect, that is, the position on which the single eye focuses is located on the displayed object.
  • the visible lens comprises two independent visible lenses respectively corresponding to the two eyeballs, and the nanostructure distributions of the left and right visible lenses are symmetrical. The light fields generated by the two exhibit binocular parallax: when the eyeballs move, the converged light fields of the left and right visible lenses form a parallax effect, that is, the image obtained by the left eye contains more left-direction information and the image obtained by the right eye contains more right-direction information, and the brain fuses them to form a stereoscopic image.
  • the image generating device is a pico projector or a spatial light modulating device.
  • the head-mounted three-dimensional display device is provided with an eyeball tracking device, which tracks the dynamic change of the eyeball to determine the visual-axis angle and the pupil position, then converts this information into control signals that drive the pico projector or spatial light modulator to project the corresponding image onto different parts of the visible lens, so that the converged viewpoint lies on the visual axis of the eye.
  • a nano-structured functional film is provided with a set of nano-grating pixels corresponding thereto, and the density and distribution of the nano-grating pixels are arranged according to the following conditions:
  • nano-grating pixels corresponding to a single viewing angle are independent of the angle of view of the viewing axis, and the nano-grating pixels corresponding to the respective viewing angles are uniformly arranged on the nano-structured functional film by nesting each other;
  • the nano-grating pixels corresponding to a single viewing angle are related to the angle from the visual axis; their distribution characteristic is that the arrangement density near the visual axis corresponding to that viewing angle is greater than the arrangement density away from the visual-axis region, that is, the nano-grating pixels corresponding to the respective viewing angles are non-uniformly arranged on the nanostructure functional film by nesting within each other. This arrangement balances the quality of the visual experience while reducing the number of nano-grating pixels and the processing cost.
  • the nanostructure functional film is a multifocal nanostructure directional functional film that projects virtual images of at least two depths in the human visual field. When the human eye focuses on a close-up scene by adjusting the eyeball, the distant scene is blurred; conversely, when the human eye focuses on a distant scene, the close-up scene is blurred.
  • the visible lens is an off-axis Fresnel lens having a plurality of focal lengths; the imaging relationship can be simply approximated under the paraxial condition. The nano-grating structure nests a plurality of off-axis Fresnel lens structures within each other to form an off-axis Fresnel lens structure with multiple focal lengths, thereby displaying images at different distances and realizing separated display of multiple depths of field; the size of the projected object is changed so that the ratio of the presented virtual-image size to its corresponding distance accords with the proportions with which the human eye views a real scene.
  • the nano-grating structure nests three off-axis Fresnel lens structures to form off-axis Fresnel lens structures with three focal lengths; the focal lengths of the three off-axis Fresnel lens structures increase in turn, corresponding respectively to close-range, medium-range, and distant image display, and the corresponding nano-grating nesting arrangements are as follows:
  • the nano-gratings sequentially correspond to a distant view, a medium view, a close view, and a distant view;
  • the nano-gratings sequentially correspond to a medium view, a close-up view, a distant view, and a close view;
  • the nano-gratings sequentially correspond to a close-range, a distant view, a medium view, and a close-up view.
  • the nano-grating structure may be composed of a single material or a plurality of materials, and may be made of resin, plastic, rubber, glass, polymer, photorefractive crystal, metal, metal oxide, or the like.
  • 5(a), 5(c), 5(g), 5(f), and 5(i) are schematic diagrams showing a nanostructure functional film composed of two materials, a substrate 2101 and a functional layer 2102.
  • the functional layer 2102 forms a nano-grating structure.
  • 5(e) is a schematic view showing a nanostructure functional film composed of a single material substrate 2101, that is, a nanograting is directly prepared on a substrate.
  • 5(b), 5(d), 5(h), and 5(i) are schematic views showing a nanostructure functional film composed of three materials: a substrate 2101, a functional layer 2102, and a composite layer 2103.
  • the visible lens may be formed by combining a lens substrate with a functional film bearing a nano-grating structure, may be composed of multiple layers of functional film, or may have the nano-grating processed directly on the lens substrate. Thus the nano-grating can be processed on the functional film and then attached to or embedded in the lens substrate, or processed directly on the lens substrate; that is, the substrate 2101 marked in Figures 5(a)-(j) can represent either the substrate of the functional film or the body of the lens.
  • the visible lens can be constructed in accordance with the aforementioned composite structure.
  • FIG. 5(e) is a process for directly processing the nanograting on the substrate 2101
  • FIG. 5(f) is for embedding the functional layer 2102 inside the substrate 2101 so that the nanograting is disposed inside the substrate 2101.
  • 5(g) and 5(h) show that both the substrate 2101 and the functional layer 2102 are prepared with nano-gratings and nested with each other.
  • in FIGS. 5(i) and 5(j), the functional layer 2102 bearing the nano-grating structure is bonded to the substrate 2101.
  • the essence of the nano-grating structure is that the optical refractive index varies periodically over a micro/nano-scale space, so that it diffracts incident light.
  • in the nanostructure functional film proposed by the invention, the nano-grating pixels can be fabricated by ultraviolet continuously-variable spatial-frequency lithography and nanoimprinting; the ultraviolet continuously-variable spatial-frequency lithography technology refers to the lithographic apparatus and photolithography method described in Chinese patent application CN201310166341.1. It should be noted that in the present invention, nano-gratings of various orientations can be fabricated on a smooth surface by photolithography; the thickness of the nano-grating film is 10 um to 2 mm. The nanostructure fabricated by nanolithography can serve as an embossing template, which is then replicated by nanoimprinting to form the pixel array of nano-gratings. The film may also be of the refractive-index-modulation type, prepared by nanolithography on a refractive-index-modulated recording material such as a photopolymer film or photorefractive crystal glass.
  • FIGS. 6(a)-(f) are schematic views of structures in which a functional film 22 containing nano-grating pixels and a lens substrate 21 constitute a directional functional lens (visible lens 2) or a visible lens unit. The nanostructure functional film 22 is attached to the surface of the lens substrate 21, or embedded inside the lens substrate 21, to obtain the directional functional lens. It is worth noting that when making single-layer or multi-layer closely stacked nano-grating structures (as shown in FIGS. 6(b), 6(e), and 6(f)), a transparent dielectric layer 23 with a refractive index different from that of the substrate can be vapor-deposited or laminated on the surface of the grating structure to protect the structural characteristics and light-guiding properties of the nano-grating.
  • Figure 7 is a nanostructure distribution of a directional functional film that achieves a single viewpoint convergence.
  • the nanograting structure shown in Figure 7 is equivalent to a single off-axis Fresnel lens structure that allows images to converge at viewpoint 1.
  • n ⁇ m such nano-grating structures constitute n ⁇ m off-axis Fresnel lens structures with different focal points (each group of nano-gratings can simulate off-axis Fresnel lens structures with different focal lengths as needed).
  • the exiting light can be made insensitive to the wavelength of the incident light, for example by grading the nanostructure, so that incident light of multiple wavelengths achieves the same convergence effect.
  • the pixel (nano grating) on the figure is not limited to a rectangular pixel, and may be composed of a pixel structure such as a circle, a diamond, or a hexagon.
  • the pixels (nano-gratings) in the figure can also be separated from one another, and the pixel pitch can be designed appropriately to meet illumination-gap requirements, so that pixel diffraction achieves the ideal diffraction efficiency and uniform illumination.
  • Figure 7 constructs a single-view new wavefront using a nanostructure functional film (or nanostructured directional functional film) with directivity.
  • the scene emits diffuse light to the surroundings, and the light projected by the scene to the human eye is imaged by the cornea and the lens.
  • the new wavefront constructed by the nanostructured directional functional film must conform to natural viewing conditions, that is, it should be a converging wavefront forming at least one convergence point, i.e., a viewpoint, in front of the eye.
  • the eye should be in the viewing area behind the viewpoint so that the human eye is relaxed and comfortable while viewing the virtual object.
  • the viewpoint distance of the nanostructure directional functional film can be optimized to make the eye in the best observation range.
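As an illustration of how such a single-viewpoint converging structure (the off-axis-Fresnel-like distribution of FIG. 7) can be specified, the sketch below computes the local period and orientation each nano-grating pixel needs to steer light toward a chosen viewpoint. It assumes, for simplicity, collimated normal-incidence illumination; the patent's in-waveguide illumination would add a constant in-plane wavevector term `k_in_xy`. All numeric values are hypothetical.

```python
import math

def pixel_grating(x, y, viewpoint, wavelength, k_in_xy=(0.0, 0.0)):
    """Period (m) and orientation (deg) of the nano-grating pixel at (x, y)
    on the lens plane z = 0 that steers light toward viewpoint = (vx, vy, vz).
    The grating supplies the in-plane wavevector difference:
        K = k_out_parallel - k_in_parallel,  period = 2*pi / |K|."""
    vx, vy, vz = viewpoint
    dx, dy, dz = vx - x, vy - y, vz          # pixel -> viewpoint direction
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    k0 = 2.0 * math.pi / wavelength
    kx = k0 * dx / r - k_in_xy[0]
    ky = k0 * dy / r - k_in_xy[1]
    period = 2.0 * math.pi / math.hypot(kx, ky)
    orientation = math.degrees(math.atan2(ky, kx))
    return period, orientation

# Pixels farther from the lens center need a larger deflection, hence a
# shorter local period -- together the pixels emulate an off-axis Fresnel lens.
p_near, o_near = pixel_grating(1e-3, 0.0, (0.0, 0.0, 50e-3), 532e-9)
p_far, o_far = pixel_grating(5e-3, 0.0, (0.0, 0.0, 50e-3), 532e-9)
```

Sweeping `(x, y)` over an n x m pixel array, as in the patent's n x m off-axis Fresnel structures, yields a full period/orientation map for one viewpoint; different viewpoints give different maps that are then nested.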
  • FIG. 8 is a diagram of a realistic enhanced display scheme based on reflective projection in an embodiment of the present invention.
  • the visible lens 2 has a nanostructure functional film 22 embedded in or bonded to it, and the pico projector 3 projects at a large angle from the rear side of the visible lens 2 onto the directional functional film 22 to realize directional light illumination of the nanostructures on the visible lens.
  • the illumination source emitted by the pico projector 3 may be a point source, a line source, or a surface source.
  • the intensity of the emitted light can vary with time or space.
  • the pico projector 3 realizes light-field grayscale (amplitude) modulation and matches the phase information of the light field modulated by the nanostructure functional film 22, finally projecting a converging wavefront in front of the human eye so that the human eye can see a realistic virtual three-dimensional image.
  • the converging wavefront is further superimposed with the wavefront formed by the real scene, for example, the propagating light field of the real scene is transmitted through the translucent nanostructure functional film, superimposed with the converging wavefront of the virtual scene, or the real world scene is captured and virtualized by the digital lens in real time. After the information is merged, it is projected in front of the human eye to obtain the fusion of real world information and virtual world information.
  • FIG. 9 is a diagram of a realistic enhanced display scheme based on projection projection in an embodiment of the present invention.
  • the nanostructured functional film 22 is attached or embedded in the wearable viewable lens 2.
  • the pico projector 3 projects from the front side of the visible lens to the nanostructured functional film 22 at a large angle to achieve directional light illumination of the nanostructures on the visible lens 2.
  • the illumination source emitted by the pico projector 3 may be a point source, a line source, or a surface source.
  • the intensity of the emitted light can vary with time or space.
  • the pico projector 3 realizes light-field grayscale (amplitude) modulation and matches the phase information of the light field modulated by the light-field lens (i.e., the visible lens 2), finally projecting a converging wavefront in front of the human eye so that the human eye can see a realistic virtual three-dimensional image.
  • the converging wavefront can be further superimposed on the wavefront formed by the real scene to obtain the fusion of real world information and virtual world information.
  • FIG. 10 is a diagram of another realistic enhanced display scheme according to an embodiment of the present invention.
  • the nanostructure functional film 22 is embedded or attached to the visible lens of the head mounted three-dimensional display device.
  • the pico projector 3 is coupled into the visible lens through the light coupling device 4 (the light coupling device 4 is prior art and will not be described again) to achieve directional light illumination of the nanostructures on the visible lens.
  • the pico projector 3 can perform point-scan or line-scan projection imaging, or surface projection imaging.
  • the intensity of the emitted light can vary with time or space.
  • the pico projector 3 realizes light-field grayscale (amplitude) modulation and matches the phase information of the light field modulated by the visible lens (substantially the nanostructure functional film 22), finally projecting a converging wavefront in front of the human eye so that the human eye can see realistic virtual 3D images.
  • the converging wavefront can be further superimposed with the wavefront formed by the real scene to obtain the fusion of real world information and virtual world information.
  • the light coupling device 4 and the visible lens in FIG. 10 are both transparent or translucent materials, so light from the actual scene passes through the light coupling device 4 and the visible lens and is fused with the virtual information, and the eye sees a visual presentation combining the virtual and the real.
  • FIG. 11 is a schematic diagram of another implementation of the reality enhanced display scheme of the present invention.
  • the visible lens comprises a lens substrate 21, a nanostructure functional film 22, and an optical waveguide device 5. The nanostructure functional film 22 is embedded in or attached to one side of the lens substrate 21, and the optical waveguide device 5 is disposed on the other surface of the lens substrate 21 (the lens substrate 21 and the nanostructure functional film 22 can also be embedded together on the optical waveguide device 5 as needed; as shown in FIG. 11, the length, width, and thickness of the optical waveguide device 5 are larger than the corresponding dimensions of the lens substrate 21 and the nanostructure functional film 22). The pico projector 3 and the optical waveguide device 5 are optically connected by the light coupling device 4.
  • the light of the pico projector 3 is coupled through the light coupling device 4 into the optical waveguide device 5 on the visible lens. Under the action of total internal reflection, the light travels within this visible lens. The visible lens contains a set of pixelated nanostructures (nano-gratings) disposed on the nanostructure functional film 22, which diffract the light so that part of it escapes from the light-emitting surface of the light-guiding lens. The angle of the exiting ray is related to the shape (period, orientation) of the nanostructure, and the efficiency of the outgoing light is related to the pixel size and depth of the nanostructure. Therefore, by designing a specific nano-grating structure, a convergence viewpoint can be formed on the light-emitting surface of the visible lens.
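The total-internal-reflection condition that keeps the coupled light trapped in the waveguide until a grating pixel extracts it can be checked with Snell's law. A minimal sketch, assuming a hypothetical core index of 1.5 against air (the patent does not specify material indices):

```python
import math

def critical_angle_deg(n_core, n_outside=1.0):
    """Critical angle (measured from the surface normal) above which light
    is totally internally reflected at the core/outside interface."""
    return math.degrees(math.asin(n_outside / n_core))

def is_guided(angle_deg, n_core=1.5, n_outside=1.0):
    """True if a ray hitting the waveguide surface at angle_deg from the
    normal stays trapped by total internal reflection."""
    return angle_deg > critical_angle_deg(n_core, n_outside)
```

This is why the projector couples in "at a large angle": only rays steeper than the critical angle propagate along the lens, and only diffraction at a nano-grating pixel can redirect them below it so they escape toward the eye.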
  • the pico projector 3 is imaged by point scanning or line scan projection.
  • the intensity of the emitted light can vary with time or space.
  • the pico projector 3 realizes modulation of the light-field grayscale (amplitude) information by scanning and matches the phase information of the light field modulated by the directivity-function screen (i.e., the visible lens), finally projecting the converging wavefront in front of the human eye so that the human eye can see realistic virtual 3D images.
  • the converging wavefront can be further superimposed with the wavefront formed by the real scene to obtain the fusion of real world information and virtual world information.
  • FIG. 12 is a schematic diagram of another implementation of the reality enhanced display scheme of the present invention.
  • the visible lens 2 (light-guiding lens) comprises a lens substrate 21, a nanostructure functional film 22, and an optical waveguide device 5. The nanostructure functional film 22 is embedded in or bonded to one side of the lens substrate 21, and the optical waveguide device 5 is disposed on the other side of the lens substrate 21 (the lens substrate 21 and the nanostructure functional film 22 may also be embedded together on the optical waveguide device as needed; as shown in FIG. 12, the length, width, and thickness of the optical waveguide device are larger than the corresponding dimensions of the lens substrate 21 and the nanostructure functional film 22). The embodiment further provides a light collimating device 6, a light coupling device 4, a spatial light modulator 7, and an illumination source (a point light source or line light source, expressed in FIG. 12 as the point/line source 8). The illumination source is optically connected to the optical waveguide device 5 through the light collimating device 6 and the light coupling device 4, and the spatial light modulator 7 is disposed between the human eye and the nanostructure functional film 22.
  • An illumination source (point source or line source) is coupled into the optical waveguide device 5 by the light collimation device 6 and the light coupling device 4, that is, into the visible lens 2.
  • the nanostructure functional film 22 comprises a set of pixelated nano-grating structures that diffract the light so that part of the light escapes from the light-emitting surface of the nanostructure functional film 22.
  • the angle of the exiting ray is related to the shape (period, orientation) of the nano-grating structure.
  • the efficiency of the outgoing light is related to the pixel size and depth of the nano-grating structure.
  • a spatial light modulator 7 (such as a liquid crystal panel or other flat-panel display) is placed between the visible lens and the human eye; the spatial light modulator 7 realizes light-field grayscale (amplitude) modulation and matches the phase information of the light field modulated by the nanostructure functional film 22, finally projecting the converging wavefront in front of the human eye so that the human eye can see a realistic virtual three-dimensional image.
  • the converging wavefront can be further superimposed with the wavefront formed by the real scene to obtain the fusion of real world information and virtual world information.
  • Figures 13(a) and 13(b) are diagrams of a multi-view display scheme in accordance with an embodiment of the present invention.
  • when the eyes view objects at different positions and distances, the eyeballs rotate and the visual axes rotate accordingly. Therefore, the nano-grating structures on the nanostructure functional film must be specifically designed so that they respectively form at least two viewing angles, or a plurality of continuous viewing angles forming a continuous window, convenient for viewing the virtual scene as the eye rotates.
  • nano-grating structures forming three viewing angles are formed on the nanostructure functional film 22; the three viewing angles corresponding to horizontal rotation of the eyeball are the nano-grating structure distribution region 9101 corresponding to viewing angle 91, the region 9201 corresponding to viewing angle 92, and the region 9301 corresponding to viewing angle 93, whose visual axes are 1101, 1102, and 1103, respectively. As the eyes move left and right, they correspond to the left and right viewing angles (i.e., viewing angle 91 and viewing angle 93) and their nano-grating pixel distribution regions 9101 and 9301.
  • since the eye may also move up and down, nano-grating pixel regions corresponding to upper and lower viewing angles can likewise be provided.
  • nano-grating structures forming three viewing angles are formed on the nanostructure functional film to correspond to the three viewing angles of upper, middle, and lower rotation of the eyeball: the nano-grating structure distribution region 9102 corresponding to viewing angle 94, the region 9202 corresponding to viewing angle 95, and the region 9302 corresponding to viewing angle 96, whose visual axes are 1201, 1202, and 1203, respectively. As the eyes move up and down, they correspond to the upper, middle, and lower viewing angles (i.e., viewing angles 94, 95, and 96) and their nano-grating pixel distribution regions 9102, 9202, and 9302.
  • the upper and middle viewing angles of the visible lens correspond to distant scenes
  • the lower viewing angle corresponds to the near scene.
  • the viewing angle can be designed to be smaller than the pupil size of the human eye, so that one eye can see two or more viewing angle images, and the monocular parallax effect is achieved, that is, the position of the single eye focusing is located on the display object.
  • designing the viewing angle to be smaller than the pupil of the human eye can reduce the convergence-accommodation conflict, realize continuous dynamic parallax, and make the viewing effect more natural.
  • an eyeball tracking device may be added to the head-mounted three-dimensional display device to track the dynamic change of the eyeball and determine the visual-axis angle and pupil position; this information is then converted into control signals for devices such as the light source, pico projector, or spatial light modulator, which project the corresponding images onto different parts (regions) of the visible lens so that the convergence viewpoint lies on the visual axis of the eye, achieving optimal visual quality and reducing the amount of data to be processed (transferred).
  • Figures 14(a)-(e) are plots of distribution of viewing angle pixel densities.
  • the ordinate Y represents the distribution density or the number of nano-grating pixels (nano-gratings)
  • the middle vertical dashed line represents a certain visual axis
  • the abscissa X represents the distance (or angle) between a region on the nanostructure functional film and the visual axis; the intersection of the visual axis and the abscissa is marked 0, indicating that the distance (or angle) between that position (or region) on the nanostructure functional film and the visual axis is zero.
  • Case 1, as shown in FIG. 14(a): the pixel density of a single viewing angle is independent of the angle from the visual axis. The pixels of each viewing angle are evenly arranged on the surface of the directional functional lens by nesting within each other, that is, the nano-gratings are uniformly distributed over the entire nanostructure functional film.
  • Case 2, as shown in FIGS. 14(b)-(d): the pixel density of a single viewing angle is related to the angle from the visual axis. Considering the characteristics of the human eye, human vision is most sensitive within roughly 6° to 8° around the visual axis, referring to the cone-cell distribution map of FIG. The designed pixel distribution curve for a single viewing angle can be a triangular function, a square-wave function, a trapezoidal function, a sinusoidal function (taking one cycle between two points), or another function type; the common feature is that the nano-grating pixels corresponding to a viewing angle are densely arranged near the visual axis corresponding to that viewing angle and sparsely arranged away from the visual-axis region.
  • the nano-grating pixels corresponding to a single viewing angle are related to the angle from the visual axis; their distribution characteristic is that the arrangement density near the visual axis corresponding to that viewing angle is greater than the density away from the visual-axis region, that is, the nano-grating pixels of the respective viewing angles are non-uniformly arranged on the nanostructure functional film by nesting within each other. The multi-view nano-grating pixels are thus non-uniformly arranged on the surface of the directional functional lens.
  • the nano-grating pixel distribution corresponding to each viewpoint of each viewing angle is a triangular distribution, as indicated by the broken lines, based on three viewing angles, i.e., viewing angles 91, 92, 93 and their corresponding visual axes 1101, 1102, and 1103. The density function of the nano-gratings corresponding to each visual axis is triangular: the closer to the visual axis, the higher the density of the nano-grating pixels, so each distribution density curve takes the form of an isosceles or equilateral triangle. The isosceles-trapezoid solid line is the overall pixel density distribution of the nanostructures in the light-field lens after synthesis of the three viewing angles. The advantage is that the characteristics of the human eye can be fully exploited, obtaining higher image quality with fewer pixels; similarly, nano-grating arrangements with more viewing angles can be designed as needed.
  • the distribution density curves of other types of functions are also based on the same principles described above.
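The triangular density profiles described above can be sketched as follows. The half-widths and visual-axis positions are hypothetical values chosen only to mirror the 6°-8° sensitive zone and the three-axis synthesis of FIG. 14; they are not taken from the patent.

```python
def triangular_density(angle_deg, half_width_deg=8.0, peak=1.0):
    """Relative nano-grating pixel density at angle_deg from a visual axis:
    maximal on the axis, falling linearly to zero at half_width_deg
    (roughly matching the sensitive zone of human vision)."""
    return max(0.0, peak * (1.0 - abs(angle_deg) / half_width_deg))

def total_density(angle_deg, axes_deg=(-15.0, 0.0, 15.0), half_width_deg=16.0):
    """Overall density after nesting three viewing angles whose visual axes
    sit at axes_deg; each contributes its own triangular profile, and the
    overlapping sum flattens toward the trapezoid-like solid curve of FIG. 14."""
    return sum(
        triangular_density(angle_deg - a, half_width_deg=half_width_deg)
        for a in axes_deg
    )
```

Sampling pixel positions in proportion to `triangular_density` concentrates gratings near each visual axis, which is the arrangement the patent credits with better image quality from fewer pixels.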
  • Fig. 15(b) is a multi-depth display scheme realized by using a multifocal nanostructure directional functional lens on the basis of Fig. 15(a). By designing the nanostructure functional film as a multifocal nanostructure directional functional film, it can project virtual images of at least two depths in the human visual field (Fig. 15(b) takes distant, medium, and near depths as an example). When the human eye focuses on a close-up scene by adjusting the eyeball, the distant scene is blurred; conversely, when the human eye focuses on a distant scene, the close-up scene is blurred. This stereoscopic display method, which conforms to the focusing habit of the human eye, makes viewing the three-dimensional scene more natural. The above is based on two focal lengths; extending it further can in theory achieve optical rendering similar to a real scene, making the virtual scene highly realistic.
  • the multifocal nanostructure directional functional lens (i.e., the visible lens) is equivalent to an off-axis Fresnel lens with multiple focal lengths, and its imaging relationship can be simply approximated under the paraxial condition, where u and u' are the object distance and image distance, respectively, and f is the focal length of the Fresnel lens.
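The paraxial imaging relation itself is omitted from this extraction. Given that u, u', and f are defined as object distance, image distance, and focal length, the intended formula is presumably the standard Gaussian thin-lens relation; a hedged reconstruction (the patent's sign convention is not given):

```latex
% Assumed form of the omitted paraxial relation (standard Gaussian
% thin-lens equation):
\[
  \frac{1}{u} + \frac{1}{u'} = \frac{1}{f}
\]
% Lateral magnification, relevant when scaling the projected object so
% that the virtual-image size stays proportional to its distance:
\[
  m = \frac{u'}{u}
\]
```

Under this relation, each nested focal length f places the virtual image at a different u', which is how the multifocal lens separates the close, medium, and distant image planes.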
  • in the design, the size of the projected object should be changed so that the ratio of the presented virtual-image size to its corresponding distance accords with the proportions with which the human eye views a real scene, just as different scenes in the real world maintain consistent proportions in the human eye; the observer is thus immersed, and the virtual content can be effectively integrated with the real scene.
  • the size of the projected object can be changed in various ways: the image chip can be controlled to change the size of the output object image, the magnification of the projection engine can be changed, or the relative distance between the projection engine and the multifocal nanostructure directional functional lens can be adjusted.
  • the multi-angle of view and the multi-depth of field are combined to obtain a fusion of the stereoscopic effect of the parallax and the stereoscopic effect of the eye muscle focusing, so that the viewing effect is more natural.
  • Figure 16 is a nanograting structure distribution diagram of a multifocal nanostructure directional functional lens.
  • the nano-grating structure is equivalent to nesting a plurality of off-axis Fresnel lens structures to form a plurality of focal lengths (three in the figure, a focal length a, a focal length b, and a focal length c).
  • the pixel (nano grating) on the figure is not limited to a rectangular pixel, and may be composed of a pixel structure such as a circle, a diamond, or a hexagon.
  • the pixels on the figure can also be separated from each other, and the pixel pitch can be appropriately designed to meet the requirements of the illumination gap.
  • structural parameters such as the pixel size, structure, and groove depth vary with the spatial distribution, so that each pixel obtains the ideal diffraction efficiency and uniform illumination is achieved; the nano-grating pixels are aligned with the projected image or the output image of the spatial light modulator.
  • the nano-grating pixel units corresponding to the short focal length (such as focal length a in Figure 16) should be matched to the close-range image pixel units, that is, the nano-grating pixel units labeled 001 in Figure 16; the nano-grating pixel units corresponding to the long focal length (such as focal length c in Figure 16) match the distant image pixel units; and the nano-grating pixel units corresponding to the middle focal length (focal length b in Figure 16) match the mid-range image pixel units, that is, the nano-grating pixel units labeled 002 in Figure 16. Multi-depth image separation display is thereby achieved.
  • the nano-grating pixel units corresponding to the respective focal lengths are nested with one another. Taking the nested arrangement of the pixel units corresponding to the three focal lengths shown in Figure 16 as an example, with four units in the horizontal direction and three units in the vertical direction: the first row is 001, 002, 003, 001 from left to right; the second row is 002, 003, 001, 002; the third row is 003, 001, 002, 003.
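The nested arrangement described above follows a simple cyclic pattern. A minimal sketch (the generator below is illustrative, not part of the patent) that reproduces the 4-by-3 layout for three focal-length unit types labeled 001 to 003:

```python
def nested_layout(rows, cols, n_types=3):
    """Cyclically nest nano-grating pixel-unit types so that each
    focal length is evenly interleaved across the array."""
    # The unit type at (r, c) cycles with the sum of its indices.
    return [[f"{(r + c) % n_types + 1:03d}" for c in range(cols)]
            for r in range(rows)]

for row in nested_layout(3, 4):
    print(" ".join(row))
# 001 002 003 001
# 002 003 001 002
# 003 001 002 003
```

This matches the row-by-row listing in the text; other nesting rules (random interleaving, hexagonal tiling) would follow the same idea of giving every focal length roughly equal area.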
  • Figures 17(a), 17(b), 17(c), and 17(d) are schematic diagrams of dividing a virtual scene into multiple scene depths. Taking the image shown in Figure 17(a) as an example, it can be divided into several images according to the near and far relationships of the scene (in the illustration it is divided into close-range, mid-range, and distant views, corresponding to Figures 17(b), 17(c), and 17(d), respectively). Considering that the human eye resolves near objects better than distant ones, close-range objects can be finely segmented during image segmentation while distant objects are roughly segmented. The segmented images are projected to different distances in front of the human eye to form a multi-depth three-dimensional image.
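The finer-near, coarser-far segmentation can be realized by slicing depth uniformly in diopters (reciprocal distance) rather than in metres: equal diopter steps naturally give thin slices up close and thick slices far away, matching the eye's depth resolution. A hedged sketch (the function name, slice count, and near/far limits are illustrative assumptions, not the patent's method):

```python
def depth_slice_index(distance_m, near_m=0.25, far_m=10.0, n_slices=6):
    """Assign a scene point to a depth slice. Slicing uniformly in
    diopters (1/distance) segments near objects finely and far
    objects coarsely."""
    d = 1.0 / max(distance_m, near_m)        # diopters, clamped at near limit
    d_near, d_far = 1.0 / near_m, 1.0 / far_m
    # Fraction of the dioptric range covered: 0 at the near limit, 1 at the far limit.
    t = (d_near - d) / (d_near - d_far)
    return min(int(t * n_slices), n_slices - 1)

# Near points separate into distinct slices; far points merge into coarse ones.
print([depth_slice_index(z) for z in (0.3, 0.5, 1.0, 3.0, 9.0)])  # → [1, 3, 4, 5, 5]
```

Note how 3 m and 9 m land in the same slice while 0.3 m and 0.5 m do not, which is exactly the "finely segmented near, roughly segmented far" behaviour described above.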
  • Figures 18(a) and 18(b) are diagrams of a multiple depth of field display scheme implemented by light field scanning in accordance with an embodiment of the present invention.
  • the nano pixel structure design of the directional functional lens realizes multiple convergence viewing angles; this embodiment changes the incident angle on the nano pixel units by rapidly moving the spatial position or (and) the projection angle of the pico projector, thereby changing the exit light field so that a plurality of continuous viewing angles are scanned within the viewing area, particularly in the depth direction.
  • the nanostructured directional functional film can be viewed as an off-axis Fresnel lens. Under paraxial conditions, the imaging relationship can be simply approximated as 1/u + 1/u' = 1/f, where u and u' are the object distance and the image distance, respectively, and f is the focal length of the Fresnel lens.
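Under the stated paraxial approximation (the standard thin-lens relation 1/u + 1/u' = 1/f, with u the object distance, u' the image distance, and f the focal length), the virtual-image distance follows directly. A small illustrative helper, using the common convention that a negative image distance denotes a virtual image on the object side (the usual near-eye display case):

```python
def image_distance(u, f):
    """Solve 1/u + 1/u' = 1/f for the image distance u'.
    u: object distance, f: Fresnel-lens focal length (same units).
    A negative result indicates a virtual image, which occurs when
    the projected object sits inside the focal length (u < f)."""
    if u == f:
        raise ValueError("object at focus: image at infinity")
    return 1.0 / (1.0 / f - 1.0 / u)

# Object 40 mm from a 50 mm focal-length lens:
# virtual image roughly 200 mm away, magnified about 5x.
ui = image_distance(40.0, 50.0)
print(ui, abs(ui) / 40.0)
```

The numbers here are arbitrary; the point is that placing the object plane inside the focal length projects a magnified virtual image at a comfortable viewing distance, which is why changing the object size (as discussed above) keeps the virtual image proportionate.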
  • the size of the projected object should be changed to make the virtual image size appropriate, so that the observer experiences immersion and the display can be effectively integrated with the real scene.
  • when the observer accommodates to focus on the close-up scene, the distant scene is blurred, and vice versa, so that the observer obtains a viewing experience almost identical to that of a real scene.
  • a three-dimensional display scheme with multiple viewpoints and multiple depths of field can be obtained according to practical applications, and the three-dimensional effect experience can be improved.
  • FIG. 19 is a diagram showing a display scheme based on a multi-layer nanostructure functional film (also referred to as a directional light guiding film) according to an embodiment of the present invention.
  • the multi-layer nanostructure functional film is superimposed (taking three layers in FIG. 19 as an example); it is substantially a close stack of three visible lens units, each of which includes an optical waveguide device 5, a lens substrate 21, and a nanostructure functional film 22.
  • Each of the visible lens units is controlled by a separate pico projector 3 and a light coupling device 4.
  • the pico projector 3 can be point scan or line scan projection imaging, or face projection imaging.
  • the intensity of the emitted light can vary with time or space.
  • the pico projector 3 can be placed on the same side of each layer of the light guiding film or on different sides.
  • the method of tightly overlapping multi-layer directional light guiding films essentially uses spatial multiplexing. It increases the amount of information the visible screen can display and realizes three-dimensional display with multiple viewpoints or multiple focal lengths. The method provides both image clarity and more viewing-angle information, achieving a good naked-eye 3D display effect; its advantage is that the viewing angle is more continuous and the 3D experience is better.
  • FIG. 20 is a modification of the scheme of FIG. 19.
  • the multi-layered visible lens units have the same structure (the illustration again takes three layers as an example; two, four, five, or more layers can be made as needed). Each layer of the lens unit also corresponds to a light coupling device 4, but these light coupling devices 4 are disposed on the same side of each layer's optical waveguide device 5; a single pico projector 3 is then shared, with an optical switching device 10 disposed between each light coupling device 4 and the pico projector 3, and the pico projector 3 is switched by the optical switching device 10 to communicate with a given light coupling device 4.
  • the pico projector 3 can be point scan or line scan projection imaging, or face projection imaging.
  • the intensity of the emitted light can change with time or space to achieve alternating illumination of the nanostructure functional films of the various layers. Therefore, by controlling the pico projector 3 through frequency division, the nanostructure functional films can be illuminated sequentially; that is, the outgoing light field in the light-emitting space changes sequentially according to the light field controlled by each nanostructure functional film through its nano-grating structure.
  • the method of closely overlapping the frequency-divided multi-layer nano-structure functional film essentially uses a space-time multiplexing method to increase the amount of information that can be displayed on the visual screen, and realize multi-view or multi-focus three-dimensional display. By this method, both the image sharpness and the more viewing angle information are provided, and a good 3D display effect can be achieved. The advantage is that the perspective is more continuous and the 3D experience is better.
  • the head mounted three-dimensional display device is further provided with a frequency division control device, and the frequency division control device includes:
  • a pulse generating circuit configured to generate a reference pulse signal, connected to the input end of the frequency dividing circuit, and send the reference pulse signal to the frequency dividing circuit to adjust the frequency of the periodic control signal;
  • the other output of the frequency dividing circuit is connected to the optical switching device, controlling the optical switching device to periodically switch, at the set frequency, the pico projector's sequential illumination of the visible lens units of each layer;
  • a lighting control circuit is further disposed between the frequency dividing circuit and each pico projector; according to the periodic control signal of the frequency dividing circuit, the lighting control circuit periodically turns on and off, at the set frequency, the sequential illumination of each layer of visible lens unit by each pico projector.
  • Fig. 21 (a) and Fig. 21 (b) are further expanded and optimized on the basis of the example of Fig. 12.
  • the visible lens is superimposed with a multi-layer nanostructure functional film (Figures 21(a) and 21(b) take three layers as an example); it is substantially a close stack of three visible lens units, each of which includes an optical waveguide device 5, a lens substrate 21, and a nanostructure functional film 22. (Of course, the nano-grating can also be processed directly on the lens substrate 21 or the optical waveguide device 5, in which case a nanostructure functional film is not required.)
  • Each of the layers is optically coupled to a light coupling device 4.
  • a light collimating device 6 and a spatial light modulator 7 are further provided; the illumination light source (point light source or line light source, expressed as point/line light source in FIG. 12) sequentially passes through the light collimating device 6 and the light coupling device 4 to be optically coupled into the optical waveguide device 5, and the spatial light modulator 7 is disposed between the human eye and the nanostructure functional film 22.
  • An illumination source (point source or line source, labeled as point/line source 30) is coupled into the optical waveguide device 5 by a light collimating device 6 and a light coupling device 4 (expressed as a light coupling device in Fig. 12), that is, Import the visible lens.
  • the nanostructure functional film 22 comprises a set of pixel nano-grating structures that are diffracted by the action of light to cause some of the light to escape from the light-emitting surface of the nano-structured functional film 22.
  • the angle of the exiting ray is related to the shape (period, orientation) of the nano-grating structure.
  • the efficiency of the outgoing light is related to the pixel size and groove depth of the nano-grating structure. Therefore, by designing a specific nano-grating structure, one or more convergence viewpoints can be formed on the light-emitting surface of the display device (visible lens).
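The dependence of the exit angle on the grating period and orientation can be sketched with the standard first-order grating equation. The normal-incidence form below is a simplification for illustration, not the patent's full waveguide analysis (a real design must also account for the in-guide propagation angle and the film's refractive index):

```python
import math

def exit_angle_deg(period_nm, wavelength_nm, incident_deg=0.0, m=1):
    """Diffraction angle from the grating equation
    sin(theta_out) = sin(theta_in) + m * wavelength / period.
    Returns None when the order is evanescent (no propagating exit ray)."""
    s = math.sin(math.radians(incident_deg)) + m * wavelength_nm / period_nm
    if abs(s) > 1.0:
        return None                      # order does not propagate
    return math.degrees(math.asin(s))

# Green light (532 nm) on an 800 nm pixel grating at normal incidence:
print(exit_angle_deg(800, 532))          # ≈ 41.7 degrees
# A 500 nm period would push the first order past 90 degrees:
print(exit_angle_deg(500, 532))          # None: evanescent
```

Varying the period pixel by pixel steers each exit ray toward a common point, which is how the pixelated gratings form the convergence viewpoints described in the text; the grating orientation sets the azimuth of the steering in the same way.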
  • a spatial light modulator 7 (such as a liquid crystal panel or other flat panel display) is placed between the visible lens and the human eye; grayscale (amplitude) modulation of the light field is realized by the spatial light modulator 7 and matched with the phase information of the light field modulated by the nanostructure functional film 22, and finally the converging wavefront is projected in front of the human eye, so that the human eye sees a realistic virtual three-dimensional image.
  • the converging wavefront can be further superimposed with the wavefront formed by the real scene to obtain the fusion of real world information and virtual world information.
  • the scheme of FIG. 21(a) adopts each layer of visible lens unit corresponding to one light coupling device, light collimating device and point/line light source, and the illumination of each layer is independently controlled by one point/line light source.
  • multiple layers (three layers in the figure) share one light collimating device through an optical switching device and optically connect to a single point/line light source; that is, illumination of the multi-layer visible lens is provided by one point/line light source passing through a light collimating device and then alternately switched by the optical switching device to illuminate each layer of the visible lens.
  • the solution essentially consists of multiple closely stacked layers of nanostructure functional film used for phase control and a fast-response spatial light modulator used for controlling grayscale display.
  • the single-layer nanostructured functional film forms a converging viewpoint on the light exiting surface. Taking the three-layer visible lens unit as an example, the uppermost layer of the nano-structure functional film forms a convergence viewpoint 2111, and the middle layer of the nano-structure functional film forms a convergence viewpoint 2121, and the lowermost nano-structure functional film forms a convergence viewpoint 2131. And so on.
  • the multilayer light guiding film is closely laminated.
  • the illumination light source is controlled by frequency division to realize sequential illumination of each light guiding film, that is, the outgoing light field in the light exiting space is sequentially changed according to the outgoing light field controlled by the nano-structure functional film through the nano-grating structure.
  • the illumination light source may be placed on the same side of each layer of the light guiding film as needed, or may be placed on the opposite side.
  • Each layer of directional light directing film can be controlled by separate illumination sources, light collimating devices, and light coupling devices.
  • the sequential transformation of the exiting light field can be achieved by alternately illuminating the illumination sources of the respective layers of light guiding lenses.
  • each layer of directional light directing film is controlled by the same illumination source and light collimating means.
  • the light source is alternately switched to the directional light guiding film of each layer by the optical switching device, realizing alternating illumination of each layer's directional light guiding film.
  • Figures 22(a) and 22(b) are block diagrams of the frequency division control circuit of the frequency-divided nanostructure functional film.
  • Fig. 22(a) is a block diagram showing the control circuit of the above structure of Fig. 21(a).
  • the head mounted three-dimensional display device is further provided with a frequency dividing control device, and the frequency dividing control device comprises:
  • a pulse generating circuit configured to generate a reference pulse signal, connected to the input end of the frequency dividing circuit, and send the reference pulse signal to the frequency dividing circuit to adjust the frequency of the periodic control signal;
  • An image refresh control circuit wherein the input end is connected to an output end of the frequency dividing circuit, and the output end is connected to an input end of the spatial light modulator for controlling the refresh frequency of the spatial light modulator to be synchronized with the switching frequency of the light source;
  • the other output end of the frequency dividing circuit is connected to the optical switching device, and the optical switching device is controlled to periodically switch the light source to each layer of visible lenses according to the set frequency.
  • a lighting control circuit is further disposed between the frequency dividing circuit and each light source; according to the periodic control signal of the frequency dividing circuit, the lighting control circuit periodically turns on and off, at the set frequency, the sequential illumination of each layer of visible lens unit by each light source.
  • the pulse generating circuit generates a periodic pulse signal.
  • the pulse signal passes through a frequency dividing circuit to control the lighting circuit, thereby achieving alternate switching of the point/line source and alternating illumination of the directional light guiding films of the layers.
  • the frequency dividing circuit controls the refresh frequency of the spatial light modulation signal to achieve matching of the output image refresh frequency and the multi-layer directional light guiding film illumination frequency.
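The divider logic just described can be sketched in software: a single reference pulse train is divided down so that, on each divided tick, the active layer advances and the spatial light modulator is told to refresh, keeping image refresh locked to the layer-switching frequency. All names and the division ratio below are illustrative, not the patent's circuit:

```python
def schedule(n_pulses, divide_by=4, n_layers=3):
    """Divide a reference pulse train: every `divide_by` pulses the
    active light-guiding layer advances cyclically and the SLM is
    refreshed, so image refresh stays synchronized with layer switching."""
    events = []
    for pulse in range(n_pulses):
        if pulse % divide_by == 0:               # divided tick
            layer = (pulse // divide_by) % n_layers
            events.append((pulse, layer, "slm_refresh"))
    return events

for pulse, layer, action in schedule(24):
    print(f"pulse {pulse:2d}: illuminate layer {layer}, {action}")
```

For three layers this cycles 0, 1, 2, 0, 1, 2, ...; as long as each full cycle completes faster than the eye's flicker-fusion threshold, the three sequentially lit layers are perceived as one continuous light field.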
  • Fig. 22(b) is a block diagram showing the control circuit of the above structure of Fig. 21(b).
  • the pulse generating circuit generates a periodic pulse signal.
  • the pulse signal passes through a frequency dividing circuit to control the optical switching device, thereby achieving alternate illumination of the directional light guiding films of the respective layers.
  • the frequency dividing circuit controls the refresh frequency of the spatial light modulation signal to achieve matching of the output image refresh frequency and the multi-layer directional light guiding film illumination frequency.
  • Figures 23(a), 23(b), and 23(c) are diagrams of the binocular windows corresponding to an embodiment of the present invention.
  • two independent visible lenses respectively correspond to two eyeballs.
  • the image of the same object on the retina of the two eyes is not exactly the same.
  • the left eye sees more left side of the object from the left side, while the right eye sees more right side of the object from the right side.
  • the view information output from the corresponding viewpoints of the left and right eyes should be matched to achieve binocular stereo vision.
  • the two visible lens illumination areas corresponding to the left and right eye viewpoints should match. That is, the distribution area of the nano-grating pixel array corresponding to the middle viewpoint of the left eye should be coaxial with the left-eye visual axis (or can be approximately coaxial) and coaxial with the object (or can be approximately coaxial).
  • the nano-grating pixel array distribution area corresponding to the middle viewpoint of the right eye should be coaxial (or approximately coaxial) with the right-eye boresight and coaxial (or approximately coaxial) with the object.
  • the view information output to the left and right eyes should match; that is, the visible object observed by the left eye through the left visible lens should contain more left-side information, and the visible object observed by the right eye through the right visible lens should contain more right-side information, consistent with the human eye's perspective, distance, and size relationships.
  • the position of the nano-grating corresponding to the corresponding viewpoint of the left and right eyes should be correctly matched, and the view information should be output.
  • FIG. 24 is a schematic diagram of a head-mounted three-dimensional display according to an embodiment of the present invention.
  • the nanostructure distribution of the left and right visible lenses is symmetrical, and the light field generated between the two has binocular parallax.
  • the converging light fields of the left and right visible lenses form a parallax effect; that is, the image obtained by the left eye contains more left-direction information and the image obtained by the right eye contains more right-direction information, and the brain fuses them into a stereoscopic image, in line with human eye observation habits.
  • by adding an eyeball dynamic tracking device, the visual axis angle and pupil position are determined through eye tracking; the movement information of the eyeball, together with its visual axis angle and pupil position, is transmitted to the control device, which controls the pico projector or point/line light source and the spatial light modulator to project the corresponding images in different parts of the left and right visible lenses, achieving optimal visual quality and reducing the amount of data to be processed (transmitted).
  • FIG. 25 is a schematic diagram of a head-mounted three-dimensional reality enhanced display scheme.
  • the pico projector projects at a large angle from the side of the visible lens (the pico projector is not shown), realizing directional illumination of the nanostructures on the two independent visible lenses.
  • the pico projector realizes grayscale (amplitude) modulation of the light field by controlling the relationship among light intensity, wavelength, and spatial position; this is matched with the phase information of the light field modulated by the nanostructure directional functional film (i.e., the nanostructure functional film on the lens), and finally the converging wavefront is projected in front of the human eye.
  • the wavefront can be superimposed with the wavefront formed by the real scene to obtain the fusion of real world information and virtual world information.
  • the nanostructure distribution of the left and right visible lenses is symmetrical, and the light field generated between the two has binocular parallax.
  • the converging light fields of the left and right visible lenses form a parallax effect; that is, the image obtained by the left eye contains more left-direction information and the image obtained by the right eye contains more right-direction information, and the brain fuses them into a stereoscopic image, conforming to human eye observation habits. Considering that the eyes view from different positions and at different distances, the eyes rotate and the visual axis and viewpoint rotate accordingly.
  • the nano-grating pixels on the nanostructure functional film are designed to form a continuous window of nine viewpoints as shown in the figure, making it easy to view the virtual scene as the eye turns.
  • the distribution of pixel density at a single viewing angle is related to the angle of view of the boresight.
  • the nano-grating pixels corresponding to the viewing angle are densely arranged in the vicinity of the viewing axis corresponding to the viewing angle, and are sparsely arranged away from the viewing axis region.
  • the nano-grating pixels of various viewing angles are non-uniformly arranged by nesting each other. The eyes move left and right, corresponding to the left and right viewing angles and their nano-grating pixel areas.
  • the eye moves up and down, corresponding to the upper and lower viewing angles and its nano-grating pixel area.
  • the upper and middle viewing angles of the visible lens correspond to distant scenes, and the lower angle of view corresponds to the near scene.
  • an eyeball dynamic tracking device is added; the visual axis angle and pupil position are determined by eye tracking, and the movement information of the eyeball, together with its visual axis angle and pupil position, is transmitted to the control device, which controls the pico projector or the point/line source and the spatial light modulator to project corresponding images in different parts of the left and right visible lenses, achieving the best viewing quality and reducing the amount of data to be processed (transmitted).
  • FIG. 26 illustrates an embodiment in which an illumination source (point source or line source) is coupled through a light collimating device and a light coupling device (refer to the corresponding description above) into an optical waveguide device (not shown in the figure; refer to the corresponding description), and under the action of reflection the light propagates through the nanostructured functional film.
  • the nanostructure functional film comprises a set of pixel type nano grating structures, which are diffracted by the action of light, so that part of the light escapes from the light exit surface of the nanostructure functional film.
  • the angle of the exiting ray is related to the shape of the nanostructure (period, orientation).
  • the efficiency of the outgoing light is related to the pixel size and depth of the nanostructure.
  • multiple layers of nanostructure functional film used for phase control are closely stacked to form a frequency-divided nanostructure functional film, combined with a fast-response spatial light modulator used for controlling grayscale display.
  • Each single layer of nanostructured functional film forms a converging viewpoint on the illuminating surface.
  • the light guiding film 2510 forms a converging viewpoint 2511; the nanostructure functional film 2520 forms a converging viewpoint 2521; the nanostructure functional film 2530 forms a converging viewpoint 2531; and so on.
  • the multilayer nanostructure functional film is closely laminated.
  • the illumination source is controlled by frequency division to realize sequential illumination of each nano-structured functional film, that is, the outgoing light field in the light-emitting space is sequentially changed according to the output light field controlled by the nano-structure functional film through the nano-grating structure, thereby realizing in front of the single eye An observation window composed of a plurality of consecutive observation angles.
  • each layer of light guiding film can be controlled by a separate illumination source, light collimating device, and light coupling device (also known as a light coupler).
  • the sequential transformation of the exiting light field can be achieved by alternately illuminating the illumination sources of the respective layers of light guiding lenses.
  • each layer of nanostructured functional film is controlled by the same illumination source and light collimation device.
  • the light source is alternately switched to each layer of nanostructure functional film by an optical switching device, realizing alternating illumination of each layer's nanostructure functional film (not shown).
  • this provides both image sharpness and more viewing-angle information, and a good 3D display effect can be achieved.
  • the nano-grating pixels and the spatial light modulator pixels do not need to be aligned, which greatly reduces the manufacturing difficulty.
  • the nanostructure distribution of the left and right visible lenses is symmetrical, and the light field generated between the two has binocular parallax.
  • the converging light fields of the left and right light field lenses form a parallax effect; that is, the image obtained by the left eye contains more left-direction information and the image obtained by the right eye contains more right-direction information, and the brain fuses them into a stereoscopic image, conforming to human eye observation habits.
  • because the eyes view from different positions and at different distances, the eyes rotate and the visual axis and viewpoint rotate accordingly; the nanostructure directional functional lens is designed to display a wide-angle virtual scene in front of the human eye.
  • the distribution of pixel density at a single viewing angle is related to the angle of view of the boresight.
  • the nano-grating pixels corresponding to the viewing angle are densely arranged in the vicinity of the viewing axis corresponding to the viewing angle, and are sparsely arranged away from the viewing axis region.
  • the eyes move left and right, corresponding to the left and right viewing angles and their nano-grating pixel areas.
  • the eye moves up and down, corresponding to the upper and lower viewing angles and its nano-grating pixel area.
  • by adding an eyeball dynamic tracking device, the visual axis angle and pupil position are determined through eye tracking; the movement information of the eyeball, together with its visual axis angle and pupil position, is transmitted to the control device, which controls the pico projector or point/line light source and the spatial light modulator to project the corresponding images in different parts of the left and right visible lenses, achieving optimal visual quality and reducing the amount of data to be processed (transmitted).
  • FIG. 27 is a schematic diagram of a head-mounted three-dimensional virtual reality display scheme.
  • the nanostructure functional film is embedded or attached to the left and right visible lenses, respectively.
  • an illumination source (point source or line source) is coupled into the visible lens (refer to the corresponding description of the coupling above).
  • the nanostructured functional film comprises a set of pixel-type nanostructures that are diffracted by the action of light to cause some of the light to escape from the light exit surface of the light guiding film.
  • the angle of the exiting ray is related to the shape of the nanostructure (period, orientation).
  • the efficiency of the outgoing light is related to the pixel size and depth of the nanostructure.
  • a converging viewpoint is thus formed on the light-emitting surface of the visible lens.
  • a spatial light modulator (not shown) is placed in front of the nanostructure functional film to realize grayscale (amplitude) modulation of the light field, matched with the phase information of the light field modulated by the nanostructure directional functional film, and finally a converging wavefront is projected in front of the human eye. The multiple layers of nanostructure functional film used to control the phase are closely stacked to form a frequency-divided nanostructure functional film, combined with a fast-response spatial light modulator used for controlling grayscale display.
  • the visible lens 2 is composed of three layers of visible lens units, each of which carries a layer of nanostructure functional film; each single-layer nanostructure functional film forms a converging viewpoint on its light-emitting surface.
  • the focal lengths of the respective nanostructure functional films are different (by designing nano-grating combinations with different focal lengths), so that the visible lens projects into the human eye virtual images at no fewer than two depths, and the illumination source is controlled by frequency division.
  • the nano-structured functional films are sequentially illuminated, that is, the outgoing light field in the light-emitting space is sequentially changed according to the output light field controlled by the nano-grating structure of each nano-structure functional film.
  • the short focal length corresponding to the nano-grating pixel unit and the close-in image pixel unit should be matched, and the long focal length corresponding nano-grating pixel unit matches the distant image pixel unit.
  • when the observer focuses on the close-up scene, the distant scene is blurred; when the observer focuses on the distant scene, the close-up scene is blurred. This stereoscopic display method, which conforms to the human eye's focusing habit, makes viewing the three-dimensional scene more natural.
  • the nanostructure distribution of the left and right visible lenses is symmetrical, and the light field generated between the two has binocular parallax.
  • the converging light fields of the left and right light field lenses form a parallax effect; that is, the image obtained by the left eye contains more left-direction information and the image obtained by the right eye contains more right-direction information, and the brain fuses them into a stereoscopic image, conforming to human eye observation habits.
  • an eyeball dynamic tracking device is added to determine the visual axis angle and pupil position through eye tracking; the movement information of the eyeball, together with its visual axis angle and pupil position, is transmitted to the control device, which controls the pico projector or the point/line light source and the spatial light modulator to project the corresponding images in different parts of the left and right visible lenses, achieving optimal viewing quality and reducing the amount of data to be processed (transmitted).
  • the spatial light modulator described above may be an image display device such as a liquid crystal flat panel display.
  • the present invention discloses a wearable 3D display device realized by a visible lens provided with a nanostructure functional film.
  • phase modulation of the converging light field is realized by the visible lens provided with the nanostructure functional film and matched with the grayscale modulation of the light field realized by light guiding, projection, and the like, so that a three-dimensional display experience can be obtained in front of the human eye.
  • the invention also points out that the method of multi-angle of view and multi-depth of field can improve the three-dimensional effect and eliminate the visual fatigue.
  • the amount of displayed information can be increased, and the display definition and stereoscopic effect can be improved.
  • the head-mounted three-dimensional display device according to the present invention can be used in virtual reality or real-life enhancement fields to obtain a comfortable three-dimensional display scene without causing dizziness.
  • FIG. 28 is a schematic diagram of a head mounted 3D display device based on a visible lens.
  • External information-acquisition sensors, such as a real-scene three-dimensional acquisition sensor (3001), a head-motion recognition sensor (3002), and an eye-motion recognition sensor (3004), are integrated on the head-mounted portable device; their placement can be changed to suit the application.
  • The virtual three-dimensional scene is added at a specific location by the visible lens and image output device (3003) provided with a nanostructure functional film. The position of each component can be adjusted as the application requires.
  • The wearable 3D display device described above, combined with the external information-acquisition system and the control system, can be used in virtual-reality and augmented-reality applications.
  • FIG. 29 is a schematic diagram of a virtual reality system based on a visible lens.
  • Multiple sensors or image collectors gather information from the real world and from the observer, including but not limited to a head-motion recognition device for the observer's head movements, an eye-movement recognition device for eye-movement recognition, and a gesture recognition device for gesture recognition.
  • After a recognition device collects the relevant posture information, it is pre-processed by the information processing device and then passed to the central processing unit (CPU). The CPU may also collect cloud information from the cloud information-acquisition device and integrate all the information.
  • An image output control command is then issued to the image output control device, which instructs the image generating device (a micro projector, or a light source plus a spatial light modulator, etc.) to form the three-dimensional image through the visible lens. Terminal information is matched, processed, and made interactive, ultimately presenting virtual objects or information at specific locations in the virtual three-dimensional space through the visible lens.
  • Figure 30 is a schematic diagram of an augmented-reality system based on a visible lens. A scene-recognition device is added so that the real scene can be merged with the virtual scene in real time, realizing augmented three-dimensional display.
  • FIG. 31 is a schematic diagram of information exchange between a wearable 3D display device and other mobile devices or terminals through a cloud network.
  • the head mounted mobile device (3100), the waist worn mobile device (3101), the wrist worn mobile device (3102), and the portable mobile device (3103, 3104) can conveniently implement information interaction through the cloud.
  • The patent covers virtual-reality and augmented-reality display technologies applicable to social activities such as video games, live events, video entertainment, healthcare, real estate, retail, education, engineering, and the military (Figures 27-35).
  • FIG. 32 is a schematic diagram of the present invention applied to traffic driving.
  • A virtual image is shown (in the figure, "Wenxing Road, 600 meters" together with a right-turn arrow on the actual road surface). Through focal-length adjustment, the virtual image is projected precisely to the position matching the real scene, so that the virtual image and the real scene are organically integrated, natural and accurate. This realizes augmented display and effectively avoids the traffic accidents caused by the visual scene-switching of existing car-navigation systems.
  • Figure 33 is a schematic diagram of the present invention applied to children's education (it may equally be applied to other multimedia information-display fields such as movies and television). Two children watch, or jointly consult, material about dinosaurs through the device of the present invention, including text display and stereoscopic display of the dinosaurs.
  • FIG. 34 is a schematic diagram of the present invention applied to the fields of game entertainment, military training, war, etc., wherein the buildings and characters may be fully virtual, or may be a realistic enhancement mode in which virtual and real things are combined.
  • In game entertainment and military training, it can greatly improve the fidelity of games and training, improve the fun and playability of games, and improve the practical effect of military training.
  • On the battlefield, soldiers can quickly obtain information on enemy positions, friendly positions, movements, and operational characteristics, as well as battlefield terrain, greatly improving their information-gathering ability, the accuracy of real-time judgment, and unified operational coordination, thereby improving the overall combat effectiveness of the military.
  • Figure 35 is a representation of the invention applied to the field of shopping or product display. It can be used to fully understand the appearance information of the product, and combine text and sound information to achieve a new shopping experience or display effect.
  • FIG. 36 is a schematic diagram of the present invention applied to the medical field to realize a richer information exchange between a doctor and a patient.
  • The doctor can let the patient visually see stereoscopic information about the affected tooth and understand the condition, while the window synchronously displays text information such as the diagnosis.
  • FIG. 37 is a schematic diagram of the present invention applied to home audio-visual entertainment, providing a near-immersive visual experience while greatly alleviating visual fatigue.
  • FIG. 38 is a schematic view of the present invention applied to virtual garment fitting. The wearer is captured by three-dimensional scanning or multi-angle photography, and a three-dimensional virtual image of the wearer in the new costume is synthesized, allowing the wearer to observe the try-on effect in real time. With the head-mounted three-dimensional display device of the present invention, a visual experience close to that of a real mirror can be obtained.
  • FIG. 39 is a schematic diagram of the present invention applied to business meetings: the products or documents under discussion are displayed more realistically and vividly, more intuitively than a conventional PPT presentation, which is especially valuable for presenting large equipment.
  • FIG. 40 is a schematic diagram of the present invention applied to the field of remote interaction.
  • Each party only needs to place their own chess pieces. A digital camera added by the present invention captures those pieces, and even the whole image of the player (by three-dimensional scanning, multi-angle recording, photography, etc.); the captures are converted to three dimensions and projected in front of the other party, so the players feel as if face to face, with a high degree of immersion. With augmented-reality techniques added, the distance between the parties is barely noticeable. This can be a revolutionary leap for remote interaction.
  • the present invention can achieve the following advantages (different embodiments may achieve some or all of the advantages):
  • The wearable 3D display device uses a multi-layer nanostructured functional lens to obtain a converged light field in front of the human eye, and the reproduced light field matches the way the eye receives the light field of a real scene. Matching the light-field phase with the grayscale information produced by light guiding, projection, and the like realizes a comfortable three-dimensional display scene without dizziness.
  • the display has high definition and strong stereoscopic effect.
  • The multi-layer frequency-division directional-lens display scheme controls the illumination sources by frequency division, so that the exit light field in the light-emitting space changes sequentially according to the light field defined by the nano-grating structure of each light-guide film.
  • The amount of displayed information is greatly increased, the conflict between display definition and stereoscopic effect is avoided, and an excellent three-dimensional display effect is obtained without sacrificing image quality.
  • The multi-view 3D display scheme reduces the vergence-accommodation conflict and realizes continuous dynamic parallax without strobing, making viewing more natural.
  • The multi-depth 3D display scheme allows the human eye to focus on different depths of the scene, in line with the eye's accommodation habits, without dizziness.
  • Nano-lithography can be used to etch the directional nano-gratings on the lens surface, or to prepare an embossing template that is then mass-replicated by nanoimprinting, reducing screen cost.

Abstract

A three-dimensional display device comprises an image generating device and a visible lens (2). The visible lens (2) is provided with at least one layer of nanostructure functional film (22) having a converging imaging function, so that the visible lens (2) becomes a directional functional lens with light-field transformation capability. The nanostructures on the directional functional lens are matched with the image output by the image generating device to project a converging wavefront in front of the human eye, forming a virtual scene; or this converging wavefront is superimposed on the wavefront formed by the real scene, fusing real-world information and virtual-world information. View images are converged in the space in front of the eyeball to form a virtual scene on the same principle by which real scenery is imaged in the human eye, so that visual fatigue from prolonged viewing is greatly reduced compared with conventional three-dimensional display technology.

Description

A Three-Dimensional Display Device — Technical Field
The present invention belongs to the field of three-dimensional image display, and in particular relates to a three-dimensional display device, including a head-mounted three-dimensional display device.
Background Art
Virtual reality (VR) is an interactive three-dimensional dynamic visual simulation with multi-source information fusion that immerses the user in the environment. Virtual reality combines multiple technologies. Among them, wide-angle (wide field of view) stereoscopic display is a prerequisite for an immersive user experience and a technical difficulty of virtual reality. In a VR display system, binocular vision is the basis of stereoscopic display: the left and right eyes acquire different images, which the brain fuses into a stereoscopic image. But to achieve a comfortable three-dimensional dynamic scene, further requirements must also be met, such as the sharpness of the images acquired by both eyes and monocular accommodation focused on the plane of binocular convergence.
Augmented reality (AR) is a new technology that "seamlessly" integrates real-world information and virtual-world information. Physical information (visual information, sound, taste, touch, etc.) that would otherwise be difficult to experience within a certain time and space in the real world is simulated by computer technology and superimposed, applying virtual information to the real world to be perceived by the human senses, thereby achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time onto the same picture or space. One characteristic of an AR system — adding and positioning virtual objects in three-dimensional space — is a difficulty of display technology.
Summarizing the above virtual-reality and augmented-reality technologies, their common technical key is how to obtain a comfortable three-dimensional display scene through a wearable visible screen without causing dizziness. Numerous patents have offered solutions from multiple angles.
US patent application US20150016777A1 discloses a display device based on light-diffraction devices and planar waveguides. The display device comprises a multi-layer waveguide structure and an array of diffraction devices. Combined with fast fiber scanning, the diffractive waveguide devices convert illumination light into an exit light field that forms a single imaging point on the human retina. By changing the input angle of the illumination beam into the waveguide and the position of the beam's scanning point, the waveguide outputs light fields with different exit and divergence angles; these varying light fields form rapidly scanned light spots on the retina that build up a 3D image. Applied to augmented reality, it fuses virtual and real scenes. However, the parallel or divergent light fields formed by the diffraction devices do not match the imaging habits of the human eye and easily cause dizziness. Moreover, limited by fiber-scanning speed, such a purely scanned three-dimensional display scheme can hardly meet the enormous information-processing and output requirements of current 3D display.
US patent US008014050B2 discloses an optical holographic phase plate for three-dimensional display or optical switching. The phase plate comprises a volume diffraction-grating structure and a photosensitive material. An electrode array controls the diffraction efficiency and phase delay of individual pixel cells, enabling fast control of the light-field phase. However, this electrode-array approach is constrained by the difficulty of miniaturizing individual pixels, and its display quality can hardly satisfy current consumer requirements for display fineness and comfort.
Chinese patent 201610034105.8 discloses a projection-type naked-eye three-dimensional display device. A projection device projects multi-view image signals onto a directional projection screen; after phase modulation, the incident image signals form converged viewpoints in the viewing window, producing a 3D scene. The method has advantages such as high brightness and good 3D effect, enables large-format naked-eye 3D display, and can be applied to display terminals such as televisions and advertising machines. However, it cannot be applied to virtual reality and augmented reality to reproduce 3D scenes on a wearable device.
Therefore, no head-mounted 3D display solution free of visual fatigue has yet been available at home or abroad.
Summary of the Invention
In view of this, the present invention aims, based on holographic principles, to provide a directional nanostructured functional lens that, illuminated by a specific light source, realizes a head-mounted 3D display scheme and display device free of visual fatigue.
To achieve the above object, the technical solution of the present invention is as follows:
A three-dimensional display device comprises an image generating device and a visible lens, the visible lens comprising at least one visible lens unit provided with nanostructures. The nanostructures on the visible lens are matched with the image output by the image generating device to form a virtual scene within the viewing window; or this virtual scene is superimposed on the real scene, fusing real-world information and virtual-world information.
The visible lens consists of one visible lens unit, or of two, three, four, or more than four visible lens units laminated together.
The visible lens unit comprises a lens substrate and nanostructures.
Further, the lens substrate comprises a waveguide structure.
The visible lens is a single body with one, two, or more than two independent visible regions, or consists of left and right independent visible lenses corresponding to the two eyes.
The visible lens forms one viewpoint, or two or more viewpoints, within the viewing window.
Further, the designed viewpoint spacing is smaller than the viewing-window range.
Further, the upper and middle viewpoints of the visible lens correspond to distant scenes, and the lower viewpoints correspond to near scenes.
The nanostructures are nanoscale gratings, also called nano-grating structures. The nanostructures are groove, relief, or hollowed-out structures, shaped as one or more of rectangles, circles, rhombuses, and hexagons.
Further, the nanostructure distribution is based on the following principle: the image generating device, via the nanostructures, forms viewpoints at different spatial positions within the viewing window.
The image generating device comprises a projection device, which cooperates with at least one visible lens unit to display virtual images within the viewing window.
Further, the projection device is coupled to the nanostructures via a waveguide structure.
The visible lens may comprise left and right independent visible regions whose nanostructure distributions are symmetrical.
Further, a group of nanostructures is provided for each viewpoint, and its density and distribution are arranged according to the following conditions: either the density and distribution of the nanostructures for a single viewpoint are independent of the angle from the visual axis, and the nanostructures for the various viewpoints are arranged uniformly by mutual nesting; or they depend on the angle from the visual axis, the distribution being characterized in that the nanostructures for a viewpoint are arranged more densely near that viewpoint's visual axis than in regions far from the axis, i.e., the nanostructures for the various viewpoints are arranged non-uniformly by mutual nesting.
The density-distribution curve of the nanostructures for a single viewpoint follows a triangular, square-wave, trapezoidal, or sinusoidal function.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative labor.
FIG. 1 is a structural diagram of the human eye.
FIG. 2 is a distribution map of cone cells.
FIG. 3 is a structural diagram, in the XY plane, of the nano-gratings inside a pixel on the directional light-guide film.
FIG. 4 is a structural diagram, in the XZ plane, of the nano-gratings inside a pixel on the directional light-guide film of FIG. 1.
FIG. 5 is a schematic diagram of various nano-grating pixel structures.
FIG. 6 is a nanostructure distribution map of a directional light-guide film converging to a single viewpoint.
FIG. 7 is a schematic diagram of constructing a new single-viewpoint wavefront with a nanostructured directional functional film.
FIG. 8 is a diagram of an augmented-reality display scheme based on reflective projection under an embodiment of the present invention.
FIG. 9 is a diagram of an augmented-reality display scheme based on transmissive projection under an embodiment of the present invention.
FIG. 10 is a diagram of another augmented-reality display scheme under an embodiment of the present invention.
FIG. 11 is a diagram of another augmented-reality display scheme under an embodiment of the present invention.
FIG. 12 is a diagram of a virtual-reality display scheme under an embodiment of the present invention.
FIG. 13 is a diagram of a multi-view display scheme under an embodiment of the present invention.
FIGS. 14(a)-(e) are distribution curves of view pixel density.
FIGS. 15(a)-(b) are diagrams of a multi-depth display scheme realized with a multi-focal nanostructured directional functional lens under an embodiment of the present invention.
FIG. 16 is a nanostructure distribution map of a multi-focal nanostructured directional functional lens.
FIG. 17 is a schematic diagram of multi-depth segmentation of a virtual scene.
FIGS. 18(a)-(b) are diagrams of a multi-depth display scheme realized by light-field scanning under an embodiment of the present invention.
FIGS. 19 and 20 are schematic diagrams of three-dimensional display devices with multi-layer visible lens units based on frequency-division control under embodiments of the present invention.
FIGS. 21(a) and 21(b) are schematic diagrams of three-dimensional display devices with multi-layer visible lens units based on frequency-division control under embodiments of the present invention.
FIGS. 22(a)-(b) are example schematic diagrams of frequency-division control circuits under embodiments of the present invention.
FIGS. 23(a)-(c) are schematic diagrams of binocular head-mounted three-dimensional display under an embodiment of the present invention.
FIG. 24 is a schematic diagram of a head-mounted three-dimensional display scheme.
FIG. 25 is a schematic diagram of a head-mounted augmented-reality three-dimensional display scheme.
FIGS. 26 and 27 are schematic diagrams of three-dimensional display devices based on multi-layer visible lens units.
FIG. 28 is a schematic diagram of a structural embodiment of the head-mounted three-dimensional display device of the present invention.
FIG. 29 is an example circuit-control schematic of the three-dimensional display device of the present invention.
FIG. 30 is an example circuit-control schematic of the augmented-reality three-dimensional display device of the present invention.
FIGS. 31-40 are schematic diagrams of various application scenarios of the present invention.
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
参见图1和图2,图1是人眼结构图。人的眼睛近似球体,眼球包括虹膜101、角膜102、晶状体103、视网膜104、黄斑105;眼睛视线的轴线称为视轴11。
眼球1具有光学成像功能的组织是角膜102和晶状体103。视网膜104位于眼睛后端,是视觉形成的神经信息传递的第一站。视网膜104上的视锥细胞是的主要感光神经元,在视轴11正对终点。附图2是视锥细胞分布图,由图可知,视锥细胞分布极不均匀,在黄斑105中心凹处最密集,在视网膜104其他位置少量分布。因此,中心凹是视觉最敏锐的区域,其直径约为1~3mm。人眼的视场可达150°,但能同时清晰地观察物体的范围只在视轴周围6°~8°。本发明将充分考虑视锥细胞分布特点,设计各视角像素分布,达到视觉体验最优化。
按照上述原理,将每一个纳米光栅结构视为一个像素,该纳米光栅结构的取向决定了光场角度调制特性,其周期决定了光谱滤波特性。该方法中纳米光栅结构的周期(空频)和取向在各亚像素之间的变化连续,即可实现光场的调控和变换。因此,在头戴式可视设备表面制作出多个按需设定的不同取向角和周期的纳米光栅结构之后,理论上就可以获得足够多的不同视点,配合颜色和灰度的控制,就能实现多视角下的裸眼3D显示。
基于上述发现,本发明提出一种头戴式三维显示装置,包括图像生成装置,和对应眼睛的可视镜片,所述可视镜片包括至少一层设置有具有会聚成像功能的纳米结构的纳米结构功能薄膜,从而使得可视镜片成为具有指向性功能的指向性功能镜片,所述指向性功能镜片上的纳米结构与图像生成装置输出的图像匹配,在视窗前方投射出会聚波面,形成虚拟景象;或该会聚波面与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
在实际应用中,可视镜片可以是一个整体的镜片,也可以是分别对应双眼的两个可视镜片,当然,也可以是根据需要以3个或更多个可视镜片来构建。
本发明在眼球前方的空间中会聚视角图像,形成虚拟景象,其和现实景物在人眼中成像的原理一致,因此长时间观看的视觉疲劳度比传统的三维显示技术大大降低。
所述指向性功能镜片(即可视镜片)在人眼视窗内形成多个视点,使单眼能够看到两幅以上视角图像,实现单眼视差效果,及连续的动态视差;所述纳米结构为纳米级尺寸的纳米光栅,所述每一个纳米光栅即为一个纳米光栅像素,每个视角图像由多个纳米光栅像素会聚而成。
根据人眼的特点,可在实际应用中对单个视角图像的对应的纳米光栅像素分布依据眼睛的视轴角度进行优化,即与该视角图像对应的纳米光栅像素在该视角对应视轴附近的排布密度大于远离该视轴区域的排布密度;各视角图像对应的纳米光栅像素通过互相嵌套的方式非均匀排列在指向性功能镜片表面;其优点为充分利用人眼结构特点,使用较少纳米光栅像素数即可获得较高图像质量。
本发明采用基于衍射效应的纳米光栅结构构筑新光场,单个纳米光栅结构与光相互作用,改变其相位,参见图3、图4,图3、图4是结构尺度在纳米级别的衍射光栅(即纳米光栅,也可看作纳米 光栅像素201,其结构我们也称为纳米光栅结构)在XY平面和XZ平面下的结构图。根据光栅方程,衍射光栅像素201的周期、取向角满足以下关系:
(1) tanφ₁ = sinφ / (cosφ − n·sinθ·(Λ/λ))
(2) sin²θ₁ = (λ/Λ)² + (n·sinθ)² − 2·n·sinθ·cosφ·(λ/Λ)
其中,光线以一定的角度入射到XY平面,θ1和φ1依次表示衍射光202的衍射角(衍射光线与z轴正方向夹角)和衍射光202的方位角(衍射光线与x轴正方向夹角),θ和λ依次表示光源201的入射角(入射光线与z轴正方向夹角)和波长,Λ和φ依次表示纳米衍射光栅201的周期和取向角(槽型方向与y轴正方向夹角),n表示光波在介质中的折射率。
换言之,在规定好入射光线波长、入射角以及衍射光线衍射角和衍射方位角之后,就可以通过上述两个公式计算出所需的纳米光栅的周期和取向角了。例如,650nm波长红光以60°角入射,光的衍射角为10°、衍射方位角为45°,通过计算,对应的纳米衍射光栅周期为550nm,取向角为-5.96°。
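The worked example above (650 nm red light incident at 60°, diffracted at θ₁ = 10° with azimuth φ₁ = 45°, giving a grating period Λ = 550 nm and orientation φ = −5.96°) can be checked numerically against grating equations (1) and (2). A minimal sketch — the refractive index n ≈ 1.5 for the guided wave is an assumption (the patent does not state n for this example), but it reproduces the quoted numbers:

```python
import math

def diffraction_angles(wavelength_nm, period_nm, orientation_deg, incidence_deg, n):
    """Evaluate grating equations (1) and (2); return (theta1, phi1) in degrees."""
    lam, Lam = wavelength_nm, period_nm
    phi = math.radians(orientation_deg)
    theta = math.radians(incidence_deg)
    # Eq. (1): tan(phi1) = sin(phi) / (cos(phi) - n*sin(theta)*(Lam/lam))
    phi1 = math.atan(math.sin(phi) / (math.cos(phi) - n * math.sin(theta) * Lam / lam))
    # Eq. (2): sin^2(theta1) = (lam/Lam)^2 + (n*sin(theta))^2 - 2*n*sin(theta)*cos(phi)*(lam/Lam)
    s2 = (lam / Lam) ** 2 + (n * math.sin(theta)) ** 2 \
         - 2 * n * math.sin(theta) * math.cos(phi) * (lam / Lam)
    theta1 = math.degrees(math.asin(math.sqrt(s2)))
    return theta1, math.degrees(phi1)

theta1, phi1 = diffraction_angles(650, 550, -5.96, 60, n=1.5)
print(round(theta1, 1), round(phi1, 1))  # ≈ 10.0 44.8 — matches the stated 10° / 45°
```

In practice the design problem runs the other way (choose Λ and φ for a desired exit direction), but evaluating the forward equations like this is a quick consistency check on a pixel design.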
所述纳米光栅的结构为槽形结构、浮雕结构、或镂空结构,其形状为矩形、圆形、菱形、六边形中的一种或多种。当然,也可以是满足前述要求的其他形状结构。
在实际应用中,所述图像生成装置可以采用投影装置(鉴于头戴式三维显示装置的应用要求,尽可能采用体积更小的微型投影仪),所述投影装置在投影时按照多景深显示方案通过时序扫描法实现单眼的三维显示效果;即对所投影的图像进行分割,在图像分割时,被分割图像被投影至人眼前不同距离,形成距离、大小与现实场景吻合的虚像,从而构成多景深三维图像。
采用多景深的三维显示技术。在投影时按照多景深显示方案通 过时序扫描法实现单眼的三维显示效果。考虑到人眼对近距离物体的远近分辨能力强于对远距离物体的远近分辨能力。在图像分割时,可对近距离物体进行细致分割,对远距离物体进行大致分割。被分割图像被投影至人眼前不同距离,形成距离、大小与现实场景吻合的虚像,形成多景深三维图像。使观察者既有沉浸感,又与现实景物有效融合。由此,当观察者通过调节可将眼睛聚焦到单一或邻近距离的景物,而非聚焦平面上的景物成模糊像。亦可将根据实际应用,获得多视点多景深的三维显示方案,提高三维效果体验。
在一些实施例中,所述纳米结构功能薄膜上的纳米光栅结构的分布基于以下原则:图像生成装置经所述纳米结构功能薄膜上的纳米光栅结构在眼球前方的空间中不同空间位置会聚的视点与人眼眼球的移动位置相匹配;其中,眼球前方近处空间对应可视镜片的下方区域,眼球前方中间空间对应可视镜片的中间区域,眼球前方远上方空间,对应可视镜片的上方区域,眼球前方的左右空间分别对应可视镜片的相应区域;从而会聚在眼球前方空间中的视点形成距离、大小与现实场景吻合的虚像,构成光视场和不同景深三维图像;通过调节眼睛聚焦到邻近和远距离的景物,获得对应清晰的3D显示,即眼睛可以选择不同景深的虚拟景物进行分别聚焦,这和观看现实景物的体验是一致的。
在一些实施例中,可视镜片为一单一整体或左右两个分别对应两个眼球的独立可视镜片;根据双目视差特性,在单一整体的可视镜片上或左右两个可视镜片上匹配左、右眼相应视点对应的纳米光栅的分布和位置,且匹配对应的输出视图信息,从而获得符合自然习惯的三维显示体验。
充分考虑双目视差特性,在左右两个光场镜片上匹配左右眼相应视点对应的纳米光栅像素分布和位置,以及匹配对应的输出视图信息,可获得符合自然习惯的三维显示体验。
在一些实施例中,所述图像生成装置为微型投影仪,所述微型投影仪从可视镜片后方侧面大角度投影至纳米结构功能薄膜,或微型投影仪从可视镜片前方侧面大角度投影至纳米结构功能薄膜,实现可视镜片上纳米结构功能薄膜的指向性光照明,所述微型投影仪出射的照明光源可以是点光源、线光源、或面光源,其出射光强可以随时间或空间变化,微型投影仪实现光场灰度即振幅信息调制,并与纳米结构功能薄膜调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像,或该会聚波面进一步与现实景象形成的波面叠加,例如现实场景的传播光场透过半透明的纳米结构功能薄膜,与虚拟景象的会聚波面叠加,或者通过数码镜头将真实世界景象实时采集并与虚拟信息融合后一起投射在人眼前方,得到真实世界信息和虚拟世界信息的融合。
在一些实施例中,所述可视镜片依次由光耦合波导一体器件、镜片基体和纳米结构功能薄膜叠合而成,或直接在光耦合波导一体器件上嵌入或贴合纳米结构功能薄膜而成;微型投影仪与光耦合波导一体器件光学耦合连接,从而实现可视镜片上纳米结构功能薄膜的指向性光照明,微型投影仪为点扫描或线扫描投影成像,或面投影成像,其出射光强能够随时间或空间变化,微型投影仪实现光场灰度(振幅)信息调制,并与可视镜片调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像,或该会聚波面进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
进一步的,所述可视镜片由一层可视镜片单元构成,或由两层、三层、4层、或大于4层的可视镜片单元叠合在一起;
每个可视镜片单元包括依次由光波导器件、镜片基体和纳米结构功能薄膜叠合而成;
或直接在光波导器件上嵌入或贴合纳米结构功能薄膜而成;
或纳米结构功能薄膜嵌入或贴合在镜片基体的一面上,然后在镜片基体的另一面上再设置光波导器件,或将镜片基体及纳米结构功能薄膜一起嵌入到光波导器件上,光波导器件的长宽尺寸及厚度尺寸均大于镜片基体及纳米结构功能薄膜的对应尺寸;
每一个可视镜片单元的光波导器件均光学连接有一个光线耦合器件;
所述图像生成装置为微型投影仪,所述微型投影仪的数量与光线耦合器件的数量一致,并一一对应光学连接;或微型投影仪为一个,所有光线耦合器件均设置在可视镜片的同一侧,这些光线耦合器件与微型投影仪之间设有一个光切换器件,并通过光切换器件切换某一个光线耦合器件与微型投影仪进行光学连接;
微型投影仪通过光线耦合器件耦合进可视镜片上的光波导器件,在全反射的作用下,光线在这个可视镜片内传播,可视镜片包含有一组设置于纳米结构功能薄膜上的像素式纳米光栅结构,与光线作用发生衍射,使部分光线从可视镜片出光面逸出,出射光线角度与纳米结构的周期、取向有关,出射光强效率与纳米结构的像素大小、结构深度有关,出射光线经过纳米光栅结构后在可视镜片出光面形成会聚视点,微型投影仪通过点扫描或线扫描投影成像,其出射光强能够随时间或空间变化,微型投影仪,通过扫描方式实现光场灰度即振幅信息调制,并与即可视镜片调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像,或该会聚波面进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
在另外一些实施例中,所述可视镜片由一层可视镜片单元构成,或由两层、三层、4层、或大于4层的可视镜片单元叠合在一起;
每个可视镜片单元包括依次由光波导器件、镜片基体和纳米结 构功能薄膜叠合而成;
或直接在光波导器件上嵌入或贴合纳米结构功能薄膜而成;
或纳米结构功能薄膜嵌入在镜片基体一面上,然后在镜片基体的另一面上再设置光波导器件,或将镜片基体及纳米结构功能薄膜一起嵌入到光波导器件上,光波导器件的长宽尺寸及厚度尺寸均大于镜片基体及纳米结构功能薄膜的对应尺寸;
每一个可视镜片单元的光波导器件均光学连接有一个光线耦合器件;
所述图像生成装置包括光源和空间光调制器;
所述光源的数量与光线耦合器件的数量一致,并一一对应光学连接;所述光源包括点光源或线光源,及一个光线准直器件,所述点光源或线光源通过光线准直器件与光线耦合器件光学连接;或光源为一个,所有光线耦合器件均设置在可视镜片的同一侧,这些光线耦合器件与光源之间设有一个光切换器件,并通过光切换器件切换某一个光线耦合器件与光源进行光学连接,所述光源包括点光源或线光源,及一个光线准直器件,所述点光源或线光源通过光线准直器件与光切换器件光学连接;
所述空间光调制器设置于可视镜片与人眼之间;
光源通过光线准直器件和光线耦合器件耦合进光波导器件,也即是导入可视镜片,在全反射的作用下,光线在纳米结构功能薄膜内传播,纳米结构功能薄膜包含有一组像素式纳米光栅结构,与光线发生衍射作用,使部分光线从纳米结构功能薄膜出光面逸出,出射光线角度与纳米光栅结构周期、取向有关,出射光强效率与纳米光栅结构的像素大小、结构深度有关,光源的光经过纳米光栅结构后在可视镜片出光面形成一个或多个会聚视点,空间光调制器放置在可视镜片与人眼之间,空间光调制器进行光场灰度即振幅信息调制,并与纳米结构功能薄膜调制的光场相位信息匹配,最终在人眼 前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像,或该会聚波面进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
上述实施例中,当采用一个光源及光切换器件时,采用分频照明的方法实现基于多层纳米结构功能薄膜光场镜片的三维显示。将多层纳米结构功能薄膜紧密叠合形成光场镜片。通过分频的方式控制照明光源,实现出光空间内出射光场按各层纳米结构功能薄膜设计的纳米光栅结构控制出射光场依次变换。利用光切换器件将照明光源交替切换至各层指向性导光薄膜,实现各层指向性导光薄膜的交替照明。通过该方法,既兼顾了图像清晰度,又提供了更多视角信息,可实现良好的3D显示效果。此外,在制作上,纳米光栅像素与空间光调制器像素无需对准,极大降低了制造难度。其优势为视角更连续,3D体验更佳,制作更简便。
在一些实施例中,纳米结构功能薄膜上的纳米光栅结构的设置依据以下条件:眼睛观看位置不同、距离不同的物体时,眼睛转动,视轴随之转动;所述纳米光栅结构分别形成至少两个视角,或多个连续视角形成连续视窗,便于眼睛转动时观看虚拟景物;纳米结构功能薄膜上制作的纳米光栅结构,其形成的视角一一对应于眼球水平左右转动的至少两个视角,及一一对应眼球上下转动的至少两个视角,根据人眼视觉习惯,可视镜片上方和中间视角对应远处景象,下方视角对应近处景象;或进一步设计视角间隔小于人眼瞳孔大小,保证单眼能够看到两幅以上视角图像,实现单眼视差效果,即单眼聚焦的位置位于显示物体上。
在一些实施例中,所述可视镜片为两个独立的可视镜片分别对应两个眼球,左右两个可视镜片的纳米结构分布是对称性的,两者之间产生的光场具有双目视差,在眼球移动时,左右可视镜片的会聚光场形成视差效应,即左眼获得的图像包含更多的左方向信息, 右眼获得的图像包含更多的右方向信息,通过大脑融合形成立体图像。
在一些实施例中,所述图像生成装置为微型投影仪或空间光调制装置,所述头戴式三维显示装置中设有眼球跟踪装置,所述眼球跟踪装置跟踪眼球动态变化,确定视轴角度和瞳孔位置,然后将上述信息转换为控制信号,控制微型投影仪或空间光调制器,在可视镜片的不同部分投影相应的图像,使得会聚的视点位于眼睛的视轴上。
在一些实施例中,对应于每个视角,纳米结构功能薄膜上设置一组纳米光栅像素与其对应,所述纳米光栅像素的密度及分布依据以下条件布置:
一种方式是,单个视角对应的纳米光栅像素与距离视轴角度无关,对应于各视角的纳米光栅像素通过互相嵌套的方式均匀排列在纳米结构功能薄膜上;
或,另一种布置方式,单个视角对应的纳米光栅像素与距离视轴角度有关,所述单个视角对应的纳米光栅像素的分布特征为:与该视角对应的纳米光栅像素在该视角对应视轴附近的排布密度,大于远离视轴区域的排布密度,即对应于各视角的纳米光栅像素通过互相嵌套的方式非均匀排列在纳米结构功能薄膜上。这一种排布方式既兼顾了视觉体验的质量,又可以减少纳米光栅像素的数量,降低加工成本。
在一些实施例中,所述纳米结构功能薄膜为多焦距纳米结构指向性功能薄膜,其在人眼明视区域投影形成至少两个深度的虚像,当人眼通过调节眼球聚焦到近距离景象时,远距离景象模糊,反之,当人眼聚焦到远距离景象时,近距离景象模糊。
比如,所述可视镜片为具有多个焦距的离轴菲涅尔透镜,在近轴条件下,其成像关系可简单近似为:
1/u′ − 1/u = 1/f
其中,u和u’分别为物距和像距,f为菲涅尔透镜焦距,每个菲涅尔透镜具有不同焦距,并在人眼前方投影出景深不同的多个虚像,其垂轴放大率β即像高y′与物高y之比也将随之改变:
β = y′/y = u′/u
所述纳米光栅结构为多个离轴菲涅尔透镜结构的互相嵌套,形成多个焦距的离轴菲涅尔透镜结构,从而显示不同距离的图像,实现多景深图像分离显示,通过改变投影物体大小,使所呈现的虚像大小与其对应的远近距离比例符合人眼观察实景的比例。
以三个焦距的实例举例说明,所述纳米光栅结构为三个离轴菲涅尔透镜结构的互相嵌套,形成三个焦距的离轴菲涅尔透镜结构,所述三个离轴菲涅尔透镜结构的焦距依次变大,分别对应远景、中景、近景的图像显示,其对应的纳米光栅嵌套排列方式如下:
第一排,所述纳米光栅依次对应于远景、中景、近景、远景;
第二排,所述纳米光栅依次对应于中景、近景、远景、近景;
第三排,所述纳米光栅依次对应于近景、远景、中景、近景。
以此类推,可以制作任意需要的多焦距会聚的纳米结构功能薄膜。
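The three-focal-length nesting listed above (rows cycling through far/middle/near with a shift from row to row) can be generated programmatically. A small sketch — the diagonal-cycling rule `(row + col) % 3` is an illustrative reading of the listed rows, which it matches up to what appears to be a transcription slip in the second row:

```python
def nested_focal_layout(rows, cols, labels=("far", "middle", "near")):
    """Tile focal-length labels so the groups interleave evenly, shifting by one per row."""
    k = len(labels)
    return [[labels[(r + c) % k] for c in range(cols)] for r in range(rows)]

for row in nested_focal_layout(3, 4):
    print(row)
# Row 1: far, middle, near, far
# Row 3: near, far, middle, near
```

The same cyclic rule extends to any number of focal lengths by lengthening `labels`, which is how an arbitrary multi-focal converging film could be laid out.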
参见图5(a)、图5(b)、图5(c)、图5(d)、图5(e)、图5(f)、图5(g)、图5(h)、图5(i)和图5(j),这些附图是多种含有纳米光栅像素结构的功能薄膜的示意图。纳米光栅结构可由单种材料组成,亦可由多种材料组成,其材质可为树脂、塑料、橡胶、玻璃、聚合物、光折变晶体、金属、金属氧化物等。其中,图5(a)、图5(c)、图5(g)、图5(f)、图5(i)是由基材2101和功能层2102两种材质构成纳米结构功能薄膜的示意 图,功能层2102形成纳米光栅结构。图5(e)是由一种单一材质基材2101构成纳米结构功能薄膜的示意图,即直接在基材上制备纳米光栅。图5(b)、图5(d)、图5(h)和图5(i)是由基材2101、功能层2102和复合层2103三种材质构成纳米结构功能薄膜的示意图。
在本发明中,可视镜片可以由可视镜片的镜片基体与具有纳米光栅结构的功能薄膜复合而成,也可以由多层功能薄膜复合而成,或者直接在镜片基体上加工纳米光栅而成。因此,纳米光栅可以加工在上述功能薄膜上,然后再贴合或嵌入到镜片基体上,也可以直接在镜片基体上加工所述纳米光栅。也就是说,图5(a)-(j)中的标识2101基材,既可以代表功能薄膜的基材,也可以代表镜片的本体。当可视镜片中设有光波导器件时,可以按照前述复合结构进行构建可视镜片。
其中,图5(e)是在基材2101上直接加工纳米光栅,而图5(f)是将功能层2102嵌入到基材2101内部,从而将纳米光栅设置于基材2101的内部。图5(g)和图5(h)是基材2101和功能层2102均制备纳米光栅,并且相互嵌套。图5(a)-(d),及图5(i)、图5(j)是将纳米光栅结构的功能层2101贴合于基材2101上。
纳米光栅结构的本质是光学折射率在微纳米尺度空间内周期性变化并可与光作用发生衍射效应。本发明提出的上述纳米结构功能薄膜,其中纳米光栅像素可以采用紫外连续变空频光刻技术以及纳米压印进行制作,该紫外连续变空频光刻技术参照申请号为CN201310166341.1的中国专利申请记载的光刻设备和光刻方法。需要指出的是,在本发明中,可以采用光刻方法在光滑表面制作出各个不同指向的纳米光栅。纳米光栅的厚度为10um-2mm,其结构可以是浮雕型的,通过上述纳米光刻方法制作纳米结构,再做出能够用 于压印的模板,然后通过纳米压印批量压印出纳米光栅构成的像素阵列。亦可是折射率调制型,通过纳米光刻在折射率调制型记录材料(如光致聚合物薄膜、光折变晶体玻璃等)上曝光制备。
附图6(a)-(f)是含有纳米光栅像素的功能薄膜22与镜片基体21构成指向性功能镜片(可视镜片2)或可视镜片单元的结构示意图。如图6(a)、图6(b)和图6(c)所示,通过在镜片基体21表面贴合纳米结构功能薄膜22,或在镜片基体21内部嵌入纳米结构功能薄膜22(图6(d)、图6(e))获得指向性功能镜片。值得指出的是,制作单层和多层紧密叠合的纳米光栅结构时(如图6(b)、图6(e)、图6(f)),可以在光栅结构表面蒸镀或贴合一层与基底折射率不同的透明介质层23,保护纳米光栅结构特性和导光特性。
参见图7,图7是实现单个视点会聚的指向性功能薄膜的纳米结构分布图。图7所示的纳米光栅结构相当于单个离轴菲涅尔透镜结构,可以使图像会聚于视点1。n×m个这样的纳米光栅结构构成了n×m个不同焦点的离轴菲涅尔透镜结构(每一组纳米光栅可以根据需要模拟不同焦距的离轴菲涅尔透镜结构)。此外,通过单个像素复杂纳米结构的设计,可使出射光线对入射光波长不敏感,如通过渐变纳米结构,可使多波长入射光获得相同会聚效果。图上像素(纳米光栅)不限于矩形像素,也可以是圆形,菱形,六边形等像素结构组成。图上像素(纳米光栅)亦可互相分立,适当设计像素(纳米光栅)间距,可使之满足照明空隙要求。此外,通过调节图上各像素(纳米光栅)的像素大小、结构或槽深等结构参数依空间分布变化,可使各像素点获得理想的衍射效率,达到均匀照明的目的。
图7利用具有指向性功能的纳米结构功能薄膜(或称为纳米结构指向性功能薄膜)构筑单视点新波前。在自然观看情况下,自然 景物向四周发射漫反射光,而景物投射到人眼的光线被角膜和晶状体成像。同样的,由纳米结构指向性功能薄膜构筑的新波前需符合自然观看条件,即:由纳米结构指向性功能薄膜构筑的新波前应为会聚波面,在眼睛前方形成至少一个会聚点,即视点。眼睛应位于视点后的观察区域,从而使人眼在观看虚拟物体时处于放松和舒服的状态。考虑到头戴式可视设备屏幕距人眼距离通常为10mm-50mm,可优化纳米结构指向性功能薄膜构筑的视点距离,使眼睛处于最佳观察范围内。
参见图8,图8是本发明实施方式下基于反射式投影的一种现实增强显示方案图。可视镜片2嵌入或贴合有纳米结构功能薄膜22,微型投影仪3从可视镜片2后方侧面大角度投影至指向性功能薄膜22,实现可视镜片上纳米结构的指向性光照明。微型投影仪3出射的照明光源可以是点光源、线光源、或面光源。其出射光强可以随时间或空间变化。微型投影仪3实现光场灰度(振幅)信息调制,并与纳米结构功能薄膜22调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像。或该会聚波面进一步与现实景象形成的波面叠加,例如现实场景的传播光场透过半透明的纳米结构功能薄膜,与虚拟景象的会聚波面叠加,或者通过数码镜头将真实世界景象实时采集并与虚拟信息融合后一起投射在人眼前方,得到真实世界信息和虚拟世界信息的融合。
参见图9,图9是本发明实施方式下基于投射式投影的一种现实增强显示方案图。将纳米结构功能薄膜22贴合或嵌入佩戴式可视镜片2。微型投影仪3从可视镜片前方侧面大角度投影至纳米结构功能薄膜22,实现可视镜片2上纳米结构的指向性光照明。微型投影仪3出射的照明光源可以是点光源、线光源、或面光源。其出射光强可以随时间或空间变化。微型投影仪3实现光场灰度(振幅) 信息调制,并与光场镜片(即可视镜片2)调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像。该会聚波面可以进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合
参见图10,图10是本发明实施方式下的另一种现实增强显示方案图。将纳米结构功能薄膜22嵌入或贴合在头戴式三维显示装置的可视镜片上。微型投影仪3耦合进可视镜片(光线耦合器件4是现有技术,不再赘述),从而实现可视镜片屏幕上纳米结构的指向性光照明。微型投影仪3可以是点扫描或线扫描投影成像,或面投影成像。其出射光强可以随时间或空间变化。微型投影仪3实现光场灰度(振幅)信息调制,并与可视镜片(实质是纳米结构功能薄膜22)调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像。该会聚波面可以进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。例如,图10中的光线耦合器件4及可视镜片均为透明或半透明材质,则现实景物的光线透过光线耦合器件4及可视镜片,与虚拟信息融合,使眼睛看到虚拟和现实结合的视觉呈现。
参见图11,图11是本发明另一种实现现实增强显示方案示意图。可视镜片包括:镜片基体21、纳米结构功能薄膜22和光波导器件5,将纳米结构功能薄膜22嵌入或贴合在镜片基体21的一面上,然后在镜片基体21的另一面上再设置光波导器件5(也可以根据需要,将镜片基体21及纳米结构功能薄膜22一起嵌入到光波导器件5上,如图11所示的那样,光波导器件5的长宽尺寸及厚度尺寸均大于镜片基体21及纳米结构功能薄膜22的对应尺寸),微型投影仪3与光波导器件5通过光线耦合器件4进行光学连接。微型投影仪3通过光线耦合器件4耦合进可视镜片上的光波导器件5。在全反射的作用下,光线在这个可视镜片内传播。可视镜片包含有 一组设置于纳米结构功能薄膜22上的像素式纳米结构(纳米光栅),与光线作用发生衍射,使部分光线从导光镜片出光面逸出。出射光线角度与纳米结构的形状(周期、取向)有关。出射光强效率与纳米结构的像素大小、结构深度有关。因此,通过设计特定纳米光栅结构,可在可视镜片出光面形成会聚视点。微型投影仪3通过点扫描或线扫描投影成像。其出射光强可以随时间或空间变化。微型投影仪3通过扫描方式实现光场灰度(振幅)信息调制,并与指向性功能屏幕(即可视镜片)调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像。该会聚波面可以进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
参见图12,图12是本发明另一种实现现实增强显示方案示意图。可视镜片2(导光镜片)包括:镜片基体21、纳米结构功能薄膜22和光波导器件,将纳米结构功能薄膜22嵌入或贴合在镜片基体21的一面上,然后在镜片基体21的另一面上再设置光波导器件5(也可以根据需要,将镜片基体21及纳米结构功能薄膜22一起嵌入到光波导器件上,如图12所示的那样,光波导器件的长宽尺寸及厚度尺寸均大于镜片基体21及纳米结构功能薄膜22的对应尺寸),本实施例中,还设置有光线准直器件6、光线耦合器件4和空间光调制器7,照明光源(点光源或线光源,图12中表达为:点/线光源8)依次通过光线准直器件6、光线耦合器件4与光波导器件5进行光学连接,而空间光调制器7设置在人眼与纳米结构功能薄膜22之间。照明光源(点光源或线光源)通过光线准直器件6和光线耦合器件4耦合进光波导器件5,也即是导入可视镜片2。在全反射的作用下,光线在纳米结构功能薄膜22(也可称为导光膜或导光薄膜)内传播。纳米结构功能薄膜22包含有一组像素式纳米光栅结构,与光线作用发生衍射,使部分光线从纳米结构功能薄 膜22出光面逸出。出射光线角度与纳米光栅结构形状(周期、取向)有关。出射光强效率与纳米光栅结构的像素大小、结构深度有关。因此,通过设计特定纳米光栅结构,可在显示器件(可视镜片)出光面形成一个或多个会聚视点。将空间光调制器7(如液晶面板或其它平板显示器)放置在可视镜片与人眼之间,利用空间光调制器7实现光场灰度(振幅)信息调制,并与纳米结构功能薄膜22调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像。该会聚波面可以进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
参见附图13(a)和图13(b),图13(a)和图13(b)是本发明实施方式下的一种多视角显示方案图。眼睛观看位置不同、距离不同的物体时,眼睛转动,视轴随之转动。为此,需要针对性的设计纳米结构功能薄膜上的纳米光栅结构,目标是这些纳米光栅结构分别形成至少两个视角,或多个连续视角形成连续视窗,便于眼睛转动时观看虚拟景物。如图13(a)所示,在纳米结构功能薄膜22上制作形成三个视角的纳米光栅结构,以对应眼球水平左右转动的三个视角,分别是对应视角91的纳米光栅结构分布区域9101、对应视角92的纳米光栅结构分布区域9201、和对应视角93的纳米光栅结构分布区域9301,它们对应的视轴分别为1101、1102、1103。眼睛左右移动,对应左右视角(即视角91和视角93)及其纳米光栅像素分别区域9101和纳米光栅像素分别区域9301。因为眼睛还可能上下移动,因此还可以对应上下视角设置纳米光栅像素区域。如图13(b)所示,在纳米结构功能薄膜上制作形成三个视角的纳米光栅结构,以对应眼球上中下转动的三个视角,分别是对应视角94的纳米光栅结构分布区域9102、对应视角95的纳米光栅结构分布区域9202、和对应视角96的纳米光栅结构分布区域9302,它们对应的视轴分别为1201、1202、1203。眼睛上下移动,对应上中下 视角(即视角94、视角95和视角96)及其纳米光栅像素分布区域9102、纳米光栅像素分布区域9202和纳米光栅像素分布区域9203。优选地,考虑人眼视觉习惯,可视镜片上方和中间视角(视角95、视角96)对应远处景象,下方视角(视角94)对应近处景象。此外,亦可设计视角间隔小于人眼瞳孔大小,保证单眼能够看到两幅以上视角图像,实现单眼视差效果,即单眼聚焦的位置位于显示物体上。视角小于人眼瞳孔的设计,可减小辐辏调节矛盾,实现连续的动态视差,使观看效果更加自然。
在一些实施例中,为了得到最优的视觉质量,可以在头戴式三维显示装置中增设眼球跟踪装置,利用眼球跟踪装置跟踪眼球动态变化,确定视轴角度和瞳孔位置,然后将上述信息转换为控制信号,控制光源、微型投影仪或空间光调制器等器件,在可视镜片的不同部分(区域)投影相应的图像,使得会聚的视点位于眼睛的视轴上,达到最优视景质量和减少所需处理(传输)数据量的目的。
参见附图14(a)-(e),图14(a)-(e)是视角像素密度的分布曲线图。图中,纵坐标Y代表纳米光栅像素(纳米光栅)的分布密度或数量,中间纵向的虚线代表某一视轴,横坐标X代表纳米结构功能薄膜上某一区域与该视轴之间的距离或者夹角,视轴与横坐标的交点标注为0,表示纳米结构功能薄膜上该位置(或区域)与该视轴之间的距离(或夹角)为0。情形1,如图14(a)所示,单个视角像素与距离视轴角度无关。各视角像素通过互相嵌套的方式均匀排列在指向性功能镜片表面,即整个纳米结构功能薄膜上的纳米光栅的分布是均匀排布的。情形2,如图14(b)-(d)所示,单个视角像素与距离视轴角度有关。考虑到人眼特性,人视觉在视轴周围6°~8°为敏感区。参考附图2的视锥细胞分布图,根据实际应用,在设计纳米光栅像素时,可将所设计单个视角的像素分布曲线呈三角函数、方波函数、梯形函数、正弦函数(取其二分之一周期) 或其他函数类型,其共同特征为:与该视角对应的纳米光栅像素在该视角对应视轴附近密集排布,远离视轴区域稀疏排布。即,单个视角对应的纳米光栅像素与距离视轴角度有关,所述单个视角对应的纳米光栅像素的分布特征为:与该视角对应的纳米光栅像素在该视角对应视轴附近的排布密度,大于远离视轴区域的排布密度,即对应于各视角的纳米光栅像素通过互相嵌套的方式非均匀排列在纳米结构功能薄膜上。
以图14(e)以水平三视角为例,说明多视角的纳米光栅像素通过互相嵌套的方式非均匀排列在指向性功能镜片表面。对应各视角的各视点的纳米光栅像素分布如虚线所示为三角函数分布,图中以三个视角即视角91、92、93及其对应的视轴1101、1102、1103为依据进行纳米光栅像素的排布,每一个视轴对应的纳米光栅排布密度函数呈三角形,距离视轴越近,纳米光栅像素密度越高,距离到一定数值后,密度降为零,其下降斜率是线性的,因此分布密度曲线最终呈等腰或等边三角形形态。图中等腰梯形的实线为三个视角合成后,光场镜片内纳米结构的整体像素密度分布。其优点为充分利用人眼结构特点,使用较少像素数即可获得较高图像质量。同理可以根据需要设计更多视角的纳米光栅排列分布。其他类型函数的分布密度曲线也基于上述同样的原理。
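The triangular per-view density described above (nano-grating pixel density highest on each view's visual axis, falling linearly to zero) can be sketched as follows; the axis positions and half-width are illustrative assumptions, not values from the patent:

```python
def triangular_density(x, axis, half_width, peak=1.0):
    """Nano-grating pixel density for one view: peak on its visual axis, linear falloff to 0."""
    return peak * max(0.0, 1.0 - abs(x - axis) / half_width)

def total_density(x, axes, half_width):
    """Nested multi-view layout: the overall density is the sum of the per-view triangles."""
    return sum(triangular_density(x, a, half_width) for a in axes)

axes = [-10.0, 0.0, 10.0]   # three view axes (degrees from the lens centre)
# Adjacent triangles overlap, so the summed profile is flat across the middle —
# the isosceles-trapezoid envelope mentioned in the text.
print(total_density(0.0, axes, half_width=10.0))  # 1.0
print(total_density(5.0, axes, half_width=10.0))  # 1.0
```

Swapping `triangular_density` for a square-wave, trapezoidal, or half-period sinusoidal profile gives the other distribution types the text mentions.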
参见图15(a)和图15(b),图15(b)是在图15(a)的基础上利用多焦距纳米结构指向性功能镜片实现的一种多景深显示方案图。通过将纳米结构功能薄膜设计为多焦距纳米结构指向性功能薄膜,使之在人眼明视区域投影形成至少两个深度的虚像(图15(b)中以远、中、近三个景深深度为例)。当人眼通过调节眼球聚焦到近距离景象时,远距离景象模糊,反之,当人眼聚焦到远距离景象时,近距离景象模糊。这种符合人眼调焦习惯的立体显示方式使三维景象观看效果更加自然。上述是以两个焦距为例,依次类 推,理论上可以实现近乎于和实景相似的光学呈现,使得虚拟景象足以乱真。为实现上述构想,多焦距纳米结构指向性功能镜片(即可视镜片)可视作一个具有多个焦距的离轴菲涅尔透镜。在近轴条件下,其成像关系可简单近似为:
1/u′ − 1/u = 1/f
其中,u和u’分别为物距和像距,f为菲涅尔透镜焦距。通过设置多个菲涅尔透镜焦距,可在人眼前方投影出景深不同的多个虚像。值得注意的是,垂轴放大率β(像高y′与物高y之比)也将随之改变:
β = y′/y = u′/u
因此,设计中应通过改变投影物体大小,使所呈现的虚像大小与其对应的远近距离比例合适,就如现实中远近不同的景物在人眼中呈现的状态一样,使观察者既有沉浸感,又可与现实景物有效融合。其中,投影物体的大小可通过多种方法改变,比如,可控制图像芯片,改变其输出的物体图像大小,亦或改变投影引擎的放大倍率,还可调整投影引擎与多焦距纳米结构指向性功能镜片的相对距离,使投影物体大小发生变化。此外,根据实际应用,将多视角和多景深结合,获得视差立体效果和眼部肌肉调焦立体效果的融合,使观看效果更加自然。
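The near-axis relation 1/u′ − 1/u = 1/f and the magnification β = u′/u discussed above determine where each virtual image lands and how large the projected object must be made. A minimal numeric sketch using the Cartesian sign convention (object distance u < 0; the 30 mm / 50 mm values are illustrative, not from the patent):

```python
def image_distance(u, f):
    """Gaussian lens relation 1/u' - 1/u = 1/f (Cartesian convention, u < 0 for a real object)."""
    return 1.0 / (1.0 / f + 1.0 / u)

def object_height_for(target_height, u, f):
    """beta = u'/u; scale the projected object so the virtual image has the desired height."""
    return target_height * u / image_distance(u, f)

u, f = -30.0, 50.0               # object 30 mm away, inside the 50 mm focal length
u_prime = image_distance(u, f)   # ≈ -75.0: upright virtual image 75 mm in front of the eye
beta = u_prime / u               # ≈ 2.5: magnified 2.5x
print(u_prime, beta)
```

Repeating this per Fresnel focal length gives the depth and pre-scaling for each of the nested virtual-image planes, which is the size compensation the paragraph above calls for.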
参见图16。图16是多焦距纳米结构指向性功能镜片的纳米光栅结构分布图。其纳米光栅结构相当于多个离轴菲涅尔透镜结构的互相嵌套,形成多个焦距(图中示例为3个,焦距a、焦距b、焦距c)。图上像素(纳米光栅)不限于矩形像素,也可以是圆形,菱形,六边形等像素结构组成。图上像素亦可互相分立,适当设计像素间距,可使之满足照明空隙要求。此外,通过调节图上各像素 的像素大小、结构或槽深等结构参数依空间分布变化,可使各像素点获得理想的衍射效率,达到均匀照明的目的。使之与投影图像或空间光调制器输出图像对准,实现多景深显示时,应将短焦距(如图16中的焦距a)对应纳米光栅像素单元与近景图像像素单元匹配,即图16中标号为001的纳米光栅像素单元;长焦距(如图16中的焦距c)对应的纳米光栅像素单元与远景图像像素单元匹配,即图16中标号为003的纳米光栅像素单元;中焦距(如图16中的焦距b)对应的纳米光栅像素单元与中景图像像素单元匹配,即图16中标号为002的纳米光栅像素单元。从而实现多景深图像分离显示。各焦距对应的纳米光栅像素单元进行相互嵌套排布,如图16所示的三个焦距对应的纳米光栅像素单元的嵌套排布示例,以横向4个单元,纵向3个单元为例,第一排从左到右依次为001、002、003、001;第二排为002、003、001、002;第三排为003、001、002/003。
参见图17(a)、图17(b)、图17(c)和图17(d),图17(a)、图17(b)、图17(c)和图17(d)是一种虚拟景物多景深分割示意图。以17(a)所示图像为例,可根据景物远近关系分割成若干图像(图示中以近景、中景、远景来划分举例说明,分别对应图17(b)、图17(c)、图17(d))。考虑到人眼对近距离物体的远近分辨能力优于对远距离物体的远近分辨能力。在图像分割时,可对近距离物体进行细致分割,对远距离物体进行大致分割。被分割图像被投影至人眼前不同距离,形成多景深三维图像。
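The near-fine/far-coarse segmentation just described can be expressed as non-uniform depth thresholds, e.g. bins that widen with distance. A sketch — the geometric-growth rule, the 0.5 m near limit, and the 32 m range are illustrative assumptions:

```python
def depth_bins(max_depth_m, n_bins, near_m=0.5):
    """Slice depth non-uniformly: bin edges grow geometrically, so near scenery is
    split finely and far scenery coarsely (human depth acuity falls with distance)."""
    edges = [near_m * (max_depth_m / near_m) ** (i / n_bins) for i in range(n_bins + 1)]
    return list(zip(edges[:-1], edges[1:]))

def assign_layer(depth, bins):
    """Return which depth layer (0 = nearest) a scene point falls into."""
    for i, (lo, hi) in enumerate(bins):
        if depth < hi:
            return i
    return len(bins) - 1

bins = depth_bins(32.0, 3)   # three layers out to 32 m; bins ≈ (0.5, 2), (2, 8), (8, 32)
```

Each layer's image is then projected to its own virtual-image distance, forming the multi-depth three-dimensional image.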
参见附图18(a)和图18(b),图18(a)和图18(b)是本发明实施方式下利用光场扫描实现的一种多景深显示方案图。不同于上述实施例中利用指向性功能镜片的纳米像素结构设计实现多个会聚视角,该实施例通过快速移动微型投影仪的空间位置或(和)投影角度,改变纳米像素单元的入射角度,从而改变出射光场,并 在观察区域内实现多个连续视角的扫描,尤其是深度方向的扫描。对于单个视点,纳米结构指向性功能薄膜可视作一个离轴菲涅尔透镜。在近轴条件下,其成像关系可简单近似为:
1/u′ − 1/u = 1/f
其中,u和u’分别为物距和像距,f为菲涅尔透镜焦距。通过调节物距,可在人眼前方投影出景深不同的虚像。充分考虑人眼成像特点,虚像距离人眼位置应在其近点和远点之间。值得注意的是,垂轴放大率β(像高y′与物高y之比)也将随之改变:
β = y′/y = u′/u
因此,设计中应通过改变投影物体大小,使所成虚像大小合适,使观察者既有沉浸感,并可与现实景物有效融合。由此,当观察者通过调节将眼睛聚焦到近距离景象时,远距离景象模糊,反之亦然,使得观察者获得与现实景物近乎一样的观看体验。将该实施例与上述实施例结合,可根据实际应用,获得多视点多景深的三维显示方案,提高三维效果体验。
参见图19,图19为本发明实施方式下一种基于多层纳米结构功能薄膜(也可称为指向性导光薄膜)的显示方案图。多层纳米结构功能薄膜叠加(图19中以三层为例),实质上是由三层可视镜片单元紧密叠加,每一层可视镜片单元均包括光波导器件5、镜片基体21、纳米结构功能薄膜22,每一层均通过光线耦合器件4与一台微型投影仪3光学连接。每层可视镜片单元均由独立的微型投影仪3和光线耦合器件4控制。微型投影仪3可以是点扫描或线扫描投影成像,或面投影成像。其出射光强可以随时间或空间变化。微型投影仪3可放置在各层导光薄膜同侧,亦可放置在不同侧。通过多层指向性导光薄膜紧密叠合的方法,本质上是利用空间复用的 方法增加可视屏幕可显示信息量,实现多视点或多焦距的三维显示。通过该方法,既兼顾了图像清晰度,又提供了更多视角信息,可实现良好的裸眼3D显示效果,其优势为视角更连续,3D体验更佳。
参见图20,图20为图19方案的一种变形,多层的可视镜片单元结构相同(图示中也以3层为例,可以根据需要做成2层、4层、5层或更多层),每一层可视镜片单元也均对应一个光线耦合器件4,但是这些光线耦合器件4均设置在每一层光波导器件5的同一侧,然后微型投影仪3是共用一个,在各光线耦合器件4与微型投影仪3之间设置一个光切换器件10,通过光切换器件10来切换微型投影仪3与某一个光线耦合器件4连通。微型投影仪3可以是点扫描或线扫描投影成像,或面投影成像。其出射光强可以随时间或空间变化,实现各层纳米结构功能薄膜的交替照明。因此,通过分频的方式控制微型投影仪3,可实现各纳米结构功能薄膜依次照明,即出光空间内出射光场按各纳米结构功能薄膜通过纳米光栅结构控制的出射光场依次变换。通过分频式多层纳米结构功能薄膜紧密叠合的方法,本质上是利用时空复用的方法增加可视屏幕可显示信息量,实现多视点或多焦距的三维显示。通过该方法,既兼顾了图像清晰度,又提供了更多视角信息,可实现良好的3D显示效果。其优势为视角更连续,3D体验更佳。
为了实现上述两个实施例的分频控制,所述头戴式三维显示装置还设有分频控制装置,所述分频控制装置包括:
分频电路,用于生成周期性控制信号;
脉冲发生电路,用于生成基准脉冲信号,与分频电路的输入端连接,将基准脉冲信号发送给分频电路,从而调整周期性控制信号的频率;
当微型投影仪为一个时,所述分频电路的另一输出端连接光切 换器件,控制光切换器件按照设定的频率周期性依次切换微型投影仪对各层可视镜片单元的依次照明;
当微型投影仪与光线耦合器件的数量一致时,所述分频电路与各微型投影仪之间还设有照明控制电路,所述照明控制电路根据分频电路的周期性控制信号,按照设定的频率周期性依次启闭各微型投影仪对各层可视镜片单元的依次照明。
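The divider logic above (a base pulse train divided down so that the layers of the visible lens are illuminated in turn, with the image refresh kept in step) can be sketched as a round-robin generator; the layer count and divide ratio here are illustrative:

```python
from itertools import count

def frequency_divided_layers(n_layers, divide_ratio):
    """Round-robin layer illumination: every `divide_ratio` base pulses, advance to
    the next light-guide layer (the image frame is refreshed at the same moment)."""
    for pulse in count():
        if pulse % divide_ratio == 0:
            yield (pulse // divide_ratio) % n_layers  # index of the layer to light

gen = frequency_divided_layers(n_layers=3, divide_ratio=4)
print([next(gen) for _ in range(4)])  # [0, 1, 2, 0] — layers lit cyclically
```

In hardware this role is played by the pulse-generation and frequency-division circuits; the sketch only illustrates the sequencing they implement.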
参见图21(a)、图21(b),图21(a)、图21(b)是在图12示例的基础上进一步的扩展和优化。
可视镜片采用多层纳米结构功能薄膜叠加(图21(a)、图21(b)中以三层为例),实质上是由三层可视镜片单元进行叠加,每一层可视镜片单元均包括光波导器件5、镜片基体21、纳米结构功能薄膜22(当然,纳米光栅可以直接加工在镜片基体21或者光波导器件5之上,那么,对应的,就不需要纳米结构功能薄膜和镜片基体了),每一层均光学连接一光线耦合器件4。
本实施例中,还设置有光线准直器件6和空间光调制器7,照明光源(点光源或线光源,图12中表达为:点/线光源)依次通过光线准直器件6、光线耦合器件4与光波导器件5进行光学连接,而空间光调制器7设置在人眼与纳米结构功能薄膜22之间。照明光源(点光源或线光源,图中标注为点/线光源30)通过光线准直器件6和光线耦合器件4(图12中表达为光线耦合器件)耦合进光波导器件5,也即是导入可视镜片。在全反射的作用下,光线在纳米结构功能薄膜22(也可称为导光膜或导光薄膜)内传播。纳米结构功能薄膜22包含有一组像素式纳米光栅结构,与光线作用发生衍射,使部分光线从纳米结构功能薄膜22出光面逸出。出射光线角度与纳米光栅结构形状(周期、取向)有关。出射光强效率与纳米光栅结构的像素大小、结构深度有关。因此,通过设计特定纳米光栅结构,可在显示器件(可视镜片)出光面形成一个或多个会聚 视点。将空间光调制器7(如液晶面板或其它平板显示器)放置在可视镜片与人眼之间,利用空间光调制器7实现光场灰度(振幅)信息调制,并与纳米结构功能薄膜22调制的光场相位信息匹配,最终在人眼前方投射出会聚波面,使人眼可以看到逼真的虚拟三维图像。该会聚波面可以进一步与现实景象形成的波面叠加,得到真实世界信息和虚拟世界信息的融合。
其中,图21(a)的方案采用每一层可视镜片单元均对应一个光线耦合器件、光线准直器件及点/线光源,每一层的照明均由一个点/线光源独立控制。而图21(b)则多层(图中以三层为例)一起通过一个光切换器件共用一个光线准直器件,并光学连接一个点/线光源,即多层可视镜片的照明均由一个点/线光源通过一个光线准直器件,然后经光切换器件切换轮流给予各层可视镜片提供照明。
该方案本质由多层用于控制相位的分频式纳米结构功能薄膜和一个用于控制灰度显示的快速响应空间光调制器组成。单层纳米结构功能薄膜在出光面形成一个会聚视点。以图中三层可视镜片单元为例,最上面一层纳米结构功能薄膜形成会聚视点2111,中间一层纳米结构功能薄膜形成会聚视点2121,最下面一层纳米结构功能薄膜形成会聚视点2131。以此类推。多层导光薄膜紧密叠合。通过分频的方式控制照明光源,实现各导光薄膜依次照明,即出光空间内出射光场按各纳米结构功能薄膜通过纳米光栅结构控制的出射光场依次变换。如图21(a)所示,根据需要,照明光源可放置在各层导光薄膜同侧,亦可放置在异侧。每层指向性导光薄膜可由独立的照明光源、光准直器件及光线耦合器件控制。通过交替点亮各层导光镜片的照明光源可实现出射光场顺序变换。或如图21(b)所示,各层指向性导光薄膜由相同照明光源和光准直器件控制。利用光切换器件将照明光源交替切换至各层指向性导光薄膜,实现各层指向 性导光薄膜的交替照明。通过该方法,既兼顾了图像清晰度,又提供了更多视角信息,可实现良好的3D显示效果。此外在制作上,纳米光栅像素与空间光调制器像素无需对准,极大降低了制造难度。其优势为视角更连续,3D体验更佳,制作更简便。
参见图22(a)、图22(b),图22(a),图22(b)为分频式纳米结构功能薄膜的分频控制电路原理框图。如图22(a)所示为上述图21(a)结构的控制电路原理框图。所述头戴式三维显示装置还设有分频控制装置,所述分频控制装置包括:
分频电路,用于生成周期性控制信号;
脉冲发生电路,用于生成基准脉冲信号,与分频电路的输入端连接,将基准脉冲信号发送给分频电路,从而调整周期性控制信号的频率;
图像刷新控制电路,其输入端与分频电路的一输出端连接,输出端与空间光调制器的一输入端连接,用于控制空间光调制器的刷新频率与光源的切换频率同步;
当光源为一个时,如图22(b)所示,所述分频电路的另一输出端连接光切换器件,控制光切换器件按照设定的频率周期性依次切换光源对各层可视镜片单元的依次照明;
当光源与光线耦合器件的数量一致时,如图22(a)所示,所述分频电路与各光源之间还设有照明控制电路,所述照明控制电路根据分频电路的周期性控制信号,按照设定的频率周期性依次启闭各光源对各层可视镜片单元的依次照明。
脉冲发生电路产生周期性脉冲信号。该脉冲信号通过分频电路,控制照明电路,从而实现点/线光源的交替通断和各层指向性导光薄膜的交替照明。同时,分频电路控制空间光调制信号的刷新频率,实现输出图像刷新频率与多层指向性导光薄膜照明频率的匹配。如图22(b)所示为上述图21(b)结构的控制电路原理框图。 脉冲发生电路产生周期性脉冲信号。该脉冲信号通过分频电路,控制光切换器件,从而实现各层指向性导光薄膜的交替照明。同时,分频电路控制空间光调制信号的刷新频率,实现输出图像刷新频率与多层指向性导光薄膜照明频率的匹配。
参见图23(a)、图23(b)、图23(c),图23(a)、图23(b)、图23(c)为本发明实施方式下的一种双眼视窗对应图,本方案即采用两个独立的可视镜片分别对应两个眼球。双目运动时,由于眼球的精细协调运动,使来自物体同一部分的光线成像于两眼视网膜上。同一被视物体在两眼视网膜上的像并不完全相同,左眼从左方看到物体的左侧面较多,而右眼则从右方看到物体的右侧面较多。为达到三维视觉效果,应匹配左右眼相应视点输出的视图信息,实现双目立体视觉。如图23(a)所示,双眼会聚于中间物体时,左右眼视点对应的两个可视镜片照明区域应匹配。即:左眼中间视点对应的纳米光栅像素阵列分布区域应与左眼视轴同轴(或可近似为同轴),且与物体同轴(或可近似为同轴)。右眼中间视点对应的纳米光栅像素阵列分布区域应与右眼视轴同轴(或可近似为同轴),且与物体同轴(或可近似为同轴)。此外,左右眼输出的视图信息应匹配。即:左眼通过左可视镜片观察可视物体应含有更多左侧信息,右眼通过右可视镜片观察可视物体应含有更多右侧信息,符合人眼视角关系、距离及大小关系。相同地,如图23(b)、图23(c)所示,双眼会聚于边缘物体时,也应正确匹配左右眼相应视点对应的纳米光栅位置,以及输出视图信息。
Referring to Fig. 24, a binocular head-mounted three-dimensional display according to an embodiment of the invention. The nanostructure distributions of the left and right visible lenses are symmetric, and the light fields they produce carry binocular parallax: as the eyes move, the converging fields of the two lenses form a parallax effect, the left-eye image containing more left-side information and the right-eye image more right-side information, which the brain fuses into a stereoscopic image consistent with natural viewing. As a further refinement, an eye-tracking device may be added: tracking determines the visual-axis angle and pupil position, and this dynamic information about eye movement is sent to the controller, which drives the micro-projectors, or the point/line sources and spatial light modulators, to project the appropriate images onto different parts of the left and right lenses, optimizing scene quality and reducing the amount of data to be processed and transmitted.
Referring to Fig. 25, a head-mounted augmented-reality display scheme. On the basis of Figs. 23(a) and 24, micro-projectors (not shown) project at a large angle from the side of the visible lenses, providing directional illumination of the nanostructures on the two independent lenses. By controlling the relationship between intensity, wavelength, and spatial position, the micro-projector modulates the grey-level (amplitude) information of the light field; matched with the phase information modulated by the directional nanostructured functional film (i.e., the film on the visible lens), a converging wavefront is projected in front of the eye. This wavefront can be superimposed on the wavefront of the real scene, fusing real-world and virtual-world information. The nanostructure distributions of the two lenses are symmetric, and their light fields carry binocular parallax: as the eyes move, the converging fields of the two lenses form a parallax effect, left-eye images containing more left-side information and right-eye images more right-side information, fused by the brain into a stereoscopic image consistent with natural viewing. Since the eyes rotate, together with the visual axis and viewpoint, when viewing objects at different positions and distances, the nanograting pixels on the film are designed to form a continuous viewing window composed of the nine viewpoints shown in Fig. 24, so that virtual scenery remains visible as the eyes turn. The pixel density of a single viewpoint depends on the angular distance from its visual axis: the nanograting pixels of a viewpoint are packed densely near that viewpoint's visual axis and sparsely away from it, the pixels of the different viewpoints being interleaved (nested) non-uniformly. Horizontal eye movement addresses the left/right viewpoints and their nanograting pixel regions; vertical movement addresses the upper/lower ones. The upper and middle viewpoints of the visible lens correspond to distant scenery, the lower viewpoints to near scenery. As a further refinement, an eye-tracking device may be added: tracking determines the visual-axis angle and pupil position, and this dynamic information is sent to the controller, which drives the micro-projectors, or the point/line sources and spatial light modulators, to project the appropriate images onto different parts of the left and right lenses, optimizing scene quality and reducing the data to be processed and transmitted.
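The non-uniform layout just described, with each viewpoint's nanograting pixels dense near that viewpoint's visual axis and sparse away from it, can be modeled with a simple density curve. A raised-cosine falloff is used here as a stand-in for the triangle, trapezoid, or sine curves the text permits; the 30 degree width and the 0.2 density floor are assumed values, not from the patent.

```python
import math

def pixel_density(x_deg, axis_deg, width_deg=30.0, d_min=0.2):
    """Relative nanograting-pixel density (d_min..1.0) for one viewpoint:
    maximal on that viewpoint's visual axis, falling off as a raised
    cosine, and clamped to a sparse floor far from the axis. Pixels of
    different viewpoints would be interleaved using such curves."""
    d = abs(x_deg - axis_deg)
    if d >= width_deg:
        return d_min
    return d_min + (1 - d_min) * 0.5 * (1 + math.cos(math.pi * d / width_deg))

# Density sampled at 0, 10, 20 degrees off-axis: monotonically decreasing.
print([round(pixel_density(x, 0), 3) for x in (0, 10, 20, 40)])
```

Fabrication would then place proportionally more nanograting pixels of a given viewpoint where its density curve is high, nesting the pixel sets of adjacent viewpoints in the sparse regions.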
Referring to Fig. 26: the visible lenses corresponding to the left and right eyes are both multilayer structures, illustrated here with four layers. Each layer includes an optical waveguide device, a lens substrate, and a nanostructured functional film, whose arrangement has been described clearly in the preceding embodiments and is not repeated here. In the example of Fig. 26, the illumination source (point or line source) is coupled through a light collimating device and a light coupling device into each optical waveguide device of the visible lens (not shown in this figure; see the corresponding earlier description); under total internal reflection, the light propagates within the nanostructured functional film. The film contains an array of pixelated nanograting structures that diffract the light so that part of it escapes through the film's exit surface. The exit angle depends on the nanostructure's shape (period, orientation); the exit intensity efficiency depends on its pixel size and structure depth. By designing specific nanograting structures, one converging viewpoint is formed at the exit surface of the visible lens. A spatial light modulator (not shown) placed in front of the film modulates the grey-level (amplitude) information of the light field; matched with the film's phase modulation, a converging wavefront is projected in front of the eye. The multiple phase-controlling films form a frequency-divided nanostructured functional film stack, combined with one fast-response spatial light modulator for grey-level display. Each single-layer film forms one converging viewpoint at the exit surface: light-guiding film 2510 forms converging viewpoint 2511, film 2520 forms viewpoint 2521, film 2530 forms viewpoint 2531, and so on. The films are stacked tightly together. Frequency-division control of the sources lights the films in turn, so the exit light field switches in sequence between the fields set by each film's nanogratings, creating before each eye a viewing window composed of several continuous viewing angles. Each directional light-guiding film may be controlled by its own illumination source, light collimating device, and light coupling device (also called a coupler), the sources of the layers being lit alternately to switch the exit light field in sequence; alternatively, all films may share one source and collimating device, a light switching device (not shown) routing the illumination alternately between them. The method balances image sharpness with richer viewing-angle information for a good 3D effect, and in manufacture the nanograting pixels need not be aligned with the spatial-light-modulator pixels, greatly reducing difficulty. The nanostructure distributions of the two visible lenses are symmetric, and their light fields carry binocular parallax: as the eyes move, the converging fields of the two lenses form a parallax effect, the left-eye image containing more left-side information and the right-eye image more right-side information, fused by the brain into a stereoscopic image consistent with natural viewing. Because the eyes rotate, with the visual axis and viewpoint, when viewing objects at different positions and distances, the directional nanostructured lenses are designed to display wide-angle virtual scenery before the eye. The pixel density of a single viewpoint depends on the angular distance from its visual axis: the nanograting pixels of a viewpoint are packed densely near that viewpoint's axis and sparsely away from it. Horizontal eye movement addresses the left/right viewpoints and their nanograting pixel regions; vertical movement addresses the upper/lower ones. As a further refinement, an eye-tracking device may be added: tracking determines the visual-axis angle and pupil position, and this dynamic information is sent to the controller, which drives the micro-projectors, or the point/line sources and spatial light modulators, to project the appropriate images onto different parts of the left and right lenses, optimizing scene quality and reducing the data to be processed and transmitted.
Referring to Fig. 27, a head-mounted virtual-reality display scheme. Nanostructured functional films are embedded in or laminated onto the left and right visible lenses. The illumination source (point or line source) is coupled through the light collimating device and light coupling device into the lens's optical waveguide device (not shown). Under total internal reflection, the light propagates within the nanostructured functional film, whose pixelated nanostructures diffract part of it out through the film's exit surface. The exit angle depends on the nanostructure's shape (period, orientation); the exit intensity efficiency depends on its pixel size and structure depth. Designed nanograting structures form one converging viewpoint at the lens's exit surface. A spatial light modulator (not shown) placed in front of the film modulates the grey-level (amplitude) information; matched with the phase information modulated by the directional nanostructured film, a converging wavefront is projected in front of the eye. The phase-controlling films are stacked tightly into a frequency-divided nanostructured functional film stack, combined with one fast-response spatial light modulator for grey-level display. In Fig. 27, the visible lens 2 is composed of three stacked visible-lens-unit layers as an example, each layer carrying one nanostructured functional film, and each single film forming one converging viewpoint at the exit surface. The films have different focal lengths (through combinations of nanogratings designed with different focal lengths), so the visible lens projects virtual images at no fewer than two depths within the eye's region of distinct vision. Frequency-division control lights the films in turn, the exit light field switching in sequence between the fields set by each film's nanogratings. For a multi-depth display, the nanograting pixel units of short focal length should be matched with near-scene image pixel units, and those of long focal length with far-scene image pixel units, so that images at several depths are displayed separately. When the eye accommodates to the near scene, the far scene blurs, and conversely, when it focuses on the far scene, the near scene blurs. This stereoscopic display mode, consistent with the eye's focusing habits, makes three-dimensional viewing more natural. The nanostructure distributions of the two visible lenses are symmetric, their light fields carrying binocular parallax: as the eyes move, the converging fields of the two lenses form a parallax effect, the left-eye image containing more left-side information and the right-eye image more right-side information, fused by the brain into a stereoscopic image consistent with natural viewing. As a further refinement, an eye-tracking device may be added: tracking determines the visual-axis angle and pupil position, and this dynamic information is sent to the controller, which drives the micro-projectors, or the point/line sources and spatial light modulators, to project the appropriate images onto different parts of the left and right lenses, optimizing scene quality and reducing the data to be processed and transmitted.
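The depth matching rule above (short-focal-length layers carry near content, long-focal-length layers carry far content) amounts to routing each image pixel to the layer whose virtual-image depth is closest to the pixel's scene depth. A minimal sketch, with the layer depths and scene depths as assumed example values:

```python
def assign_layers(pixel_depths_m, layer_image_depths_m):
    """For each image pixel's scene depth, pick the index of the functional
    film layer whose virtual-image depth is nearest, so near content lands
    on short-focal-length layers and far content on long-focal-length ones."""
    return [min(range(len(layer_image_depths_m)),
                key=lambda i: abs(layer_image_depths_m[i] - d))
            for d in pixel_depths_m]

# Three layers forming virtual images at 0.5 m, 2.5 m and 8 m; pixels at
# 0.4 m, 2 m and 10 m are routed to layers 0, 1 and 2 respectively.
print(assign_layers([0.4, 2.0, 10.0], [0.5, 2.5, 8.0]))
```

In a display pipeline this assignment would be computed per frame from the depth buffer, and each layer's sub-frame image would then contain only the pixels assigned to it.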
The spatial light modulator above may be an image display device such as a liquid-crystal flat-panel display.
In summary, the invention discloses a wearable 3D display device realized with visible lenses carrying nanostructured functional films. In the invention, the visible lens with its nanostructured functional film modulates the phase information of the converging light field; matched with grey-level information of the light field delivered by light guiding, projection, or similar means, a three-dimensional display experience is obtained before the eye. The invention further shows that multi-viewpoint and multi-depth methods improve the three-dimensional effect and eliminate visual fatigue, and that frequency division and stacking of multiple nanostructured functional films increase the displayed information, sharpness, and depth impression. The head-mounted three-dimensional display device of the invention can be used in virtual-reality or augmented-reality applications to obtain a comfortable three-dimensional scene without dizziness.
Referring to Fig. 28, a head-mounted 3D display device based on visible lenses. External information sensors, e.g., a real-scene three-dimensional capture sensor (3001), a head-movement recognition sensor (3002), and an eye-movement recognition sensor (3004), are integrated on the head-mounted portable device; their positions may be changed to suit the application. The virtual three-dimensional scene is added at the specified position through the visible lenses carrying nanostructured functional films and the image output device (3003). The specific positions of the components may be adjusted and modified as required by the application.
Combining the above wearable 3D display device, realized with visible lenses carrying nanostructured functional films, with external information-collection and control systems makes it usable in the fields of virtual reality and augmented reality.
Referring to Fig. 29, a virtual-reality system based on visible lenses. In the system, several sensors (or image collectors) gather information about the real world and the observer, including but not limited to: a head-movement recognition device for the observer's head, an eye-movement recognition device, and a gesture recognition device. The pose information collected by these devices is preprocessed by an information-processing device and then passed to the central processing unit (CPU); the CPU may also obtain cloud information from a cloud-acquisition device. After integrating all the information, the CPU issues image-output commands to the image-output control device, which directs the image-generating device (a micro-projector, or a light source plus spatial light modulator, etc.) to converge the three-dimensional image through the visible lenses; after terminal-side information matching, processing, and interaction, virtual objects or information finally appear through the visible lenses at specified positions in the virtual three-dimensional space.
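The sensing-to-display pipeline described here (collect sensor data, preprocess, fuse with optional cloud information, emit an image-output command) can be sketched as one tick of a processing loop. The sensor names, the `None` convention for a dropped reading, and the command format are illustrative assumptions, not from the patent.

```python
def render_tick(sensors, cloud_info=None):
    """One pass of the Fig. 29 pipeline: read every sensor, keep the
    readings that succeeded (crude preprocessing), fuse them with any
    cloud-sourced information, and return an image-output command for
    the image-output control device."""
    raw = {name: read() for name, read in sensors.items()}
    pose = {k: v for k, v in raw.items() if v is not None}
    fused = {**pose, "cloud": cloud_info} if cloud_info is not None else pose
    return {"command": "update_views", "state": fused}

# Head pose is available, eye tracking dropped a frame; the command still
# goes out with the information that survived preprocessing.
print(render_tick({"head": lambda: (0.0, 0.0, 0.0), "eye": lambda: None}))
```

A real system would run this at the display frame rate, with the returned command driving the micro-projector or the source/SLM pair for the left and right lenses.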
Referring to Fig. 30, an augmented-reality system based on visible lenses. On the basis of Fig. 29, a scene-recognition device is added, so that real scenery can be fused with the virtual scene in real time, realizing augmented three-dimensional display and providing an industrial basis for AR and even MR.
Referring to Fig. 31, the wearable 3D display device exchanges information with other mobile devices or terminals over a cloud network: the head-mounted mobile device (3100), waist-worn mobile device (3101), wrist-worn mobile device (3102), and portable mobile devices (3103, 3104) can interact conveniently through the cloud.
The virtual-reality and augmented-reality display technology of this patent can be applied in social activities such as video games, live events, video entertainment, health care, real estate, retail, education, engineering, and the military (Figs. 27-35).
Fig. 32 illustrates the invention applied to driving. Through the device, a virtual image (the example shows the text prompt "文星路 in 600 m" and a right-turn marking on the actual road surface) is projected, with its focal distance adjusted, exactly onto the position matching the real scene, so that the virtual image merges naturally and accurately with real scenery to realize augmented display. This can effectively avoid the traffic accidents caused in existing in-car navigation systems by switching of the visual scene.
Fig. 33 illustrates the invention applied to children's education (it may of course be any other field of multimedia presentation, film, or television): two children use the device to watch or share information about dinosaurs together, including text and a stereoscopic display of the dinosaur.
Fig. 34 illustrates the invention applied to gaming, military training, combat, and similar fields; the buildings and figures may be fully virtual, or an augmented mode fusing virtual and real objects. Applied to gaming and military training, it greatly increases realism, making games more enjoyable and playable and training closer to actual combat. Applied to military operations, cloud-based information collection and exchange let soldiers quickly obtain the positions, movements, and combat characteristics of enemy and friendly forces, as well as battlefield terrain information, greatly improving information gathering, real-time judgment, and coordinated action, and hence the army's overall combat effectiveness.
Fig. 35 shows the invention applied to shopping or product display: the product's appearance can be examined comprehensively and realistically, combined with text and sound, for a completely new shopping or exhibition experience.
Fig. 36 shows the invention applied to medicine, enriching communication between doctor and patient: in the figure, the doctor lets the patient see stereoscopic information about the diseased tooth directly and understand the condition, while textual information such as the diagnosis is displayed simultaneously in the viewing window.
Fig. 37 shows the invention applied to home audio-visual entertainment, giving a nearly immersive visual experience while greatly reducing symptoms of visual fatigue.
Fig. 38 shows the invention applied to virtual try-on of clothing: the wearer is captured by three-dimensional scanning or multi-angle photography and reconstructed in three dimensions, the garment is then fused with the wearer's three-dimensional virtual image to obtain a three-dimensional image of the wearer in the new clothing, and the wearer observes the try-on result in real time. Through the head-mounted three-dimensional display device of the invention, the experience approaches that of looking in a mirror.
Fig. 39 shows the invention applied to business meetings, presenting the product or proposal under discussion realistically and vividly, which is more intuitive than a traditional slide presentation (PPT), all the more so for large equipment.
Fig. 40 shows the invention applied to remote interaction, illustrated by a father and daughter playing chess remotely through the invention. Each side need only set out its own pieces; a digital camera added to the device captures the movement of those pieces, or even the player's whole figure (by three-dimensional scanning, multi-angle video, or photography), converts it to three dimensions, and projects it before the other party, so that the two feel as if face to face, with a high degree of realism. With augmented-reality techniques added, the originally great distance between them becomes almost imperceptible, which for remote interaction can be a revolutionary leap.
In summary, the invention can achieve the following advantages (a given embodiment may realize some or all of them):
1) The wearable 3D display device of the invention obtains a converging light field before the eye using multilayer nanostructured functional lenses, reproducing the light field in the same way the eye receives it from real scenery. Matching the light-field phase with grey-level information produced by light guiding, projection, or similar means yields a comfortable three-dimensional scene without dizziness.
2) High display sharpness and strong depth impression. In the multilayer frequency-divided directional-lens scheme, frequency-division control of the illumination sources makes the exit light field in the exit space switch in sequence between the fields set by each light-guiding film's nanogratings. This greatly increases the displayed information and avoids the trade-off between sharpness and depth impression, giving an excellent three-dimensional display without sacrificing image quality.
3) No alignment required, compatible with current three-dimensional display methods. In the multilayer frequency-divided directional-lens scheme, if a single light-guiding film realizes a single-viewpoint converging field, the grey-level and phase information of the light field need no alignment; that is, the spatial light modulator (e.g., a liquid-crystal panel) and the nanostructured lens pixels need not be registered, lowering fabrication cost and easing mass production. In addition, its image output format is compatible with existing shutter-type 3D formats, easing industrialization.
4) A comfortable three-dimensional scene without dizziness. The multi-viewpoint 3D scheme reduces the vergence-accommodation conflict and realizes continuous dynamic parallax without flicker, for more natural viewing. The multi-depth 3D scheme lets the eye accommodate and focus on scenery at different depths, consistent with its focusing habits, without dizziness.
5) In the invention, the directional nanogratings may either be etched directly on the lens surface by nanolithography, or the same nanolithography may first produce an imprint template from which copies are mass-replicated by nanoimprinting, lowering the cost of the screen.
The above description of the disclosed embodiments enables those skilled in the art to make or use the invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. The invention is therefore not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

  1. A three-dimensional display device, characterized by comprising an image-generating device and a visible lens, wherein the visible lens comprises at least one visible-lens-unit layer provided with nanostructures, and the nanostructures on the visible lens match the image output by the image-generating device to form a virtual scene within the viewing window.
  2. The three-dimensional display device of claim 1, wherein the visible lens consists of one visible-lens-unit layer, or of two, three, four, or more than four visible-lens-unit layers stacked together.
  3. The three-dimensional display device of claim 1, wherein the visible-lens unit comprises a lens substrate and nanostructures.
  4. The three-dimensional display device of claim 3, wherein the lens substrate comprises a waveguide structure.
  5. The three-dimensional display device of claim 1, wherein the visible lens is a single body having one, two, or more than two independent visible regions.
  6. The three-dimensional display device of claim 1, wherein the visible lens comprises two independent lenses corresponding respectively to the two eyes.
  7. The three-dimensional display device of claim 1, wherein the visible lens forms one viewpoint, or two or more viewpoints, within the viewing window.
  8. The three-dimensional display device of claim 7, wherein the designed viewpoint spacing is smaller than the extent of the viewing window.
  9. The three-dimensional display device of claim 7, wherein the upper and middle viewpoints of the visible lens correspond to distant scenery and the lower viewpoints to near scenery.
  10. The three-dimensional display device of claim 1, wherein the nanostructures are nanoscale gratings, also called nanograting structures.
  11. The three-dimensional display device of claim 10, wherein the nanostructures are grooved, relief, or hollowed-out structures.
  12. The three-dimensional display device of claim 1, wherein the distribution of the nanostructures is based on the following principle: the image-generating device, through the nanostructures, forms viewpoints at different spatial positions within the viewing window.
  13. The three-dimensional display device of claim 1, wherein the image-generating device comprises a projection device, the projection device cooperating with at least one visible-lens-unit layer to display a virtual image within the viewing window.
  14. The three-dimensional display device of claim 13, wherein the projection device is coupled to the nanostructures through a waveguide structure.
  15. The three-dimensional display device of claim 5 or claim 6, wherein the visible lens contains two independent left and right visible regions whose nanostructure distributions are symmetric.
  16. The three-dimensional display device of claim 7, wherein each viewpoint corresponds to a group of nanostructures, the density and distribution of the nanostructures being arranged according to one of the following conditions:
    the density and distribution of the nanostructures for a single viewpoint are independent of the angle from the visual axis, the nanostructures of the viewpoints being arranged uniformly by mutual nesting;
    or, the density and distribution of the nanostructures for a single viewpoint depend on the angle from the visual axis, the distribution being such that the nanostructures of a viewpoint are packed more densely near that viewpoint's visual axis than in regions away from the axis, i.e., the nanostructures of the viewpoints are arranged non-uniformly by mutual nesting.
  17. The three-dimensional display device of claim 16, wherein the density-distribution curve of the nanostructures for a single viewpoint follows a trigonometric, square-wave, trapezoidal, or sinusoidal function.
PCT/CN2017/083805 2016-10-28 2017-05-10 Three-dimensional display device WO2018076661A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610970307.3A CN106371218B (zh) 2016-10-28 2016-10-28 Head-mounted three-dimensional display device
CN201610970307.3 2016-10-28

Publications (1)

Publication Number Publication Date
WO2018076661A1 true WO2018076661A1 (zh) 2018-05-03

Family

ID=57893300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/083805 WO2018076661A1 (zh) 2016-10-28 2017-05-10 Three-dimensional display device

Country Status (2)

Country Link
CN (1) CN106371218B (zh)
WO (1) WO2018076661A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676719A (zh) * 2021-07-21 2021-11-19 北京中科慧眼科技有限公司 Focusing-parameter calculation method and system for a binocular stereo camera, and intelligent terminal

Families Citing this family (16)

Publication number Priority date Publication date Assignee Title
CN106371218B (zh) 2016-10-28 2019-05-24 苏州苏大维格光电科技股份有限公司 Head-mounted three-dimensional display device
CN111781724B (zh) * 2017-02-28 2021-10-26 华为技术有限公司 Information display device and information display method
KR20190126124A (ko) 2017-03-21 2019-11-08 매직 립, 인코포레이티드 Display system with spatial light modulator illumination for divided pupils
EP3610316B1 (en) * 2017-05-17 2023-03-08 Vuzix Corporation Fixed focus image light guide with zoned diffraction gratings
CN107894666B (zh) * 2017-10-27 2021-01-08 杭州光粒科技有限公司 Head-mounted multi-depth stereoscopic image display system and display method
WO2019173997A1 (en) * 2018-03-15 2019-09-19 Nokia Technologies Oy Near eye display and associated method
CN112236711A (zh) * 2018-05-17 2021-01-15 诺基亚技术有限公司 Apparatus and method for image display
CN110908134B (zh) * 2018-08-28 2021-01-26 京东方科技集团股份有限公司 Display device and display system
CN109765695B (zh) * 2019-03-29 2021-09-24 京东方科技集团股份有限公司 Display system and display device
CN110187506B (zh) * 2019-05-28 2021-12-17 京东方科技集团股份有限公司 Optical display system and augmented reality device
CN111175975B (zh) * 2020-01-16 2022-04-19 华东交通大学 Near-eye display device for large-depth-of-field imaging
CN111751988B (zh) * 2020-06-16 2023-03-28 深圳珑璟光电科技有限公司 Depth-of-field adjustment method and device, and binocular near-eye display apparatus
CN116420104A (zh) 2020-09-30 2023-07-11 海思智财控股有限公司 Virtual image display system for virtual-reality and augmented-reality devices
CN112669671B (zh) * 2020-12-28 2022-10-25 北京航空航天大学江西研究院 Mixed-reality flight simulation system based on physical interaction
CN113112613B (zh) * 2021-04-22 2022-03-15 贝壳找房(北京)科技有限公司 Model display method and device, electronic device, and storage medium
CN114167621A (zh) 2021-12-07 2022-03-11 苏州大学 Naked-eye 3D display device

Citations (6)

Publication number Priority date Publication date Assignee Title
US20090213459A1 (en) * 2008-02-04 2009-08-27 Washington, University Of Contact lens for three dimensional visualization
US20120300024A1 (en) * 2011-05-25 2012-11-29 Microsoft Corporation Imaging system
CN103926699A (zh) * 2014-01-17 2014-07-16 吉林大学 Light-emission-angle modulation device usable for stereoscopic display pixels
CN105892079A (zh) * 2016-06-24 2016-08-24 京东方科技集团股份有限公司 Display device
CN105934902A (zh) * 2013-11-27 2016-09-07 奇跃公司 Virtual and augmented reality systems and methods
CN106371218A (zh) * 2016-10-28 2017-02-01 苏州苏大维格光电科技股份有限公司 Head-mounted three-dimensional display device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103995356B (zh) * 2014-05-30 2016-01-20 北京理工大学 Light-field helmet-mounted display device with true stereoscopic perception

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20090213459A1 (en) * 2008-02-04 2009-08-27 Washington, University Of Contact lens for three dimensional visualization
US20120300024A1 (en) * 2011-05-25 2012-11-29 Microsoft Corporation Imaging system
CN105934902A (zh) * 2013-11-27 2016-09-07 奇跃公司 Virtual and augmented reality systems and methods
CN103926699A (zh) * 2014-01-17 2014-07-16 吉林大学 Light-emission-angle modulation device usable for stereoscopic display pixels
CN105892079A (zh) * 2016-06-24 2016-08-24 京东方科技集团股份有限公司 Display device
CN106371218A (zh) * 2016-10-28 2017-02-01 苏州苏大维格光电科技股份有限公司 Head-mounted three-dimensional display device

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN113676719A (zh) * 2021-07-21 2021-11-19 北京中科慧眼科技有限公司 Focusing-parameter calculation method and system for a binocular stereo camera, and intelligent terminal
CN113676719B (zh) * 2021-07-21 2023-11-14 北京中科慧眼科技有限公司 Focusing-parameter calculation method and system for a binocular stereo camera, and intelligent terminal

Also Published As

Publication number Publication date
CN106371218A (zh) 2017-02-01
CN106371218B (zh) 2019-05-24

Similar Documents

Publication Publication Date Title
WO2018076661A1 (zh) Three-dimensional display device
CN106526730B (zh) Wide-viewing-angle waveguide lens, fabrication method, and head-mounted three-dimensional display device
US10297071B2 (en) 3D light field displays and methods with improved viewing angle, depth and resolution
US11935206B2 (en) Systems and methods for mixed reality
CN106662731B (zh) Wearable 3D augmented reality display
Geng Three-dimensional display technologies
US10593092B2 (en) Integrated 3D-D2 visual effects display
WO2017181917A1 (zh) Naked-eye 3D display device and method for realizing naked-eye 3D display
CN106371222A (zh) Nano-lens waveguide lens and multi-depth-of-field three-dimensional display device
CN106501938A (zh) Head-mounted augmented-reality three-dimensional display device
JP2019502941A (ja) Directional color filter and naked-eye 3D display device
CN107367845A (zh) Display system and display method
JP2011501822A (ja) Display device and display method thereof
JPH11513129A (ja) Three-dimensional image forming system
CN210835313U (zh) Holographic diffractive waveguide lens, waveguide lens group, and augmented-reality color display device
KR20130097014A (ko) Expandable three-dimensional stereoscopic image display system
JP2016500829A (ja) Convergence-angle-sliced true 3D display
JP2019512109A (ja) Naked-eye optical stereoscopic screen
CN112415656A (zh) Holographic diffractive waveguide lens and augmented-reality color display device
JP2019521387A (ja) Hybrid photonic VR/AR system
JP2018151459A (ja) Stereoscopic viewing device
Hua Past and future of wearable augmented reality displays and their applications
JP3756481B2 (ja) Three-dimensional display device
US20220163816A1 (en) Display apparatus for rendering three-dimensional image and method therefor
KR20120095217A (ko) Stereoscopic image display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864199

Country of ref document: EP

Kind code of ref document: A1