WO2019228539A1 - Imaging method and optical system, and storage medium, chip and combination thereof - Google Patents

Imaging method and optical system, and storage medium, chip and combination thereof

Info

Publication number
WO2019228539A1
WO2019228539A1 (PCT/CN2019/093501)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
image
optical
optical system
light field
Prior art date
Application number
PCT/CN2019/093501
Other languages
English (en)
French (fr)
Inventor
谈顺毅
Original Assignee
江苏慧光电子科技有限公司
Priority date
Filing date
Publication date
Application filed by 江苏慧光电子科技有限公司 filed Critical 江苏慧光电子科技有限公司
Publication of WO2019228539A1 publication Critical patent/WO2019228539A1/zh

Classifications

    • G — PHYSICS
    • G03 — PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H — HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H1/00 — Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H1/04 — Processes or apparatus for producing holograms
    • G03H1/0443 — Digital holography, i.e. recording holograms with digital recording means
    • G03H2001/045 — Fourier or lensless Fourier arrangement
    • G03H1/08 — Synthesising holograms, i.e. holograms synthesized from objects or objects from holograms
    • G03H1/0841 — Encoding method mapping the synthesized field into a restricted set of values representative of the modulator parameters, e.g. detour phase coding
    • G03H1/22 — Processes or apparatus for obtaining an optical image from holograms
    • G03H1/26 — Processes or apparatus specially adapted to produce multiple sub-holograms or to obtain images from them, e.g. multicolour technique
    • G03H1/2645 — Multiplexing processes, e.g. aperture, shift, or wavefront multiplexing
    • G03H2001/2675 — Phase code multiplexing, wherein the sub-holograms are multiplexed according to spatial modulation of the reference beam

Definitions

  • the present invention relates to the field of imaging, and in particular, to an imaging method and an optical system, and a storage medium, chip, and combination thereof.
  • Holographic display is based on the principle of interference and diffraction imaging; it can faithfully reconstruct light field information and thereby achieve a light field display effect with depth and angle, giving it the potential to achieve true 3D display.
  • the current computational holographic methods for generating holograms have certain limitations.
  • existing techniques either propagate each object point at every depth to the image plane and superpose the wavefronts of all points, which requires too much computation to run in real time on available hardware, or apply a fast Fourier transform directly to the entire image, which loses the depth relationship between pixels.
  • Patent document CN201710036146.5 discloses a near-eye holographic display system and method. Coherent illumination light emitted by an illumination device irradiates a diffractive device loaded with a hologram; the diffractive device modulates the coherent light according to the loaded hologram; the modulated light wave diffracts in space and constructs, within a certain distance, a three-dimensional holographic reconstructed image with depth information. Because the reconstructed image has depth information, each depth plane lies at a different distance from the near-eye projection optics, so different depth planes are projected to different depths in front of the observer's eyes, allowing the observer to see magnified virtual objects with depth levels simultaneously. Although this patent document concerns depth, it does not address propagating the object-point distributions of different depths to the image plane, nor the excessive computation required to calculate an actual three-dimensional image.
  • Patent document CN201380050549.5 discloses a technical solution for generating an image from a light field using a virtual viewpoint, which calculates a virtual depth map based on the captured light field image data and virtual viewpoint, and based on the captured light field image data and virtual depth map Generate an image from a virtual viewpoint.
  • this patent document does not superpose the wavefronts of all points, so its computational cost is low and depth relationships are handled; however, it uses a camera to capture images and extracts the depth information of real-world scenes, then uses the acquired depth information.
  • the virtual viewpoints it describes are imaged as planar projections; occlusion, hidden-surface removal, and similar cues create a sense of depth, but the results are actually two-dimensional images without depth.
  • an object of the present invention is to provide an imaging method and an optical system, a storage medium, an electronic chip, and a combination thereof.
  • an image on a virtual object surface is propagated to a virtual optical modulation surface to obtain light field distribution information modulated by the virtual optical modulation surface.
  • An optical system provided according to the present invention includes:
  • an imaging control system, which propagates the image on the virtual object surface to the virtual optical modulation surface to obtain the light field distribution information modulated by the virtual optical modulation surface.
  • the propagating the image on the virtual object surface to the virtual optical modulation surface includes:
  • the sub-images on the individual blocks are propagated to all or part of the virtual optical modulation surface.
  • the virtual optical modulation plane is composed of one or more virtual optical planes; among a plurality of virtual optical planes, there may be different kinds of virtual optical planes or the same kind of virtual optical planes; virtual optical planes of different kinds differ in their calculated optical parameters.
  • each virtual optical surface corresponds to a part of a virtual object surface, respectively;
  • two or more virtual optical surfaces overlap in the spatial domain, and/or two or more virtual optical surfaces do not overlap in the spatial domain.
  • the propagating the image on the virtual object surface to the virtual optical modulation surface is specifically:
  • One virtual object plane corresponds to one virtual optical modulation plane, or multiple virtual object planes correspond to respective virtual optical modulation planes;
  • the images on the multiple virtual object surfaces are generated by splitting the same input image, and the light field distributions modulated by the respective virtual optical modulation surfaces corresponding to the multiple virtual object surfaces are displayed separately in time, or partially superimposed in display.
  • the images on the plurality of virtual object surfaces are respectively different parts of the same input image, and the images on the plurality of virtual object surfaces are equal to the input image after being superimposed.
  • the image on the virtual object surface has a set phase, where the set phase causes the energy to follow a set distribution pattern when the image is propagated to the virtual optical modulation surface, and/or causes the phase of the propagated light field to follow a set distribution pattern, such as a uniform distribution or a circular distribution of equipotential lines.
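One common choice of "set phase" (as a hedged illustration, not the only one the text allows) is a random diffuser-like phase, which tends to spread the image energy roughly uniformly over the modulation surface after propagation. A minimal sketch, assuming a real-valued intensity image as input:

```python
import numpy as np

def add_initial_phase(image, rng=None):
    """Attach a random initial phase to a real-valued intensity image.

    A random (diffuser-like) phase is one common realisation of a "set
    phase": after propagation it spreads the energy roughly uniformly.
    Other set distributions (e.g. a quadratic phase acting as a virtual
    lens) could be substituted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    amplitude = np.sqrt(np.asarray(image, dtype=float))  # intensity -> amplitude
    phase = rng.uniform(0.0, 2.0 * np.pi, size=amplitude.shape)
    return amplitude * np.exp(1j * phase)                # complex field
```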
  • any one or more of the following parameters are generated:
  • the propagation distance of the image on each block is fixed, and the propagation of the image on each block is calculated by convolving the image's intensity and phase distribution with a propagation function.
  • the calculation is performed by taking the fast Fourier/inverse Fourier transform of the intensity and phase distribution of the image on the block of the virtual object surface, point-multiplying it by the Fourier/inverse Fourier transform of the propagation function, and then taking the fast inverse Fourier/Fourier transform;
  • the Fourier/inverse Fourier transform of the propagation function is calculated and stored in advance.
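The FFT-convolution scheme above can be sketched with the angular-spectrum transfer function as the propagation function (one valid choice among those the text names; the grid size, pixel pitch, and wavelength below are illustrative assumptions):

```python
import numpy as np

def transfer_function(n, pitch, wavelength, z):
    """Angular-spectrum transfer function for a fixed distance z.

    Because the per-block propagation distance is fixed, this array can
    be computed once and stored in advance, as the text suggests.
    """
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms clamped
    return np.exp(1j * kz * z)

def propagate(field, H):
    """FFT of the block field, point-multiply by the stored transfer
    function, then inverse FFT -- the three steps named in the text."""
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A uniform plane-wave input, for instance, keeps unit magnitude after propagation, which is a quick sanity check on the transfer function.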
  • the propagation of the image on each block is calculated by multiplying the image's intensity and phase distribution by a first set phase distribution, performing an inverse Fourier/Fourier transform, and then multiplying by a second set phase distribution.
  • the first set phase distribution and the second set phase distribution are generated and stored in advance, or are calculated in real time (for example, generated based on a propagation distance).
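The single-transform scheme above matches the standard single-FFT Fresnel propagator, where the two "set phase distributions" are quadratic phases; the sketch below assumes a square grid and illustrative sampling parameters:

```python
import numpy as np

def fresnel_single_fft(field, pitch, wavelength, z):
    """Single-FFT Fresnel propagation: multiply by a first quadratic
    phase, take one FFT, then multiply by a second quadratic phase.

    Both phase arrays depend only on the geometry, so they can be
    generated and stored in advance or rebuilt per distance, as the
    text allows.
    """
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    q1 = np.exp(1j * k / (2.0 * z) * (X**2 + Y**2))        # first set phase
    pitch_out = wavelength * z / (n * pitch)               # output sampling
    u = (np.arange(n) - n // 2) * pitch_out
    U, V = np.meshgrid(u, u)
    q2 = np.exp(1j * k * z) / (1j * wavelength * z) \
         * np.exp(1j * k / (2.0 * z) * (U**2 + V**2))      # second set phase
    return q2 * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * q1)))
```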
  • the propagation of the object-surface image is calculated by first computing, or generating from pre-stored data, the light field distribution produced on a virtual optical surface by a single point on the object surface, recorded as the first light field distribution; from the first light field distribution, the second light field distribution is obtained, i.e. the light field produced on the virtual optical surface by each individual point of the object-surface image; the second light field distributions corresponding to the individual points are then superposed to calculate the propagation.
  • the object-surface image refers to an image on the virtual object surface;
  • points on the object surface with the same propagation distance share the same first light field distribution; for these corresponding points, the light field distribution on the virtual optical surface is multiplied by each point's intensity, or by its intensity and phase, to obtain the second light field distribution of each point on the object surface.
  • the first light field distribution is the light field on the virtual optical surface after an ideal point on the object surface travels a certain distance, where the ideal point is assumed to have an intensity of 1, a phase of 0, and object-surface coordinates of (0, 0);
  • the second light field distribution is the light field transmitted from an actual point on the object surface to the virtual optical surface; suppose the actual point has an intensity of 4, a phase of pi/2, and object-surface coordinates of (100, 50): the first light field distribution is multiplied by the intensity and phase of the actual point and its coordinates are translated accordingly, yielding the light field on the virtual optical surface of an actual object point propagating the same distance. Further, by superposing the second light field distributions of all actual points that propagate the same distance to the virtual optical surface, the light field that these object-surface image points produce on the virtual optical surface is obtained.
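The shift-scale-superpose procedure above can be sketched as follows (the point format and circular shift via `np.roll` are simplifying assumptions; a real implementation would pad the arrays instead of wrapping):

```python
import numpy as np

def superpose_points(points, psf, out_shape):
    """Superpose the stored single-point field (the "first distribution")
    for every object point sharing the same propagation distance.

    points: iterable of (row, col, intensity, phase) on the object plane.
    psf:    complex field of an ideal point (intensity 1, phase 0, at
            (0, 0)) propagated to the virtual optical surface.
    Each point's "second distribution" is the psf scaled by the point's
    complex amplitude and translated to the point's coordinates; the
    total field is the sum of all second distributions.
    """
    field = np.zeros(out_shape, dtype=complex)
    for r, c, amp, phi in points:
        shifted = np.roll(np.roll(psf, r, axis=0), c, axis=1)  # translate coords
        field += amp * np.exp(1j * phi) * shifted              # scale and add
    return field
```

For example, the actual point of the text (intensity 4, phase pi/2, coordinates (100, 50)) would contribute `4 * exp(1j * pi/2)` times the translated first distribution.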
  • the images on different blocks of the object plane are scaled separately, and the scaling ratio is generated according to the feature information or read directly from the feature information.
  • the gap is filled with 0 energy.
  • the virtual object plane and / or the virtual optical modulation plane is generated according to any one or more of the actual spatial light modulator parameters, the incident light wavelength, the optical device in the system, and the input characteristic information.
  • a virtual object plane and / or a virtual optical modulation plane is constituted in real time according to a result of eye tracking.
  • the modulated light field distribution information is encoded; the light field after propagation of the image on the virtual object surface is superposed with the virtual optical surface before encoding, or the propagated light field from the virtual object surface is encoded first and the optical distribution corresponding to the virtual optics is then superposed on the virtual optical surface; the encoding adopts any one or any combination of the following methods:
  • the encoded output is a hologram/phase map in phase-only format, which is output to a spatial light modulator for imaging;
  • the encoding directly discards the intensity information, retaining only the phase information and discretising it;
  • the encoding compensates the intensity or phase of the input information on the virtual object plane;
  • the encoding uses iterative methods that repeatedly calculate the propagation between the virtual object surface and the virtual optical surface;
  • the encoding uses time-division multiplexing to display the same image and/or sub-image using multiple sub-holograms.
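The simplest of the encoding options above — discard intensity, keep phase, discretise — can be sketched as follows (the 256-level quantisation assumes an 8-bit phase modulator, which is an illustrative assumption, not a requirement of the text):

```python
import numpy as np

def encode_phase_only(field, levels=256):
    """Encode a complex light field as a discretised phase-only hologram.

    The intensity is discarded; only the phase is kept and quantised to
    the modulator's available phase levels (256 assumed here for an
    8-bit device).
    """
    phase = np.mod(np.angle(field), 2.0 * np.pi)       # keep phase only
    step = 2.0 * np.pi / levels
    return np.round(phase / step).astype(int) % levels  # discretise
```

The iterative option in the list typically wraps a loop around such an encoder, re-propagating between the virtual object surface and the virtual optical surface and re-imposing the target intensity each round.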
  • a light source and a spatial light modulator are included; the light source outputs light to the spatial light modulator, and the spatial light modulator, under the control of the imaging control system, modulates the actual light field distribution according to the light field distribution information.
  • the spatial light modulator uses a phase modulation device, or the spatial light modulator uses a combination of a phase modulation device and an intensity modulation device.
  • the imaging control system includes a control circuit, wherein the control circuit is used to participate in calculation of information output to the spatial light modulator, control (drive) of the spatial light modulator, and / or control and adjust the light source.
  • a lens system is included, and the light field distribution modulated by the virtual optical modulation surface passes through the lens system to obtain an output image.
  • a virtual optical modulation surface is generated according to the actual imaging control system parameters to correct the aberrations generated by the lens and / or other optical components.
  • a waveguide device is further included, and the waveguide device is used to expand the exit pupil (eyebox) size and/or field-of-view size of the light field of the output image.
  • the aberrations generated by the waveguide device are corrected by different settings of the virtual optical modulation surface.
  • the waveguide device is an array type waveguide device mainly composed of a plurality of surfaces having different transmission / reflection rates; or the waveguide device is a waveguide device mainly composed of a diffractive or holographic device.
  • the lens system zooms the angle of the incident light beam of the light source, and / or enlarges or reduces the light field output by the spatial light modulator.
  • the entrance pupil of the waveguide device is smaller than the pupil size of the human eye, and the exit pupil of the light field output by the previous stage system of the waveguide device in the optical system is coupled with the entrance pupil of the waveguide device.
  • the light source includes a laser and / or a light emitting diode.
  • the light source further includes an optical fiber part, and the light energy emitted by the laser or the light emitting diode is coupled into the optical fiber and then guided to the spatial light modulator.
  • the light source further includes a beam-combining device, which uses any one or more of an X-prism, a dichroic mirror, and an optical fiber to combine the light beams emitted by light sources of different colors and output them to the spatial light modulator.
  • a stop (diaphragm) is also included, which blocks unnecessary portions of the light field.
  • a plurality of spatial light modulators are included, and the light fields restored by the plurality of spatial light modulators are superimposed to restore the target light field.
  • An optical system combination includes a plurality of the above-mentioned optical systems, and the plurality of optical systems are connected in parallel to output different light fields to the left and right eyes of a viewer to form a binocular parallax image, and / or output to a plurality of viewers.
  • the input information includes feature information in addition to the light intensity distribution information of the image; wherein the image on the virtual object surface is obtained by processing or not processing the image in the input information.
  • the input information includes one or more levels of a multi-level structure of frames, sub-frames, or molecular frames (sub-sub-frames), and the multi-level structure is organized according to feature information.
  • the feature information of the input information includes at least one of: a pixel object's imaging distance, angle, total brightness of the frame/sub-frame/molecular frame, number of sub-frames, number of molecular frames, left/right frame, receiving target, aberration parameters, scaling ratio, blanking relationship, and color;
  • the pixel object includes a pixel point and / or a pixel block.
  • the input information is external input, or is stored in the imaging control system, or is partially stored in the imaging control system, and part is input from the outside.
  • the virtual optical modulation surface and / or the virtual optical surface are generated based on the feature information or partly based on the feature information.
  • a computer-readable storage medium storing a computer program, which is executed by a processor to implement the steps of the above-mentioned imaging method.
  • an ASIC chip with integrated logic, whose program and/or circuitry implements the steps of the imaging method described above.
  • the energy intensity distribution of the input image is displayed on the virtual object surface and propagated to the virtual optical modulation surface; other characteristic information (such as imaging distance) is modulated by the virtual optical modulation surface; the light field distribution information of the virtual object surface after modulation by the virtual optical modulation surface is then encoded and output.
  • the input image information includes image energy intensity (light intensity distribution, gray information of each color), and any one or more kinds of characteristic information of each pixel imaging distance and viewing angle.
  • the input image is composed of a multi-level structure
  • the characteristic information elements that distinguish the levels include: pixel distance, receiving target, image angle, zoom ratio, image blanking relationship, left and right frames, image color, and total light intensity.
  • the input image is composed of a secondary structure of a frame and a subframe, or a tertiary structure of a frame, a subframe, and a molecular frame.
  • the virtual object plane and the virtual optical modulation plane of the corresponding stage are directly obtained from the multi-stage structure.
  • a spatial light modulator is used to display the calculated and encoded light field information output.
  • the spatial light modulator uses phase modulation.
  • the spatial light modulator is a silicon-based liquid crystal device.
  • the spatial light modulator is a combination of a phase modulation device and an intensity modulation device.
  • the input image may be divided into blocks corresponding to the respective virtual optical surfaces on the virtual optical modulation plane, so that the optical characteristics displayed by some blocks of the image are different from the optical characteristics of other blocks (for example, the distances are different).
  • the virtual optical surface corresponding to each segment is calculated and obtained according to the input feature information.
  • the light field modulated by the corresponding virtual optical surface of each segment of the same image can be displayed separately in time series, or several segments can be displayed at the same time, and other segments can be displayed at the same time at another time.
  • the accumulation of the display content of the blocks in the time domain is equal to the complete image, but the optical characteristics of each block can be different.
  • the blocks displayed sequentially in time may be grouped by display time: for example, blocks displayed simultaneously form one group, and gaps are left between the blocks of a group on the virtual object surface so that, when the blocks of the same group are propagated to the virtual optical modulation surface, their corresponding virtual optical surfaces do not overlap; the optical characteristics of the block patterns in the same group can therefore differ without interfering with each other.
  • the virtual optical surfaces corresponding to the blocks of the different groups may overlap, but at the same time, the virtual optical surfaces of each image displayed at the same time do not overlap.
  • the virtual optical surface can be used to adjust the imaging distance, the angle of the corresponding image, the aberrations generated by the optical system, the diopter of the human eye, and other optical characteristic information.
  • the optical system includes a light source, a spatial light modulator, and the imaging control system includes a control circuit.
  • the light source comprises a semiconductor laser.
  • the light source further comprises an optical fiber
  • the beam shaping is realized by coupling the light output by the semiconductor laser into the optical fiber and then outputting it to the spatial light modulator. It is also possible to introduce lasers of different wavelengths into the same optical fiber to combine the input light sources of different wavelengths.
  • the spatial light modulator uses a silicon-based liquid crystal device based on phase modulation.
  • the control system uses an FPGA, DSP, GPU, or ASIC chip to calculate the image propagation on the virtual object surface and the virtual optical modulation surface, and the encoding of the light field information.
  • the optical system further includes a lens or a lens group, and / or other optical components such as a diaphragm.
  • the optical system further includes a waveguide device for expanding the exit pupil (eyebox) of the system without reducing the field of view (FOV).
  • the waveguide device may be an arrayed waveguide composed of a plurality of reflective surfaces having different transmittances, or a waveguide device composed of a grating (diffraction-type devices HOE, DOE).
  • when the waveguide expands the pupil, the virtual optical surface corrects the aberrations caused by the differing propagation distances or angles from the image to each of the stitched pupils in the waveguide device.
  • an initial phase can be set on the image on the virtual object surface to make it have certain characteristics after being propagated to the virtual optical modulation surface, such as a uniform phase distribution, or a phase in a set distribution form, or a uniform intensity.
  • the light field information of the virtual object surface image, after propagation to the virtual optical modulation surface and modulation by the virtual optical surface, is encoded.
  • the above encoding may be phase-only encoding, such as discarding the intensity and retaining only the phase, or bi-phase encoding, or another optimized encoding method.
  • the optical device simulated by the virtual optical surface may be a simulated lens, a reflective surface, or a free-form surface device.
  • the light field output by the simulated optical device (the modulated output light field) can be further processed by superposition through a system mainly composed of solid optical devices (such as lenses) to obtain the output light field of the optical system.
  • alternatively, the light field output by the simulated optical device is used directly as the output light field of the optical system, without a solid lens or other optical device.
  • the pupil size of the human eye refers to 2-8 mm, such as 2 mm, 3 mm, 4 mm, 5 mm, 8 mm, and the like.
  • the present invention has the following beneficial effects:
  • the present invention improves the efficiency of image generation, realizes the conversion of holograms with a lower amount of computation, and realizes real-time holographic imaging.
  • the present invention reduces the power consumption of the system operation and reduces the configuration requirements for system hardware such as a computing chip and the spatial light modulator.
  • Figure 1, Figure 2, Figure 3, and Figure 4 are the four sub-images obtained by splitting the same input image.
  • in these figures, black squares represent the original pixel values of the input image, and white squares in the sub-images represent pixel values of 0.
  • Figure 5 shows the input image
  • Figures 6 and 8 are schematic diagrams of splitting images generated from multiple molecular frames in two sub-frames of the input image of Figure 5 into two sub-images, corresponding to two virtual object planes.
  • the black part is the gap with zero energy.
  • FIGS. 7 and 9 are virtual optical surfaces corresponding to the sub-images in FIGS. 6 and 8, respectively.
  • FIG. 10 is a schematic diagram of the same size of the virtual object plane and the same size of the virtual optical plane.
  • FIG. 11 is a schematic diagram showing that the blocks of the virtual object plane do not overlap in space, and the corresponding virtual optical planes overlap in space.
  • FIG. 12 is a schematic diagram of the virtual object planes having different block sizes and the virtual optical planes overlapping in space.
  • FIG. 13 shows that the parts of the virtual object plane completely overlap in space, but the corresponding feature information is different, so the virtual optical planes corresponding to the two object planes that overlap in space are different.
  • FIG. 14, FIG. 15, and FIG. 16 respectively show schematic diagrams of different optical systems.
  • An imaging method provided by the present invention propagates an image on a virtual object surface to a virtual optical modulation surface to obtain light field distribution information modulated by the virtual optical modulation surface.
  • the imaging control system can generate one or more virtual object surfaces from the input information (a frame, sub-frame, or molecular frame, or finer subdivisions).
  • the imaging control system divides the virtual object plane into blocks. For example, a sub-frame with a resolution of 1024 × 768 in an input image is divided into sub-images located on 192 blocks with a resolution of 64 × 64, and the sub-image on each block is propagated to the virtual optical modulation surface separately. The image contained in a subdivided sub-frame or molecular frame of an input frame is called a sub-image, and the size of a sub-image may be agreed to equal the block size; in that case, the input sub-frame or molecular frame corresponds directly to a block of the virtual object surface and no additional processing is required.
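The block division above (1024 × 768 into 192 blocks of 64 × 64) can be sketched as a simple tiling; the padding requirement for non-divisible sizes is an assumption, as the text does not specify how ragged edges are handled:

```python
import numpy as np

def split_into_blocks(image, block=64):
    """Split a virtual object plane into square sub-images.

    A 768 x 1024 sub-frame, for example, yields 12 x 16 = 192 blocks of
    64 x 64, matching the example in the text. Each block can then be
    propagated to the virtual optical modulation surface independently.
    """
    h, w = image.shape
    assert h % block == 0 and w % block == 0, "pad the image first"
    return [image[r:r + block, c:c + block]
            for r in range(0, h, block)
            for c in range(0, w, block)]
```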
  • the influence of each block propagation can be the entire virtual optical modulation surface, or only a part of the virtual optical modulation surface.
  • the size of the virtual optical surface in the virtual optical modulation surface corresponding to the propagation of a virtual object surface block is determined jointly by a series of parameters such as the actual pixel size of the spatial light modulator, the block propagation distance, and the corresponding incident light frequency.
  • the virtual object plane may lie before or after the virtual optical modulation plane, and the propagation may be forward (for example, the image of the virtual object plane propagates normally over a distance and forms a light field distribution on the virtual optical modulation plane) or reversed (for example, the light field distribution on the virtual optical modulation surface propagates over a distance to form a virtual object surface).
  • an R sub-frame in the input image frame contains an imaging distance of 100 ⁇ 200 pixels, which is 500 mm behind the virtual optical modulation surface.
  • the virtual object surface corresponding to the molecular frame image can be divided into 2 ⁇ 4 blocks with a resolution of 64 ⁇ 64, and the sub-images are transmitted at a distance of 100 mm before the virtual optical modulation surface of each block
  • and the imaging of each sub-image is then modulated by the corresponding virtual optical surface on the virtual optical modulation surface to a distance of 500 mm behind the virtual optical modulation surface.
  • the propagation can be calculated using the Fresnel transform / inverse Fresnel transform, the Fourier transform / inverse Fourier transform, angular-spectrum propagation, or directly using the Kirchhoff diffraction formula.
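For reference, the angular-spectrum method mentioned above can be implemented along these lines (a sketch under common conventions, not the patent's code; `pitch` is the sampling pixel size and all lengths are in metres):

```python
import numpy as np

def angular_spectrum_propagate(field, dist, wavelength, pitch):
    """Propagate a complex field by `dist` with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i * 2*pi/lambda * z * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * dist * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0  # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus for propagating components, the method conserves energy and forward/backward propagation round-trips exactly for band-limited fields.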
  • Alternatively, the light field distribution of a single object-surface point propagated to the virtual optical surface can be pre-stored or pre-computed and recorded as the first light field distribution. For object points at the same distance from the virtual optical surface, the coordinates of the first light field distribution are translated according to each object point, the object point intensity is used as a weight, and the translated first light field distributions are weighted and superimposed to calculate the propagation.
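The pre-computed single-point shortcut above can be sketched as follows (hypothetical helper; `np.roll` stands in for the coordinate translation, with wrap-around at the borders ignored for simplicity):

```python
import numpy as np

def field_by_shift_and_add(psf, points):
    """Superpose translated copies of a pre-computed single-point field.

    psf    : complex field of one object point propagated to the virtual
             optical surface (the "first light field distribution").
    points : iterable of (dy, dx, intensity) for object points that share
             the same object-to-surface distance.
    """
    total = np.zeros_like(psf)
    for dy, dx, inten in points:
        total += inten * np.roll(psf, shift=(dy, dx), axis=(0, 1))
    return total
```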
  • the distance over which the sub-images on each virtual object surface propagate to the virtual optical modulation surface can be the same or different, or the propagation distance can be set to 0 so that the sub-image is displayed directly on the virtual optical modulation surface.
  • Each virtual optical surface on the virtual optical modulation surface corresponds to a block of the virtual object surface belonging to the same frame / sub-frame / sub-sub-frame.
  • the sub-images on different blocks of the virtual object plane can be scaled separately to compensate for deviations from the actual scene size that may occur after the image is modulated to different distances.
  • the zoom ratio can be determined from the corresponding virtual optical modulation surface and other parameters, for example from the imaging distance to which the virtual optical modulation surface modulates the image, or jointly from that imaging distance, the parameters of the spatial light modulator, the frequency of the light source, and the parameters of other optical systems.
  • this ratio can also be calculated in advance and stored in the feature information of the input information, so that the system need not compute it in real time but simply reads the corresponding information and scales accordingly.
  • the image can also be scaled when the input information is generated, in which case the system need not scale the sub-images.
  • this deviation is related to the actual optical system used. The actual optical system may also be insensitive to, or unaffected by, the size of the actual image after the virtual optical surface modulates the object surface; in this case the different sub-images on the object surface need not be zoom-adjusted.
  • Each virtual optical surface can modulate the sub-image on its corresponding virtual object surface block according to the input information, for example by adjusting the imaging distance of the image on the corresponding block by simulating a lens, and/or adjusting the viewing angle of the image on the corresponding block, and/or correcting the aberration of the corresponding block image (the aberration of each block may be the same or different); it can also compensate the incident illumination light field distribution.
  • Virtual optical surfaces can be generated using Seidel polynomials or Zernike polynomials.
  • the virtual optical surface can correct aberrations, for example with a phase distribution generated from Zernike polynomials.
  • the corrected optical aberrations include spherical aberration, coma, astigmatism, field curvature, distortion, chromatic aberration, higher-order aberrations, or aberrations described by other mathematical expressions.
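As an illustration of generating a correction phase from Zernike polynomials (a minimal sketch with only two low-order terms; the patent's exact polynomial set is not specified here):

```python
import numpy as np

def zernike_phase(n_pix, coeffs):
    """Build a correction phase map from a few low-order Zernike terms.

    coeffs maps term name -> coefficient (radians); only defocus and
    0-degree astigmatism are sketched, with the unit disc spanning the array.
    """
    y, x = np.mgrid[-1:1:1j * n_pix, -1:1:1j * n_pix]
    rho2 = x ** 2 + y ** 2
    phase = np.zeros((n_pix, n_pix))
    phase += coeffs.get("defocus", 0.0) * (2 * rho2 - 1)   # Z(2, 0)
    phase += coeffs.get("astig0", 0.0) * (x ** 2 - y ** 2) # Z(2, 2)
    phase[rho2 > 1] = 0.0  # zero outside the unit pupil
    return phase
```

Such a map would be added to (or multiplied into, as a complex exponential) the phase of the corresponding virtual optical surface.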
  • the virtual object surface and the virtual optical modulation surface may also adopt a time-division multiplexing method so that the virtual optical surfaces on the virtual optical modulation surface corresponding to the blocks on the virtual object surface do not overlap. For example, if the virtual optical surface corresponding to a propagated block of 32×32 resolution has a resolution of 64×64, the frame / sub-frame / sub-sub-frame corresponding to the virtual object surface can be split into 4 sub-images whose information sums to the original input image. The blocks in each sub-image still have a resolution of 32×32, but each block displayed at a given moment is surrounded by a 16-pixel gap on all four sides (for example, a 16-pixel gap on the left of one block and the 16-pixel gap on the right of the block adjacent to its left, displayed at the same time, together span exactly one 32-pixel block, so that the simultaneously displayed virtual optical surfaces do not overlap).
  • For example, an image of 1920×1152 can consist of 30×18 sub-blocks of 64×64.
  • Each sub-block of the virtual object contains 64×64 pixels of original information; in actual display it is divided into 4 sub-frames / sub-sub-frames displayed at different times.
  • each sub-frame / sub-sub-frame contains 30×18 blocks of 64×64.
  • the size of the virtual optical surface can be reduced by properly setting the block propagation distance and/or appropriately cutting off some high-frequency information (for example, making the virtual object surface block size equal to the virtual optical surface size), and by arranging the virtual optical surface distribution in a better way, thereby reducing the number of time-multiplexed sub-images or imaging without time multiplexing at all.
  • in that case the light field propagated by each block to the virtual optical modulation surface carries little or no high-frequency information, which reduces the size of its corresponding virtual optical surface and therefore the number of sub-images required in time-division multiplexing, or allows imaging without time-division multiplexing.
  • this applies, for example, when the final image has a long imaging distance.
  • it is also possible to set parts of the virtual object surface to zero energy so that the corresponding regions have zero energy after propagating to the virtual optical modulation surface, and then superimpose onto them copies of the surrounding blocks (for example, the parts that carry energy) to increase the exit-pupil area of the corresponding block image.
  • the light field in the zero-energy regions is not actually calculated; it merely copies the surrounding light field distribution, thus improving the image quality.
  • propagating only the parts of the object surface with non-zero energy saves computation.
  • the coded output can be a hologram / phase chart in a pure phase format and output to a spatial light modulator for imaging.
  • the control circuit synchronously controls the color and intensity of the output of the light source to match the corresponding hologram / phase chart on the spatial light modulator.
  • Encoding can directly discard the intensity information, retain only the phase information and adopt a discretization method.
  • Encoding can use phase information on the virtual object surface to make the transmitted light field meet the set phase distribution or intensity distribution (for example, the intensity or phase distribution is more uniform) to optimize the display effect.
  • Coding can optimize the display effect by compensating the intensity or phase of the input information on the virtual object surface.
  • the encoding can use the double-phase method, with the formula:
  • φ1_xy is the phase of the point on an odd row / column of the corresponding virtual optical surface
  • φ2_xy is the phase of the point on an even row / column (the points corresponding to φ1_xy and φ2_xy can also be arranged in a checkerboard pattern)
  • φ_xy is the phase of the light field after the object plane is propagated, or after it is propagated and modulated by the virtual optical surface
  • φz_xy is the phase distribution of the illumination light field at the corresponding position
  • a_xy_max is the maximum amplitude of the light field after object-surface propagation, or after propagation and modulation by the virtual optical surface
  • the subscripts x and y are the coordinate values of that light field, equivalent to the x and y directions of the relevant point on the virtual optical modulation surface or the virtual object surface.
  • a single spatial light modulator can be used to display the two phases and restore the light field, for example using the aforementioned method of odd/even rows or columns.
  • two or more phase-only spatial light modulators can also be used, each modulating one of the two phase distributions; after synthesis by the optical system, the complete light field is restored. In this case φ1_xy is displayed alone on one spatial light modulator and φ2_xy alone on another, and there is no need to distinguish odd/even rows / columns.
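Since the double-phase formula itself is not reproduced in this text (the equation image is missing), the following is the textbook double-phase decomposition for reference, with `theta1`/`theta2` playing the role of φ1_xy/φ2_xy; it is an assumption that the patent uses this exact variant:

```python
import numpy as np

def double_phase(field, illum_phase=0.0):
    """Textbook double-phase decomposition of a complex field.

    Returns two phase maps such that
    (exp(i*theta1) + exp(i*theta2)) / 2 = (a / a_max) * exp(i*phi).
    """
    a = np.abs(field)
    phi = np.angle(field) - illum_phase          # subtract illumination phase
    delta = np.arccos(np.clip(a / a.max(), -1.0, 1.0))
    return phi + delta, phi - delta

def interleave_columns(theta1, theta2):
    """Put theta1 on odd columns and theta2 on even columns of one frame."""
    out = theta2.copy()
    out[:, 1::2] = theta1[:, 1::2]
    return out
```

With two phase-only modulators, `theta1` and `theta2` would instead each be displayed whole on one device and combined optically.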
  • Coding can use complex error diffusion
  • the light field propagated from the virtual object surface can be superimposed with the virtual optical surface and then encoded, or it can be encoded first and the optical distribution corresponding to the virtual optical device on the virtual optical surface superimposed afterwards (the optical distribution of a virtual optical device is usually a pure phase distribution and can be superimposed directly without re-encoding).
  • the superposition can be a direct addition of the phase φ_xy, or a dot multiplication of the complex phase distributions (the two are mathematically equivalent).
  • the coding can also retain the intensity and phase.
  • Two or more spatial light modulators are used on the device to modulate the intensity and phase respectively, and the light field is restored after being synthesized by the optical system.
  • for example, an intensity spatial light modulator (such as an intensity-modulating LCoS, LCD or OLED device) combined with a phase-modulating spatial light modulator (such as a transmissive or silicon-based liquid-crystal phase-only device) restores the intensity distribution and phase distribution of the light field, with the phase-modulating spatial light modulator placed after the intensity-modulating one.
  • the above-mentioned functions can also be realized by making the intensity modulated spatial light modulator and the phase modulated spatial light modulator into one device through a device design and manufacturing process.
  • the coding can use time division multiplexing to display the same sub-image using multiple sub-holograms to reduce image speckle and improve imaging quality.
  • The coding control circuit can use a GPU, FPGA, DSP or CPU, or a custom-developed ASIC chip.
  • An optical system provided by the present invention includes an imaging control system: propagating an image on a virtual object surface to a virtual optical modulation surface to obtain light field distribution information modulated by the virtual optical modulation surface.
  • the imaging control system can use electronic components to receive the input signal and convert the input signal into a hologram / phase chart output according to the imaging method provided by the present invention.
  • the image on the virtual object surface is propagated to the virtual optical modulation surface to obtain the light field distribution information modulated by the virtual optical modulation surface, and the modulated light field distribution is encoded and output as a hologram / phase chart.
  • the input information received by the imaging control system includes the color gray-scale information of the image pixels in the received input image, together with feature information such as distance / depth information of the pixels and/or viewing-angle information of the pixels.
  • the input information received by the imaging control system can be organized according to a multi-level structure.
  • each frame contains one or more sub-frames, and the sub-frames can be further subdivided into one or more sub-sub-frames.
  • the plurality of sub-images at each level are processed separately and holograms / phase charts are generated.
  • the classification of the input information may be distinguished according to characteristics such as distance, color, and viewing angle.
  • for example, the first-level information divides a frame of the image into three sub-frames according to the three RGB colors,
  • and the second-level information divides each sub-frame, according to distance, into multiple sub-sub-frames with different imaging distances. For example, the R sub-frame contains three sub-sub-frames with different imaging distances, the G sub-frame contains one, and the B sub-frame contains four.
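The frame / sub-frame / sub-sub-frame hierarchy can be represented with a small data structure (a hypothetical sketch; the image payloads and distances shown are illustrative only):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubSubFrame:          # rendered "molecular frame" by the machine translation
    image: object           # pixel data for one imaging distance
    distance_m: float       # imaging distance carried as feature information

@dataclass
class SubFrame:
    color: str              # e.g. "R", "G", "B"
    children: List[SubSubFrame] = field(default_factory=list)

@dataclass
class Frame:
    subframes: List[SubFrame] = field(default_factory=list)

# The RGB example above: R with 3 distances, G with 1, B with 4
# (the specific distance values are invented for illustration).
frame = Frame([
    SubFrame("R", [SubSubFrame(None, d) for d in (0.2, 1.0, 10.0)]),
    SubFrame("G", [SubSubFrame(None, 2.0)]),
    SubFrame("B", [SubSubFrame(None, d) for d in (0.1, 0.5, 1.0, 5.0)]),
])
```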
  • the optical system can also include spatial light modulators, light sources, lens systems, diaphragms, and other optical devices
  • the spatial light modulator can be a pure phase modulation device manufactured by a silicon-based liquid crystal process.
  • a system can include one or more spatial light modulators.
  • the light source can be laser or LED
  • the light source can use one or more semiconductor lasers or LEDs. When multiple lasers or LEDs are used, they can be combined and output to the spatial light modulator through a combiner;
  • the light source may also include a collimation system, which collimates or enlarges or reduces the beam angle and outputs the beam to the spatial light modulator;
  • the combiner can be a combining prism, or one or more dichroic mirrors, or a fiber coupling method
  • the optical path system may also include a lens system to scale the output light field of the spatial light modulator (such as a telescope system, or an imaging lens, etc.);
  • the optical path system may also include waveguides and / or diffractive optics to achieve pupil dilation and / or expansion of the field of view of the output image;
  • the optical path system may also include an array waveguide to achieve pupil dilation and / or expansion of the field of view of the output image;
  • the optical path system can also include grating waveguides (including HOE and DOE devices);
  • the optical path system may also include a lens array to achieve pupil expansion and / or expansion of the field of view of the output image;
  • through proper design, the 0th order and/or the extra diffraction orders generated by the hologram can be guided outside the pupil, or the incident angles of this stray light can be made not to meet the coupling condition of the waveguide, so that the image output by the waveguide does not contain ghosts caused by 0th-order stray light and/or unwanted diffraction orders;
  • a diaphragm can also be added to the optical path system to block the 0th order and / or the unnecessary diffraction order;
  • a diaphragm can also be added to the optical system to block the noise (the excess image portion) caused by the bi-phase encoding, or, in cooperation with the waveguide, the noise generated by the bi-phase encoding can be guided outside the entrance pupil of the waveguide, or its angle of incidence made not to meet the coupling conditions of the waveguide, thereby filtering out the noise due to the bi-phase encoding.
  • two or more spatial light modulators can be used in the optical system to restore the output light field on the virtual optical modulation surface (for example, two spatial light modulators, each displaying one part of the bi-phase code, synthesized by the combining device into the full output light field).
  • in one example, each frame of the input image has a resolution of 800×600, and each frame contains two sub-frames with different imaging distances; one sub-frame is recorded as the first sub-frame and the other as the second sub-frame.
  • the imaging control system sets the block size of the virtual object surface to 50×50, so the first sub-frame occupies 2×3 blocks and the second sub-frame occupies 4×2 blocks; the distance between the virtual object surface and the virtual optical modulation surface is set to 0.1 meters, and the size of the virtual optical surface corresponding to each block is consistent with the block size.
  • for the sub-frame with an imaging distance of 10 meters, the lens simulated by the virtual optical surface corresponding to each block of its virtual object plane is set to a focal length that images the block at the 10-meter position, and the position of the block image on the actual imaging plane can be adjusted by changing the position of the center of the simulated lens corresponding to each block, so as to achieve a better imaging effect.
  • for the sub-frame with an imaging distance of 0.2 meters, the lens simulated by the virtual optical surface corresponding to each virtual object plane block is set to a focal length that forms the image at the 0.2-meter position.
  • the encoded result is output to the spatial light modulator, and the imaging control system synchronizes the light source to illuminate the spatial light modulator and output the image light field.
  • the virtual optical modulation plane modulates the light field propagating to it from the object plane by point-wise multiplication with the complex amplitude of the virtual optical surface, or by addition of the phase.
  • the object image displayed at 10 meters and the object image at 0.2 meters can be scaled according to the actual optical system. For example, the original image of the object at 0.2 meters is enlarged by 10% in the X direction and 9% in the Y direction, and the image at 10 meters is reduced by 5% in the X direction and 4% in the Y direction (the magnification and reduction ratios in the X and Y directions can also be the same, as determined by the actual optical system), so that the virtual image is consistent with its actual size in space.
  • the light source is a semiconductor laser whose output, after passing through a collimating lens, reaches the spatial light modulator as parallel light (angled incident light can also be used to change the output field of view).
  • the optical system may also include a waveguide system.
  • the exit pupil of the spatial light modulator output image is coupled to the entrance pupil of the waveguide system.
  • the unwanted diffraction orders are blocked from the entrance pupil of the waveguide, or their angles do not meet the input requirements of the waveguide and they are filtered out; the waveguide system expands the pupil of the output image so that viewers can view the image over a larger range.
  • the virtual optical surfaces corresponding to different positions of the virtual object surface can also perform aberration compensation according to the actual distance of each exit-pupil image in the waveguide; the compensation can be generated by Zernike polynomial or Seidel polynomial calculation.
  • In another example, each frame of the input image has a resolution of 640×480.
  • Each frame contains six sub-frames: three corresponding to the RGB colors for the left eye and three corresponding to the RGB colors for the right eye.
  • Each sub-frame contains a number of sub-sub-frames with different imaging distances.
  • for example, a left-eye G sub-frame contains a sub-sub-frame with an imaging distance of 1 meter, which holds one object image with a resolution of 128×160, and a sub-sub-frame with an imaging distance of 0.1 meters, which holds an object image with a resolution of 60×80 and an object image with a resolution of 200×64.
  • the imaging control system generates a virtual object surface; according to the positions of these three object images in the sub-sub-frame, they are arranged on the corresponding virtual object surface and divided into 4×10, 2×5 and 7×4 blocks, each block containing 32×16 pixels.
  • the imaging control system propagates each of the 4×10 + 2×5 + 7×4 = 78 blocks by 10 cm; this can be calculated by the Fresnel transform or the spatial angular-spectrum propagation method, or by the convolution of the object-surface light field with the propagation function.
  • the convolution can be calculated by the fast Fourier transform; alternatively, the image intensity and phase distribution can be multiplied by a first set phase distribution, fast Fourier / inverse Fourier transformed, and then multiplied by a second set phase distribution, which speeds up the calculation.
  • when the propagation distance of all blocks is set to a fixed value, for example 10 cm, the propagation function and/or the set phase distributions need only be calculated once, stored, and reused for every block, which effectively saves computing time.
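The two-phase-multiplication recipe above (multiply by a first set phase, transform, multiply by a second set phase) corresponds, under common conventions, to single-FFT Fresnel propagation. A hedged NumPy sketch (function name and sampling choices are mine, not the patent's; both chirps depend only on z, λ and the pitch, so they can be pre-computed and reused for every block):

```python
import numpy as np

def fresnel_single_fft(field, z, wl, pitch):
    """Single-FFT Fresnel propagation: pre-chirp, FFT, post-chirp."""
    n = field.shape[0]                        # assume a square field
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    pre = np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wl * z))   # first set phase
    out_pitch = wl * z / (n * pitch)                           # output sampling
    u = (np.arange(n) - n // 2) * out_pitch
    U, V = np.meshgrid(u, u)
    post = np.exp(1j * np.pi * (U ** 2 + V ** 2) / (wl * z))  # second set phase
    return post * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * pre)))
```

Note the output plane has a different pixel pitch than the input, which is one reason block-wise scaling (discussed earlier) may be needed.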
  • the size of the virtual optical surface corresponding to each block on the virtual optical modulation surface is set to 48×24 pixels, and the bi-phase encoding method for a single spatial light modulator is used, which expands the corresponding virtual optical surface to 48×48 pixels.
  • the calculation method is
  • φ1_xy is the phase of the odd-column pixels of the encoded light field after object-plane propagation
  • φ2_xy is the phase of the even-column pixels of the encoded light field after object-plane propagation
  • φ_xy is the phase of the light field after object-plane propagation
  • the phase distribution of the illumination light field at the corresponding position is a constant, and its amplitude distribution is all 1
  • a_xy_max is the maximum amplitude of the light field after object-surface propagation.
  • the 78 corresponding virtual optical surfaces of 48×48 pixels are generated on the virtual optical modulation plane. On each virtual optical surface corresponding to a virtual object plane block with an imaging distance of 1 meter, a virtual lens with a focal length of 100/9 cm is generated (the center of the virtual lens can be offset to translate the corresponding virtual object surface block on the final imaging plane, so as to better control its position); the virtual optical surfaces corresponding to blocks with an imaging distance of 0.1 meters perform no distance modulation.
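The 100/9 cm focal length follows from the thin-lens equation, taking the propagated object plane 10 cm in front of the modulation surface and the required virtual image 1 m behind it (sign convention: distances on the object side negative):

```latex
\frac{1}{v}-\frac{1}{u}=\frac{1}{f},\qquad
u=-10\,\text{cm},\quad v=-100\,\text{cm}
\;\Longrightarrow\;
\frac{1}{f}=-\frac{1}{100}+\frac{1}{10}=\frac{9}{100},
\qquad f=\frac{100}{9}\,\text{cm}\approx 11.1\,\text{cm}.
```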
  • the phase distribution of the generated virtual optical modulation surface is superimposed with the previously calculated, bi-phase-encoded light field distribution of the blocks propagated from the virtual object surface. The phase distribution of the lens simulated by the corresponding virtual optical surface can be generated by Zernike or Seidel polynomial calculation; ang() is a complex-angle operation, and Q{} is a discretization operation, for example rounding the data to 64 discrete values between 0 and 2π; h_xy represents the light field distribution output to the spatial light modulator.
  • the output resolution is 960×1440, which can be output to a spatial light modulator with a resolution of at least 960×1440 for imaging.
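The Q{} discretization step (rounding phases to 64 levels in [0, 2π)) can be sketched as follows (hypothetical helper):

```python
import numpy as np

def quantize_phase(phase, levels=64):
    """Round a phase map to `levels` discrete values in [0, 2*pi)."""
    wrapped = np.mod(phase, 2 * np.pi)
    step = 2 * np.pi / levels
    # round to the nearest level; a value rounding up to 2*pi wraps back to 0
    return np.mod(np.round(wrapped / step), levels) * step
```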
  • the order of the above steps can also be adjusted as needed.
  • for example, the light field after object-plane propagation is not encoded first; instead, the output light field is bi-phase encoded after being superimposed with the light field distribution of the corresponding virtual optical surface.
  • a solid lens can also be added to the system, for example a convex lens with a focal length of 10 cm placed 5 cm from the spatial light modulator.
  • in this case, the object surface corresponding to an imaging distance of 0.1 m can be displayed directly on the spatial light modulator.
  • the virtual optical surface corresponding to an imaging distance of 1 meter then needs to simulate a concave lens with a focal length of 39.9091 cm, or the image of the 1-meter object surface can be propagated a distance of 4.0909 cm to the spatial light modulator without simulating a lens. If there are other virtual object surfaces at different distances, they can likewise be set to propagate 4.0909 cm and then be modulated by an analog lens on the corresponding virtual optical surface. The advantage is that the number of virtual optical surfaces is reduced and the spread of focal lengths among the simulated optical devices is small, which is easier to implement with actual physical devices and also makes it easier to add an aperture behind the focal point of the lens to filter stray light.
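The 4.0909 cm propagation distance is consistent with the thin-lens equation if the 1 m target distance is assumed to be measured from the physical lens (f = 10 cm, placed 5 cm from the spatial light modulator):

```latex
\frac{1}{u}=\frac{1}{v}-\frac{1}{f}
          =-\frac{1}{100}-\frac{1}{10}=-\frac{11}{100}
\;\Longrightarrow\;
u=-\frac{100}{11}\,\text{cm}\approx-9.0909\,\text{cm}.
```

That is, the virtual object must appear 9.0909 cm before the lens; since the spatial light modulator sits 5 cm before the lens, the field is propagated 9.0909 - 5 = 4.0909 cm beyond the modulator plane.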
  • a time division multiplexing method is used to generate a virtual object plane and a virtual optical modulation plane.
  • the images generated by the two sub-sub-frames in the aforementioned sub-frame are split into two sub-images, corresponding to two virtual object surfaces (each virtual object surface can contain both part of the propagated 0.1 m image and part of the propagated 1 m image),
  • the size of each block on the virtual object surface is still 32×16, but the 16×8 pixels around each block are 0 (as shown in Figures 6 and 8); each sub-image is propagated the corresponding virtual distance of 10 cm.
  • the resolution of the virtual optical surface corresponding to each block is 48×24 (this resolution can also vary with the modulation parameters; for example, the virtual optical surface corresponding to a block whose imaging distance after modulation is 1 meter keeps a resolution of 48×24, while that of a block whose imaging distance after modulation is 0.1 meters is adjusted to 40×20).
  • the light field is superimposed with the modulation phase on the virtual optical surface (different distances can be modulated for different blocks), and the resolution after bi-phase encoding is 48×48, as shown in Figures 7 and 9.
  • the holograms corresponding to the sub-images are displayed one after another, and the image finally seen by the viewer, superimposed through the persistence of human vision, is consistent with the input image (Figure 5).
  • the virtual optical surfaces corresponding to the sub-images displayed at the same time will not interfere with each other.
  • holograms displayed at different times can be coded differently; for example, bi-phase coding is used for the holograms displayed at the first moment, while the holograms displayed at the second moment use the discard-intensity coding method.
  • the advantage is that the most suitable encoding method can be chosen according to the characteristics of the image displayed at each moment.
  • the propagation distance of all blocks on the virtual object surface can also be set to infinity, so that the fast Fourier or inverse Fourier transform can be used to obtain the light field distribution after propagation, and the imaging distance is then modulated through the corresponding virtual optical surface.
  • a beam of light with a certain divergence angle can be used.
  • for example, a monochromatic semiconductor laser is used in the illumination system, directly obliquely incident on the spatial light modulator, with the beam-waist position of the light spot covering the spatial light modulator.
  • collimated parallel-light illumination can also be used, for example using a lens to collimate the beam emitted by a semiconductor laser.
  • the light source can also use one or more R, G, and B three-color lasers, which are combined by an X prism or a dichroic mirror and output to a spatial light modulator (collimated or with a certain divergence angle).
  • the light source can also use fiber output (such as single-mode polarization-maintaining fiber): the light emitted by a single laser, multiple lasers or narrow-band LEDs is coupled into the fiber, and the fiber output then directly illuminates the spatial light modulator, or is output to the spatial light modulator after collimation by a lens.
  • the light source output to the spatial light modulator can be directly incident on the spatial light modulator in an oblique incidence manner, or a device such as BS or TIR or PBS can be used to import the output light to the spatial light modulator.
  • the light field emitted by the spatial light modulator can be output directly for viewing by human eyes, or output to the human eye after passing through an optical system.
  • a waveguide can also be added to the optical system for pupil expansion.
  • the above TIR or BS device can also be integrated into the waveguide as a whole: the light source outputs directly to the TIR or BS device combined with the waveguide, which directs the light to the spatial light modulator; the modulated light field output by the spatial light modulator is coupled into the waveguide and output to the human eye after pupil expansion.
  • the modulated light field output by the spatial light modulator can also be enlarged or reduced by a lens system (or modulated by the virtual optical surface without using an actual lens), reducing or enlarging the exit pupil so that the input coupling angle of the waveguide is matched; the light is then introduced into the entrance pupil of the waveguide and output to the viewer after pupil expansion.
  • when non-collimated light is used to illuminate the spatial light modulator (such as direct illumination with a semiconductor laser or fiber), the light-emitting point (the emitting point of the semiconductor laser or the exit end face of the fiber) can be set at an appropriate position; by setting the distance of the virtual object plane and the parameters of the virtual optical plane, the angle and exit pupil of the light field output by the spatial light modulator can be adjusted directly, allowing it to be coupled into the waveguide without an optical system, or the above structure and the waveguide can be integrated in one device, thereby reducing the volume of the system.
  • the imaging control system can use GPU or FPGA calculations, and can perform parallel calculations on multiple blocks to increase the operation speed.
  • the imaging control system can also use the DSP for calculations.
  • the imaging control system can also use custom-developed ASIC chips for calculations.
  • the imaging control system can be combined with the CPU to obtain video information through the operating system.
  • The system, device and each module provided by the present invention can be realized by logically programming the method steps so that the same functions are implemented in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, and embedded microcontrollers. Therefore, the system, device and their modules provided by the present invention can be regarded as a hardware component, and the modules included in it for implementing various programs can also be regarded as structures within the hardware component; modules for implementing various functions can be regarded both as software programs that implement the method and as structures within the hardware component.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Holography (AREA)

Abstract

The present invention provides an imaging method and an optical system, together with a storage medium, a chip and combinations thereof. An image on a virtual object plane is propagated to a virtual optical modulation plane to obtain the light-field distribution information modulated by the virtual optical modulation plane. Specifically, the virtual object plane is partitioned into multiple blocks; the input image is partitioned into sub-images located on those blocks; and the sub-image on each block is propagated onto all or part of the virtual optical modulation plane. The invention improves image-generation efficiency, converts images into holograms at low computational cost, and achieves real-time holographic imaging; it reduces the power consumption of system computation and lowers the configuration requirements on computing chips and other system hardware as well as on the spatial light modulator; and, while greatly improving computational efficiency, it preserves the depth information of each pixel in the image.

Description

Imaging Method and Optical System, and Storage Medium, Chip and Combination Thereof

Technical Field
The present invention relates to the field of imaging, and in particular to an imaging method and an optical system, and to a storage medium, a chip and a combination thereof.
Background Art
Holographic display forms images through the principle of interference and diffraction and can faithfully reconstruct light-field information, thereby achieving a light-field display with depth and angle and holding the potential for true 3D display. However, existing computer-generated holography methods for producing holograms have limitations. For example, one prior approach computes, for every object point at each depth, its propagation to the image plane and then superposes the wavefronts of all points, which makes the computation so heavy that real-time operation is infeasible on current hardware; another applies a fast Fourier transform directly to the whole image, which loses the depth relations among the pixels.
Patent document CN201710036146.5 discloses a near-eye holographic display system and method. Coherent illumination emitted by its illumination device strikes a diffractive device loaded with a hologram; the diffractive device modulates the coherent light according to the loaded hologram; the modulated diffracted wave diffracts in space and, within a certain distance, builds a three-dimensional holographic reconstructed image carrying depth information. Because the reconstructed image carries depth information, its depth planes lie at different distances from the near-eye projection optics, so different depth planes are projected to positions at different depths from the viewer's eye, allowing the viewer to see a magnified virtual object with layered depth. Although this document pays attention to depth, it does not address propagating object points of each different depth to the image plane, nor the excessive computational cost of actual three-dimensional image calculation.
Patent document CN201380050549.5 discloses a technique for generating an image from a light field using a virtual viewpoint: a virtual depth map is computed from captured light-field image data and the virtual viewpoint, and an image is generated from the perspective of the virtual viewpoint based on the captured light-field image data and the virtual depth map. Although this document does not involve the wavefronts of all points, keeps the computational load low, and does handle depth relations, it captures images with a camera, extracts the depth information of the real-world scene, and then uses that depth information to render, from the described virtual viewpoint, a planar projection with occlusion culling and similar cues added. This gives a certain sense of depth, but the result is in fact a two-dimensional image without real depth.
Summary of the Invention
In view of the deficiencies of the prior art, the object of the present invention is to provide an imaging method and an optical system, and a storage medium, an electronic chip and a combination thereof.
According to the imaging method provided by the present invention, an image on a virtual object plane is propagated to a virtual optical modulation plane to obtain the light-field distribution information modulated by the virtual optical modulation plane.
An optical system provided according to the present invention comprises:
an imaging control system, which propagates an image on a virtual object plane to a virtual optical modulation plane to obtain the light-field distribution information modulated by the virtual optical modulation plane.
Preferably, propagating the image on the virtual object plane to the virtual optical modulation plane comprises:
partitioning the virtual object plane into multiple blocks;
partitioning the input image into sub-images located on the respective blocks; and
propagating the sub-image on each block onto all or part of the virtual optical modulation plane.
优选地,所述虚拟光学调制面由一个或多个虚拟光学面构成;在所述多个虚拟光学面中,具有不同种类的虚拟光学面或者相同种类的虚拟光学面;不同种类的虚拟光学面,计算得到的光学参数不同。
优选地,每一个虚拟光学面分别对应虚拟物面的一部分;
所述多个虚拟光学面之间,存在两个以上的虚拟光学面在空域上叠加和/或存在两个以上的虚拟光学面在空域上不叠加。
优选地,所述将虚拟物面上的图像传播至虚拟光学调制面,具体为:
一个虚拟物面对应一个虚拟光学调制面,或者多个虚拟物面分别对应各自的虚拟光学调制面;
其中,所述多个虚拟物面上的图像由同一输入图像拆分生成得到,并且所述多个虚拟物面分别对应的各自的虚拟光学调制面调制后的光场分布采用在时间上分别显示或部分分别显示的方式进行叠加。
优选地,所述多个虚拟物面上的图像分别为同一输入图像的不同部分,所述多个虚拟物面上的图像叠加后等于所述输入图像。
优选地，虚拟物面上的图像具有设置的相位，其中，所述设置的相位使得虚拟物面上的图像传播至虚拟光学调制面时能量成设定的分布模式，和/或所述设置的相位使得虚拟物面上的图像传播至虚拟光学调制面时的光场的相位成设定的分布模式，例如均匀分布或等位线为圆形的分布。
优选地,根据实际空间光调制器参数、入射光波长、光学器件中的任一者或者任多者,生成如下任一种或任多种参数:
-分块的大小及分块对应的虚拟光学面的大小;
-虚拟物面到对应的虚拟光学调制面的距离;
-虚拟光学面所模拟的光学器件的参数。
优选地,每个分块上的图像的传播距离固定,每个分块上的图像的传播采用图像强度及相位分布卷积传播函数来计算。
优选地,通过快速傅立叶/傅立叶逆变换虚拟物面的分块上的图像的强度及相位分布,点乘传播函数的快速傅立叶/傅立叶逆变换,再快速傅立叶逆/傅立叶变换来得到计算结果;传播函数的傅立叶/傅立叶逆变换预先计算并存储。
优选地,每个分块上的图像的传播采用图像强度及相位分布,乘以第一设定相位分布,再执行傅立叶/傅立叶逆变换,再乘以第二设定相位分布的方式计算,所述第一设定相位分布及第二设定相位分布是预先生成并存储的,或者是实时计算生成的(例如根据传播距离生成)。
优选地,物面图像的传播采用先计算生成或读取预先存储的物面上单点传播一定距离至虚拟光学面上的光场分布,记为第一光场分布;根据第一光场分布得到第二光场分布,即为物面图像上单点传播至虚拟光学面上的光场分布;再以叠加相关的单点对应的第二光场分布的方式来计算所述物面图像的传播,所述物面图像是指所述虚拟物面上的图像;
其中,物面上传播距离相同的点的第一光场分布相同,根据所述物面上的各点在物面各自的坐标对所述第一光场分布平移后,得到对应点在虚拟光学面上的光场分布,并乘以对应点各自的强度或乘以对应点各自的强度和相位,得到所述物面上的各点的第二光场分布。
例如,第一光场分布是物面上一个理想点传播一定距离后在虚拟光学面上的光场分布,其中,所述理想点假设强度为1,相位为0pi,物面坐标为(0,0);第二光场分布是物面上实际的点传播至虚拟光学面上的光场分布,其中,所述实际的点假设强度为4,相位为pi/2,物面坐标为(100,50);所以需要在第一光场分布上乘以所述实际的点的强度和相位,然后对于光场分布坐标平移,才能得出传播相同距离的实际物面点在虚拟光学面上的光场分布;进一步地,通过叠加传播一定距离的物面上实际的点在虚拟光学 面上的第二光场分布就可以得到这些物面图像点传播至虚拟光学面上的光场分布。
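To make the shift-and-superpose computation above concrete, here is a minimal sketch of our own (not code from the patent; the uniform placeholder `psf` and the cyclic `np.roll` shift are simplifications for illustration):

```python
import numpy as np

def field_from_points(points, psf, shape):
    """Superpose a precomputed single-point propagated field (``psf``),
    shifted to each object point's coordinates and weighted by that
    point's complex amplitude. points = [(row, col, amplitude), ...]."""
    out = np.zeros(shape, dtype=complex)
    for r, c, a in points:
        # cyclic shift used for brevity; a real implementation would pad
        out += a * np.roll(np.roll(psf, r, axis=0), c, axis=1)
    return out

psf = np.ones((8, 8), dtype=complex)   # placeholder single-point field
# one unit point at (0, 0) and one point with amplitude 4, phase pi/2
field = field_from_points([(0, 0, 1.0), (2, 3, 4 * np.exp(1j * np.pi / 2))],
                          psf, (8, 8))
```

This mirrors the worked example in the text: the second point's contribution is the shifted first light-field distribution multiplied by its own amplitude 4 and phase π/2 before superposition.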
优选地，对于物面不同分块上的图像分别进行缩放，缩放比例根据特征信息生成或直接从特征信息之中读取。
优选地,同一个虚拟物面的同一时间显示的多个分块之间相互存在间隙。
优选地,所述间隙用0能量来填充。
优选地,根据实际空间光调制器参数、入射光波长、系统中的光学器件、输入的特征信息中的任一者或者任多者,来生成虚拟物面和/或虚拟光学调制面。
优选地,根据眼球追踪的结果,实时构成虚拟物面和/或虚拟光学调制面。
优选地,对调制后的光场分布信息进行编码,将虚拟物面上的图像传播后的光场与虚拟光学面叠加后再编码;或者,先对虚拟物面传播后的光场编码,再叠加虚拟光学面上虚拟光学器件对应的光学分布;其中,所述编码采用如下任一种或任多种方式的组合:
-编码输出为纯相位格式的全息图/相息图,输出到空间光调制器成像;
-编码采用直接丢弃强度信息,只保留相位信息并离散化的方式;
-编码采用对虚拟物面输入信息的强度或相位进行补偿的方式;
-编码采用通过迭代的方式反复计算虚拟物面的传播及虚拟光学面;
-编码使用双相位编码的方式;
-编码使用复误差扩散(complex error diffusion)的方法;
-编码使用对强度和/或相位信息离散化的方法;
-编码采用时分复用的方式使用多个子全息图显示同一图像和/或子图像。
优选地,包括光源和空间光调制器;光源输出至空间光调制器;空间光调制器在成像控制系统的控制下,根据所述光场分布信息调制出实际光场分布。
优选地,空间光调制器使用相位调制器件,或空间光调制器使用相位调制器件与强度调制器件的组合。
优选地,成像控制系统包括控制电路,其中,所述控制电路用于参与输出至空间光调制器的信息的计算,空间光调制器的控制(驱动),和/或控制调节光源。
优选地,包括透镜系统,经过虚拟光学调制面调制后的光场分布通过所述透镜系统,得到输出图像。
优选地,根据实际成像控制系统参数,生成虚拟光学调制面,来校正透镜和/或其它光学元器件产生的像差。
优选地,还包含波导器件,波导器件用于扩展输出图像的光场的出瞳(眼动、EYEBOX)大小和/或视场大小。
优选地,通过虚拟光学调制面的不同设置来校正波导器件产生的像差。
优选地,所述波导器件是主要由多个透射/反射率不同的面组成的阵列式波导器件;或者,所述波导器件是主要由衍射或全息类器件组成的波导器件。
优选地,透镜系统缩放光源的入射光束的角度,和/或放大缩小空间光调制器输出的光场。
优选地,所述波导器件的入瞳小于人眼的瞳孔尺寸,光学系统中波导器件的前级系统输出的光场的出瞳与波导器件的入瞳耦合。
优选地,所述光源包括激光器和/或发光二极管。
优选地,所述光源还包含光纤部分,将激光器或者发光二极管发出的光能耦合入光纤,再引导至空间光调制器。
优选地,所述光源还包含合束器件,所述合束器件使用X棱镜、二向色镜、光纤中的任一者或任多者,将不同颜色光源发出的光束合束输出至空间光调制器。
优选地,还包括光阑,所述光阑遮蔽光场中不需要的部分。
优选地,包含多个空间光调制器,多个空间光调制器各自还原的光场相叠加,来还原目标光场。
根据本发明提供的一种光学系统组合,包括多个上述的光学系统,多个光学系统并联,输出不同光场至观看者的左右眼形成双眼视差图像,和/或输出至多个观看者。
优选地,输入信息除图像的光强分布信息外还包括特征信息;其中,通过所述输入信息中的图像经过处理或者不经过处理得到所述虚拟物面上的图像。
优选地,所述输入信息包含帧、子帧、分子帧多级结构中的一级或多级,所述多级结构按照特征信息组织。
优选地,所述输入信息的特征信息包含像素对象的成像距离、角度、帧/子帧/分子帧图像总亮度、子帧数量、分子帧数量、左右帧、接收目标、像差参数、缩放比例、消隐关系、颜色之中的至少一种;
其中,所述像素对象包括像素点和/或像素分块。
优选地,所述输入信息是外部输入,或者存储在成像控制系统之中,或者部分存储在成像控制系统中,部分从外部输入。
优选地,所述虚拟光学调制面和/或虚拟光学面根据所述特征信息生成或部分根据所述特征信息生成。
根据本发明提供的一种存储有计算机程序的计算机可读存储介质,所述计算机程序被处理器执行时实现上述的成像方法的步骤。
根据本发明提供的一种集成有逻辑的ASIC芯片,所述芯片的程序和或电路实现上述的成像方法的步骤。
优选地,将输入图像的能量强度分布显示在虚拟物面,并传播至虚拟光学调制面,将其它特征信息(例如成像距离)通过虚拟光学调制面调制出来,计算出虚拟物面经过虚拟光学调制面调制后的光场分布信息,并编码输出。
优选地,输入图像信息包含图像能量强度(光强分布、各颜色的灰度信息);各像素成像距离、观看角度中的任一种或任多种特征信息。
优选地,输入图像采用多级结构组成,区分各级的特征信息要素包括:像素点距离、接收目标、影像角度、缩放比例、影像消隐关系、左右帧、影像颜色、总光强中的任一种或任多种要素。
优选地,输入图像采用帧、子帧的二级结构,或采用帧、子帧、分子帧的三级结构组成。
优选地,从多级结构中直接得到对应级的虚拟物面和虚拟光学调制面。
优选地,使用空间光调制器显示经计算和编码后的光场信息输出。
优选地,空间光调制器使用相位调制。
优选地,空间光调制器为硅基液晶器件。
优选地,空间光调制器为相位调制器件和强度调制器件的组合。
优选地,可以将输入图像分块,分别对应虚拟光学调制面上各自的虚拟光学面,从而实现图像的一些分块显示的光学特征与另一些分块的光学特征不同(例如距离不同)。
优选地,各分块对应的虚拟光学面根据输入的特征信息计算获得。
优选地,同一图像的各分块经对应的虚拟光学面调制后的光场可以在时序上分别显示,或几个分块在同一时间显示,另几个分块在另一个时间同时显示,所有分块显示内容在时域上的累加等于完整图像,但各分块的光学特征可以不同。
优选地,可以将在时序上先后显示的各分块按显示时间分组,例如同一时刻显 示的多个分块分为一组,同组内的各个分块之间存在间隙,以使虚拟物面上同组的分块传播至虚拟光学调制面后,各分块对应的虚拟光学面之间不存在重叠。从而使同组内的各分块图案的光学特征可以不同而不会相互干扰。上述不同组的各分块对应的虚拟光学面可以重叠,但在同一时刻,同时显示的各个图像的虚拟光学面都不重叠。
优选地,虚拟光学面可以用来调节对应图像的成像距离,角度,光学系统产生的像差,人眼的屈光度散光等等光学特征信息。
优选地,光学系统中包含光源,空间光调制器,成像控制系统包括控制电路。
优选地,其中光源包括半导体激光器。
优选地,光源还包含光纤,通过将半导体激光器输出的光耦合入光纤再输出到空间光调制器来实现光束整形。还可以将不同波长的激光导入同一光纤来实现不同波长的输入光源的合束。
优选地,其中空间光调制器使用基于相位调制的硅基液晶器件。
优选地,控制系统使用FPGA或DSP或GPU或ASIC芯片来计算虚拟物面上图像的传播,虚拟光学调制面以及光场信息的编码。
优选地,光学系统中还包含透镜或透镜组,和/或光阑等其它光学元器件。
优选地,光学系统中还包含波导器件,用于扩展系统的出瞳(EYEBOX)或眼动,同时并不缩小视场(FOV)。
优选地,所述波导器件可以使用由多个透过率不一的反射面组成的阵列式波导,或者由光栅(衍射类器件HOE、DOE)组成的波导器件。
优选地,虚拟光学面根据波导扩瞳时各个瞳之间的不同传播距离或角度来校正由于图像到波导器件中各个拼接的瞳的传播距离不同造成的像差。
优选地,可以对虚拟物面上的图像设置一个初始相位,以使其传播至虚拟光学调制面后具有某些特性,例如相位分布均匀,或者相位成设定分布形式,或者强度均一等。
优选地,对于传播至虚拟光学调制面经过虚拟光学面调制后的虚拟物面图像的光场信息进行编码。
优选地，上述编码方式可以是纯相位的编码，例如丢弃强度，只保留相位，或者双相位编码，或者其它优化的编码方式。
优选地，虚拟光学面所模拟的光学器件，可以是模拟的透镜、反射面或者自由曲面器件。模拟光学器件输出的光场（输出调制光场），可以经过主要由实体光学器件（例如透镜）组成的系统的叠加处理后，得到所述光学系统的输出光场。当然，若模拟光学器件输出的光场已经符合要求时，则所述模拟光学器件输出的光场即作为所述光学系统的输出光场而无需实体透镜或其它光学器件。
优选地,人眼的瞳孔尺寸是指2-8毫米,例如2毫米、3毫米、4毫米、5毫米或者8毫米等。
其中,符号/表示或者的意思。
与现有技术相比,本发明具有如下的有益效果:
1、本发明提高了图像生成效率,以较低的运算量实现全息图的转换,实现实时全息成像。
2、本发明降低了系统运算的功耗,降低对于计算芯片等系统硬件以及空间光调制器的配置要求。
3、本发明在大大提高计算效率的同时,还保留了图像中各个深度信息。
附图说明
通过阅读参照以下附图对非限制性实施例所作的详细描述,本发明的其它特征、目的和优点将会变得更明显:
图1、图2、图3、图4分别为同一输入图像被拆分得到的四个子图像,其中,输入图像的像素值均为黑色方块代表的是图像原始的像素值,子图像中的白色方块代表像素值为0。
图5为输入图像。
图6、图8为将图5中输入图像的两个子帧中多个分子帧生成的图像拆分成两个子图像,对应两个虚拟物面的示意图。其中黑色部分是能量为0的间隙。
图7、图9分别为图6、图8中各子图像对应的虚拟光学面。
图10为虚拟物面的分块大小相同,虚拟光学面大小相同的示意图。
图11为虚拟物面的分块在空间上不重叠,对应的虚拟光学面在空间上重叠的示意图。
图12为虚拟物面的分块大小不相同,虚拟光学面在空间上重叠的示意图。
图13为虚拟物面的部分在空间上完全重叠,但对应的特征信息不同,因而空间重叠的两个物面对应的虚拟光学面不同。
图14、图15、图16分别示出了不同的光学系统的原理示意图。
图中示出:
光源1
空间光调制器2
光学器件3
具体实施方式
下面结合具体实施例对本发明进行详细说明。以下实施例将有助于本领域的技术人员进一步理解本发明,但不以任何形式限制本发明。应当指出的是,对本领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干变化和改进。这些都属于本发明的保护范围。
成像方法
本发明提供的一种成像方法,将虚拟物面上的图像传播至虚拟光学调制面,得到经过虚拟光学调制面调制后的光场分布信息。
成像控制系统可以将输入的信息(帧/子帧/分子帧,或进一步细分)生成一个或多个虚拟物面。
成像控制系统将虚拟物面切分得到分块,例如将输入图像的一帧分辨率1024×768的子帧切分得到分别位于192个分辨率为64×64的分块上的子图像,将每一个分块上的子图像分别传播至虚拟光学调制面。进一步的,输入图像中一个帧中的细分的子帧或/分子帧包含的图像称为子图像,子图像的大小也可以约定为和分块大小一致,此时,输入的一个子帧/分子帧就对应虚拟物面的一个分块,无需再做额外处理。
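As a concrete illustration of this tiling step, the following sketch (our own illustration, not code from the patent) splits a 1024×768 frame into the 192 blocks of 64×64 pixels mentioned above:

```python
import numpy as np

def split_into_blocks(frame, block=64):
    """Split an (H, W) image into non-overlapping block x block tiles.

    Returns an array of shape (H // block, W // block, block, block).
    """
    h, w = frame.shape
    assert h % block == 0 and w % block == 0, "frame must tile evenly"
    return (frame
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2))

frame = np.zeros((768, 1024))
tiles = split_into_blocks(frame)   # 12 x 16 = 192 tiles of 64 x 64
print(tiles.shape)                 # (12, 16, 64, 64)
```

Each tile then becomes one sub-image on a block of the virtual object plane, propagated independently to the virtual optical modulation plane.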
每一分块传播所影响到的可以是整个虚拟光学调制面,或者只是虚拟光学调制面的一部分,虚拟光学调制面上对应虚拟物面分块传播的虚拟光学面的尺寸由空间光调制器上实际像素点大小、分块传播距离、对应入射光频率等一系列参数综合决定。
虚拟物面可以在虚拟光学调制面之前或者之后,所述传播可以是正向的,例如虚拟物面的图像正常传播一段距离后在虚拟光学调制面上形成光场分布,也可以是逆向的,例如虚拟光学调制面上的光场分布传播一段距离后形成虚拟物面,例如,输入的图像帧中一个R子帧中包含一个大小为100×200像素的成像距离为虚拟光学调制面之后500毫米的分子帧图像,则可以将该分子帧图像对应生成的虚拟物面划分为2×4个分辨率 64×64的分块,子图像按照每个分块在虚拟光学调制面之前100毫米的距离传播至虚拟光学调制面,通过虚拟光学调制面上的对应的虚拟光学面将子图像的成像调制到虚拟光学调制面之后500毫米的距离。
所述传播可以使用菲涅尔变换/菲涅尔逆变换,傅立叶变换/傅立叶逆变换,角频谱传播,或者直接使用基尔霍夫衍射公式等方法计算得到。也可以先预存或计算物面单点传播到虚拟光学面的光场分布,记为第一光场分布,再对物面相应点(物面上到虚拟光学面距离相同的点)根据物点的坐标对所述第一光场分布平移,然后以物点强度作为权,加权叠加平移后的第一光场分布来计算所述传播。
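As one possible realisation of the propagation step, the sketch below uses the angular-spectrum method, whose transfer function depends only on grid size, pixel pitch, wavelength and distance, and so can be computed once and reused for every block propagated over the same fixed distance (the 64-pixel grid, 8 µm pitch and 532 nm wavelength are assumed values of ours, not parameters from the patent):

```python
import numpy as np

def angular_spectrum_tf(n, pitch, wavelength, z):
    """Transfer function H for free-space propagation over distance z.

    Computed once and reused; evanescent components are suppressed.
    """
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.where(arg > 0, np.exp(1j * kz * z), 0.0)

def propagate(field, H):
    """Propagate a complex field using a precomputed transfer function."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

H = angular_spectrum_tf(n=64, pitch=8e-6, wavelength=532e-9, z=0.1)
block = np.ones((64, 64), dtype=complex)   # unit-amplitude test block
out = propagate(block, H)
print(round(float(np.abs(out).mean()), 3))  # 1.0 (energy preserved)
```

Caching `H` (or, equivalently, the precomputed Fourier transform of the propagation function described in the text) is what makes the per-block propagation cheap.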
每一个虚拟物面上的分块上的子图像传播到虚拟光学调制面上的距离可以是相同的,也可以是不同的,或者也可以设置传播距离为0,将子图像直接显示在虚拟光学调制面上。
虚拟光学调制面上的每一个虚拟光学面对应同一帧/子帧/分子帧所对应形成的虚拟物面上的一个分块。
虚拟物面不同分块上的子图像可以分别进行缩放,用于补偿将图像调制到不同距离后可能与实际景物应有的大小产生的偏差。缩放的比例可以根据对应虚拟光学调制面以及其它一些参数来确定,例如根据虚拟光学调制面将图像调制到的成像距离来确定缩放比例,或者根据虚拟光学调制面将图像调制到的成像距离以及空间光调制器的参数、光源的频率、其它光学系统的参数等来确定缩放比例。当然这个比例也可以是事先计算好,存储在输入信息的特征信息中,从而系统无需实时计算,只需直接读取相应信息并进行缩放即可。或者也可以在生成输入信息时已将图像缩放,则此时系统无需再对子图像进行缩放。当然,这种偏差与实际采用的光学系统相关,也可能实际的光学系统对于虚拟光学面调制物面后实际图像的大小不敏感或者无影响,这种情况下也可以不对物面的不同子图像进行缩放调整。
每一个虚拟光学面可以根据输入信息分别调制对应的虚拟物面上的分块上的子图像,例如通过模拟透镜的方式调节对应虚拟物面分块上图像的远近,和/或调节对应虚拟物面分块上图像的角度,和/或者校正对应分块图像的像差(各分块的像差可以相同,也可以不同),还可以对入射的照明光场分布进行补偿。
虚拟光学面可以使用Seidel多项式或Zernike多项式计算生成。虚拟光学面可以校正像差,例如使用Zernike多项式生成的相位分布,校正的光学像差包括球差、慧差、像散、场曲、畸变、色差、高阶像差或者色差等等所有可以给出数学表达式的像差。
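As an illustration of generating such a correction surface, the sketch below builds a phase map from a few unnormalised low-order Zernike terms (defocus and primary astigmatism); the grid size and coefficients are placeholder values of our own, not parameters from the patent:

```python
import numpy as np

def zernike_phase(n, defocus=0.0, astig_0=0.0, astig_45=0.0):
    """Phase map (radians) over a unit pupil from three Zernike terms:
    defocus 2r^2 - 1, astigmatism r^2*cos(2t) and r^2*sin(2t)."""
    y, x = np.mgrid[-1:1:complex(0, n), -1:1:complex(0, n)]
    r2, theta = x ** 2 + y ** 2, np.arctan2(y, x)
    phase = (defocus * (2 * r2 - 1)
             + astig_0 * r2 * np.cos(2 * theta)
             + astig_45 * r2 * np.sin(2 * theta))
    return np.where(r2 <= 1.0, phase, 0.0)   # zero outside the pupil

surface = zernike_phase(64, defocus=1.5, astig_0=0.3)
```

In the scheme described above, such a phase map would be added to (or point-multiplied, as a complex exponential, with) the propagated block field on the corresponding virtual optical surface.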
虚拟物面和虚拟光学调制面还可以采用时分复用的方式，使虚拟物面上的分块所对应的虚拟光学调制面上的各虚拟光学面不重叠。例如一个分辨率32×32的分块上的子图像传播后对应的虚拟光学面的大小为分辨率64×64，则可以将所述虚拟物面对应的帧/子帧/分子帧拆分成4个子图像，它们包含的信息的总和等于原输入图像，每个子图像中的分块大小仍为分辨率32×32，但同一时刻显示的虚拟物面上的分块的四边的16个像素对应的值都为0（因为每个子图像的四边是16个像素的空隙，例如左边有16个像素的空隙，与其左边相邻的可以同一时刻显示的分块的右边也有16像素空隙，这样两个物面分块中的空隙加在一起正好是32像素点，可以是一个在同一时刻不被显示的图像分块），这样原先每64×64的图像所包含的4个相邻分块中只有一个分块在同一时间中显示，其对应的虚拟光学面在虚拟光学调制面上不会重叠。通过快速地显示这4个子图像，利用人眼的视觉残留效应，观看者可以看到不会相互干扰的4个子图像叠加后完整的64×64的输入图像。换言之，通过4个在时间上分开显示的虚拟物面和虚拟光学调制面，可以还原出任意分辨率、包含任意多个不同虚拟光学面的图像。例如1920×1152的图像可以由30×18个64×64的分块组成，每个虚拟物面的分块中有64×64个像素点包含原始信息，实际显示时拆分成4个时间上分别显示的子帧/分子帧，每个子帧/分子帧包含30×18个64×64的分块，每个虚拟物面的分块中只有32×32个像素点包含原始信息，其余部分为0，不同时刻显示的子帧/分子帧的虚拟物面的分块中包含原始信息像素点的位置可以不同，其对应的虚拟光学面为64×64，这样就可以还原出30×18个不同特征的虚拟光学面且每一时刻虚拟物面分块对应的虚拟光学面都不会重叠。
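The time-multiplexing scheme above can be sketched as follows. This is a simplified variant of our own (whole-block parity interleaving rather than 16-pixel margins): the frame is cut into 32×32 blocks and distributed over four sub-frames so that, within any 64×64 neighbourhood, only one block is non-zero per sub-frame, and the four sub-frames sum back to the original frame:

```python
import numpy as np

def time_multiplex(frame, block=32):
    """Split a frame into 4 sub-frames; sub-frame (i, j) keeps only the
    blocks whose (row, col) block index is congruent to (i, j) mod 2,
    so neighbouring blocks are never displayed at the same instant."""
    h, w = frame.shape
    subs = []
    for i in range(2):
        for j in range(2):
            mask = np.zeros_like(frame)
            for r in range(i * block, h, 2 * block):
                for c in range(j * block, w, 2 * block):
                    mask[r:r + block, c:c + block] = 1
            subs.append(frame * mask)
    return subs

frame = np.random.rand(64, 64)
subs = time_multiplex(frame)
print(np.allclose(sum(subs), frame))   # True: the 4 sub-frames tile the frame
```

Displayed in rapid succession, the four sub-frames exploit persistence of vision to reconstruct the full image while keeping the virtual optical surfaces of simultaneously displayed blocks from overlapping.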
此外还可以通过合理的设置分块传播距离和/或适当截止部分高频信息的方法,在保证成像质量的前提下减小虚拟光学面大小(例如使虚拟物面分块大小等于虚拟光学面大小),以及更优的方式排布虚拟光学面分布,从而减少时分复用的子图像数量或不使用时分复用的子图像方式成像。
此外,还可以通过对输入虚拟物面上的初始相位进行优化的方法,使每一分块传播至虚拟光学调制面处的光场只有很少的高频或没有高频信息,从而减小其对应的虚拟光学面大小,从而减少时分复用中所需的子图像数量,或不使用时分复用的方式成像。
此外，在某些情况下（例如最终图像成像距离较远），还可以通过在虚拟物面能量为0（以致传播到虚拟光学调制面后对应分块能量也为0）的部分，叠加周边其它分块（例如具有能量的部分）的输出光场分布，来增加对应分块图像的出瞳面积。由于上述能量为0部分的光场并没有实际计算，只是复制了周边的光场分布，从而在像质可接受的情况下，相比于取更大的分块、将前述能量不为0的部分物面传播至此，可以节约运算量。
对虚拟物面传播至虚拟光学调制面并叠加虚拟光学面后的光场信息进行编码。
编码输出可以为纯相位格式的全息图/相息图,输出到空间光调制器成像。
控制电路同步控制光源输出与空间光调制器上显示对应的全息图/相息图所匹配的颜色及强度。
编码可以使用直接丢弃强度信息,只保留相位信息并采用离散化的方法。
编码可以采用在虚拟物面上加入相位信息,使传播后的光场满足设定的相位分布或者强度分布(例如强度或者相位分布较为均一)以优化显示效果。
编码可以通过对虚拟物面输入信息的强度或相位进行补偿的方式,以优化显示效果。
编码可以通过迭代的方式反复计算虚拟物面的传播及虚拟光学面，以优化显示效果。
编码可以使用双相位编码的方法(double phase),公式:
φ^1_{xy} = φ_{xy} − φ^z_{xy} + arccos( A_{xy} / (A^z_{xy} · A_{xy_max}) )
φ^2_{xy} = φ_{xy} − φ^z_{xy} − arccos( A_{xy} / (A^z_{xy} · A_{xy_max}) )
其中：
φ^1_{xy} 为对应虚拟光学面的奇数行/列点的相位；
φ^2_{xy} 为对应偶数行/列点的相位（φ^1_{xy}、φ^2_{xy} 对应的点也可以采用棋盘格的方式排列）；
φ_{xy} 为设定的物面传播后光场或物面传播并经虚拟光学面调制后光场的相位；
φ^z_{xy} 为照明光场在对应位置的相位分布；
A^z_{xy} 为照明光场在对应位置的振幅分布；
A_{xy} 为物面传播后的光场或物面传播并经虚拟光学面调制后光场在对应位置的振幅；
A_{xy_max} 为物面传播后的光场或物面传播并经虚拟光学面调制后光场振幅的最大值；
下标 x、y 表示物面传播后的光场或物面传播并经虚拟光学面调制后光场（等效于虚拟光学调制面或虚拟物面上相关的点）的 x、y 方向坐标值。
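A numerical sketch of this double-phase decomposition (our own illustration, assuming uniform illumination, i.e. A^z = 1 and φ^z = 0): each complex sample A·e^{iφ} is replaced by two pure-phase samples whose average reproduces it.

```python
import numpy as np

def double_phase(field):
    """Decompose a complex field A*exp(i*phi) into two phase-only maps
    p1, p2 with (exp(i*p1) + exp(i*p2)) / 2 == (A / A_max) * exp(i*phi).

    Assumes unit-amplitude, zero-phase illumination (A^z = 1, phi^z = 0).
    """
    amp = np.abs(field)
    phi = np.angle(field)
    delta = np.arccos(amp / amp.max())
    return phi + delta, phi - delta

rng = np.random.default_rng(0)
field = rng.random((8, 8)) * np.exp(1j * rng.random((8, 8)))
p1, p2 = double_phase(field)
recon = 0.5 * np.abs(field).max() * (np.exp(1j * p1) + np.exp(1j * p2))
print(np.allclose(recon, field))   # True
```

On a single spatial light modulator, p1 and p2 would then be interleaved on odd/even rows or columns, or in a checkerboard, as described above.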
使用双相位的编码方式时,可以通过一片空间光调制器显示双相位的方式还原光场,例如将前述的奇偶行或列的方式。
此外,使用双相位的编码方式时,也可以使用两片或多片纯相位的空间光调制器,每片空间光调制器调制双相位中的一个相位分布,通过光学系统合成后还原出完整的光场。此时φ 1 xy单独显示在一片空间光调制器上,φ 2 xy单独显示在另一片空间光调制器上,无需在区分奇偶行/列。
编码可以使用复误差扩散（complex error diffusion）的方法。
此外，可以将虚拟物面传播后的光场与虚拟光学面叠加后再编码，也可以先对虚拟物面传播后的光场编码，再叠加虚拟光学面上虚拟光学器件对应的光学分布（虚拟光学器件的光学分布大多为纯相位分布，可以直接叠加，无需再次编码），所述的叠加可以是相位 φ_{xy} 的直接相加，或者是相位分布 e^{iφ_{xy}} 的点乘（两者在数学上完全等效）。
此外,编码也可以保留强度和相位,器件上使用两块以上的空间光调制器,分别调制强度和相位,通过光学系统合成后还原出光场。例如使用一块强度的空间光调制器(例如强度调制的LCoS或LCD或OLED等器件),一块相位调制的透射式空间光调制器(基于硅基液晶的纯相位器件),分别还原光场的强度分布和相位分布,其中相位调制的空间光调制器放在强度调制的空间光调制器之后。当然也可以通过器件设计和制造工艺将强度调制的空间光调制器和相位调制的空间光调制器制作成一个器件来实现上述功能。
编码可以使用时分复用的方式使用多个子全息图显示同一子图像,来降低图像散斑,提高成像质量。
编码的控制电路可以使用GPU、FPGA、DSP、CPU或者订制开发的ASIC芯片。
上述编码处理及计算方法使用并行方式进行。
光学系统
本发明提供的一种光学系统,包括成像控制系统:将虚拟物面上的图像传播至虚拟光学调制面,得到经过虚拟光学调制面调制后的光场分布信息。
成像控制系统
成像控制系统可以采用电子元器件,负责接收输入信号,并按本发明提供的成像方法将输入信号转换为全息图/相息图输出。将虚拟物面上的图像传播至虚拟光学调制面,得到经过虚拟光学调制面调制后的光场分布信息,并将调制后的光场分布编码并以全息图/相息图的方式输出。
成像控制系统接收的输入信息包含接收的输入图像中的图像像素点颜色的灰度信息, 以及像素点的距离信息/深度信息和/或像素点的观看角度信息等特征信息。
成像控制系统接收的输入信息可以是按照多级结构组织的,例如每一帧中包含一个或多个子帧,子帧中还可以再进一步细分为一个或多个分子帧等等,成像控制系统按照每一级所包含的信息对该级中的多个子图像分别处理并生成全息图/相息图。
所述输入信息的分级可以是按距离、颜色、观看角度等特性来区分,例如第一级信息按RGB三种颜色将一帧图像分为三个子帧,第二级信息按距离将子帧分成多个成像距离不同的分子帧,例如R子帧中包含3个成像距离的分子帧,G帧中包含一个成像距离的分子帧,B子帧中包含4个成像距离不同的分子帧。
光学系统中还可以包含空间光调制器、光源、透镜系统,光阑以及其它光学器件
空间光调制器可以采用硅基液晶工艺制造的纯相位调制型器件,一套系统中可包含一块或多块空间光调制器
光源可以采用激光器或LED等;
光源可以采用一个或多个半导体激光器或LED,采用多个激光器或LED时可以通过合路器合路输出到空间光调制器;
光源中还可以包含准直系统,将光束准直或放大或缩小光束角度后输出至空间光调制器;
合路器可以是合束棱镜,或一块或多块二向色镜,或使用光纤耦合的方式;
光路系统中还可以包含透镜系统,对空间光调制器输出光场进行缩放(例如望远镜系统,或者成像镜头等);
光路系统中还可以包含波导和/或衍射光学器件,实现对输出图像进行扩瞳和/或视场的扩展;
光路系统中还可以包含阵列式波导,实现对输出图像进行扩瞳和/或视场的扩展;
光路系统中还可以包含光栅式波导(包含HOE、DOE类器件);
光路系统中还可以包含透镜阵列,实现对输出图像进行扩瞳和/或视场的扩展;
光路设计时可以通过适当的设计将全息图产生的0级和/或多余的衍射级引导在波导入瞳之外,或者使这些杂光的入射角度不满足波导的耦合条件,从而使波导输出的图像中不会包含0级杂光和/或多余衍射级形成的鬼影;
光路系统中还可以加入光阑,用于遮挡0级和/或多余衍射级;
对于使用单空间光调制器的双相位编码方式的系统,可以通过在光学系统中增加光 阑,遮挡由于双相位编码产生的噪声(图像的多余部分),或者与波导配合,将双相位编码产生的噪声引导在波导入瞳之外或者入射角度不满足波导的耦合条件,从而过滤掉由于双相位编码产生的噪声。
光学系统中可以有多块空间光调制器来还原虚拟光学调制面上的输出光场(例如两块空间光调制器,每块各显示双相位编码中的一部分编码,通过合路器件合成后输出完整光场)
光学系统中也可以有多套上述技术方案的装置(例如两套装置,分别显示观看者左眼和右眼看到的图像,达到更好的成像效果)
下面对本发明进行更为具体的说明。
应用例1
一种近眼显示系统,输入图像每一帧的分辨率为800×600,每一帧包含2个成像距离不同的子帧。例如一帧中有一个成像距离为10米分辨率为100×120的物体图像的子帧,记为第一子帧,还有另一个成像距离为0.2米分辨率为200×100的物体图像的子帧,记为第二子帧。成像控制系统将虚拟物面的分块大小设置为50×50,则第一子帧占据2×3个分块,第二子帧占据4×2个分块,虚拟物面与虚拟光学调制面的距离设置为0.1米,而对应每个分块的虚拟光学面的大小取与分块大小一致。成像距离为10米的子帧的虚拟物面的分块对应的虚拟光学面所模拟的透镜被设置为使图像成像在10米位置的焦距,可通过改变每个分块对应的模拟透镜的中心位置来调节分块图像在实际成像平面上的位置,从而达到更好的成像效果。成像距离为0.2米的子帧的虚拟物面分块对应的虚拟光学面所模拟的透镜被设置为使图像成像在0.2米位置的焦距。叠加虚拟物面传播至虚拟光学调制面的光场与虚拟光学面所模拟的透镜产生的相位分布,对产生的光场分布进行编码,例如采用纯相位编码,将编码后的结果输出到空间光调制器,同时成像控制系统同步光源照射空间光调制器输出图像光场。优选地,调制面调制物面传播至虚拟光学调制面的光场的方法可以是与虚拟光学面的复振幅点乘或相位的相加。此外,可以根据实际的光学系统对显示在10米处的物体图像和0.2米处的物体图像进行缩放,例如将0.2米处的物体原始图像X方向放大10%、Y方向放大9%,将10米处的图像X方向缩小5%、Y方向缩小4%(X,Y方向的放大缩小比例也可以相同,根据实际光学系统决定),从而使虚拟图像与其在空间中应该具有的实际大小相一致。
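The focal length of the simulated lenses above follows from the thin-lens relation f = d_o·d_i / (d_i − d_o), where d_o is the distance from the virtual object plane to the modulation plane and d_i the desired imaging distance; this reading of the sign convention is our own, cross-checked against the 100/9 cm figure that appears in application example 2 below. A quick sketch:

```python
def virtual_lens_focal_length(d_o, d_i):
    """Thin-lens focal length f = d_o * d_i / (d_i - d_o) that maps an
    object plane d_o in front of the modulation plane to a virtual
    image at distance d_i (same units for all quantities)."""
    return d_o * d_i / (d_i - d_o)

# Application example 2 below: 10 cm plane distance, 1 m image -> 100/9 cm
print(virtual_lens_focal_length(10, 100))              # 11.111... (cm)
# This example's cases (metres; our own arithmetic, not from the patent):
print(round(virtual_lens_focal_length(0.1, 10), 4))    # 0.101 for the 10 m sub-frame
print(round(virtual_lens_focal_length(0.1, 0.2), 4))   # 0.2 for the 0.2 m sub-frame
```

Shifting the centre of each simulated lens then translates the corresponding block in the final image plane, as the text notes.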
光源采用半导体激光器,经过准直透镜后以平行光的方式输出到空间光调制器(也可以使用有角度的入射光,从而改变输出的视场大小)。光学系统中还可包含波导系统,空间光调制器输出图像的出瞳与波导系统的入瞳耦合,无用的次级将被挡在波导的入瞳之外,或者角度不满足波导的输入要求从而被滤除,波导系统对输出的图像进行扩瞳,使观看者能够在更大的范围内观看图像。
此外,对于成像距离为0.2米分辨率的图像(成像距离较近),虚拟物面不同位置分块对应的虚拟光学面还可以根据波导中各出瞳图像传播的实际距离做像差补偿,补偿计算可以通过Zernike多项式或Seidel多项式计算生成。
应用例2
本领域技术人员可以将应用例2理解为应用例1的变化例。
一种近眼显示系统,输入图像每一帧的分辨率为640×480,每一帧包含对应左眼RGB三个颜色的子帧以及对应右眼的RGB三个颜色子帧共6个子帧,每个子帧包含数量不等的成像距离不同的分子帧。
成像控制系统接收一帧图像后，根据不同子帧处理后将每一分子帧分块，例如一个左眼G子帧中包含1个成像距离为1米的分子帧，其中有一个分辨率为128×160的物体图像，以及一个成像距离为0.1米的分子帧，其中有一个分辨率为60×80的物体图像和一个分辨率为200×64的物体图像，则成像控制系统生成虚拟物面，并根据这三个物体图像在分子帧中的位置将其排布在对应的虚拟物面的位置上并分成4×10、2×5、7×4的分块，每个分块含有32×16个像素点。成像控制系统将这4×10+2×5+7×4共78个分块分别传播10厘米，可通过菲涅尔变换，或空间光角频谱传播的方法计算，也可以利用物面光场与传播函数的卷积来计算，卷积可通过快速傅立叶变换计算，或者也可以使用图像强度及相位分布乘以第一设定相位分布，再执行快速傅立叶/傅立叶逆变换，再乘以第二设定相位分布的方式计算，可以加快运算速度。或者也可以先计算（或从预存的数据中读取）单个像素点分别传播1米和0.1米后的光场分布信息，将对应距离的分块中的每个像素点的强度及相位与所述光场分布相乘（需根据像素点位置平移光场分布信息），或者只用像素点的强度乘以对应光场相位信息（需根据像素点位置平移光场分布信息），然后加权叠加所有点各自的光场分布信息来计算光场传播。
此例中所有分块的传播距离都设置为固定的传播距离,例如10厘米,则利用快速傅立叶变换计算时,传播函数/或所述设定相位分布只需做一次计算并存储,后续计算的 分块即可反复使用,可有效节约计算时间。结合入射光波长、空间光调制器像素点大小,以及成像质量的综合考虑后设定虚拟光学调制面上对应每一分块的虚拟光学面的大小为48×24像素,采用双相位单空间光调制器的编码方式编码后对应虚拟光学面扩展为48×48像素,计算方法为
φ^1_{xy} = φ_{xy} + arccos( A_{xy} / A_{xy_max} )
φ^2_{xy} = φ_{xy} − arccos( A_{xy} / A_{xy_max} )
其中，φ^1_{xy} 为经编码的物面传播后光场的奇数列像素的相位，φ^2_{xy} 为经编码的物面传播后光场的偶数列像素的相位，φ_{xy} 为物面传播后光场的相位，A_{xy} 为物面传播后光场的振幅，A_{xy_max} 为物面传播后的光场振幅的最大值；照明光场在对应位置的相位分布为一个常数，振幅分布都为1。经双相位编码的物面传播后的光场分布为 (A_{xy_max}/2)·( e^{iφ^1_{xy}} + e^{iφ^2_{xy}} )。
在虚拟光学调制面上生成对应的78个48×48像素点的虚拟光学面，对应成像距离为1米的虚拟物面分块的每一虚拟光学面上都生成一个焦距为100/9厘米的虚拟透镜（可以通过偏移虚拟透镜的中心，使其对应的虚拟物面分块在最终的成像平面上发生平移，从而更好地控制其位置），对应成像距离为0.1米的虚拟物面分块的虚拟光学面则不做距离调制。生成的虚拟光学调制面的相位分布与先前计算的传播至此经双相位编码后的虚拟物面的分块的光场分布相叠加，例如
h_{xy} = Q{ ang( e^{i( φ^{1,2}_{xy} + φ^L_{xy} )} ) }
其中，φ^L_{xy} 为对应的虚拟光学面所模拟的透镜的相位分布（可以使用Zernike或Seidel多项式计算生成），φ^{1,2}_{xy} 为前述经双相位编码的光场相位，ang() 为取复数辐角运算，Q{} 为离散化运算，例如根据四舍五入的方法将运算数据变为0~2π间的64个离散值，h_{xy} 表示输出至空间光调制器上的光场分布。输出的结果分辨率为960×1440，可以输出到分辨率大于960×1440的空间光调制器上成像。当然，上述步骤的顺序也可以根据需要调整，例如对于物面传播后的光场先不编码，待与对应的虚拟光学面的光场分布叠加后再将输出光场进行双相位编码。
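The discretisation operator Q{} can be sketched as rounding the wrapped phase to 64 equally spaced levels in [0, 2π) (a minimal illustration of our own):

```python
import numpy as np

def quantize_phase(phase, levels=64):
    """Round a phase map to `levels` equally spaced values in [0, 2*pi)."""
    step = 2 * np.pi / levels
    wrapped = np.mod(phase, 2 * np.pi)
    return np.mod(np.round(wrapped / step) * step, 2 * np.pi)

phase = np.random.uniform(-np.pi, np.pi, (4, 4))
q = quantize_phase(phase)
# the circular quantisation error never exceeds half a step (pi / 64)
```

The number of levels would in practice match the phase resolution of the spatial light modulator's drive electronics.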
此外,对于应用例2,也可以在系统中加入一个实体透镜,例如焦距为10厘米的凸透镜,放置在距空间光调制器5厘米处,则此时对应0.1米的成像距离的可以直接将物 面显示在空间光调制器上,对应1米的成像距离的虚拟光学面需要模拟一个焦距为39.9091厘米的凹透镜。或者直接将成像在1米的物面传播4.0909厘米至空间光调制器而无需再模拟透镜,若还有其它不同距离的虚拟物面,也可以将其设置为传播4.0909厘米,再在其对应虚拟光学面上模拟透镜。这样做的好处是虚拟光学面数量减少,且需要模拟的光学器件参数之间焦距的差别很小,对于实际物理器件较易实现,且还利于在透镜后聚焦处加入光阑过滤杂光。
此外,对于应用例2,如果输入图像分辨率较大,例如1024×768,采用上述方法会导致最终结果分辨率较高,而高分辨率的空间光调制器成本较高,此时也可采用时分复用的方法来生成虚拟物面和虚拟光学调制面。例如将前述子帧中两个分子帧生成的图像拆分成两个子图像,对应两个虚拟物面(每个虚拟物面都可以同时包含部分传播0.1米的图像和部分传播1米的图像),每个虚拟物面上的分块大小仍为32×16,但每个分块周边的16×8个像素点为0(如图6和图8),传播10厘米每个子图像对应的虚拟物面至虚拟光学面,每个分块对应的虚拟光学面的分辨率为48×24(分块对应的虚拟光学面的分辨率也可以根据调制参数而做出改变,例如调制后成像距离为1米的图像分块对应的虚拟光学面分辨率仍为48×24,但调制后成像距离为0.1的分块对应的虚拟光学面的分辨率调整为40×20),虚拟物面传播后的光场与虚拟光学面上的调制相位(可以对不同分块调制出不同距离)叠加,经双相位编码后的分辨率为48×48如图7和图9所示。子图像对应的全息图先后显示,通过人眼视觉残留效应叠加后观看者最终看到的图像与输入图像(图5)一致。而同一时刻显示的子图像对应的虚拟光学面不会相互干扰。
此外,对于应用例2,在使用时分复用的显示方式时,不同时刻显示的全息图可以采用不同的编码方式,例如对于第一时刻显示的全息图采用双相位编码,对于第二时刻显示的全息图采用丢弃强度的编码方式。这样做的好处是可以根据某些时刻显示的图像特性来选择最合适的编码方式。
对于上述应用例,还可以将虚拟物面上所有分块的传播距离都设置为无穷远,这样可以通过一次快速傅立叶或傅立叶逆变换即得出传播后的光场分布,然后通过对应的虚拟光学面调制成像距离,这样做的好处是可以降低运算复杂度,加快运算速度。
对于上述应用例,可以使用带有一定发散角度的光束照明。例如照明系统使用单色半导体激光器,直接斜入射空间光调制器,光斑的束腰位置覆盖空间光调制器。
也可以使用经过准直的平行光照明,例如使用透镜将半导体激光器出射的光束准直。
光源还可以使用R、G、B三色激光器各一个或数个,通过X棱镜或二向色镜合束后 输出到空间光调制器(准直或带有一定发散角度)
光源还可以使用光纤输出(例如单模的保偏光纤),将单个激光器或多个激光器或窄带LED发射出的光束耦合入光纤,然后使用光纤的输出端直接照明空间光调制器,或者经过透镜准直后输出到空间光调制器
光源输出到空间光调制器可以采用斜入射的方式直接入射空间光调制器,也可以使用BS或TIR或PBS等器件将输出光导入到空间光调制器。
经空间光调制器调制后出射的光场可以直接输出供人眼观看。也可以再经过光学系统后输出至人眼。
还可以在光学系统中加入波导,作扩瞳用。上述的TIR或BS类器件可以集成在波导内,做成一个整体,光源直接输出至与波导结合的TIR或BS类器件,将光输出到空间光调制器,空间光调制器调制后输出的光场耦合入波导,经扩瞳后输出至人眼。
或者空间光调制器输出的调制光场也可以先经过透镜系统(或者通过虚拟光学面来调制而不使用实际的透镜)放大或缩小视场(缩小或放大出瞳),与波导的入瞳及输入光耦合角度相匹配,再输入波导入瞳,经波导扩瞳后输出至观看者
使用非准直光照射空间光调制器时(例如使用半导体激光器直接照明,或者使用光纤),可以将发光点(半导体激光器的发光点,或者光纤的出射端面)设置到合适的位置,通过设置虚拟物面的距离及虚拟光学面的参数来直接调节空间光调制器输出光场的角度及出瞳,可以无需光学系统直接耦合入波导,或者将上述结构与波导集成在一个器件上,从而减小系统的体积。
还可以使两套上述系统并联,同步其显示的内容,做成双目显示系统,达到更好的显示效果。
对于上述的全息图生成方法,成像控制系统可以使用GPU或FPGA计算,可以对多个分块实行并行计算从而提高运算速度。
成像控制系统也可以使用DSP做计算。
成像控制系统也可以使用订制开发的ASIC芯片做计算。
成像控制系统可以和CPU结合,通过操作系统获得视频信息。
本领域技术人员知道,除了以纯计算机可读程序代码方式实现本发明提供的系统、装置及其各个模块以外,完全可以通过将方法步骤进行逻辑编程来使得本发明提供的系统、装置及其各个模块以逻辑门、开关、专用集成电路、可编程逻辑控制器以及嵌入式 微控制器等的形式来实现相同程序。所以,本发明提供的系统、装置及其各个模块可以被认为是一种硬件部件,而对其内包括的用于实现各种程序的模块也可以视为硬件部件内的结构;也可以将用于实现各种功能的模块视为既可以是实现方法的软件程序又可以是硬件部件内的结构。
以上对本发明的具体实施例进行了描述。需要理解的是,本发明并不局限于上述设定实施方式,本领域技术人员可以在权利要求的范围内做出各种变化或修改,这并不影响本发明的实质内容。在不冲突的情况下,本申请的实施例和实施例中的特征可以任意相互组合。

Claims (43)

  1. 一种成像方法,其特征在于,将虚拟物面上的图像传播至虚拟光学调制面,得到经过虚拟光学调制面调制后的光场分布信息。
  2. 一种光学系统,其特征在于,包括:
    成像控制系统:将虚拟物面上的图像传播至虚拟光学调制面,得到经过虚拟光学调制面调制后的光场分布信息。
  3. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,所述将虚拟物面上的图像传播至虚拟光学调制面,包括:
    对虚拟物面进行切分,生成得到多个分块;
    将输入图像进行切分,得到分别位于所述多个分块上的子图像;
    将各个分块上的子图像传播至整个或部分虚拟光学调制面上。
  4. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,所述虚拟光学调制面由一个或多个虚拟光学面构成;在所述多个虚拟光学面中,具有不同种类的虚拟光学面或者相同种类的虚拟光学面;在不同种类的虚拟光学面之间,计算得到的光学参数不同。
  5. 根据权利要求4所述的成像方法或者权利要求4所述的光学系统,其特征在于,每一个虚拟光学面分别对应虚拟物面的一部分;
    所述多个虚拟光学面之间,存在两个以上的虚拟光学面在空域上叠加和/或存在两个以上的虚拟光学面在空域上不叠加。
  6. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,所述将虚拟物面上的图像传播至虚拟光学调制面,具体为:
    一个虚拟物面对应一个虚拟光学调制面,或者多个虚拟物面分别对应各自的虚拟光学调制面;
    其中,所述多个虚拟物面上的图像由同一输入图像拆分生成得到,并且所述多个虚拟物面分别对应的各自的虚拟光学调制面调制后的光场分布采用在时间上分别显示的方式进行叠加。
  7. 根据权利要求6所述的成像方法或者权利要求6所述的光学系统,其特征在于,所述多个虚拟物面上的图像分别为同一输入图像的不同部分,所述多个虚拟物面上的图像叠加后等于所述输入图像。
  8. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,虚拟物面上的图像具有设置的相位,其中,所述设置的相位使得虚拟物面上的图像传播至虚拟光学调制面时能量成设定的分布模式,和/或所述设置的相位使得虚拟物面上的图像传播至虚拟光学调制面时的光场的相位成设定分布模式。
  9. 根据权利要求3所述的成像方法或者权利要求3所述的光学系统,其特征在于,根据实际空间光调制器参数、入射光波长、光学器件中的任一者或者任多者,生成如下任一种或任多种参数:
    -分块的大小和/或分块对应的虚拟光学面的大小;
    -虚拟物面到对应的虚拟光学调制面的距离;
    -虚拟光学面所模拟的光学器件的参数。
  10. 根据权利要求3所述的成像方法或者权利要求3所述的光学系统,其特征在于,每个分块上的图像的传播距离固定,每个分块上的图像的传播采用图像强度及相位分布卷积传播函数来计算。
  11. 根据权利要求10所述的成像方法或者权利要求10所述的光学系统,其特征在于,通过快速傅立叶/傅立叶逆变换虚拟物面的分块上的图像的强度及相位分布,点乘传播函数的快速傅立叶/傅立叶逆变换,再快速傅立叶逆/傅立叶变换来得到计算结果;传播函数的傅立叶/傅立叶逆变换预先计算并存储。
  12. 根据权利要求3所述的成像方法或者权利要求3所述的光学系统,其特征在于,每个分块上的图像的传播采用图像强度及相位分布,乘以第一设定相位分布,再执行傅立叶/傅立叶逆变换,再乘以第二设定相位分布的方式计算,所述第一设定相位分布及第二设定相位分布是预先生成并存储的,或者是实时计算生成的。
  13. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,物面图像的传播采用先计算生成或读取预先存储的物面上单点传播至虚拟光学面上的光场分布,记为第一光场分布;根据第一光场分布得到第二光场分布,即为物面图像上单点传播至虚拟光学面上的光场分布;再以叠加所述的第二光场分布的方式来计算所述物面图像的传播,所述物面图像是指所述虚拟物面上的图像;
    其中,物面上传播距离相同的点的第一光场分布相同,根据物面上的各点在物面各自的坐标对所述第一光场分布平移后,得到对应点在虚拟光学面上的光场分布,并乘以对应点各自的强度或乘以对应点各自的强度和相位,得到所述物面上的各点的第二光场分布。
  14. 根据权利要求3所述的成像方法或者权利要求3所述的光学系统,其特征在于,同一时间显示的虚拟物面的多个分块之间相互存在间隙。
  15. 根据权利要求14所述的成像方法或者权利要求14所述的光学系统,其特征在于,所述间隙用0能量来填充。
  16. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,根据实际空间光调制器参数和/或入射光波长,来生成虚拟物面和/或虚拟光学调制面。
  17. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,根据眼球追踪的结果,实时构成虚拟物面和/或虚拟光学调制面。
  18. 根据权利要求3所述的成像方法或者权利要求3所述的光学系统,其特征在于,对于总能量为0的虚拟物面的分块,所述总能量为0的虚拟物面的分块对应的虚拟光学面采用复制周边总能量不为0的虚拟光学面来复制生成。
  19. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于,对调制后的光场分布信息进行编码,将虚拟物面上的图像传播后的光场与虚拟光学面叠加后再编码;或者,先对虚拟物面传播后的光场编码,再叠加虚拟光学面上虚拟光学器件对应的光学分布;其中,所述编码采用如下任一种或任多种方式的组合:
    -编码输出为纯相位格式的全息图/相息图,输出到空间光调制器成像;
    -编码采用直接丢弃强度信息,只保留相位信息并离散化的方式;
    -编码采用对虚拟物面输入信息的强度或相位进行补偿的方式;
    -编码采用通过迭代的方式反复计算虚拟物面的传播及虚拟光学面;
    -编码使用双相位编码的方式;
    -编码使用复误差分散类的方法;
    -编码对强度和/或相位进行离散化;
    -编码采用时分复用的方式使用多个子全息图显示同一子图像。
  20. 根据权利要求2所述的光学系统,其特征在于,包括光源及空间光调制器;光源输出至空间光调制器;空间光调制器在成像控制系统的控制下,接收所述光场分布信息形成光场分布。
  21. 根据权利要求20所述的光学系统,其特征在于,空间光调制器使用相位调制器件,或空间光调制器使用相位调制器件与强度调制器件的组合。
  22. 根据权利要求20所述的光学系统,其特征在于,成像控制系统包括控制电路,其中,所述控制电路用于参与输出至空间光调制器的信息的计算,空间光调制器的控制, 和/或控制调节光源。
  23. 根据权利要求20所述的光学系统,其特征在于,包括透镜系统;经过虚拟光学调制面调制后的光场分布通过所述透镜系统,得到输出图像。
  24. 根据权利要求23所述的光学系统,其特征在于,根据实际光学元器件的参数,生成虚拟光学调制面,来校正光学元器件产生的像差。
  25. 根据权利要求20所述的光学系统,其特征在于,还包含波导器件,波导器件用于扩展输出图像光场的出瞳大小和/或视场大小。
  26. 根据权利要求25所述的光学系统,其特征在于,通过虚拟光学调制面的不同设置来校正波导器件产生的像差。
  27. 根据权利要求25所述的光学系统,其特征在于,所述波导器件是主要由多个透射/反射率不同的面组成的阵列式波导器件;或者,所述波导器件是主要由衍射或全息类器件组成的波导器件。
  28. 根据权利要求23所述的光学系统,其特征在于,透镜系统调制光源输出的光束,和/或调制空间光调制器输出的光场。
  29. 根据权利要求25所述的光学系统,其特征在于,所述波导器件的入瞳小于人眼的瞳孔尺寸,光学系统中波导器件的前级系统输出的光场的出瞳与波导器件的入瞳耦合。
  30. 根据权利要求20所述的光学系统,其特征在于,所述光源包括激光器和/或发光二极管。
  31. 根据权利要求30所述的光学系统,其特征在于,所述光源还包含光纤部分,将激光器和/或发光二极管发出的光能耦合入光纤,再引导至空间光调制器。
  32. 根据权利要求20所述的光学系统,其特征在于,所述光源还包含合束器件,所述合束器件使用X棱镜、二向色镜、光纤中的任一者或任多者,将不同光源发出的光束合束输出至空间光调制器。
  33. 根据权利要求2所述的光学系统,其特征在于,还包括光阑,所述光阑遮蔽光场中不需要的部分。
  34. 根据权利要求20所述的光学系统,其特征在于,包含多个空间光调制器,多个空间光调制器各自还原的光场相叠加,来还原目标光场。
  35. 一种光学系统组合,其特征在于,包括多个权利要求2至34中任一项所述的光学系统,多个光学系统并联,输出不同光场至观看者的左右眼形成双眼视差图像,和/或输出至多个观看者。
  36. 根据权利要求1所述的成像方法或者权利要求2所述的光学系统,其特征在于输入信息除图像的光强分布信息外还包括特征信息;其中,通过所述输入信息中的图像经过处理或者不经过处理得到所述虚拟物面上的图像。
  37. 根据权利要求36所述的成像方法或控制系统,其特征在于,所述输入信息包含帧、子帧、分子帧多级结构中的一级或多级,所述多级结构按照特征信息组织。
  38. 根据权利要求36所述的成像方法或成像控制系统,其特征在于,所述输入信息的特征信息包含像素对象的成像距离、角度、帧/子帧/分子帧图像总亮度、子帧数量、分子帧数量、左右帧、接收目标、像差参数、缩放比例、消隐关系、颜色之中的至少一种;
    其中,所述像素对象包括像素点和/或像素分块。
  39. 根据权利要求36所述的成像方法或成像控制系统,其特征在于,所述输入信息是外部输入,或者存储在成像控制系统之中,或者部分存储在成像控制系统中,部分从外部输入。
  40. 根据权利要求36所述的成像方法或成像控制系统,其特征在于,所述虚拟光学调制面和/或虚拟光学面根据所述特征信息生成。
  41. 根据权利要求36所述的成像方法或成像控制系统,其特征在于,对于输入图像或输入图像的部分进行缩放,缩放比例根据特征信息生成或从特征信息之中读取。
  42. 一种存储有计算机程序的计算机可读存储介质，其特征在于，所述计算机程序被处理器执行时实现权利要求1、3至19中任一项或者36至41中任一项所述的成像方法的步骤。
  43. 一种集成有逻辑的ASIC芯片，其特征在于，所述ASIC芯片中的程序或硬件电路实现权利要求1、3至19中任一项或者36至41中任一项所述的成像方法的步骤。
PCT/CN2019/093501 2018-05-28 2019-06-28 成像方法和光学系统及其存储介质、芯片与组合 WO2019228539A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810523923.3A CN108762033B (zh) 2018-05-28 2018-05-28 成像方法和光学系统及其存储介质、芯片与组合
CN201810523923.3 2018-05-28

Publications (1)

Publication Number Publication Date
WO2019228539A1 true WO2019228539A1 (zh) 2019-12-05

Family

ID=64003126

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093501 WO2019228539A1 (zh) 2018-05-28 2019-06-28 成像方法和光学系统及其存储介质、芯片与组合

Country Status (2)

Country Link
CN (1) CN108762033B (zh)
WO (1) WO2019228539A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108762033B (zh) * 2018-05-28 2022-12-09 江苏慧光电子科技有限公司 成像方法和光学系统及其存储介质、芯片与组合
CN110308566B (zh) * 2019-06-28 2021-12-03 上海慧希电子科技有限公司 显示系统及双目系统
GB2586512B (en) * 2019-08-23 2021-12-08 Dualitas Ltd Holographic projection
GB2587400B (en) * 2019-09-27 2022-02-16 Dualitas Ltd Hologram display using a liquid crystal display device
CN111240148B (zh) * 2019-12-27 2021-08-10 北京航空航天大学 一种基于自适应变焦相机的全息实时获取与投影系统

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103955127A (zh) * 2014-04-17 2014-07-30 中国人民解放军装甲兵工程学院 一种相位调制全视差全息体视图实现方法
CN105487242A (zh) * 2011-10-20 2016-04-13 松下知识产权经营株式会社 图像显示装置
CN107438796A (zh) * 2014-12-26 2017-12-05 Cy视觉公司 近眼显示装置
CN107710080A (zh) * 2015-04-01 2018-02-16 视瑞尔技术公司 用于计算二维和/或三维场景的全息重建的全息图的方法
CN108762033A (zh) * 2018-05-28 2018-11-06 江苏慧光电子科技有限公司 成像方法和光学系统及其存储介质、芯片与组合

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP2009205711A (ja) * 2008-02-26 2009-09-10 Fuji Xerox Co Ltd 光記録装置及び光記録再生装置
CN101881936B (zh) * 2010-06-04 2013-12-25 江苏慧光电子科技有限公司 全息波导显示器及其全息图像的生成方法
CN102024272A (zh) * 2010-09-21 2011-04-20 上海大学 一种获取三维运动物体计算全息图的装置和方法
KR102144338B1 (ko) * 2015-01-05 2020-08-13 한국전자통신연구원 홀로그램 생성 장치 및 방법
CN107329256B (zh) * 2016-04-28 2022-04-05 江苏慧光电子科技有限公司 显示装置及其控制方法

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN105487242A (zh) * 2011-10-20 2016-04-13 松下知识产权经营株式会社 图像显示装置
CN103955127A (zh) * 2014-04-17 2014-07-30 中国人民解放军装甲兵工程学院 一种相位调制全视差全息体视图实现方法
CN107438796A (zh) * 2014-12-26 2017-12-05 Cy视觉公司 近眼显示装置
CN107710080A (zh) * 2015-04-01 2018-02-16 视瑞尔技术公司 用于计算二维和/或三维场景的全息重建的全息图的方法
CN108762033A (zh) * 2018-05-28 2018-11-06 江苏慧光电子科技有限公司 成像方法和光学系统及其存储介质、芯片与组合

Also Published As

Publication number Publication date
CN108762033A (zh) 2018-11-06
CN108762033B (zh) 2022-12-09

Similar Documents

Publication Publication Date Title
WO2019228539A1 (zh) 成像方法和光学系统及其存储介质、芯片与组合
US20110157667A1 (en) Holographic Image Display Systems
CN110308566B (zh) 显示系统及双目系统
JP7430699B2 (ja) 画像投影
AU2022216817B2 (en) Image projection
CN115933346A (zh) 为系统确定图像的全息图的方法
CN115808798A (zh) 全息虚拟现实显示器
Jang et al. Waveguide holography: Towards true 3d holographic glasses
Akşit et al. Holobeam: Paper-thin near-eye displays
TWI843319B (zh) 用於計算光學系統之虛擬影像之全像圖的方法、電腦可讀取媒體及系統
US20240231273A9 (en) Hologram calculation for compact head-up display
EP2527929A1 (en) Projection apparatus
RU2780511C1 (ru) Устройство дополненной реальности на основе изогнутного волновода, способ работы упомянутого устройства, очки дополненной реальности на основе упомянутого устройства
JP7572405B2 (ja) ホログラム計算
EP4273611A1 (en) Head-up display
US20240231275A1 (en) Holographic projector
US20230185101A1 (en) Augmented reality device based on curved waveguide, method therefor, augmented reality glasses based on said device
US20230367115A1 (en) Compact head-up display and waveguide therefor
Liu et al. Compact monocular 3D near-eye display
KR20240037841A (ko) 최적화된 홀로그램 업데이트
KR20240144394A (ko) 처리 수단 및 디스플레이 시스템
CN117980794A (zh) 全息系统及其光瞳扩展器
CN117111305A (zh) 一种大视场角全息近眼显示装置和显示方法
CN118567208A (zh) 光学系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19811996

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19811996

Country of ref document: EP

Kind code of ref document: A1