WO2018161883A1 - Virtual ray tracing method and dynamic light field refocusing display system - Google Patents

Virtual ray tracing method and dynamic light field refocusing display system

Info

Publication number
WO2018161883A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual reality
display
light
processor
refocusing
Prior art date
Application number
PCT/CN2018/078098
Other languages
English (en)
French (fr)
Inventor
虞启铭
曹煊
Original Assignee
叠境数字科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 叠境数字科技(上海)有限公司 filed Critical 叠境数字科技(上海)有限公司
Priority to JP2019544763A priority Critical patent/JP6862569B2/ja
Priority to US16/346,942 priority patent/US10852821B2/en
Priority to KR1020197009180A priority patent/KR102219624B1/ko
Publication of WO2018161883A1 publication Critical patent/WO2018161883A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/34Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • The invention relates to the technical field of image rendering, and in particular to a virtual ray tracing method and a dynamic light field refocusing display system.
  • The ray tracing algorithm is a photorealistic image generation method proposed in 1980. It can generate high-quality realistic graphics and is currently one of the commonly used algorithms for photorealistic 3D rendering.
  • The ray tracing algorithm is effectively an approximate inverse of the physical process of illumination: it tracks specular reflections and regular transmissions between objects, simulating the propagation of light across ideal surfaces.
  • Figure 1a illustrates ray tracing from a pinhole image and its depth map. Assume the scene depth range is [z_0, z_1] (z_1 > z_0 > 0), where z_1 may be infinite, and the distance from the sensor to the pinhole is s.
  • When rendering a depth-of-field effect, a virtual thin lens is placed at the pinhole position, with focal length f satisfying 1/s + 1/z_1 = 1/f; that is, the farthest plane z_1 is focused on the sensor. Further, assume the lens diameter is R. In such a virtual imaging system, a point in a plane z ∈ [z_0, z_1] forms a blurred image on the sensor, with a circle of confusion of size c(z) = R·s·(1/z - 1/z_1).
  • Points on the plane z_0 therefore produce the largest circle of confusion on the sensor, of radius (R·s/2)·(1/z_0 - 1/z_1).
  • The circle-of-confusion radius also corresponds to the circular disk where the light cone (the large cone in Figure 1b) intersects the plane z_0; by similar triangles, the disk radius is (R/2)·(1 - z_0/z_1) in the scene or, scaled by the magnification s/z_0, (R·s/2)·(1/z_0 - 1/z_1) on the sensor.
  • Based on this setup, we can dynamically adjust the sensor position s_f to render an image focused on the plane z_f, where z_f satisfies 1/s_f + 1/z_f = 1/f.
  • The pixel index of the original image is used to compute the light cone of each central ray. This ensures that the rendered image has the same resolution as the original image and that the same pixel position corresponds to the same position in space.
  • The final color of each pixel p is obtained by integrating all the rays in its light cone.
  • In a traditional ray tracing algorithm, each ray must be traced into the scene.
  • An image-space ray tracing algorithm, by contrast, traces a ray into the scene and then traces it back to the original image based on its three-dimensional scene coordinates.
  • Under the Lambertian assumption, the pixel color can be used directly as the color of the back-traced ray.
  • Existing virtual reality devices can only display all-in-focus images or all-in-focus virtual reality scenes, following only the direction of the wearer's head; they cannot provide content that conforms to the imaging principle of the human eye or to the eye's region of interest, so content cannot be displayed, nor refocusing achieved, strictly according to the gaze direction of the human eye.
  • The ciliary muscles of the human eye 310 can relax and contract, so that the eye can focus on objects at different distances; the eye can selectively focus on spherical surfaces of different radii, the ciliary muscles relaxing when focusing far away (i.e., on a larger-radius sphere) and tightening when focusing nearby.
  • Because existing virtual reality devices lack an effective data processing method, the human eye 310 can only see all-in-focus images or a fixed focal plane (see the fixed-focus image in FIG. 12) and cannot refocus. This does not match the visual characteristics of the human eye 310 viewing the three-dimensional world and degrades the visual experience of viewing virtual reality content. Therefore, besides the need to reduce computational complexity, there is an urgent need for a virtual reality display system that can focus dynamically according to the gaze direction of the human eye and thereby present multiple fixed focal planes.
  • To solve the existing problems, the present invention aims to provide a virtual ray tracing method and a dynamic light field refocusing display system.
  • Ray information collection, i.e., randomly retrieving pixels on an all-in-focus picture to reduce the number of ray samples; and/or simplifying the intersection search by assuming that object surfaces are smooth;
  • Color fusion is performed based on the above ray information to obtain the final display result.
  • Ray collection uses jittered sampling: the aperture is divided into several equal regions, and one pixel sample is then drawn within each of the equal regions.
  • The simplified intersection search computes l_pq directly from equation (4) and a fixed d_q, samples the rays within radius r, and initializes an offset l_pq^(0) for pixel q^(0); if the displacement of pixel q^(t) equals l_pq^(t), the intersection position is determined; otherwise the displacement of pixel q^(t+1) is set to l_pq^(t+1), and the iteration continues until the result satisfies equation (5), at which point the color of the pixel reached at the end of the iteration is returned.
  • The technical solution of the present invention further provides a virtual reality dynamic light field refocusing display system, comprising a virtual reality display content processor, a virtual reality display, an eye tracker capable of acquiring the rotation direction of the eyeball, an eye tracking information processor, and a light field refocusing processor with dynamic light field refocusing capability, wherein the virtual reality display content processor acquires the display content and is connected in turn, by data lines, to the virtual reality display, the eye tracker, the eye tracking information processor, and the light field refocusing processor.
  • The virtual reality display comprises imaging display screens corresponding to the left and right eyes, magnifying optical lenses, and positioning sensors such as a gyroscope, and is used to display the binocular stereo image produced by the virtual reality display content processor.
  • Micro LED lights are disposed around the edge of the optical lens, and a camera that captures images of the human eye is disposed above the optical lens.
  • Compared with the prior art, the present invention reduces ray-information sampling through random retrieval and simplifies the intersection search, lowering computational complexity, and erroneous samples are replaced on their own, yielding true and accurate refocusing; the virtual reality dynamic light field refocusing display system can focus dynamically according to the gaze direction of the human eye, allowing the user to focus on objects at different distances, in line with the observational characteristics of the naked eye.
  • Figure 1a is a schematic diagram of pinhole imaging
  • Figure 1b is a schematic diagram of the imaging model with a virtual thin lens added
  • Figure 1c is a schematic diagram of image-space ray tracing
  • Figure 1d is a schematic diagram of the method of the present invention.
  • Figure 2 is a schematic diagram of jittered ray sampling
  • Figures 3a and 3b are schematic diagrams analyzing erroneous ray samples
  • Figure 4a shows the gamma 2.2 curve and its gradient
  • Figure 4b is a schematic diagram of tone mapping correcting image brightness
  • FIG. 5 is a schematic diagram of the effects of different color fusion methods according to an embodiment of the present invention.
  • Figure 6 is a comparison of rendering effects
  • Figure 7 is a comparison of the depth-of-field effects of a DSLR camera and a Lytro camera
  • FIG. 8 shows special (tilt-shift) focusing effects rendered by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of the virtual reality dynamic light field refocusing display system
  • FIG. 10 is a schematic structural diagram of a virtual reality headset
  • FIG. 11 is a schematic structural diagram of the virtual reality display
  • Figure 12 is a schematic diagram of a single fixed focal plane
  • Figure 13 is a schematic diagram of multiple fixed focal planes
  • Reference numerals: virtual reality display content processor 100, virtual reality display 200, eye tracker 110, eye direction information processor 120, light field refocusing processor 130, camera 1101, micro LED light 1102, optical lens 2201, human eye 310, imaging display screen 2101, human eye region image 2202, left-eye image 5101, right-eye image 5102.
  • Unlike the traditional ray tracing algorithm, this embodiment achieves the same effect with only a single all-in-focus photo and its corresponding depth map, which greatly simplifies computation; the following presents a random retrieval algorithm that exploits statistical prior probability, and finally gives a complete description of a virtual ray tracing method whose results are comparable to a DSLR camera.
  • In the real world, when the human eye 310 views the three-dimensional world, the superior and inferior rectus muscles, the medial and lateral rectus muscles, and the superior and inferior oblique muscles control the rotation of the eyeball, enabling the human eye 310 to see scenes or targets in different directions.
  • Because existing virtual reality headsets cannot locate the rotation of the eyeball, the gaze direction of the human eye 310 can only be inferred from the rotation of the head as determined by built-in or external sensors; owing to the flexibility of the human eye 310, its gaze direction is not fully consistent with the head direction, which makes it impossible to accurately obtain gaze-direction information and hence to display the correct picture.
  • Referring to FIGS. 9 to 11, the virtual reality dynamic light field refocusing display system used in implementing the present invention mainly includes five modules: a virtual reality display content processor 100; a virtual reality display 200 (a virtual reality headset, such as an HTC VIVE-style VR head-mounted display); an eye tracker 110 capable of acquiring the rotation direction of the eyeball; an eye tracking information processor 120; and a light field refocusing processor 130 with dynamic light field refocusing capability.
  • The virtual reality display content processor 100 mainly projects and transforms the scene in the display content according to the camera imaging principle of computer vision, obtaining two different images for the left and right eyes; the parallax between the images preserves the scene's stereo information in the displayed images.
  • The virtual reality display 200 mainly comprises imaging display screens 2101 corresponding to the left and right eyes, magnifying optical lenses 2201, and positioning sensors such as a gyroscope, and is used to display the binocular stereo image produced by the virtual reality display content processor.
  • Micro LED lights 1102 surround the edge of the optical lens 2201, and a camera 1101 for capturing images of the human eye is disposed above the optical lens 2201.
  • The eye tracker 110, which detects the rotation direction of the eyeball, mainly relies on the camera 1101 in the virtual reality device to monitor the rotation of the human eye 310, collecting eye images in real time and transmitting them to the eye direction information processor 120.
  • The eye direction information processor 120 mainly obtains the rotation direction of the eyeball from the human eye region image 2202 via eye-tracking techniques, recording it as O_eye, i.e., the direction the current user is interested in.
  • From this direction information, the intersection point P between the human eye's line of sight and the content displayed inside the display system (i.e., the displayed image) can be determined; from the position of P, the depth of the scene at that point, i.e., the position of the focal plane, is known. The virtual ray tracing algorithm is then used to compute the refocused image of the region or plane the eye attends to, while regions outside it are blurred, reproducing the visual experience of the human eye in the real world; this work is mainly completed in the final light field refocusing processor 130.
  • After the display content processed by the light field refocusing processor 130 is acquired, it is transmitted to the virtual reality display content processor 100 for processing to obtain the binocular stereo image of the scene, realizing the final virtual reality dynamic light field refocusing display.
  • The device part of this embodiment can detect the rotation direction of the human eye 310, so that dynamic light field refocusing can be applied to the display content according to the region the eye attends to; the displayed content thus conforms to the viewing principle of the human eye, and users obtain a better experience.
  • Based on the above device, the method part of this embodiment mainly comprises the following steps: obtaining an all-in-focus picture and its depth information;
  • Ray information collection, i.e., randomly retrieving pixels on the all-in-focus picture to reduce the number of ray samples; and/or simplifying the intersection search by assuming that object surfaces are smooth;
  • Color fusion is performed based on the above ray information to obtain the final display result.
  • In the above operations, intersection points can be determined discretely along the ray direction: the goal is to determine whether each discrete depth z along the ray corresponds to a point in space; if such a point exists, it can be back-projected into the image, and the depth of that pixel confirmed to be z.
  • Suppose a ray "stops" at a distance r from the center of the thin lens, and a point Q of depth z_Q ∈ (z_0, z_1) lies along that ray.
  • By similar triangles, the distance of Q from the lens's central axis is r·z_Q·(1/z_Q - 1/z_1); hence, letting the central pixel of the image be p and the projection of Q in the image be q, the distance between p and q is |pq| = s·r·(1/z_Q - 1/z_1).
  • The scene depth range is [z_0, z_1]: ray segments at different depths are projected onto the image segment [q_0, q_1], where q_0 and q_1 are the projections of Q_0 and Q_1 in the image, at depths z_0 and z_1 respectively.
  • The value of z_q can be extracted from the depth map.
  • Both sides of equation (4) can be computed from the depth information, so image-space ray tracing avoids reconstructing a 3D model of the scene.
  • This embodiment discretizes the color image and the depth image onto the two-dimensional plane; therefore, even when the constraint of equation (3) is satisfied, the ray is not guaranteed to intersect an object in space. To alleviate this problem, this embodiment relaxes the constraint so that the two sides of the equation need only agree to within one pixel (equation (5)).
  • Since the input of the present invention is only a single color picture, the content (color and geometry) of occluded scene regions cannot be detected. The solution in this embodiment is to assume that the surface (geometry and texture) is symmetric about the center of the thin lens: if l_pq is occluded, this embodiment uses the pixel at -l_pq.
  • Image-domain ray tracing in the present invention is similar to conventional geometric-space ray tracing in that its computational complexity is high, especially in refocusing applications.
  • This embodiment adopts jittered sampling:
  • The first step is to divide the aperture evenly into several grid cells; the second step is to randomly sample one pixel within each cell.
  • If each grid cell is a fraction of the original aperture size, the computational complexity is reduced accordingly, since only one ray is traced per cell.
  • However, each grid cell should not be characterized by only one ray, and random sampling does not guarantee the quality of the sampled rays.
  • The present invention also proposes simplifying the intersection search, i.e., reducing K, by assuming that object surfaces in space are smooth to accelerate the search.
  • Instead of checking pixel by pixel whether each q ∈ [q_0, q_1] satisfies constraint (4), l_pq is computed directly from equation (4) and a fixed d_q. Referring to Figure 1d, for a ray sampled within radius r, an initial offset l_pq^(0) (t denotes the iteration count) is assigned to pixel q^(0).
  • The present invention can also be accelerated by a random retrieval method.
  • Although randomly retrieving a single ray does not guarantee a perfectly correct intersection every time, the probability of error is very low, because a wrongly returned pixel may be the correct sample of another ray; that is, an erroneous sample need not be treated as an outlier: it is still very likely an intersection and remains valid for the final refocus rendering.
  • This embodiment distinguishes correct and erroneous samples by different color shading.
  • Erroneous sample points may fall outside the blur convolution kernel, and erroneous samples are very few compared with the total ray samples.
  • The performance of approximating the response g_p(I) with the gamma 2.2 curve is analyzed first.
  • The gradient of the curve in the high-luminance region is steeper than in the low-luminance region, indicating that highlights are compressed more, compared with integrating intensity directly.
  • When fusing high and low luminance, the gamma 2.2 curve gives the highlight portion a higher weight.
  • The values of the Gaussian weighting parameters, including σ, control the color rendition.
  • Figure 5 compares the different color fusion methods.
  • Linear fusion directly averages over the integration domain.
  • Gamma 2.2 shifts colors toward blue.
  • Gaussian-weighted color fusion is preferred, as its results look comparatively realistic.
  • This embodiment can generate very high-quality depth-of-field effects; in particular, the results match those of DSLR and Lytro cameras, yet this embodiment needs only an ordinary mobile camera to achieve them (even the grayscale reproductions in the drawings demonstrate the comparative technical advantage of the present invention; the same applies below).
  • The present invention is further applicable to tilt-shift photography, a technique that requires special focusing within a small scene. As the four comparison groups in Fig. 8 show, there is no need to use a special lens to obtain the tilt-shift effect as in the prior art: the user only needs to modify the depth map slightly, and the present invention simulates the tilt-shift effect very simply. The process is similar to the above and is therefore not repeated.
  • This embodiment reduces ray-information sampling through random retrieval and simplifies the intersection search, lowering computational complexity, while erroneous samples are replaced on their own, yielding true and accurate refocusing; the virtual reality dynamic light field refocusing display system can focus dynamically according to the gaze direction of the human eye, allowing the user to focus on objects at different distances, in line with the observational characteristics of the naked eye.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention relates to a virtual ray tracing method and a dynamic light field refocusing display system. Ray information is collected by randomly retrieving pixels on an all-in-focus picture to reduce the number of ray samples, and/or the intersection search is simplified by assuming that object surfaces are smooth. The system comprises a virtual reality display content processor, a virtual reality display, an eye tracker, an eye tracking information processor, and a light field refocusing processor. The present invention reduces computational complexity and replaces erroneous samples on its own, obtaining true and accurate refocusing; it can focus dynamically according to the gaze direction of the human eye, allowing the user to focus on objects at different distances, in line with the observational characteristics of the naked eye.

Description

Virtual Ray Tracing Method and Dynamic Light Field Refocusing Display System
Technical Field
The present invention relates to the technical field of image rendering, and in particular to a virtual ray tracing method and a dynamic light field refocusing display system.
Background Art
Ray tracing is a photorealistic rendering method proposed in 1980 that can generate high-quality realistic graphics; it is currently one of the commonly used algorithms for photorealistic 3D graphics. The ray tracing algorithm is essentially an approximate inverse of the physical process of illumination: it tracks specular reflections and regular transmissions between objects, simulating the propagation of light across ideal surfaces.
Referring to Figure 1a, which illustrates ray tracing from a pinhole image and a depth map: assume the scene depth range is [z_0, z_1] (z_1 > z_0 > 0), where z_1 may be infinite, and the distance from the sensor to the pinhole is s.
Referring to Figure 1b, when rendering a depth-of-field effect, suppose a virtual thin lens is placed at the pinhole position, with focal length f satisfying
1/s + 1/z_1 = 1/f    (1)
that is, the farthest plane z_1 is focused on the sensor. Further, assume the lens diameter is R. In such a virtual imaging system, a point in a plane z ∈ [z_0, z_1] forms a blurred image on the sensor, with a circle of confusion of size
c(z) = R·s·(1/z - 1/z_1)    (2)
From equation (2), points on the plane z_0 produce the largest circle of confusion on the sensor, of radius (R·s/2)·(1/z_0 - 1/z_1).
The circle-of-confusion radius also corresponds to the circular disk where the light cone (the large cone in Figure 1b) intersects the plane z_0. Referring to Figure 1b, by similar triangles the disk radius can be written as (R/2)·(1 - z_0/z_1) in the scene or, equivalently, scaled by the magnification s/z_0, as (R·s/2)·(1/z_0 - 1/z_1) on the sensor.
Based on the above setup, we can dynamically adjust the sensor position s_f to render a picture focused on the plane z_f, where z_f satisfies 1/s_f + 1/z_f = 1/f. As shown in Figure 1c, the original picture is indexed by pixel and the light cone of each central ray is computed; this guarantees that the rendered image has the same resolution as the original image and that the same pixel position corresponds to the same position in space.
In the final image, the color of each pixel p is obtained by integrating all rays within its light cone. A traditional ray tracing algorithm must trace every ray into the scene. An image-space ray tracing algorithm, however, traces a ray into the scene and then traces it back to the original image using its three-dimensional scene coordinates; under the Lambertian assumption, the pixel color can be used directly as the color of the back-traced ray.
To determine the color of a ray, locating the intersection of the ray with objects in space is essential. In light-field-based algorithms, ray tracing is mainly obtained by forward warping: the surrounding light field views are warped using the depth map of the central view. This approach is applicable only when rays are sampled densely; moreover, rendering each image requires creating a large number of virtual viewpoints, so the algorithmic complexity is very high. The computational complexity of ray-traced refocusing therefore needs to be reduced.
In addition, as a concrete application of the ray tracing computation: existing virtual reality devices can only display all-in-focus images or all-in-focus virtual reality scenes, following only the direction of the wearer's head; they cannot provide viewing content that conforms to the imaging principle of the human eye or to the eye's region of interest, so content cannot be displayed, nor refocusing achieved, strictly according to the eye's gaze direction. Referring to Figure 13, the ciliary muscles of the human eye 310 can relax and contract, allowing the eye to focus on objects at different distances; the eye can selectively focus on spherical surfaces of different radii, the ciliary muscles relaxing when focusing far away (i.e., on the larger-radius sphere 320) and tightening when focusing nearby (i.e., on the smaller-radius spheres 340, 330). However, for lack of an effective data processing method, existing virtual reality devices let the human eye 310 see only all-in-focus images, that is, images with a fixed focal plane (see the fixed-focus image in Figure 12), without the ability to refocus. This does not match the visual characteristics of the human eye 310 viewing the three-dimensional world and degrades the visual experience of viewing virtual reality content. Therefore, besides simplifying the computation, there is an urgent need for a virtual reality display system that focuses dynamically according to the eye's gaze direction so that multiple fixed focal planes can be seen.
Summary of the Invention
To solve the existing problems, the present invention aims to provide a virtual ray tracing method and a dynamic light field refocusing display system.
The technical solution adopted by the present invention comprises the following steps:
obtaining an all-in-focus picture and its depth information;
collecting ray information, i.e., randomly retrieving pixels on the all-in-focus picture to reduce the number of ray samples, and/or simplifying the intersection search by assuming that object surfaces are smooth;
performing color fusion based on the above ray information to obtain the final display result.
Here, ray collection uses jittered sampling: the aperture is divided into several equal regions, and one pixel sample is then drawn within each of the equal regions.
In the simplified intersection search, l_pq is computed directly from equation (4) and a fixed d_q; the rays within radius r are sampled, and an offset l_pq^(0) is initialized for pixel q^(0). If the displacement of pixel q^(t) equals l_pq^(t), the intersection position is determined; otherwise the displacement of pixel q^(t+1) is set to l_pq^(t+1), and the iteration continues until the result satisfies equation (5), at which point the color of the pixel reached at the end of the iteration is returned.
The technical solution of the present invention further provides a virtual reality dynamic light field refocusing display system, comprising a virtual reality display content processor, a virtual reality display, an eye tracker capable of acquiring the rotation direction of the eyeball, an eye tracking information processor, and a light field refocusing processor with dynamic light field refocusing capability, wherein the virtual reality display content processor acquires the display content and is connected in turn, by data lines, to the virtual reality display, the eye tracker, the eye tracking information processor, and the light field refocusing processor. The virtual reality display comprises imaging display screens corresponding to the left and right eyes, magnifying optical lenses, and positioning sensors, and is used to display the binocular stereo image produced by the virtual reality display content processor.
In the virtual reality display, micro LED lights are disposed on the edge of the optical lens, and a camera for capturing images of the human eye is disposed above the optical lens.
Compared with the prior art, the present invention reduces ray-information sampling through random retrieval and simplifies the intersection search, reducing computational complexity, and erroneous samples are replaced on their own, yielding true and accurate refocusing; the virtual reality dynamic light field refocusing display system can focus dynamically according to the gaze direction of the human eye, allowing the user to focus on objects at different distances, in line with the observational characteristics of the naked eye.
Brief Description of the Drawings
Figure 1a is a schematic diagram of pinhole imaging;
Figure 1b is a schematic diagram of the imaging model with a virtual thin lens added;
Figure 1c is a schematic diagram of image-space ray tracing;
Figure 1d is a schematic diagram of the method of the present invention;
Figure 2 is a schematic diagram of jittered ray sampling;
Figures 3a and 3b are schematic diagrams analyzing erroneous ray samples;
Figure 4a shows the gamma 2.2 curve and its gradient;
Figure 4b is a schematic diagram of tone mapping correcting image brightness;
Figure 5 shows the effects of different color fusion methods in an embodiment of the present invention;
Figure 6 is a comparison of rendering effects;
Figure 7 compares the depth-of-field effects of a DSLR camera and a Lytro camera;
Figure 8 shows the special (tilt-shift) focusing effects rendered by an embodiment of the present invention;
Figure 9 is a schematic structural diagram of the virtual reality dynamic light field refocusing display system;
Figure 10 is a schematic structural diagram of the virtual reality headset;
Figure 11 is a schematic structural diagram of the virtual reality display;
Figure 12 is a schematic diagram of a single fixed focal plane;
Figure 13 is a schematic diagram of multiple fixed focal planes.
Referring to the drawings: virtual reality display content processor 100, virtual reality display 200, eye tracker 110, eye direction information processor 120, light field refocusing processor 130, camera 1101, micro LED light 1102, optical lens 2201, human eye 310, imaging display screen 2101, human eye region image 2202, left-eye image 5101, right-eye image 5102.
Detailed Description of the Embodiments
The present invention is now further described with reference to the accompanying drawings.
Unlike the traditional ray tracing algorithm, this embodiment achieves the same effect with only a single all-in-focus photo and its corresponding depth map, which greatly simplifies computation. The following presents a random retrieval algorithm that exploits statistical prior probability and finally gives a complete description of a virtual ray tracing method whose results are comparable to a DSLR camera.
In the real world, when the human eye 310 views the three-dimensional world, the superior and inferior rectus muscles, the medial and lateral rectus muscles, and the superior and inferior oblique muscles control the rotation of the eyeball, enabling the human eye 310 to see scenes or targets in different directions. However, because existing virtual reality headsets cannot locate the rotation of the eyeball, they can only infer the gaze direction of the human eye 310 from the rotation of the head as determined by built-in or external sensors; owing to the flexibility of the human eye 310, its gaze direction is not fully consistent with the head direction, which makes it impossible to accurately obtain gaze-direction information and hence to display the correct picture. Referring to Figures 9 to 11, to solve these problems, the virtual reality dynamic light field refocusing display system used in implementing the present invention mainly comprises five modules: a virtual reality display content processor 100; a virtual reality display 200 (a virtual reality headset, such as an HTC VIVE-style VR head-mounted display); an eye tracker 110 capable of acquiring the rotation direction of the eyeball; an eye tracking information processor 120; and a light field refocusing processor 130 with dynamic light field refocusing capability. The data flow among these modules is sketched below.
The virtual reality display content processor 100 mainly projects and transforms the scene in the display content according to the camera imaging principle of computer vision, obtaining two different images for the left and right eyes; the parallax between the images preserves the scene's stereo information in the displayed images.
Referring to Figures 11 and 10, the virtual reality display 200 mainly comprises imaging display screens 2101 corresponding to the left and right eyes, magnifying optical lenses 2201, and positioning sensors such as a gyroscope, and is used to display the binocular stereo image produced by the virtual reality display content processor. Micro LED lights 1102 surround the edge of the optical lens 2201, and a camera 1101 for capturing images of the human eye is disposed above the optical lens 2201.
The eye tracker 110, which detects the rotation direction of the eyeball, mainly relies on the camera 1101 in the virtual reality device to monitor the rotation of the human eye 310, collecting eye images in real time and transmitting them to the eye direction information processor 120.
The eye direction information processor 120 mainly obtains the rotation direction of the eyeball from the human eye region image 2202 via eye-tracking techniques, recording it as O_eye, i.e., the direction the current user is interested in.
After the viewing direction of the human eye 310 has been acquired, the intersection point P between the eye's line of sight and the content displayed inside the display system (i.e., the displayed image) can be determined from the direction information; from the position of P, the depth of the scene at that point, i.e., the position of the focal plane, is known. The virtual ray tracing algorithm is then used to compute the refocused image of the region, or plane, that the eye attends to, while regions outside it are blurred, reproducing the visual experience of the human eye in the real world; this work is mainly completed in the final light field refocusing processor 130.
After the display content processed by the light field refocusing processor 130 is acquired, it is transmitted to the virtual reality display content processor 100 for processing to obtain the binocular stereo image of the scene, realizing the final virtual reality dynamic light field refocusing display.
The device part of this embodiment can detect the rotation direction of the human eye 310, so that dynamic light field refocusing can be applied to the display content according to the region the eye attends to; the displayed content thus conforms to the viewing principle of the human eye, giving the user a better experience.
Based on the above device, the method part of this embodiment mainly comprises the following steps:
obtaining an all-in-focus picture and its depth information;
collecting ray information, i.e., randomly retrieving pixels on the all-in-focus picture to reduce the number of ray samples, and/or simplifying the intersection search by assuming that object surfaces are smooth;
performing color fusion based on the above ray information to obtain the final display result.
In the above operations, intersection points can generally be determined discretely along the ray direction: the goal is to determine whether each discrete depth z along the ray corresponds to a point in space; if such a point exists, it can be back-projected into the image, and the depth of that pixel confirmed to be z. Suppose a ray "stops" at a distance r from the center of the thin lens, and a point Q of depth z_Q ∈ (z_0, z_1) lies along that ray. By similar triangles, the distance of Q from the lens's central axis is r·z_Q·(1/z_Q - 1/z_1). Hence, letting the central pixel of the image be p and the projection of Q in the image be q, the distance between p and q is |pq| = s·r·(1/z_Q - 1/z_1).
The scene depth range is [z_0, z_1]; referring to Figure 1d, ray segments at different depths are projected onto the image segment [q_0, q_1], where q_0 and q_1 are the projections of Q_0 and Q_1 on the image, at depths z_0 and z_1 respectively. In the image-space ray tracing algorithm, we must find in the image the pixel q ∈ [q_0, q_1] corresponding to the nearest intersection of the ray with an object in space. Let q be the projection of the nearest intersection; then q satisfies
|pq| = s·r·(1/z_q - 1/z_1)    (3)
where z_q can be extracted from the depth map.
All of the above analysis is based on depth information; next, the depth information is converted into pixel offsets in the image. The offset is normalized to [0, 1] by the similar-triangle relation, with offset 0 representing infinite depth and 1 the nearest depth, i.e. d_q = (1/z_q - 1/z_1)/(1/z_0 - 1/z_1). Assuming a mask of size K (= sR) is used, equation (3) can be simplified to
l_pq = (r/R)·K·d_q    (4)
where l_pq denotes the offset between p and q. Both sides of equation (4) can be computed from the depth information, so image-space ray tracing avoids reconstructing a 3D model of the scene.
In fact, the computational complexity of this method is O(NMK), where N is the number of pixels in the input picture, M is the number of pixels in the mask (M matches the aperture size, i.e., it is proportional to K²), and K relates to nearest-intersection detection. Unlike ray tracing against a 3D model, image-space ray tracing has two main problems:
First, this embodiment discretizes the color image and the depth image onto the two-dimensional plane, so even when the constraint of equation (3) is satisfied, the ray is not guaranteed to intersect an object in space. To mitigate this, this embodiment relaxes the constraint to
|l_pq - (r/R)·K·d_q| < 1    (5)
that is, the two sides need only agree to within one pixel. Second, since the input of the present invention is only a single color picture, the content (color and geometry) of occluded scene regions cannot be detected. The solution in this embodiment is to assume that the surface (geometry and texture) is symmetric about the center of the thin lens: if l_pq is occluded, this embodiment uses the pixel at -l_pq.
Image-domain ray tracing in the present invention, like conventional geometric-space ray tracing, has high computational complexity, especially in refocusing applications. As noted above, M is proportional to K², so the computational complexity grows rapidly as the radius increases (O(NMK) = O(NK³)); for a large aperture the computation becomes very expensive, making real-time rendering hard to achieve.
In ray information collection, reducing the sampling of ray information by random sampling can greatly reduce M. Referring to Figure 2, this embodiment adopts jittered sampling:
Step 1: divide the aperture evenly into several grid cells;
Step 2: randomly sample one pixel within each grid cell.
If each grid cell is a fraction of the original aperture size, the computational complexity is reduced accordingly, since only one ray is traced per cell. Although jittered random sampling guarantees the correctness of each individual sample, and the sparsity and randomness of the sampling produce fairly good visual results in scenes with high contrast and rich color, some problems remain: first, each grid cell should not be characterized by only one ray; second, random sampling cannot guarantee the quality of the sampled rays. These two factors make the blur of adjacent pixels inconsistent and introduce random noise.
The present invention further proposes simplifying the intersection search, i.e., reducing K, by assuming that object surfaces in space are smooth so as to accelerate the search. When detecting intersections, instead of checking pixel by pixel whether each q ∈ [q_0, q_1] satisfies constraint (4), l_pq is computed directly from equation (4) and a fixed d_q. Referring to Figure 1d, for a ray sampled within radius r, an initial offset l_pq^(0) (t denotes the iteration count) is assigned to pixel q^(0). Given a q^(t), l_pq^(t+1) can be computed: if the displacement of pixel q^(t) equals l_pq^(t), the intersection position is determined; otherwise the displacement of pixel q^(t+1) is set to l_pq^(t+1), and the iteration continues until the result satisfies equation (5), at which point the color of the pixel reached at the end of the iteration is returned.
Because object surfaces in real scenes are usually smooth and continuous, adjacent pixels have a very high probability of sharing the same depth value; that is, very likely d_{q^(t+1)} = d_{q^(t)}, in which case the intersection is found within only a few iterations.
Referring to Figures 3a and 3b, the present invention can further be accelerated by random retrieval. Although randomly retrieving a single ray does not guarantee a perfectly correct intersection every time, the probability of error is very low, because a wrongly returned pixel may be the correct sample of another ray; in other words, an erroneous sample need not be treated as an outlier: it is still very likely an intersection and remains valid for the final refocus rendering.
To better handle focusing on foreground objects, suppose the returned pixel q is an erroneous intersection and q is farther in depth than p, i.e., d_p > d_q; we then replace the color value of q with that of p. Thus, although random intersection retrieval may return erroneous intersection values, after all sampled rays are fused the refocusing result remains fairly true and accurate, while the computational complexity drops to O(NM).
Figures 3a and 3b show the results of sampling rays with the method of this embodiment; correct and erroneous samples are distinguished by different color shading. As noted above, erroneous sample points may fall outside the blur convolution kernel, and erroneous samples are very few compared with all ray samples.
In real photography, the conversion from linear scene radiance J to digital image intensity I is highly nonlinear because of high-dynamic-range compression. In a DSLR camera this nonlinear compression can be modeled by a function f: I = f(J). Ideal depth-of-field synthesis would use J = f⁻¹(I), and the final image I′ would be obtained by integrating the linearized values over the kernel and re-applying the response (equation (6)), where p and q denote image coordinates, Ω_p denotes the convolution kernel centered at p, and g_p(I) denotes the overall transformation applied to the image.
However, directly computing g_p(I) poses two problems. First, the response function of the input image is unknown; it is usually approximated by a gamma tone curve with parameter 2.2, but the camera's true response function deviates from gamma 2.2. Second, a single response function is insufficient to represent the color processing model. Referring to Figure 5, the result of the gamma 2.2 approximation visibly shows color cast. To reduce color cast and nonlinear effects, we instead obtain the result by directly weighting and integrating pixel colors according to image content.
This embodiment first analyzes the performance of approximating the response g_p(I) with the gamma 2.2 curve. Referring to Figure 4a, the gradient of the curve in the high-luminance region is steeper than in the low-luminance region, indicating that highlights are compressed more, compared with integrating intensity directly. Referring to Figure 4b, when fusing high and low luminance, the gamma 2.2 curve gives the highlight portion a higher weight. This embodiment performs color fusion with a Gaussian-weighted formula (equation (7)), in which I(q)_max = max{I_r(q), I_g(q), I_b(q)} is the maximum color channel of the integrated pixel; the values of the Gaussian weighting parameters, including σ, control the color rendition. Figure 5 compares the different color fusion methods: linear fusion directly averages over the integration domain; gamma 2.2 shifts colors toward blue; this embodiment prefers Gaussian-weighted color fusion, whose results look comparatively realistic.
Referring to Figures 7 and 6, the comparison shows that this embodiment generates very high-quality depth-of-field effects; in particular, the results match those of DSLR and Lytro cameras, yet this embodiment needs only an ordinary mobile camera to achieve them (even the grayscale reproductions in the drawings demonstrate the comparative technical advantage of the present invention; the same applies below). Referring to Figure 8, the present invention can further be applied to tilt-shift photography, a technique that requires special focusing within a small scene. The four comparison groups in Figure 8 show that, unlike the prior art, no special lens is needed to obtain the tilt-shift effect: the user only needs to modify the depth map slightly, and the present invention simulates the tilt-shift effect very simply. The process is similar to the above and is therefore not repeated.
In summary, this embodiment reduces ray-information sampling through random retrieval and simplifies the intersection search, reducing computational complexity, while erroneous samples are replaced on their own, yielding true and accurate refocusing; the virtual reality dynamic light field refocusing display system can focus dynamically according to the gaze direction of the human eye, allowing the user to focus on objects at different distances, in line with the observational characteristics of the naked eye.
The embodiments of the present invention have been described above with reference to the drawings and examples; the structures given in the embodiments do not limit the present invention, and those skilled in the art may make adjustments as needed, and all variations and modifications within the scope of the appended claims fall within the scope of protection.

Claims (6)

  1. A virtual ray tracing method, characterized by comprising the following steps:
    obtaining an all-in-focus picture and its depth information;
    collecting ray information, i.e., randomly retrieving pixels on the all-in-focus picture to reduce the number of ray samples;
    and/or simplifying the intersection search by assuming that object surfaces are smooth;
    performing color fusion based on the above ray information to obtain the final display result.
  2. The virtual ray tracing method according to claim 1, characterized in that ray collection uses jittered sampling: the aperture is divided into several equal regions, and one pixel sample is then drawn within each of the equal regions.
  3. The virtual ray tracing method according to claim 1, characterized in that, in the simplified intersection search, l_pq is computed directly from equation (4) and a fixed d_q; the rays within radius r are sampled, and an offset l_pq^(0) is initialized for pixel q^(0); if the displacement of pixel q^(t) equals l_pq^(t), the intersection position is determined; otherwise the displacement of pixel q^(t+1) is set to l_pq^(t+1), and the iteration continues until the result satisfies equation (5), at which point the color of the pixel at the end of the iteration is returned.
  4. A virtual reality dynamic light field refocusing display system based on claim 1, 2 or 3, characterized by comprising a virtual reality display content processor, a virtual reality display, an eye tracker capable of acquiring the rotation direction of the eyeball, an eye tracking information processor, and a light field refocusing processor with dynamic light field refocusing capability, wherein the virtual reality display content processor acquires the display content and is connected in turn, by data lines, to the virtual reality display, the eye tracker, the eye tracking information processor, and the light field refocusing processor.
  5. The virtual reality dynamic light field refocusing display system according to claim 4, characterized in that the virtual reality display comprises imaging display screens corresponding to the left and right eyes, magnifying optical lenses, and positioning sensors, and is used to display the binocular stereo image produced by the virtual reality display content processor.
  6. The virtual reality dynamic light field refocusing display system according to claim 4, characterized in that, in the virtual reality display, micro LED lights are disposed on the edge of the optical lens, and a camera for capturing images of the human eye is disposed above the optical lens.
PCT/CN2018/078098 2017-03-09 2018-03-06 Virtual ray tracing method and dynamic light field refocusing display system WO2018161883A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2019544763A 2017-03-09 2018-03-06 Virtual ray tracing method and dynamic light field refocusing display system
US16/346,942 US10852821B2 (en) 2017-03-09 2018-03-06 Virtual ray tracing method and dynamic light field refocusing display system
KR1020197009180A 2017-03-09 2018-03-06 Virtual ray tracing method and dynamic light field refocusing display system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710137526.8A 2017-03-09 Virtual ray tracing method and dynamic light field refocusing display system
CN201710137526.8 2017-03-09

Publications (1)

Publication Number Publication Date
WO2018161883A1 true WO2018161883A1 (zh) 2018-09-13

Family

ID=60486245

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078098 WO2018161883A1 (zh) Virtual ray tracing method and dynamic light field refocusing display system 2017-03-09 2018-03-06

Country Status (5)

Country Link
US (1) US10852821B2 (zh)
JP (1) JP6862569B2 (zh)
KR (1) KR102219624B1 (zh)
CN (1) CN107452031B (zh)
WO (1) WO2018161883A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256783A (zh) Instant radiosity rendering method based on eye tracking

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107452031B (zh) Virtual ray tracing method and dynamic light field refocusing display system
CN109242900B (zh) Focal plane positioning method, processing apparatus, focal plane positioning system, and storage medium
CN109360235B (zh) Hybrid depth estimation method based on light field data
CN110365965B (zh) Three-dimensional light field image generation method and apparatus
CN110689498B (zh) High-definition video optimization method based on hierarchical blurring of non-attended regions
CN111650754B (zh) Head-up display device
KR20220067950A (ko) Display apparatus and control method thereof
CN112967370B (zh) Three-dimensional light field reconstruction method, apparatus, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110316968A1 (en) * 2010-06-29 2011-12-29 Yuichi Taguchi Digital Refocusing for Wide-Angle Images Using Axial-Cone Cameras
CN105144245A (zh) Apparatus and method for radiance transfer sampling for augmented reality
CN105812778A (zh) Binocular AR head-mounted display device and information display method thereof
CN106408643A (zh) Image-space-based image depth-of-field simulation method
CN107452031A (zh) Virtual ray tracing method and dynamic light field refocusing display system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020013573A1 (en) * 1995-10-27 2002-01-31 William B. Telfair Apparatus and method for tracking and compensating for eye movements
US6028606A (en) * 1996-08-02 2000-02-22 The Board Of Trustees Of The Leland Stanford Junior University Camera simulation system
JP3629243B2 (ja) Image processing apparatus and method for performing rendering shading using distance components from modeling
US20100110069A1 (en) * 2008-10-31 2010-05-06 Sharp Laboratories Of America, Inc. System for rendering virtual see-through scenes
US8493383B1 (en) * 2009-12-10 2013-07-23 Pixar Adaptive depth of field sampling
JP6141584B2 (ja) Compact eye-tracked head-mounted display
US9171394B2 (en) * 2012-07-19 2015-10-27 Nvidia Corporation Light transport consistent scene simplification within graphics display system
CA2931776A1 (en) * 2013-11-27 2015-06-04 Magic Leap, Inc. Virtual and augmented reality systems and methods
US9905041B2 (en) * 2014-11-24 2018-02-27 Adobe Systems Incorporated Depth of field synthesis using ray tracing approximation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110316968A1 (en) * 2010-06-29 2011-12-29 Yuichi Taguchi Digital Refocusing for Wide-Angle Images Using Axial-Cone Cameras
CN105144245A (zh) Apparatus and method for radiance transfer sampling for augmented reality
CN105812778A (zh) Binocular AR head-mounted display device and information display method thereof
CN106408643A (zh) Image-space-based image depth-of-field simulation method
CN107452031A (zh) Virtual ray tracing method and dynamic light field refocusing display system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256783A (zh) Instant radiosity rendering method based on eye tracking

Also Published As

Publication number Publication date
US10852821B2 (en) 2020-12-01
KR20190043590A (ko) 2019-04-26
KR102219624B1 (ko) 2021-02-23
JP6862569B2 (ja) 2021-04-21
JP2020504984A (ja) 2020-02-13
US20200057492A1 (en) 2020-02-20
CN107452031B (zh) 2020-06-26
CN107452031A (zh) 2017-12-08

Similar Documents

Publication Publication Date Title
WO2018161883A1 (zh) Virtual ray tracing method and dynamic light field refocusing display system
JP6246757B2 (ja) Method and system for representing virtual objects in a view of a real environment
EP3101624B1 (en) Image processing method and image processing device
US6618054B2 (en) Dynamic depth-of-field emulation based on eye-tracking
WO2018019282A1 (zh) Binocular panoramic image acquisition method, apparatus, and storage medium
US20130335535A1 (en) Digital 3d camera using periodic illumination
Fuhl et al. Fast camera focus estimation for gaze-based focus control
WO2021197370A1 (zh) Light field display method and system, storage medium, and display panel
JP7479729B2 (ja) Three-dimensional representation method and representation apparatus
CN109064533B (zh) 3D roaming method and system
CN113724391A (zh) Three-dimensional model construction method and apparatus, electronic device, and computer-readable medium
Zhai et al. Image real-time augmented reality technology based on spatial color and depth consistency
US20230334806A1 (en) Scaling neural representations for multi-view reconstruction of scenes
JP3387856B2 (ja) Image processing method, image processing apparatus, and storage medium
CN114157853A (zh) Apparatus and method for generating a data representation of a pixel beam
JP3392078B2 (ja) Image processing method, image processing apparatus, and storage medium
CN113821107B (zh) Real-time free-viewpoint indoor and outdoor naked-eye 3D system
TWI831583B (zh) Virtual reality device adjustment method and apparatus, electronic device, and storage medium
Zhang et al. [Poster] an accurate calibration method for optical see-through head-mounted displays based on actual eye-observation model
Du et al. Location Estimation from an Indoor Selfie
Corke et al. Image Formation
WO2023200936A1 (en) Scaling neural representations for multi-view reconstruction of scenes
Corke et al. Image Formation
Zhang et al. Image Acquisition Modes
Zhang et al. Single-Image 3-D Scene Reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18763552

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20197009180

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019544763

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18763552

Country of ref document: EP

Kind code of ref document: A1