WO2015067071A1 - Method and device for converting a virtual view into a stereoscopic view - Google Patents

Method and device for converting a virtual view into a stereoscopic view

Info

Publication number
WO2015067071A1
WO2015067071A1 · PCT/CN2014/082831 · CN2014082831W
Authority
WO
WIPO (PCT)
Prior art keywords
axis
view
angle
shear
viewpoint
Prior art date
Application number
PCT/CN2014/082831
Other languages
English (en)
French (fr)
Inventor
刘美鸿
高炜
徐万良
Original Assignee
深圳市云立方信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市云立方信息科技有限公司 filed Critical 深圳市云立方信息科技有限公司
Priority to JP2015545662A priority Critical patent/JP2015536010A/ja
Priority to US14/417,557 priority patent/US9704287B2/en
Priority to EP14814675.6A priority patent/EP3067866A4/en
Publication of WO2015067071A1 publication Critical patent/WO2015067071A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03HHOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H1/00Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H1/04Processes or apparatus for producing holograms
    • G03H1/08Synthesising holograms, i.e. holograms synthesized from objects or objects from holograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/60Rotation of a whole image or part thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Definitions

  • The present invention relates to the field of 3D display technology, and in particular to a method and apparatus for converting a virtual view into a stereoscopic view.
  • Existing 2D-to-3D technology converts existing 2D video into 3D video through view conversion; because the technology is not mature, the conversion is time-consuming and costly, and the resulting 3D effect is not ideal, which has hindered the development of the 3D industry.
  • The present invention provides a method and apparatus for converting a virtual view into a stereoscopic view, so that a user can adjust the 3D effect and obtain a better holographic 3D visual experience.
  • The rotation angle of the virtual scene is determined by tracking the dynamic coordinates of the human eye, and the virtual scene is then rotated to obtain a virtual holographic stereoscopic view matrix.
  • A shear matrix (the original term 错切 is rendered literally as "miscut" in the machine translation) right-multiplies each viewpoint's model matrix, the image of each viewpoint is obtained after projection, and, according to the user's experienced 3D effect, the observer's position in the scene and the shear angle are adjusted to finally obtain a better 3D experience.
  • A first technical solution of the present invention provides a method for converting a virtual view into a stereoscopic view, the method comprising the following steps.
  • The method further comprises the step of:
  • the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
  • If the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A; multiplying the stereoscopic view A by M1 and M2 gives the rotated view A′.
  • In the spatial Cartesian coordinate system O-XYZ before rotation, the center of the screen is at the origin of O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge.
  • The new coordinate system after rotation is denoted O′-X′Y′Z′; the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinates in the original coordinate system toward the viewpoint center coordinates.
  • The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis.
  • The shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis.
  • With the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoints on the negative half of the X′ axis are $x'' = x' + (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$, with corresponding shear matrices $\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$; for viewpoints on the positive half of the X′ axis, $x'' = x' - (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$, with corresponding shear matrices $\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.
  • A second technical solution of the present invention provides a device for converting a virtual view into a stereoscopic view, the device comprising:
  • a human eye tracking module for capturing human eye position coordinates;
  • a first image processing module, electrically connected to the human eye tracking module, configured to determine the rotation angle of the virtual scene according to the human eye position coordinates and the screen center coordinates of the projection display module, and to rotate the virtual scene by that angle to obtain a virtual holographic stereoscopic view matrix;
  • a second image processing module, electrically connected to the first image processing module, configured to determine the shear angle of each viewpoint according to the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then to generate a shear matrix in one-to-one correspondence with each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate left and right views;
  • a projection display module, electrically connected to the second image processing module, for projecting the left and right views of the respective viewpoints.
  • The second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
  • If the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A and the rotated view A′ is obtained, wherein, in the spatial Cartesian coordinate system O-XYZ before rotation, the center of the screen is located at the origin of O-XYZ, the angle between the projection onto the XOZ plane of the line from the human eye to the screen center and the positive half of the Z axis is α, the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β, the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge, and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge.
  • The new coordinate system after rotation is denoted O′-X′Y′Z′, wherein the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinates in the original coordinate system toward the viewpoint center coordinates.
  • The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis. With the coordinates of a viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoints on the negative half of the X′ axis are $x'' = x' + (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$, with corresponding shear matrices $\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$; for viewpoints on the positive half of the X′ axis, $x'' = x' - (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$, with corresponding shear matrices $\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.
  • The present invention provides a method and device for converting a virtual view into a stereoscopic view: the rotation angle of the virtual scene is determined by tracking the dynamic coordinates of the human eye; the virtual scene is then rotated to obtain a virtual holographic stereoscopic view matrix; the shear matrix right-multiplies each viewpoint model matrix, and the image of each viewpoint is obtained after projection; and, according to the user's experienced 3D effect, the observer's position in the scene and the shear angle are adjusted to finally obtain a better 3D experience.
  • FIG. 1 is a schematic flowchart of an embodiment of the method for converting a virtual view into a stereoscopic view according to the present invention;
  • FIG. 2 is a schematic diagram of the angle between the human eye position coordinates and the screen in the embodiment shown in FIG. 1;
  • FIG. 3 is a schematic diagram of the relationship between the rotation angle of the virtual scene around the Y axis and the positions of the human eye, the scene center, and the screen in the embodiment shown in FIG. 1;
  • FIG. 4 is a schematic diagram of the relationship between the shear angle and the observer's position, the viewpoint center coordinates, and the viewpoint positions in the embodiment shown in FIG. 1;
  • FIG. 5 is a schematic structural diagram of an embodiment of the apparatus for converting a virtual view into a stereoscopic view according to the present invention.
  • FIG. 1 is a schematic flowchart of an embodiment of the method for converting a virtual view into a stereoscopic view according to the present invention. As shown in FIG. 1, the process of converting a virtual view into a stereoscopic view in this embodiment includes the following steps.
  • FIG. 2 is a schematic diagram of the angle between the human eye position coordinates and the screen in the embodiment shown in FIG. 1. As shown in FIG. 2, before the virtual scene is rotated, in the spatial Cartesian coordinate system O-XYZ, the center of the screen is located at the origin O of the coordinate system O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center O and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge.
  • The use of the human eye tracking module lets the projected image change as the human eye moves to different positions, so that the user still experiences a good 3D effect while moving;
  • The first image processing module determines the rotation angle of the virtual scene according to the human eye position coordinates and the screen center coordinates of the projection display module, and rotates the virtual scene by that angle to obtain the virtual holographic stereoscopic view matrix.
  • FIG. 3 shows the relationship between the rotation angle of the virtual scene around the Y axis and the positions of the human eye, the scene center, and the screen in the embodiment shown in FIG. 1. As shown in FIG. 3, the distance from the projection of the human eye on the plane XOZ to the screen is L, and the distance from the center of the virtual scene to the screen is Z; the rotation angles are $a = \arctan\frac{L\tan\alpha}{L+Z}$ about the Y axis and $b = \arctan\frac{L\tan\beta}{L+Z}$ about the X axis, and multiplying the view A by M1 and M2 gives the rotated view A′;
  • The second image processing module determines the shear angle of each viewpoint according to the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, thereby generating the shear matrix corresponding to each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate the left and right views.
  • FIG. 4 shows the relationship between the shear angle and the observer's position, the viewpoint center, and the viewpoint positions in the embodiment shown in FIG. 1; the positive direction of the Z′ axis points from the observer's coordinate $z_G$ toward the viewpoint center coordinates.
  • The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the Z′ axis as the dependent axis.
  • The shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis.
  • FIG. 4 shows four viewpoints: viewpoint 1, viewpoint 2, viewpoint 3, and viewpoint 4, where viewpoint 1 and viewpoint 4 are a pair of viewpoints corresponding to a left view and a right view, respectively.
  • Viewpoint 2 and viewpoint 3 are likewise a pair of viewpoints corresponding to a left view and a right view, and the angle between viewpoint 3 and the positive direction of the Z′ axis in FIG. 4 is θ. With the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoint 2 on the negative half of the X′ axis are $x'' = x' + (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$, with corresponding shear matrices $\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$; for viewpoint 3 on the positive half of the X′ axis, $x'' = x' - (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$, with corresponding shear matrices $\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.
  • The method flow shown in FIG. 1 further includes the step of:
  • the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
  • When the z′ coordinate of a point is greater than the observer's coordinate $z_G$, the point moves in the negative x′ direction during the shear; when z′ is less than $z_G$, the point moves in the positive x′ direction. The shear directions of viewpoint 2 and viewpoint 3 are opposite, and the shear angle is the same.
  • By projecting the sheared views, the projection display module lets the user experience the holographic stereoscopic view.
  • The user can adjust the shear angle of the second image processing module and the observer's position in the scene according to his or her own experience, thereby improving the 3D effect of the projected view.
  • The user improves the stereoscopic effect of the projected image by changing $z_G$ and θ: when $z_G$ is increased, $z' - z_G$ decreases and the stereoscopic effect weakens; conversely, it is enhanced. When θ is increased (0 < θ < π/2), $\tan\theta$ increases and the stereoscopic effect of the projected image is enhanced; conversely, it weakens.
  • The method for converting a virtual view into a stereoscopic view of the present invention can therefore achieve a better 3D stereoscopic experience by appropriately modifying $z_G$ and θ; in addition, the dynamic tracking of human eye position coordinates used in the embodiments of the present invention lets the user view a good holographic stereoscopic view while moving, avoiding the inconvenience of being able to experience a good holographic stereoscopic view only at certain fixed points.
  • FIG. 5 is a schematic structural diagram of an embodiment of the apparatus for converting a virtual view into a stereoscopic view according to the present invention. As shown in FIG. 5, the apparatus 20 of this embodiment includes: a human eye tracking module 21 for capturing human eye position coordinates, and a first image processing module 22, electrically connected to the human eye tracking module 21, for determining the rotation angle of the virtual scene according to the human eye position coordinates and the screen center coordinates of the projection display module 24, and rotating the virtual scene by that angle to obtain the virtual holographic stereoscopic view matrix.
  • The second image processing module 23 is electrically connected to the first image processing module 22 and determines the shear angle of each viewpoint according to the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, then generates the shear matrix corresponding to each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate the left and right views. The projection display module 24 is electrically connected to the second image processing module 23 and projects the left and right views of each viewpoint.
  • The human eye tracking module 21 tracks the position of the human eye in real time. Referring to FIG. 2, which shows the angle between the human eye position coordinates and the screen: before the virtual scene is rotated, in the spatial Cartesian coordinate system O-XYZ, the center of the screen is located at the origin O of the coordinate system O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center O and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge.
  • While moving, the user can view a holographic stereoscopic view that changes with the position of the human eye, which avoids the inconvenience of the user being able to view the holographic stereoscopic view only at certain fixed points.
  • FIG. 3 is a schematic diagram of the relationship between the rotation angle of the virtual scene around the Y axis and the positions of the human eye, the scene center, and the screen in the embodiment shown in FIG. 1.
  • The distance from the projection of the human eye on the plane XOZ to the screen is L, and the distance from the center of the virtual scene to the screen is Z; rotating by angle a about the Y axis and then by angle b about the X axis, and multiplying the view A by M1 and M2, gives the rotated view A′.
  • FIG. 4 shows the relationship between the shear angle and the observer's position, the viewpoint center, and the viewpoint positions in the embodiment shown in FIG. 1. As shown in FIG. 4, the new coordinate system after rotation is denoted O′-X′Y′Z′; the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinate $z_G$ in the original coordinate system toward the viewpoint center coordinates.
  • The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis.
  • The four viewpoints comprise viewpoint 1, viewpoint 2, viewpoint 3, and viewpoint 4, where viewpoint 1 and viewpoint 4 are a pair of viewpoints corresponding to a left view and a right view, respectively, and viewpoint 2 and viewpoint 3 are likewise such a pair. As shown in FIG. 4, the angle between viewpoint 3 and the positive direction of the Z′ axis is θ. With the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoint 2 on the negative half of the X′ axis are $x'' = x' + (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$; for viewpoint 3 on the positive half of the X′ axis, $x'' = x' - (z' - z_G)\tan\theta$, $y'' = y'$, $z'' = z'$.
  • The second image processing module 23 is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
  • When z′ is greater than the observer's coordinate $z_G$, the point moves in the negative x′ direction during the shear; when z′ is less than $z_G$, the point moves in the positive x′ direction.
  • The shear directions of viewpoint 2 and viewpoint 3 are opposite, and the shear angle is the same.
  • Any point A(x, y, z) of the view is rotated to obtain A′(x′, y′, z′) in the Cartesian coordinate system O′-X′Y′Z′; after the shear transformation, A′ becomes A″(x″, y″, z″), so the correspondence from A to A″ is A″ = M1·M2·A·M3.
  • By projecting the sheared views, the projection display module 24 lets the user experience the holographic stereoscopic view.
  • The user can adjust the shear angle of the second image processing module and the observer's position in the scene according to his or her own experience, thereby improving the 3D effect of the projected view.
  • The user improves the stereoscopic effect of the projected image by changing $z_G$ and θ: when $z_G$ is increased, $z' - z_G$ decreases and the stereoscopic effect weakens; conversely, it is enhanced; when θ is increased (0 < θ < π/2), $\tan\theta$ increases and the stereoscopic effect of the projected image is enhanced; conversely, it weakens.
  • The present invention provides a method and apparatus for converting a virtual view into a stereoscopic view: the rotation angle of the virtual scene is determined by tracking the dynamic coordinates of the human eye; the virtual scene is rotated to obtain a virtual holographic stereoscopic view matrix; the shear matrix then right-multiplies each viewpoint model matrix, and the image of each viewpoint is obtained after projection; and, according to the user's experienced 3D effect, the observer's position in the scene and the shear angle are adjusted to finally obtain a better 3D experience.

Abstract

The present invention provides a method and device for converting a virtual view into a stereoscopic view, wherein the method comprises: S1: capturing human eye position coordinates with a human eye tracking module; S2: using a first image processing module to determine the rotation angle of the virtual scene from the human eye position coordinates and the screen center coordinates of the projection display module, and rotating the virtual scene by that angle to obtain a virtual holographic stereoscopic view matrix; S3: using a second image processing module to determine the shear angle of each viewpoint from the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating a shear matrix in one-to-one correspondence with each viewpoint, each shear matrix right-multiplying the corresponding viewpoint model matrix to generate left and right views; S4: projecting the left and right views of each viewpoint with the projection display module. In this way, the method and device for converting a virtual view into a stereoscopic view provided by the present invention achieve the aim of holographic stereoscopic display.

Description

Method and device for converting a virtual view into a stereoscopic view
【Technical Field】
The present invention relates to the field of 3D display technology, and in particular to a method and device for converting a virtual view into a stereoscopic view.
【Background】
Existing 2D-to-3D technology converts existing 2D video into 3D video through view conversion. Because the technology is not mature and the conversion is time-consuming, the cost is high; moreover, the 3D effect obtained by the conversion is not ideal. This has hindered the development of the 3D industry.
To solve the above technical problems, the present invention provides a method and device for converting a virtual view into a stereoscopic view, so that a user can obtain a better holographic 3D visual experience by adjusting the 3D effect.
【Summary of the Invention】
To at least partially solve the above problems, the present invention provides a method and device for converting a virtual view into a stereoscopic view: the rotation angle of the virtual scene is determined by tracking the dynamic coordinates of the human eye; the virtual scene is rotated to obtain a virtual holographic stereoscopic view matrix; a shear matrix then right-multiplies each viewpoint model matrix; the image of each viewpoint is obtained after projection; and, according to the user's experienced 3D effect, the observer's position in the scene and the shear angle are adjusted to finally obtain a better 3D experience.
A first technical solution of the present invention provides a method for converting a virtual view into a stereoscopic view, the method comprising the steps of:
S1: capturing human eye position coordinates with a human eye tracking module;
S2: using a first image processing module to determine the rotation angle of the virtual scene from the human eye position coordinates and the screen center coordinates of the projection display module, and rotating the virtual scene by that angle to obtain a virtual holographic stereoscopic view matrix;
S3: using a second image processing module to determine the shear angle of each viewpoint from the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating a shear matrix in one-to-one correspondence with each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate left and right views;
S4: projecting the left and right views of each viewpoint with the projection display module.
The method further comprises the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
If the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A, with

$$M1 = \begin{pmatrix} \cos a & 0 & -\sin a & 0 \\ 0 & 1 & 0 & 0 \\ \sin a & 0 & \cos a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad M2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos b & \sin b & 0 \\ 0 & -\sin b & \cos b & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

The stereoscopic view A is multiplied by M1 and M2 to obtain the rotated view A′. In the spatial Cartesian coordinate system O-XYZ before rotation, the screen center is located at the origin of O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge.

From the angles α and β, the distance L from the human eye to the screen, and the distance Z from the scene center to the screen, the angle of rotation of the scene about the Y axis can be determined as

$$a = \arctan\frac{L\tan\alpha}{L+Z}$$

and the angle of rotation of the scene about the X axis as

$$b = \arctan\frac{L\tan\beta}{L+Z}.$$

The method further comprises the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
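To make the rotation step concrete, here is a minimal numpy sketch of it (our own illustration, not part of the patent; the function names, the example values of α, β, L, and Z, and the placeholder view matrix are assumptions):

```python
import numpy as np

def rotation_angles(alpha, beta, L, Z):
    # a = arctan(L*tan(alpha) / (L + Z)), b = arctan(L*tan(beta) / (L + Z))
    a = np.arctan(L * np.tan(alpha) / (L + Z))
    b = np.arctan(L * np.tan(beta) / (L + Z))
    return a, b

def rotation_matrices(a, b):
    # M1: rotation of the scene about the Y axis; M2: about the X axis.
    M1 = np.array([[np.cos(a), 0.0, -np.sin(a), 0.0],
                   [0.0,       1.0,  0.0,       0.0],
                   [np.sin(a), 0.0,  np.cos(a), 0.0],
                   [0.0,       0.0,  0.0,       1.0]])
    M2 = np.array([[1.0,  0.0,        0.0,        0.0],
                   [0.0,  np.cos(b),  np.sin(b),  0.0],
                   [0.0, -np.sin(b),  np.cos(b),  0.0],
                   [0.0,  0.0,        0.0,        1.0]])
    return M1, M2

# Assumed example: eye 600 mm from the screen, scene center 400 mm away,
# alpha = 10 degrees (XOZ plane), beta = 5 degrees (YOZ plane).
a, b = rotation_angles(np.radians(10), np.radians(5), L=600.0, Z=400.0)
M1, M2 = rotation_matrices(a, b)
A = np.eye(4)        # placeholder virtual scene view matrix
A_rot = M1 @ M2 @ A  # rotated view A' = M1*M2*A
```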
The new coordinate system after rotation is denoted O′-X′Y′Z′; the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinates in the original coordinate system toward the viewpoint center coordinates. The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis. With the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoints on the negative half of the X′ axis are

$$x'' = x' + (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$$

the shear expressions for viewpoints on the positive half of the X′ axis are

$$x'' = x' - (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

The method further comprises the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
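A corresponding sketch of the shear step: building the matrix above for a viewpoint on either half of the X′ axis and applying it to a homogeneous point (x′, y′, z′, 1). The function name and the sample values of θ and $z_G$ are our assumptions:

```python
import numpy as np

def shear_matrix(theta, z_g, negative_half_axis=True):
    # x'' = x' + (z' - z_g)*tan(theta) on the negative X' half-axis,
    # x'' = x' - (z' - z_g)*tan(theta) on the positive half-axis;
    # y'' = y' and z'' = z' are unchanged.
    s = np.tan(theta) if negative_half_axis else -np.tan(theta)
    return np.array([[1.0, 0.0, s,   -z_g * s],
                     [0.0, 1.0, 0.0,  0.0],
                     [0.0, 0.0, 1.0,  0.0],
                     [0.0, 0.0, 0.0,  1.0]])

M3 = shear_matrix(np.radians(15), z_g=-500.0, negative_half_axis=True)
p = np.array([10.0, 0.0, 100.0, 1.0])  # homogeneous point (x', y', z', 1)
p_sheared = M3 @ p                     # y' and z' unchanged, x' shifted
```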
A second technical solution of the present invention provides a device for converting a virtual view into a stereoscopic view, the device comprising:
a human eye tracking module for capturing human eye position coordinates; a first image processing module, electrically connected to the human eye tracking module, for determining the rotation angle of the virtual scene from the human eye position coordinates and the screen center coordinates of the projection display module, and rotating the virtual scene by that angle to obtain a virtual holographic stereoscopic view matrix;
a second image processing module, electrically connected to the first image processing module, for determining the shear angle of each viewpoint from the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating a shear matrix in one-to-one correspondence with each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate left and right views;
a projection display module, electrically connected to the second image processing module, for projecting the left and right views of each viewpoint.
The second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
If the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A, with M1 and M2 as given above; the stereoscopic view A is multiplied by M1 and M2 to obtain the rotated view A′. In the spatial Cartesian coordinate system O-XYZ before rotation, the screen center is located at the origin of O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge.

From the angles α and β, the distance L from the human eye to the screen, and the distance Z from the scene center to the screen, the angle of rotation of the scene about the Y axis can be determined as $a = \arctan\frac{L\tan\alpha}{L+Z}$, and the angle of rotation about the X axis as $b = \arctan\frac{L\tan\beta}{L+Z}$. The second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.

The new coordinate system after rotation is denoted O′-X′Y′Z′, wherein the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinates in the original coordinate system toward the viewpoint center coordinates. The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis. With the coordinates of a viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoints on the negative half of the X′ axis are

$$x'' = x' + (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$$

the shear expressions for viewpoints on the positive half of the X′ axis are

$$x'' = x' - (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
The second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
The beneficial effects of the present invention are as follows: in contrast to the prior art, the method and device for converting a virtual view into a stereoscopic view provided by the present invention determine the rotation angle of the virtual scene by tracking the dynamic coordinates of the human eye, rotate the virtual scene to obtain a virtual holographic stereoscopic view matrix, right-multiply each viewpoint model matrix by the shear matrix, obtain the image of each viewpoint after projection, and, according to the user's experienced 3D effect, adjust the observer's position in the scene and the shear angle to finally obtain a better 3D experience.
【Brief Description of the Drawings】
FIG. 1 is a schematic flowchart of an embodiment of the method for converting a virtual view into a stereoscopic view according to the present invention; FIG. 2 is a schematic diagram of the angle between the human eye position coordinates and the screen in the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of the relationship between the rotation angle of the virtual scene around the Y axis and the positions of the human eye, the scene center, and the screen in the embodiment shown in FIG. 1;
FIG. 4 is a schematic diagram of the relationship between the shear angle and the observer's position in the scene, the viewpoint center coordinates, and the viewpoint positions in the embodiment shown in FIG. 1;
FIG. 5 is a schematic structural diagram of an embodiment of the device for converting a virtual view into a stereoscopic view according to the present invention.
【Detailed Description】
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
The first technical solution of the present invention provides a method for converting a virtual view into a stereoscopic view. Referring to FIG. 1, a schematic flowchart of an embodiment of the method for converting a virtual view into a stereoscopic view according to the present invention: as shown in FIG. 1, the flow of this embodiment includes the steps of:
S1: capturing the human eye position coordinates with the human eye tracking module, which tracks the position of the human eye in real time. Referring to FIG. 2, a schematic diagram of the angle between the human eye position coordinates and the screen in the embodiment shown in FIG. 1: before the virtual scene is rotated, in the spatial Cartesian coordinate system O-XYZ, the screen center is located at the origin O of the coordinate system O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center O and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge. In this embodiment, the use of the human eye tracking module allows the projected image to change as the human eye moves to different positions, so that the user still experiences a good 3D effect while moving;
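The patent does not spell out how α and β are computed from the captured eye coordinates. Geometrically, for a tracked eye position $(x_e, y_e, z_e)$ in O-XYZ they follow from the projections of the eye-to-screen-center line onto the XOZ and YOZ planes; the sketch below is our own reading, with assumed names:

```python
import numpy as np

def eye_angles(eye):
    # alpha: angle in the XOZ plane between the eye-to-screen-center line
    # and the positive Z axis; beta: the same angle in the YOZ plane.
    x_e, y_e, z_e = eye
    alpha = np.arctan2(x_e, z_e)
    beta = np.arctan2(y_e, z_e)
    return alpha, beta

alpha, beta = eye_angles((100.0, 50.0, 600.0))  # assumed eye position (mm)
```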
S2: using the first image processing module to determine the rotation angle of the virtual scene from the human eye position coordinates and the screen center coordinates of the projection display module, and rotating the virtual scene by that angle to obtain the virtual holographic stereoscopic view matrix.
For a preferred implementation of the rotation step of the method, see FIG. 3, a schematic diagram of the relationship between the rotation angle of the virtual scene around the Y axis and the positions of the human eye, the scene center, and the screen in the embodiment shown in FIG. 1. As shown in FIG. 3, the distance from the projection of the human eye on the plane XOZ to the screen is L, and the distance from the center of the virtual scene to the screen is Z.
The angle a by which the scene is rotated about the Y axis is $a = \arctan\frac{L\tan\alpha}{L+Z}$; similarly, the angle by which the scene is rotated about the X axis is $b = \arctan\frac{L\tan\beta}{L+Z}$. The original virtual scene is rotated by angle a about the Y axis and then by angle b about the X axis to obtain the virtual holographic stereoscopic view matrix. If the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A, with M1 and M2 the rotation matrices given above; multiplying the stereoscopic view A by M1 and M2 yields the rotated view A′;
S3: using the second image processing module to determine the shear angle of each viewpoint from the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating the shear matrix in one-to-one correspondence with each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate the left and right views.
For the determination of the shear angle applied to the holographic virtual stereoscopic view in this embodiment, see FIG. 4, a schematic diagram of the relationship between the shear angle and the observer's position in the scene, the viewpoint center coordinates, and the viewpoint positions in the embodiment shown in FIG. 1. As shown in FIG. 4, the new coordinate system after rotation is denoted O′-X′Y′Z′; the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinate $z_G$ in the original coordinate system toward the viewpoint center coordinates. The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the Z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis. FIG. 4 shows four viewpoints: viewpoint 1, viewpoint 2, viewpoint 3, and viewpoint 4, where viewpoint 1 and viewpoint 4 are a pair of viewpoints corresponding to a left view and a right view, respectively, and viewpoint 2 and viewpoint 3 are likewise such a pair. The angle between viewpoint 3 and the positive direction of the Z′ axis in FIG. 4 is θ. With the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoint 2 on the negative half of the X′ axis are

$$x'' = x' + (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$$

the shear expressions for viewpoint 3 on the positive half of the X′ axis are

$$x'' = x' - (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
In a preferred embodiment of the method for converting a virtual view into a stereoscopic view of the present invention, the method flow shown in FIG. 1 further includes the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image. Specifically, when the z′ coordinate of any point in the scene is greater than the observer's coordinate $z_G$, the point moves in the negative x′ direction during the shear; when z′ is less than the observer's coordinate $z_G$, the point moves in the positive x′ direction. The shear directions of viewpoint 2 and viewpoint 3 are opposite, and the shear angle is the same.
In the Cartesian coordinate system O-XYZ, any point A(x, y, z) of the view is rotated to A′(x′, y′, z′) in the Cartesian coordinate system O′-X′Y′Z′; after the shear transformation, A′ becomes A″(x″, y″, z″), so the correspondence from A to A″ is A″ = M1·M2·A·M3;
S4: projecting the left and right views of each viewpoint with the projection display module.
By projecting the sheared views, the projection display module lets the user experience the holographic stereoscopic view.
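Putting the steps together, the stated correspondence A″ = M1·M2·A·M3 can be sketched per viewpoint as below. This is only an illustrative composition under our assumed conventions (signed shear slopes and an identity view matrix), not reference code from the patent:

```python
import numpy as np

def rot_y(a):
    return np.array([[np.cos(a), 0, -np.sin(a), 0], [0, 1, 0, 0],
                     [np.sin(a), 0, np.cos(a), 0], [0, 0, 0, 1]])

def rot_x(b):
    return np.array([[1, 0, 0, 0], [0, np.cos(b), np.sin(b), 0],
                     [0, -np.sin(b), np.cos(b), 0], [0, 0, 0, 1]])

def shear(theta_signed, z_g):
    # Positive slope for the negative X' half-axis, negative for the positive.
    s = np.tan(theta_signed)
    return np.array([[1, 0, s, -z_g * s], [0, 1, 0, 0],
                     [0, 0, 1, 0], [0, 0, 0, 1]])

def viewpoint_matrices(A, a, b, signed_thetas, z_g):
    # A'' = M1 * M2 * A * M3, one shear matrix M3 per viewpoint.
    M1, M2 = rot_y(a), rot_x(b)
    return [M1 @ M2 @ A @ shear(t, z_g) for t in signed_thetas]

# Viewpoints 2 and 3 form a left/right pair: opposite shear directions,
# same shear angle theta.
theta = np.radians(15)
views = viewpoint_matrices(np.eye(4), a=0.1, b=0.05,
                           signed_thetas=[theta, -theta], z_g=-500.0)
```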
In a preferred embodiment of the method for converting a virtual view into a stereoscopic view of the present invention, the user can adjust the shear angle of the second image processing module and the observer's position in the scene according to his or her own experience, thereby improving the 3D effect of the projected view.
Specifically, the user improves the stereoscopic effect of the projected image by changing $z_G$ and θ: when $z_G$ is increased, $z' - z_G$ decreases and the stereoscopic effect weakens; conversely, the stereoscopic effect is enhanced. When θ is increased (0 < θ < π/2), $\tan\theta$ increases and the stereoscopic effect of the projected image is enhanced; conversely, it weakens, as the worked example below illustrates.
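As a worked illustration of this adjustment rule (with numbers of our own choosing), the magnitude of a point's shear offset is

$$|x'' - x'| = |z' - z_G|\tan\theta.$$

For a point with z′ = 100, an observer at $z_G = -500$, and θ = 15°, the offset is 600 · tan 15° ≈ 160.8; raising θ to 30° strengthens the effect (600 · tan 30° ≈ 346.4), while raising $z_G$ to −200 weakens it (300 · tan 15° ≈ 80.4).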
Therefore, the method for converting a virtual view into a stereoscopic view of the present invention can achieve a better 3D stereoscopic experience by appropriately modifying $z_G$ and θ. In addition, the dynamic tracking of human eye position coordinates used in the embodiments of the present invention allows the user to view a good holographic stereoscopic view while moving, avoiding the inconvenience of being able to experience a good holographic stereoscopic view only at certain fixed points.
A second technical solution of the present invention provides a device for converting a virtual view into a stereoscopic view. Referring to FIG. 5, a schematic structural diagram of an embodiment of the device for converting a virtual view into a stereoscopic view according to the present invention: as shown in FIG. 5, the device 20 of this embodiment includes: a human eye tracking module 21 for capturing human eye position coordinates; a first image processing module 22, electrically connected to the human eye tracking module 21, for determining the rotation angle of the virtual scene from the human eye position coordinates and the screen center coordinates of the projection display module 24, and rotating the virtual scene by that angle to obtain the virtual holographic stereoscopic view matrix; a second image processing module 23, electrically connected to the first image processing module 22, for determining the shear angle of each viewpoint from the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating the shear matrix in one-to-one correspondence with each viewpoint, each shear matrix right-multiplying the corresponding viewpoint model matrix to generate the left and right views; and a projection display module 24, electrically connected to the second image processing module 23, for projecting the left and right views of each viewpoint.
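The module chain of device 20 (tracking → rotation → shear → projection) can be mirrored in software roughly as below. The class and method names and the stub bodies are our own invention for illustration; the patent only specifies the modules and their electrical connections:

```python
import numpy as np

class EyeTracker:                      # module 21: eye position coordinates
    def capture(self):
        return np.array([100.0, 50.0, 600.0])  # stubbed tracked position

class SceneRotator:                    # module 22: A' = M1 * M2 * A
    def rotate(self, A, eye):
        return A                       # stub; see the rotation sketch above

class ViewpointShearer:                # module 23: per-viewpoint A * M3
    def shear(self, A_rot):
        return [A_rot, A_rot]          # stub left/right views

class ProjectionDisplay:               # module 24: projects the views
    def project(self, views):
        pass

def render_frame(tracker, rotator, shearer, projector, scene_view):
    # One frame through the FIG. 5 chain: 21 -> 22 -> 23 -> 24.
    eye = tracker.capture()
    projector.project(shearer.shear(rotator.rotate(scene_view, eye)))

render_frame(EyeTracker(), SceneRotator(), ViewpointShearer(),
             ProjectionDisplay(), np.eye(4))
```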
In this embodiment, the human eye tracking module 21 tracks the position of the human eye in real time. Referring to FIG. 2, a schematic diagram of the angle between the human eye position coordinates and the screen: before the virtual scene is rotated, in the spatial Cartesian coordinate system O-XYZ, the screen center is located at the origin O of the coordinate system O-XYZ; the angle between the projection onto the XOZ plane of the line from the human eye to the screen center O and the positive half of the Z axis is α; the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β; the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge; and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge. In this embodiment, the use of the human eye tracking module 21 allows the user, while moving, to view a holographic stereoscopic view that changes with the position of the human eye, avoiding the inconvenience of being able to view the holographic stereoscopic view only at certain fixed points.
For a preferred form of this embodiment, see FIG. 3, a schematic diagram of the relationship between the rotation angle of the virtual scene around the Y axis and the positions of the human eye, the scene center, and the screen in the embodiment shown in FIG. 1. As shown, the distance from the projection of the human eye on the plane XOZ to the screen is L, and the distance from the center of the virtual scene to the screen is Z.
The angle a by which the scene is rotated about the Y axis is $a = \arctan\frac{L\tan\alpha}{L+Z}$; similarly, the angle by which the scene is rotated about the X axis is $b = \arctan\frac{L\tan\beta}{L+Z}$. The original virtual scene is rotated by angle a about the Y axis and then by angle b about the X axis to obtain the virtual holographic stereoscopic view matrix. If the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A, with M1 and M2 the rotation matrices given above; multiplying the stereoscopic view A by M1 and M2 gives the rotated view A′.
For the determination of the shear angle applied to the holographic virtual stereoscopic view in this embodiment, see FIG. 4, a schematic diagram of the relationship between the shear angle and the observer's position in the scene, the viewpoint center coordinates, and the viewpoint positions in the embodiment shown in FIG. 1. As shown in FIG. 4, the new coordinate system after rotation is denoted O′-X′Y′Z′; the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinate $z_G$ in the original coordinate system toward the viewpoint center coordinates. The shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis. FIG. 4 shows four viewpoints: viewpoint 1, viewpoint 2, viewpoint 3, and viewpoint 4, where viewpoint 1 and viewpoint 4 are a pair of viewpoints corresponding to a left view and a right view, respectively, and viewpoint 2 and viewpoint 3 are likewise such a pair. As shown in FIG. 4, the angle between viewpoint 3 and the positive direction of the Z′ axis is θ. With the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoint 2 on the negative half of the X′ axis are

$$x'' = x' + (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$$

the shear expressions for viewpoint 3 on the positive half of the X′ axis are

$$x'' = x' - (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
In a preferred form of this embodiment, the second image processing module 23 is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection. Specifically, when the z′ coordinate of any point in the scene is greater than the observer's coordinate $z_G$, the point moves in the negative x′ direction during the shear; when z′ is less than the observer's coordinate $z_G$, the point moves in the positive x′ direction. The shear directions of viewpoint 2 and viewpoint 3 are opposite, and the shear angle is the same.
In the Cartesian coordinate system O-XYZ, any point A(x, y, z) of the view is rotated to A′(x′, y′, z′) in the Cartesian coordinate system O′-X′Y′Z′; after the shear transformation, A′ becomes A″(x″, y″, z″), so the correspondence from A to A″ is A″ = M1·M2·A·M3.
By projecting the sheared views, the projection display module 24 lets the user experience the holographic stereoscopic view.
In a preferred embodiment of the present invention, the user can adjust the shear angle of the second image processing module and the observer's position in the scene according to his or her own experience, thereby improving the 3D effect of the projected view.
Specifically, the user improves the stereoscopic effect of the projected image by changing $z_G$ and θ: when $z_G$ is increased, $z' - z_G$ decreases and the stereoscopic effect weakens; conversely, it is enhanced. When θ is increased (0 < θ < π/2), $\tan\theta$ increases and the stereoscopic effect of the projected image is enhanced; conversely, it weakens.
In this way, the method and device for converting a virtual view into a stereoscopic view provided by the present invention determine the rotation angle of the virtual scene by tracking the dynamic coordinates of the human eye, rotate the virtual scene to obtain the virtual holographic stereoscopic view matrix, right-multiply each viewpoint model matrix by the shear matrix, obtain the image of each viewpoint after projection, and, according to the user's experienced 3D effect, adjust the observer's position in the scene and the shear angle to finally obtain a better 3D experience. The above embodiments describe the present invention only by way of example; those skilled in the art may, after reading this patent application, make various modifications to the present invention without departing from its spirit and scope.

Claims

Claims
1. A method for converting a virtual view into a stereoscopic view, characterized in that the method comprises the steps of:
S1: capturing human eye position coordinates with a human eye tracking module;
S2: using a first image processing module to determine the rotation angle of the virtual scene from the human eye position coordinates and the screen center coordinates of the projection display module, and rotating the virtual scene by that angle to obtain a virtual holographic stereoscopic view matrix;
S3: using a second image processing module to determine the shear angle of each viewpoint from the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating a shear matrix in one-to-one correspondence with each viewpoint; each shear matrix right-multiplies the corresponding viewpoint model matrix to generate left and right views;
S4: projecting the left and right views of each viewpoint with the projection display module.
2. The method according to claim 1, characterized in that the method further comprises the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
3. The method according to claim 1, characterized in that, if the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A, with

$$M1 = \begin{pmatrix} \cos a & 0 & -\sin a & 0 \\ 0 & 1 & 0 & 0 \\ \sin a & 0 & \cos a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad M2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos b & \sin b & 0 \\ 0 & -\sin b & \cos b & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$$

the stereoscopic view A is multiplied by M1 and M2 to obtain the rotated view A′, wherein, in the spatial Cartesian coordinate system O-XYZ before rotation, the screen center is located at the origin of O-XYZ, the angle between the projection onto the XOZ plane of the line from the human eye to the screen center and the positive half of the Z axis is α, the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β, the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge, and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge; and, from the angles α and β, the distance L from the human eye to the screen, and the distance Z from the scene center to the screen, the angle of rotation of the scene about the Y axis can be determined as $a = \arctan\frac{L\tan\alpha}{L+Z}$ and the angle of rotation about the X axis as $b = \arctan\frac{L\tan\beta}{L+Z}$.
4. The method according to claim 3, characterized in that the method further comprises the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
5. The method according to claim 3, characterized in that the new coordinate system after rotation is denoted O′-X′Y′Z′, the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinates in the original coordinate system toward the viewpoint center coordinates; the shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis; and, with the coordinates of any viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoints on the negative half of the X′ axis are

$$x'' = x' + (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

and the shear expressions for viewpoints on the positive half of the X′ axis are

$$x'' = x' - (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
6. The method according to claim 5, characterized in that the method further comprises the step of:
S5: the user adjusting the shear angle of the second image processing module and the observer's position coordinates in the scene according to the experienced 3D effect, thereby improving the 3D effect of the projected 3D image.
7. A device for converting a virtual view into a stereoscopic view, characterized in that the device comprises:
a human eye tracking module for capturing human eye position coordinates;
a first image processing module, electrically connected to the human eye tracking module, for determining the rotation angle of the virtual scene according to the human eye position coordinates and the screen center coordinates of the projection display module, and rotating the virtual scene by that angle to obtain a virtual holographic stereoscopic view matrix;
a second image processing module, electrically connected to the first image processing module, for determining the shear angle of each viewpoint according to the center of the virtual scene, the observer's position in the scene, and the coordinates of each viewpoint, and then generating a shear matrix in one-to-one correspondence with each viewpoint, each shear matrix right-multiplying the corresponding viewpoint model matrix to generate left and right views;
a projection display module, electrically connected to the second image processing module, for projecting the left and right views of each viewpoint.
8. The device according to claim 7, characterized in that the second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
9. The device according to claim 7, characterized in that, if the virtual scene view matrix before rotation is denoted A and the virtual holographic stereoscopic view matrix is denoted A′, then A′ = M1·M2·A, with

$$M1 = \begin{pmatrix} \cos a & 0 & -\sin a & 0 \\ 0 & 1 & 0 & 0 \\ \sin a & 0 & \cos a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad M2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos b & \sin b & 0 \\ 0 & -\sin b & \cos b & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$$

the stereoscopic view A is multiplied by M1 and M2 to obtain the rotated view A′, wherein, in the spatial Cartesian coordinate system O-XYZ before rotation, the screen center is located at the origin of O-XYZ, the angle between the projection onto the XOZ plane of the line from the human eye to the screen center and the positive half of the Z axis is α, the angle between the projection of that line onto the YOZ plane and the positive half of the Z axis is β, the X axis points from the midpoint of the screen's left edge to the midpoint of its right edge, and the Y axis points from the midpoint of the screen's top edge to the midpoint of its bottom edge; and, from the angles α and β, the distance L from the human eye to the screen, and the distance Z from the scene center to the screen, the angle of rotation of the scene about the Y axis can be determined as $a = \arctan\frac{L\tan\alpha}{L+Z}$ and the angle of rotation about the X axis as $b = \arctan\frac{L\tan\beta}{L+Z}$.
10. The device according to claim 9, characterized in that the second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
11. The device according to claim 9, characterized in that the new coordinate system after rotation is denoted O′-X′Y′Z′, wherein the origin O′ coincides with the viewpoint center in the original coordinate system, and the positive direction of the Z′ axis points from the observer's coordinates in the original coordinate system toward the viewpoint center coordinates; the shear transformation keeps the viewpoint's y′ and z′ unchanged, while the x′ value is transformed linearly with the z′ axis as the dependent axis; the shear angle θ is the angle between the viewpoint coordinates and the positive direction of the z′ axis; and, with the coordinates of the viewpoint after the shear denoted (x″, y″, z″), the shear expressions for viewpoints on the negative half of the X′ axis are

$$x'' = x' + (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & \tan\theta & -z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

and the shear expressions for viewpoints on the positive half of the X′ axis are

$$x'' = x' - (z' - z_G)\tan\theta, \quad y'' = y', \quad z'' = z',$$

with corresponding shear matrices

$$\begin{pmatrix} 1 & 0 & -\tan\theta & z_G\tan\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
12. The device according to claim 11, characterized in that the second image processing module is further configured to change the shear angle and the observer's position coordinates in the scene according to the user's input, thereby improving the stereoscopic effect of the stereoscopic image obtained by projection.
PCT/CN2014/082831 2013-11-05 2014-07-23 Method and device for converting a virtual view into a stereoscopic view WO2015067071A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2015545662A JP2015536010A (ja) 2013-11-05 2014-07-23 仮想ビューから立体ビューへの変換を実現する方法及び装置
US14/417,557 US9704287B2 (en) 2013-11-05 2014-07-23 Method and apparatus for achieving transformation of a virtual view into a three-dimensional view
EP14814675.6A EP3067866A4 (en) 2013-11-05 2014-07-23 Method and device for converting virtual view into stereoscopic view

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310542642.XA CN103996215A (zh) 2013-11-05 2013-11-05 一种实现虚拟视图转立体视图的方法及装置
CN201310542642.X 2013-11-05

Publications (1)

Publication Number Publication Date
WO2015067071A1 true WO2015067071A1 (zh) 2015-05-14

Family

ID=51310368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/082831 WO2015067071A1 (zh) 2013-11-05 2014-07-23 一种实现虚拟视图转立体视图的方法及装置

Country Status (5)

Country Link
US (1) US9704287B2 (zh)
EP (1) EP3067866A4 (zh)
JP (1) JP2015536010A (zh)
CN (1) CN103996215A (zh)
WO (1) WO2015067071A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379897A (zh) * 2021-06-15 2021-09-10 广东未来科技有限公司 应用于3d游戏渲染引擎的自适应虚拟视图转立体视图的方法及装置

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996215A (zh) * 2013-11-05 2014-08-20 深圳市云立方信息科技有限公司 一种实现虚拟视图转立体视图的方法及装置
KR101666959B1 (ko) * 2015-03-25 2016-10-18 ㈜베이다스 카메라로부터 획득한 영상에 대한 자동보정기능을 구비한 영상처리장치 및 그 방법
CN106454315A (zh) * 2016-10-26 2017-02-22 深圳市魔眼科技有限公司 一种自适应虚拟视图转立体视图的方法、装置及显示设备
CN106961592B (zh) * 2017-03-01 2020-02-14 深圳市魔眼科技有限公司 3d视频的vr显示方法及系统
CN107027015A (zh) * 2017-04-28 2017-08-08 广景视睿科技(深圳)有限公司 基于增强现实的3d动向投影系统以及用于该系统的投影方法
US10672311B2 (en) * 2017-05-04 2020-06-02 Pure Depth, Inc. Head tracking based depth fusion
CN107193372B (zh) * 2017-05-15 2020-06-19 杭州一隅千象科技有限公司 从多个任意位置矩形平面到可变投影中心的投影方法
CN109960401B (zh) * 2017-12-26 2020-10-23 广景视睿科技(深圳)有限公司 一种基于人脸追踪的动向投影方法、装置及其系统
CN115842907A (zh) * 2018-03-27 2023-03-24 京东方科技集团股份有限公司 渲染方法、计算机产品及显示装置
CN109189302B (zh) * 2018-08-29 2021-04-06 百度在线网络技术(北京)有限公司 Ar虚拟模型的控制方法及装置
CN111050145B (zh) * 2018-10-11 2022-07-01 上海云绅智能科技有限公司 一种多屏融合成像的方法、智能设备及系统
CN111131801B (zh) * 2018-11-01 2023-04-28 华勤技术股份有限公司 投影仪校正系统、方法及投影仪
CN111182278B (zh) * 2018-11-09 2022-06-14 上海云绅智能科技有限公司 一种投影展示管理方法及系统
AT522012A1 (de) * 2018-12-19 2020-07-15 Viewpointsystem Gmbh Verfahren zur Anpassung eines optischen Systems an einen individuellen Benutzer
CN110913200B (zh) * 2019-10-29 2021-09-28 北京邮电大学 一种多屏拼接同步的多视点图像生成系统及方法
CN111031298B (zh) * 2019-11-12 2021-12-10 广景视睿科技(深圳)有限公司 控制投影模块投影的方法、装置和投影系统
CN112235562B (zh) * 2020-10-12 2023-09-15 聚好看科技股份有限公司 一种3d显示终端、控制器及图像处理方法
CN112672139A (zh) * 2021-03-16 2021-04-16 深圳市火乐科技发展有限公司 投影显示方法、装置及计算机可读存储介质
CN116819925B (zh) * 2023-08-29 2023-11-14 廊坊市珍圭谷科技有限公司 一种基于全息投影的互动娱乐系统及方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574045B2 (en) * 2001-07-27 2009-08-11 Matrox Electronic Systems Ltd. Model-based recognition of objects using a calibrated image system
CN101853518A (zh) * 2010-05-28 2010-10-06 电子科技大学 基于各向异性体数据的错切变形体绘制方法
CN101866497A (zh) * 2010-06-18 2010-10-20 北京交通大学 基于双目立体视觉的智能三维人脸重建方法及系统
CN102509334A (zh) * 2011-09-21 2012-06-20 北京捷成世纪科技股份有限公司 一种将虚拟3d场景转换为立体视图的方法
CN103996215A (zh) * 2013-11-05 2014-08-20 深圳市云立方信息科技有限公司 一种实现虚拟视图转立体视图的方法及装置

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8706348D0 (en) * 1987-03-17 1987-04-23 Quantel Ltd Electronic image processing systems
JP2812254B2 (ja) * 1995-05-31 1998-10-22 日本電気株式会社 視点追従型立体映像表示装置
US5850225A (en) * 1996-01-24 1998-12-15 Evans & Sutherland Computer Corp. Image mapping system and process using panel shear transforms
US6108440A (en) * 1996-06-28 2000-08-22 Sony Corporation Image data converting method
US6522312B2 (en) * 1997-09-01 2003-02-18 Canon Kabushiki Kaisha Apparatus for presenting mixed reality shared among operators
US6640018B1 (en) * 1999-06-08 2003-10-28 Siemens Aktiengesellschaft Method for rotating image records with non-isotropic topical resolution
AU2001239926A1 (en) * 2000-02-25 2001-09-03 The Research Foundation Of State University Of New York Apparatus and method for volume processing and rendering
US7657083B2 (en) * 2000-03-08 2010-02-02 Cyberextruder.Com, Inc. System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
GB2372659A (en) * 2001-02-23 2002-08-28 Sharp Kk A method of rectifying a stereoscopic image
US7003175B2 (en) * 2001-03-28 2006-02-21 Siemens Corporate Research, Inc. Object-order multi-planar reformatting
US7043073B1 (en) * 2001-10-19 2006-05-09 Zebra Imaging, Inc. Distortion correcting rendering techniques for autostereoscopic displays
JP3805231B2 (ja) * 2001-10-26 2006-08-02 キヤノン株式会社 画像表示装置及びその方法並びに記憶媒体
US7565004B2 (en) * 2003-06-23 2009-07-21 Shoestring Research, Llc Fiducial designs and pose estimation for augmented reality
US7643025B2 (en) * 2003-09-30 2010-01-05 Eric Belk Lange Method and apparatus for applying stereoscopic imagery to three-dimensionally defined substrates
DE602004016185D1 (de) * 2003-10-03 2008-10-09 Automotive Systems Lab Insassenerfassungssystem
GB0410551D0 (en) * 2004-05-12 2004-06-16 Ller Christian M 3d autostereoscopic display
JP4434890B2 (ja) * 2004-09-06 2010-03-17 キヤノン株式会社 画像合成方法及び装置
AU2004240229B2 (en) * 2004-12-20 2011-04-07 Canon Kabushiki Kaisha A radial, three-dimensional, hierarchical file system view
US20070127787A1 (en) * 2005-10-24 2007-06-07 Castleman Kenneth R Face recognition system and method
JP4375325B2 (ja) * 2005-11-18 2009-12-02 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
US8548265B2 (en) * 2006-01-05 2013-10-01 Fastvdo, Llc Fast multiplierless integer invertible transforms
GB0716776D0 (en) * 2007-08-29 2007-10-10 Setred As Rendering improvement for 3D display
DE102007056528B3 (de) * 2007-11-16 2009-04-02 Seereal Technologies S.A. Verfahren und Vorrichtung zum Auffinden und Verfolgen von Augenpaaren
US20100110069A1 (en) * 2008-10-31 2010-05-06 Sharp Laboratories Of America, Inc. System for rendering virtual see-through scenes
TWI398796B (zh) * 2009-03-27 2013-06-11 Utechzone Co Ltd Pupil tracking methods and systems, and correction methods and correction modules for pupil tracking
JP2011165068A (ja) * 2010-02-12 2011-08-25 Nec System Technologies Ltd 画像生成装置、画像表示システム、画像生成方法、及びプログラム
US8890934B2 (en) * 2010-03-19 2014-11-18 Panasonic Corporation Stereoscopic image aligning apparatus, stereoscopic image aligning method, and program of the same
JP5872185B2 (ja) * 2010-05-27 2016-03-01 任天堂株式会社 携帯型電子機器
JP4869430B1 (ja) * 2010-09-24 2012-02-08 任天堂株式会社 画像処理プログラム、画像処理装置、画像処理システム、および、画像処理方法
US8896631B2 (en) * 2010-10-25 2014-11-25 Hewlett-Packard Development Company, L.P. Hyper parallax transformation matrix based on user eye positions
TWI433530B (zh) * 2010-11-01 2014-04-01 Ind Tech Res Inst 具有立體影像攝影引導的攝影系統與方法及自動調整方法
US9020241B2 (en) * 2011-03-03 2015-04-28 Panasonic Intellectual Property Management Co., Ltd. Image providing device, image providing method, and image providing program for providing past-experience images
US20130113701A1 (en) * 2011-04-28 2013-05-09 Taiji Sasaki Image generation device
JP5849811B2 (ja) * 2011-05-27 2016-02-03 株式会社Jvcケンウッド 裸眼立体視用映像データ生成方法
KR101779423B1 (ko) * 2011-06-10 2017-10-10 엘지전자 주식회사 영상처리방법 및 영상처리장치
WO2012172719A1 (ja) * 2011-06-16 2012-12-20 パナソニック株式会社 ヘッドマウントディスプレイおよびその位置ずれ調整方法
KR101265667B1 (ko) * 2011-06-21 2013-05-22 ㈜베이다스 차량 주변 시각화를 위한 3차원 영상 합성장치 및 그 방법
KR101315303B1 (ko) * 2011-07-11 2013-10-14 한국과학기술연구원 착용형 디스플레이 장치 및 컨텐츠 디스플레이 방법
JP2013128181A (ja) * 2011-12-16 2013-06-27 Fujitsu Ltd 表示装置、表示方法および表示プログラム
CN102520970A (zh) * 2011-12-28 2012-06-27 Tcl集团股份有限公司 一种立体用户界面的生成方法及装置
JP2013150249A (ja) * 2012-01-23 2013-08-01 Sony Corp 画像処理装置と画像処理方法およびプログラム
US20140002443A1 (en) * 2012-06-29 2014-01-02 Blackboard Inc. Augmented reality interface
US9092897B2 (en) * 2012-08-10 2015-07-28 Here Global B.V. Method and apparatus for displaying interface elements
US9639924B2 (en) * 2012-09-24 2017-05-02 Seemsome Everyone Ltd Adding objects to digital photographs
KR101416378B1 (ko) * 2012-11-27 2014-07-09 현대자동차 주식회사 영상 이동이 가능한 디스플레이 장치 및 방법
US9225969B2 (en) * 2013-02-11 2015-12-29 EchoPixel, Inc. Graphical system with enhanced stereopsis
KR102040653B1 (ko) * 2013-04-08 2019-11-06 엘지디스플레이 주식회사 홀로그래피 입체 영상 표시장치
US9264702B2 (en) * 2013-08-19 2016-02-16 Qualcomm Incorporated Automatic calibration of scene camera for optical see-through head mounted display

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574045B2 (en) * 2001-07-27 2009-08-11 Matrox Electronic Systems Ltd. Model-based recognition of objects using a calibrated image system
CN101853518A (zh) * 2010-05-28 2010-10-06 电子科技大学 基于各向异性体数据的错切变形体绘制方法
CN101866497A (zh) * 2010-06-18 2010-10-20 北京交通大学 基于双目立体视觉的智能三维人脸重建方法及系统
CN102509334A (zh) * 2011-09-21 2012-06-20 北京捷成世纪科技股份有限公司 一种将虚拟3d场景转换为立体视图的方法
CN103996215A (zh) * 2013-11-05 2014-08-20 深圳市云立方信息科技有限公司 一种实现虚拟视图转立体视图的方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3067866A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379897A (zh) * 2021-06-15 2021-09-10 广东未来科技有限公司 应用于3d游戏渲染引擎的自适应虚拟视图转立体视图的方法及装置

Also Published As

Publication number Publication date
CN103996215A (zh) 2014-08-20
JP2015536010A (ja) 2015-12-17
US20150339844A1 (en) 2015-11-26
US9704287B2 (en) 2017-07-11
EP3067866A4 (en) 2017-03-29
EP3067866A1 (en) 2016-09-14

Similar Documents

Publication Publication Date Title
WO2015067071A1 (zh) 一种实现虚拟视图转立体视图的方法及装置
EP3460746B1 (en) Generating stereoscopic light field panoramas using concentric viewing circles
US8994780B2 (en) Video conferencing enhanced with 3-D perspective control
CN107637060B (zh) 相机装备和立体图像捕获
CN102665087B (zh) 3d立体摄像设备的拍摄参数自动调整系统
US20150358539A1 (en) Mobile Virtual Reality Camera, Method, And System
WO2018005235A1 (en) System and method for spatial interaction using automatically positioned cameras
WO2012050040A1 (ja) 立体映像変換装置及び立体映像表示装置
JP2017536565A (ja) ステレオ視のための広視野カメラ装置
JP2016500954A5 (zh)
JP6057570B2 (ja) 立体パノラマ映像を生成する装置及び方法
Tang et al. A system for real-time panorama generation and display in tele-immersive applications
US11812009B2 (en) Generating virtual reality content via light fields
CN103247020A (zh) 一种基于径向特征的鱼眼图像展开方法
CN103269430A (zh) 基于bim的三维场景生成方法
WO2013178188A1 (zh) 视频会议显示方法及装置
WO2019206827A1 (en) Apparatus and method for rendering an audio signal for a playback to a user
WO2013185429A1 (zh) 一种投影显示系统、投影设备及投影显示方法
WO2022267694A1 (zh) 一种显示调节方法、装置、设备及介质
WO2012100495A1 (zh) 双摄像头立体拍摄的处理方法及装置
WO2023056803A1 (zh) 一种全息展示方法及装置
CN109961395B (zh) 深度图像的生成及显示方法、装置、系统、可读介质
JP2012216883A (ja) 表示制御装置、表示制御方法、及びプログラム
Zhang et al. A new 360 camera design for multi format VR experiences
CN203896436U (zh) 一种虚拟现实投影机

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2015545662

Country of ref document: JP

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2014814675

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014814675

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 14417557

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14814675

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE