WO2022116194A1 - Panoramic presentation method and apparatus therefor - Google Patents

Panoramic presentation method and apparatus therefor Download PDF

Info

Publication number
WO2022116194A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
panorama
virtual camera
projection
spherical screen
Prior art date
Application number
PCT/CN2020/134040
Other languages
English (en)
French (fr)
Inventor
杨琴
蔚鹏飞
王立平
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Priority to PCT/CN2020/134040 priority Critical patent/WO2022116194A1/zh
Priority to EP20956450.9A priority patent/EP4044104A4/en
Priority to US17/729,890 priority patent/US20220253975A1/en
Publication of WO2022116194A1 publication Critical patent/WO2022116194A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformation in the plane of the image
    • G06T 3/40 - Scaling the whole image or part thereof
    • G06T 3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G - PHYSICS
    • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 37/00 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B 37/06 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe involving anamorphosis
    • G - PHYSICS
    • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 37/00 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
    • G06T 3/047
    • G06T 3/08

Definitions

  • the invention belongs to the field of virtual reality, and in particular relates to a panorama presentation method and a device thereof.
  • the virtual reality used by people is a head-mounted device, which presents images with parallax in both eyes of a person to form a stereoscopic vision.
  • small animals such as mice and rabbits are lateral-eyed animals.
  • the two eyes are located on the sides of the head, giving small animals an extremely wide horizontal field of view, which in mice reaches 220°-230°. Covering such a wide field of view requires large display surfaces close to both sides of the animal's head, so that the image is presented on a large screen surrounding the animal; in addition, animal virtual reality is used for brain research,
  • and during experiments a variety of devices such as microscopes and nerve electrodes must be used on the brain. Therefore, head-mounted VR devices designed for humans are not suitable for small animals.
  • a virtual reality device assembled from multiple display screens arranged along a horizontal line cannot cover the obliquely upward field of view of small animals and cannot present an experimental environment that simulates the structure of real space.
  • moreover, when a flat image is displayed on an uneven medium, such as multiple displays surrounding a small animal, the position and shape of each object in three-dimensional space cannot be mapped accurately, the presented virtual space is distorted, and the sense of immersion is greatly reduced.
  • Another, more commonly used small-animal VR setup surrounds the small animal with a spherical screen, and a projector projects the image onto the spherical screen surrounding the mouse.
  • there are two main projection light paths: one places the projector behind and above the small animal, with the projected image reflected onto the spherical screen by a flat mirror and a convex mirror; the other moves the projector to the front of and below the small animal,
  • where the projected image is reflected onto the spherical screen by a single convex mirror, so that instruments can be placed and experimental operations performed above and diagonally behind the mouse's head.
  • however, the existing technology is limited by its optical components and projection light path, so the vertical projection field of view is small.
  • the largest known vertical projection field of view is only about -20° to 60°, which cannot satisfy neuroscience studies that require visual stimulation of small animals from above.
  • the virtual reality image projected by the above method has significant distortion.
  • the existing virtual reality technology is not suitable for small animal brain research in terms of field of view, image distortion, and generation speed, so it is necessary to improve the existing technology.
  • the present invention provides a panorama presentation method, which is used to solve at least one of the problems of the prior art, such as large image distortion, slow image conversion speed, and small vertical field of view.
  • the method includes the following steps:
  • Step S1 constructing a projection structure
  • the projection structure includes a projector and a spherical screen, the projector is equipped with a fisheye lens, the projector is arranged on the spherical screen, and the fisheye lens is tangent to the spherical screen and faces the center of the spherical screen;
  • Step S2 changing the content of the projection transformation in the virtual reality software according to the isometric imaging formula, and bringing all the scenes in the virtual space into the field of view of the virtual camera to generate a 720° panorama, where the 720° panorama is a 2D panorama;
  • Step S3 project the 2D panorama image through the fisheye lens, the viewer is located at the spherical center of the spherical screen, and the 2D panorama image forms a three-dimensional space on the spherical screen.
  • the projection transformation includes adjusting the coordinates of the scene and the projection matrix according to the isometric imaging formula.
  • step S2 also includes adjusting the frustum culling and changing the shadow transformation.
  • the adjustment of the coordinates of the scene includes the following steps: x1=x0*θ*secθ; y1=y0*θ*secθ; z1=√(x0^2+y0^2+z0^2);
  • where θ is the vector angle of the object, that is, the angle between the line connecting the object to the virtual camera and the z-axis, and the z-axis represents the direction of the virtual camera;
  • the projection matrix is:
  • [1/(π*A),0,0,0, 0,1/π,0,0, 0,0,-(f+n)/(f-n),-2*n*f/(f-n), 0,0,-1,0]
  • where A is the image aspect ratio of the projector, n is the closest distance between a visible object and the virtual camera, and f is the farthest distance between a visible object and the virtual camera.
  • the adjusting of view frustum culling includes the following steps:
  • the scene is moved a distance f in front of the virtual camera and view frustum culling is performed; the scene that has passed view frustum culling is then moved a distance f back toward the virtual camera;
  • f is the farthest distance between a visible object and the virtual camera.
  • the shadow transformation adjusts the coordinates of the scene according to the steps of claim 5 to generate shadows.
  • a panoramic presentation device comprising:
  • the receiving unit includes a spherical screen
  • the projection unit includes a projector equipped with a fisheye lens, the projector is located at the edge of the spherical screen and the fisheye lens is directly facing the center of the spherical screen;
  • the image processing unit is configured to convert the acquired 3D scene model into a panoramic plane image suitable for the projection unit and the receiving unit, and to project it onto the receiving unit through the projection unit; the image processing unit applies the panorama presentation method according to any one of claims 1-8.
  • a computer-readable storage medium storing a computer program, when the computer program is executed by a processor, implements the panorama presentation method according to any one of claims 1-8.
  • the present invention has at least the following beneficial effects: the present invention projects the image onto the entire spherical surface, and the horizontal and vertical fields of view are both 360°.
  • the environment image is evenly distributed on the spherical surface by equidistant projection, the position of each point of the image is consistent with the corresponding direction in the observed real environment, and the distortion is theoretically zero.
  • the fisheye lens is spherical imaging, which can preserve the information of the hemisphere completely and without distortion.
  • the projector and spherical screen have a specific geometric relationship, so the distortion of the combination of fisheye projection and spherical screen can be regarded as zero.
  • with the existing projection light paths, when generating the projected image the software needs to reconstruct the light path to obtain the mathematical relationship between the image before and after deformation; different areas of the image often obey different mathematical relationships, and a suitable formula must be chosen for each point of the image, which makes image generation slow.
  • the present invention adopts a uniform isometric transformation, does not need a point-by-point judgment process, and greatly improves the speed of image generation.
  • the present invention only changes one step in the image generation process, and directly generates a panoramic image, without the complicated steps of generating six images, synthesizing a cube map, and then additionally converting it into a panoramic image, and the speed is at least 6 times higher than that of the previous cube map method.
  • the image simulation of the present invention is natural and rich in content.
  • the existing small-animal virtual reality equipment uses a flat reflector 15 cm in diameter and a special convex angle-magnifying mirror, which are expensive to manufacture and troublesome to maintain in experimental environments with abundant shed fur and excreta;
  • moreover, their aluminum material is soft and easily damaged.
  • FIG. 1 is a schematic diagram of the conversion and derivation process of the projection image of the fisheye lens of the present invention.
  • Fig. 2 is a flow chart of a 3D object generating a 2D plane image.
  • the horizontal field of view of the prior art is 270°, which is larger than that of small animals, while its maximum vertical projection field of view is only -20° to 60°, which cannot satisfy neuroscience studies that require visual stimulation of mice from above.
  • the present invention selects a projector 2 with interchangeable lenses, replaces the ordinary medium-telephoto lens with a fisheye lens, and forms a projection structure with the spherical screen 1 without any reflective mirrors.
  • the spherical screen 1 used in the present invention is a true sphere; to facilitate experimental operations, about one quarter of the spherical screen, at the bottom and rear, is generally cut away.
  • the present invention selects a fisheye lens with a projection angle of 180°, the fisheye lens is arranged on the edge of the spherical screen 1, is tangent to the spherical screen and faces the center of the sphere, and the small animal watching the image is located at the center of the spherical screen 1,
  • according to the geometric relationship that a central angle is twice the corresponding inscribed angle, the image displayed by the fisheye lens can form a complete spherical surface that covers the entire spherical screen 1, solving the problem of the narrow vertical projection field of view in the prior art.
  • Projector 2 needs to display a complete spherical image on spherical screen 1, but projector 2 outputs a flat image, so the image must be transformed into a panorama containing all the contents of the environment, so that the projected image forms, around the small animal, a visual space with correct positions and shapes.
  • the image needs to be transformed into an isometric image
  • the small animal is located at the center of the spherical screen 1, and the images viewed on the spherical screen 1 have an isometric relationship, so the projector 2 needs to project an isometric image.
  • the vast majority of consumer fisheye lenses are designed and manufactured according to the isometric (equidistant) projection model, so the image projected by the fisheye lens is an isometric image. Figure 1 shows the derivation of the fisheye-lens projection image conversion.
  • the present invention uses the fisheye lens to replace the lens and the reflected light path of the projector 2 in the prior art, thereby expanding the vertical field of view of the projected image.
  • the invention evenly distributes the image on the spherical surface by equidistant projection, so that the position of each point on the image is consistent with the corresponding direction in the real environment; unlike the planar imaging of an ordinary lens, the fisheye lens images onto a sphere and can preserve the environmental information completely and without distortion.
  • the projector and the spherical screen have a specific geometric relationship, so as to achieve zero distortion of the projected image on the spherical screen. In addition to the isometric transformation, the present invention does not require any other distortion correction process.
  • the isometric imaging structure formed by the fisheye lens and the spherical screen 1, together with the two-to-one relationship between the central angle and the inscribed angle, jointly realizes the isometric panoramic transformation of the image, which greatly simplifies the image transformation process and improves the image transformation speed.
  • the 720° panorama of the present invention includes 360° horizontal and vertical fields of view, that is, all the information visible in the environment, whereas the 360° panorama commonly seen in daily life is a horizontal 360° environmental image shot by turning a handheld camera in a circle; it can be regarded as an unwrapped cylindrical image and excludes the sky directly above and the ground directly below.
  • the existing panorama generation methods generally use cube maps, that is, six cameras facing the six faces of a cube take pictures respectively, or one camera takes pictures along the x, -x, y, -y, z and -z axes respectively; the six images are combined into a cube map, passed to the main virtual camera representing the eye, and then converted into a panorama.
  • This kind of panorama is slow to generate, and if the environment changes are to be reflected all the time, it may cause a lag in the generation of virtual reality space video and interactive response.
  • the present invention can directly generate a panorama image by intervening in the middle process of generating a picture without going through the cube map conversion, which saves a lot of time.
  • the core step of converting 3D space into 2D plane image in virtual reality software is projective transformation, so the present invention intervenes in the projective transformation step.
  • the orientation setting of the virtual camera needs to be the same as the installation orientation of the projector 2.
  • given the coordinates of each scene object in 3D space before the projection transformation, the coordinates of each object in the panorama after the projection transformation can be calculated through the isometric imaging formula; however, in order to meet the special requirements of the virtual reality software regarding homogeneous coordinates, depth testing, etc., and to interface with the stages upstream and downstream of the projection transformation,
  • the present invention adjusts the projection transformation in the original pipeline according to the characteristics of the projection transformation.
  • the original projection transformation requires only one projection matrix, but a matrix can only translate, rotate and scale; it cannot accomplish other, more complex changes.
  • the adjusted projection transformation of the present invention includes two parts, the adjustment of the scene coordinates and the projection matrix, as follows:
  • the 3D coordinates are adjusted to: x1=x0*θ*secθ, y1=y0*θ*secθ, z1=√(x0^2+y0^2+z0^2);
  • the projection matrix is changed to:
  • [1/(π*A),0,0,0, 0,1/π,0,0, 0,0,-(f+n)/(f-n),-2*n*f/(f-n), 0,0,-1,0]
  • where A is the projector image aspect ratio, n is the closest distance of a visible object, and f is the farthest distance of a visible object.
  • shadow transformation adjusts the coordinates of the scene according to the 3D coordinate adjustment method in the aforementioned projection transformation to generate shadows.
  • the main control program of the virtual reality software runs on the CPU while some graphics operations run on the GPU; for compatibility with the graphics programming interface, the aforementioned projection matrix is changed on the CPU through a virtual reality software script, and the 3D coordinates are changed on the GPU through a virtual reality software shader program.
  • the above changes apply to the OpenGL graphics programming interface. If another graphics programming interface such as DirectX is used, the corresponding signs and parameters need to be changed according to its coordinate definitions.
  • the present invention also provides another embodiment, which differs from the above embodiment in that two back-to-back cameras are set up to capture images of the two hemispheres respectively; the two images are then combined into one image and transmitted to the main virtual camera representing the eye, or the panorama can be displayed directly, after adjusting the parameters, through the dual-camera display function of the virtual reality software.
  • the present invention also provides a panoramic presentation device, comprising:
  • the receiving unit includes a spherical screen 1;
  • the projection unit includes a projector 2 equipped with a fisheye lens, the projector 2 is located at the edge of the spherical screen 1 and the fisheye lens is facing the center of the spherical screen 1;
  • the image processing unit is used to convert the acquired 3D scene model into a panoramic image suitable for the projection unit and the receiving unit, and project it onto the receiving unit through the projection unit, and the image processing unit applies the panoramic presentation method of the present invention .
  • the panorama presentation method of the present invention can also be applied to a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the panorama presentation method of the present invention can be implemented.
  • the present invention can be applied to a landscape sphere.
  • in an existing landscape sphere, the image is viewed from outside the sphere and is a 360° panorama converted by a cylindrical projection method;
  • the upper and lower ends of the sphere are greatly distorted, so it can only be used in applications with low distortion requirements, such as entertainment.
  • the application of the landscape sphere of the present invention can reduce image distortion and improve the ability of the landscape sphere to interact with complex images. It is only necessary to change the front projection screen of the virtual reality device to the rear projection screen of the landscape sphere.
  • the projection device of the present invention can also be applied to dome-screen halls, experience halls or theaters.
  • compared with a flat screen or a curved wall screen, the spherical screen 1 covers a much wider field of view, and the picture has no obvious turns or seams.
  • the combination of a single fisheye projector 2 and a complete spherical screen 1 in the present invention can provide a 720° projection environment with better viewing immersion, and can be used as a glass-floored venue, a flying theater or an aerial theater.
  • the present invention places no requirement on the specific position of the projector 2; the projector 2 only needs to be at the edge of the spherical screen 1 and face the center of the sphere. Taking advantage of this feature, the projector 2 can be mounted on a curved guide rail to simulate the sun rising in the east and setting in the west; it is only necessary to set the orientation of the virtual camera in the software to match the orientation of the projector 2.
  • the present invention presents a panoramic image by using the combined structure of a single fisheye projector and a spherical screen; using the geometric relationship between the fisheye lens and the viewer's position together with the isometric projection transformation, the 180° fisheye projection presents 720° of environmental information to the viewer at the center of the spherical screen, greatly simplifies the image conversion process, achieves zero distortion, can restore the real environment, and gives the viewer an immersive sense of presence.
  • the one-step generation method of the 720° panorama based on a single virtual camera does not require the complicated steps of collecting six images from multiple cameras, combining them into a cube map and converting it into a panorama; instead, it directly adjusts the vertex positions of the scene during the 3D-space-to-2D-image process and generates the final panorama in one step, which is more than 6 times faster.

Abstract

The present invention provides a panoramic presentation method and an apparatus therefor. In terms of hardware, the invention presents a panoramic image by using a combined structure of a single fisheye projector and a spherical screen; using the geometric relationship between the fisheye lens and the viewer's position together with the isometric projection transformation, the 180° fisheye projection presents 720° of environmental information to a viewer at the center of the spherical screen, greatly simplifying the image conversion process and achieving zero distortion. In terms of software, a one-step 720° panorama generation method based on a single virtual camera eliminates the complicated steps of capturing six images with multiple cameras, combining them into a cube map and then converting the cube map into a panorama; instead, the vertex positions of the scene are adjusted directly during the 3D-space-to-2D-image process and the final panorama is generated in one step. The close combination of hardware and software increases the image conversion speed by more than a factor of 6, which is sufficient to display a realistic environment, give the viewer an immersive sense of presence, and allow real-time changes and interaction with the viewer.

Description

Panoramic presentation method and apparatus therefor
Technical Field
The present invention belongs to the field of virtual reality, and in particular relates to a panoramic presentation method and an apparatus therefor.
Background
For ethical reasons, many neuroscience experiments cannot be performed on humans, and brain research in small animals greatly helps researchers explore the mechanisms of brain function and open up new areas of neuroscience. Behavioral neuroscience experiments are often based on animals moving freely in a specific space, using internationally standard paradigms such as maze tests of memory, the elevated plus maze test of exploratory behavior, and the three-chamber test of social deficits, but such behavioral experiments alone can hardly elucidate the underlying neural principles. Elucidating neural principles requires many large, position-fixed recording devices, such as two-photon microscopes and in vivo patch clamps. These large recording devices offer performance that miniature animal-mounted devices cannot match, such as a wide recording range and high resolution, and are therefore better suited to thoroughly dissecting the neural networks that mediate animal behavior. However, using large, position-fixed recording devices to study how animals move through space has inherent limitations, and virtual reality (VR) technology makes spatial roaming from a fixed position possible.
Virtual reality for humans uses head-mounted devices that present images with parallax to the person's two eyes to form stereoscopic vision. Unlike humans, whose forward-facing eyes make them frontal-eyed animals, small animals such as mice and rabbits are lateral-eyed: their eyes are located on the sides of the head, giving them an extremely wide horizontal field of view, which in mice reaches 220°-230°. Covering such a wide field of view requires large display surfaces close to both sides of the animal's head, so that the image is presented on a large screen surrounding the animal; moreover, animal virtual reality is used for brain research, and during experiments a variety of devices such as microscopes and nerve electrodes must be used on the brain. Therefore, head-mounted VR devices designed for humans are not suitable for small animals.
There are two kinds of existing small-animal VR devices. One kind surrounds the small animal with multiple display screens. To facilitate experimental operations, the screens must be placed farther away, which shrinks the field of view occupied by each screen, so the number of screens surrounding the animal must be increased to cover the animal's horizontal field of view, and further increasing the number of screens is limited by the computer's graphics capacity. In addition, a small animal's field of view is concentrated obliquely upward, and in some experiments the visual stimulus is even presented directly from above, so a virtual reality setup pieced together from multiple screens arranged along a horizontal line cannot meet the need for an obliquely upward field of view and cannot present an experimental environment that simulates the structure of real space. Furthermore, when a flat image is displayed on an uneven medium, such as multiple screens surrounding a small animal, the position and shape of each object in three-dimensional space cannot be mapped accurately, the presented virtual space is distorted, and the viewer's sense of presence or immersion is greatly reduced.
The other, more commonly used small-animal VR device surrounds the small animal with a spherical screen, and a projector projects the image onto the spherical screen surrounding the mouse. There are two main projection light paths. One places the projector behind and above the small animal, and the projector's image is reflected onto the spherical screen by a flat mirror and a convex mirror; the other moves the projector to the front of and below the small animal, and the projected image is reflected onto the spherical screen by a single convex mirror, so that instruments can be placed and experiments manipulated above and diagonally behind the mouse's head. However, the existing technology is limited by its optical components and projection light path, and the vertical projection field of view is small; the largest known vertical projection field of view is only about -20° to 60°, which cannot satisfy neuroscience studies that require visual stimulation of small animals from above, and the virtual reality images projected by the above methods show significant distortion.
In addition, humans and animals have fast motion vision, so any delay in the projected image causes discomfort to the viewer. Increasing the image generation speed requires raising the refresh rate of the display or projector, which greatly increases the computational burden of image processing. To meet the required generation speed, the prior art reduces the number of virtual scene objects to lighten the computational load. Small-animal virtual reality projections that use this approach are simplified to the point of representing the experimental environment with points, lines and planes; such an environment is too abstract for animals to understand, and a great deal of time and labor is needed to train the animals before formal experiments can begin.
In summary, existing virtual reality technology is unsuitable for small-animal brain research in terms of field of view, image distortion and generation speed, so the existing technology needs to be improved.
Summary of the Invention
The present invention provides a panoramic presentation method to solve at least one of the problems of the prior art, such as large image distortion, slow image conversion speed and a small vertical field of view, the method comprising the following steps:
Step S1: construct a projection structure;
the projection structure comprises a projector and a spherical screen, the projector is equipped with a fisheye lens, the projector is arranged on the spherical screen, and the fisheye lens is tangent to the spherical screen and faces the center of the spherical screen;
Step S2: modify the content of the projection transformation in the virtual reality software according to the isometric imaging formula, and bring all scene objects in the virtual space into the field of view of the virtual camera to generate a 720° panorama, the 720° panorama being a 2D panorama;
Step S3: project the 2D panorama through the fisheye lens; the viewer is located at the center of the spherical screen, and the 2D panorama forms a three-dimensional space on the spherical screen.
Further, the projection transformation includes adjusting the scene coordinates and the projection matrix according to the isometric imaging formula.
Further, step S2 also includes adjusting view frustum culling and changing the shadow transformation.
Further, the isometric (equidistant) imaging formula is d=n*θ, where d is the distance from the object's position on the 2D plane image to the center of the image, n is the distance from the imaging plane to the virtual camera, and θ is the vector angle of the object, that is, the angle between the line connecting the object to the virtual camera and the z-axis; the z-axis represents the direction of the virtual camera, the virtual camera represents the viewer's eye, and the direction of the virtual camera is the same as the mounting direction of the projector.
Further, adjusting the scene coordinates includes the following steps:
x1=x0*θ*secθ;
y1=y0*θ*secθ;
z1=√(x0^2+y0^2+z0^2);
where θ is the vector angle of the object, that is, the angle between the line connecting the object to the virtual camera and the z-axis, and the z-axis represents the direction of the virtual camera.
Further, the projection matrix is:
[1/(π*A),0,0,0,
0,1/π,0,0,
0,0,-(f+n)/(f-n),-2*n*f/(f-n),
0,0,-1,0]
where A is the image aspect ratio of the projector, n is the closest distance between a visible object and the virtual camera, and f is the farthest distance between a visible object and the virtual camera.
Further, adjusting view frustum culling includes the following steps:
move the scene a distance f toward the front of the virtual camera and perform view frustum culling;
move the scene that has passed view frustum culling a distance f back toward the virtual camera;
f is the farthest distance between a visible object and the virtual camera.
Further, the shadow transformation adjusts the scene coordinates according to the steps of claim 5 to generate shadows.
A panoramic presentation apparatus, comprising:
a receiving unit, the receiving unit comprising a spherical screen;
a projection unit, the projection unit comprising a projector equipped with a fisheye lens, the projector being located at the edge of the spherical screen with the fisheye lens facing the center of the spherical screen;
an image processing unit, the image processing unit being configured to convert an acquired 3D scene model into a panoramic plane image suited to the projection unit and the receiving unit and to project it onto the receiving unit through the projection unit, the image processing unit applying the panoramic presentation method according to any one of claims 1-8.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the panoramic presentation method according to any one of claims 1-8.
The present invention has at least the following beneficial effects. The invention projects the image onto the entire spherical surface, so the horizontal and vertical fields of view are both 360°.
The environmental image is evenly distributed over the spherical surface by equidistant projection, so the position of every point of the image is consistent with the corresponding direction in the observed real environment, with theoretically zero distortion.
Unlike the planar imaging of an ordinary lens, the fisheye lens images onto a sphere and can preserve the information of a hemisphere completely and without distortion. The projector and the spherical screen have a specific geometric relationship, so the distortion of the combination of fisheye projection and spherical screen can be regarded as zero.
With existing virtual reality projection light paths, the software must reconstruct the light path when generating the projected image in order to obtain the mathematical relationship between the image before and after deformation; different regions of the image often obey different mathematical relationships, and a suitable formula must be selected for every point of the image, which makes image generation slow. The present invention instead uses a single uniform isometric transformation, needs no point-by-point decision process, and greatly increases the image generation speed. Moreover, the present invention changes only one step of the image generation pipeline and generates the panorama directly, without the complicated steps of rendering six images, combining them into a cube map and then additionally converting the cube map into a panorama; it is at least 6 times faster than the previous cube-map method.
Compared with the simple, abstract images of similar laboratory and commercial equipment, the images of the present invention are naturally rendered and rich in content.
Existing small-animal virtual reality equipment uses a flat reflector about 15 cm in diameter and a special convex angle-magnifying mirror; these mirrors are expensive to manufacture, troublesome to maintain in experimental environments with abundant shed fur and excreta, and their aluminum material is soft and easily damaged. The present invention uses no intermediate reflecting mirrors at all and requires only a fisheye projector, reducing manufacturing and maintenance costs.
Brief Description of the Drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments.
FIG. 1 is a schematic diagram of the derivation of the fisheye-lens projection image conversion of the present invention.
FIG. 2 is a flow chart of generating a 2D plane image from 3D objects.
In the figures: 1 - spherical screen, 2 - projector.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The prior art has a horizontal field of view of 270°, which is larger than the horizontal field of view of small animals, but its maximum vertical projection field of view is only -20° to 60°, which cannot satisfy neuroscience studies that require visual stimulation of mice from above. To enlarge the vertical field of view of small-animal virtual reality, the present invention selects a projector 2 with interchangeable lenses, replaces the ordinary medium-telephoto lens with a fisheye lens, and forms a projection structure with the spherical screen 1 without any reflective mirrors. The spherical screen 1 used in the present invention is a true sphere; to facilitate experimental operations, about one quarter of the spherical screen, at the bottom and rear, is generally cut away. The present invention selects a fisheye lens with a projection angle of 180°; the fisheye lens is arranged at the edge of the spherical screen 1, tangent to the spherical screen and facing the center of the sphere, and the small animal watching the image is located at the center of the spherical screen 1. According to the geometric relationship that a central angle is twice the corresponding inscribed angle, the image displayed by the fisheye lens can form a complete spherical surface that covers the entire spherical screen 1, solving the problem of the narrow vertical projection field of view in the prior art.
The projector 2 needs to display a complete spherical image on the spherical screen 1, but the projector 2 outputs a flat image, so this image must be transformed into a panorama containing all the contents of the environment, so that the projected image forms, around the small animal, a visual space with correct positions and shapes.
The image must be transformed into an isometric (equidistant) image
The small animal is located at the center of the spherical screen 1, and the image it views on the spherical screen 1 has an equidistant relationship, so the projector 2 must project an equidistant image. The vast majority of consumer fisheye lenses are designed and manufactured according to the equidistant projection model, so the image projected by the fisheye lens is an equidistant image. FIG. 1 shows the derivation of the fisheye-lens projection image conversion. In FIG. 1(a), the small animal acting as the viewer is located at the center of the sphere, and the size of an object it sees on the spherical screen 1 is proportional to the viewing angle, i.e. ∠1/s1 = ∠2/s2; this imaging relationship is called equidistant or equiangular projection. In FIG. 1(b), the fisheye lens is located at the edge of the spherical screen 1 and is tangent to it; the inscribed angle ∠3 and the central angle ∠2 have a 1:2 relationship, i.e. ∠2 = 2∠3, so the image projected by the fisheye lens over 180° corresponds, as seen by the animal at the center of the sphere, to exactly 360° of environment in every direction, i.e. a 720° panorama with 360° horizontal and vertical fields of view. In FIG. 1(c), the equidistant image seen by the small animal is obtained from the equidistant projection of the projector 2 after the proportional central-angle/inscribed-angle transformation, and it still has the equidistant (equiangular) relationship.
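For reference, the angular relationships of FIG. 1 can be written out explicitly (the symbols R, θc and θl are introduced here only for explanation and do not appear in the original text): on the screen, the arc length seen from the center is s = R*θc, where R is the screen radius and θc the central angle, so ∠1/s1 = ∠2/s2 = 1/R, which is the equidistant relationship seen by the animal; at the lens, θc = 2*θl, where θl is the inscribed angle at the lens position; and for an equidistant fisheye obeying d = n*θl, the image radius d = (n/2)*θc remains proportional to the central angle seen from the center of the sphere. Because the 180° lens covers θl up to 90°, the corresponding central angle θc reaches 180° in every azimuth, that is, the entire sphere around the viewer.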
The present invention uses the fisheye lens to replace the lens and reflected light path of the projector 2 in the prior art, thereby enlarging the vertical field of view of the projected image. The present invention distributes the image evenly over the spherical surface by equidistant projection, so that the position of every point on the image is consistent with the corresponding direction in the real environment; moreover, unlike the planar imaging of an ordinary lens, the fisheye lens images onto a sphere and can preserve the environmental information completely and without distortion, and the projector and the spherical screen have a specific geometric relationship, so zero distortion of the projected image on the spherical screen is achieved. Apart from the isometric transformation, the present invention requires no other distortion-correction process; the isometric imaging structure formed by the fisheye lens and the spherical screen 1, combined with the two-to-one relationship between the central angle and the inscribed angle, jointly realizes the isometric panoramic transformation of the image, greatly simplifying the image transformation process and increasing the image conversion speed.
Note that the 720° panorama of the present invention includes 360° horizontal and vertical fields of view, that is, all information visible in the environment, whereas the 360° panorama commonly seen in daily life is a horizontal 360° environmental image shot by turning a handheld camera in a circle; it can be regarded as an unwrapped cylindrical image and does not include the sky directly above or the ground directly below.
One-step generation of the panorama from the 3D virtual space
The preceding section described how the hardware innovations of the present invention solve the problems of a small vertical projection field of view, image distortion and slow image conversion; the present invention also improves the panorama generation method in the software of the prior art to further increase the image conversion speed.
Existing panorama generation methods generally use cube maps: six cameras facing the six faces of a cube take pictures respectively, or one camera takes pictures along the x, -x, y, -y, z and -z axes respectively, and the six images are combined into a cube map, passed to the main virtual camera representing the eye, and then converted into a panorama. This kind of panorama is slow to generate, and if environmental changes must be reflected at all times, it may cause the generation of the virtual reality video and the interactive response to lag.
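For illustration only (this sketch is not part of the patent disclosure), the final resampling step of the conventional cube-map route could look as follows in Python/NumPy, assuming the six face images have already been rendered; the function name, the array layout and the simplified face-orientation conventions are all assumptions of this sketch. It makes visible the per-pixel face selection and resampling that the one-step method described next avoids.

```python
import numpy as np

def cubemap_to_equidistant(faces, out_size):
    """Resample six cube-map face images into a single equidistant panorama.

    faces    : dict mapping '+x','-x','+y','-y','+z','-z' to (H, H, 3) arrays,
               each rendered with a 90-degree field of view along that axis.
    out_size : side length in pixels of the square output image.

    In the output, image radius is proportional to the polar angle (d = n*theta),
    with the full radius corresponding to theta = 180 degrees, so the whole
    sphere fits in one picture.  Every output pixel must pick a cube face and
    be resampled from it -- the per-pixel work that the one-step method avoids.
    Face-orientation conventions are simplified; a real cube-map layout fixes
    specific per-face flips.
    """
    H = next(iter(faces.values())).shape[0]
    out = np.zeros((out_size, out_size, 3))

    coords = (np.arange(out_size) + 0.5) / out_size * 2.0 - 1.0   # pixel centres in [-1, 1]
    v, u = np.meshgrid(coords, coords, indexing="ij")             # v: rows, u: columns
    r = np.sqrt(u * u + v * v)
    theta = np.clip(r, 0.0, 1.0) * np.pi          # equidistant: image radius ~ polar angle
    phi = np.arctan2(v, u)                        # azimuth around the camera axis

    # Unit viewing direction for every output pixel (+z taken as the camera axis).
    dirs = {"x": np.sin(theta) * np.cos(phi),
            "y": np.sin(theta) * np.sin(phi),
            "z": np.cos(theta)}
    dominant = np.argmax(np.stack([np.abs(dirs["x"]), np.abs(dirs["y"]), np.abs(dirs["z"])]), axis=0)

    for axis_idx, axis in enumerate("xyz"):
        for sign, tag in ((1.0, "+"), (-1.0, "-")):
            mask = (dominant == axis_idx) & (sign * dirs[axis] > 0)
            if not mask.any():
                continue
            a, b = [k for k in "xyz" if k != axis]          # the two in-plane axes
            denom = sign * dirs[axis][mask]                 # dominant (positive) component
            s = dirs[a][mask] / denom                       # face-plane coords in [-1, 1]
            t = dirs[b][mask] / denom
            fj = np.clip(((s + 1) / 2 * (H - 1)).astype(int), 0, H - 1)
            fi = np.clip(((t + 1) / 2 * (H - 1)).astype(int), 0, H - 1)
            out[mask] = faces[tag + axis][fi, fj]

    out[r > 1.0] = 0.0                                      # outside the fisheye circle
    return out
```

Rendering six views and then running a resampling like this for every frame is the work that the modified projection transformation described below replaces with a single pass.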
The present invention can skip the cube-map conversion and instead intervene in the intermediate process of generating the picture, generating the panorama directly and saving a great deal of time. The core step by which virtual reality software converts 3D space into a 2D plane image is the projection transformation, so the present invention intervenes in the projection transformation step.
The specific intervention includes the following steps:
First, the orientation of the virtual camera must be set to be the same as the mounting orientation of the projector 2.
Then, all scene objects are brought into the camera's view and the original projection transformation is changed using the isometric imaging formula. The isometric imaging formula is d=n*θ, where d is the distance from an object's position on the plane image to the center of the image, n is the distance from the imaging plane to the camera, and θ is the vector angle of the object, that is, the angle between the line connecting the object to the camera and the z-axis, the z-axis representing the camera's direction.
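As a minimal numerical illustration of the formula d=n*θ (again not part of the disclosure; the function name and the choice of +z as the camera axis are assumptions of this sketch):

```python
import numpy as np

def equidistant_image_position(point, n=1.0):
    """Map a 3D point in camera space to its 2D position on an equidistant image.

    Implements d = n*theta from the text: theta is the angle between the camera
    axis (taken here as +z) and the line from the camera to the point, and d is
    the distance of the image point from the image centre; the azimuth around
    the axis is preserved.
    """
    x, y, z = point
    rho = np.hypot(x, y)                    # distance of the point from the optical axis
    theta = np.arctan2(rho, z)              # vector angle of the object
    d = n * theta                           # equidistant rule: image radius ~ angle
    if rho == 0.0:                          # a point exactly on the axis maps to the centre
        return np.array([0.0, 0.0])
    return np.array([d * x / rho, d * y / rho])

# Two points in the same direction but at different ranges map to the same image
# position: only the angle matters under the equidistant rule.
print(equidistant_image_position((1.0, 0.0, 1.0)))   # ~[0.785, 0.0]
print(equidistant_image_position((2.0, 0.0, 2.0)))   # same position
```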
Given the coordinates of each scene object in 3D space before the projection transformation, the coordinates of each object in the panorama after the transformation can be computed through the isometric imaging formula. However, in order to satisfy the special requirements of the virtual reality software regarding homogeneous coordinates, depth testing and so on, and to interface with the stages upstream and downstream of the projection transformation, the present invention adjusts the projection transformation of the original pipeline according to its characteristics. The original projection transformation needs only a single projection matrix, but a matrix can only translate, rotate and scale, and cannot accomplish other, more complex changes. The adjusted projection transformation of the present invention therefore consists of two parts, an adjustment of the scene coordinates and a projection matrix, as follows:
The 3D coordinates are adjusted to:
x1=x0*θ*secθ,
y1=y0*θ*secθ,
z1=√(x0^2+y0^2+z0^2),
and the projection matrix is changed to:
[1/(π*A),0,0,0,
0,1/π,0,0,
0,0,-(f+n)/(f-n),-2*n*f/(f-n),
0,0,-1,0]
where A is the projector image aspect ratio, n is the closest distance of a visible object, and f is the farthest distance of a visible object.
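Purely as an illustrative transcription of the two parts above, and not a definitive implementation of the claimed method, the sketch below applies the per-vertex coordinate adjustment and then the 4x4 matrix with the usual perspective divide. The function names are invented here, the camera is assumed to look along -z as in OpenGL view space, and, as the text notes for DirectX, signs and parameters may need adjusting for a particular graphics API.

```python
import numpy as np

def adjust_vertex(p, forward=np.array([0.0, 0.0, -1.0])):
    """Per-vertex coordinate adjustment from the text:
    x1 = x0*theta*sec(theta), y1 = y0*theta*sec(theta), z1 = sqrt(x0^2+y0^2+z0^2).

    p       : vertex position in camera (view) space.
    forward : the camera's viewing direction (the 'z-axis' of the text);
              -z is assumed here, as in OpenGL view space.
    """
    x0, y0, _ = p
    r = np.linalg.norm(p)
    cos_t = np.dot(p, forward) / r                 # cosine of the vector angle theta
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    sec_t = 1.0 / cos_t
    return np.array([x0 * theta * sec_t, y0 * theta * sec_t, r])

def panoramic_projection_matrix(A, n, f):
    """The adjusted projection matrix from the text.

    A : projector image aspect ratio.
    n : closest distance of a visible object; f : farthest distance.
    """
    return np.array([
        [1.0 / (np.pi * A), 0.0,         0.0,                0.0],
        [0.0,               1.0 / np.pi, 0.0,                0.0],
        [0.0,               0.0,        -(f + n) / (f - n), -2.0 * n * f / (f - n)],
        [0.0,               0.0,        -1.0,                0.0],
    ])

# One off-axis vertex pushed through both parts.
M = panoramic_projection_matrix(A=1.0, n=0.1, f=100.0)
v1 = adjust_vertex(np.array([1.0, 0.0, -0.5]))     # a vertex in view space
clip = M @ np.append(v1, 1.0)                      # homogeneous clip-space coordinates
ndc = clip[:3] / clip[3]                           # perspective divide
# With these conventions w = -z1 is negative; a production shader would resolve
# the sign for its particular API, as the text notes for DirectX.
print(ndc)
```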
Changing the projection transformation step of the original pipeline by the above method directly generates an equidistantly distributed 720° panoramic image without using the cube-map approach, which greatly shortens panorama generation time. To realize the panoramic imaging better, the following two aspects are adjusted:
1. To prevent scene objects behind the virtual camera from being culled, the view frustum culling step of the virtual reality software is changed. Since this step cannot be removed for panoramic imaging, the specific adjustment is: before view frustum culling, all scene objects are moved forward a distance f along the z-axis, and after view frustum culling is completed, all scene objects are moved back a distance f (a sketch of this adjustment follows item 2 below).
2. Because the shadow generation pipeline and the scene image generation pipeline are independent of each other, the shadow transformation adjusts the scene coordinates according to the 3D coordinate adjustment described in the projection transformation above, and then generates the shadows.
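A schematic Python illustration of the culling adjustment in item 1 (the depth-only cull, the widened far limit during the shifted test and all names are simplifying assumptions of this sketch, not the engine's actual frustum test): shifting every object forward by the far distance f before culling and back afterwards keeps objects that lie behind the camera, but are visible in the 720° panorama, from being discarded.

```python
import numpy as np

def simple_depth_cull(objects, near, far):
    """Stand-in for the engine's frustum test: keep points whose view-space
    depth (distance in front of the camera, measured along -z) is in [near, far]."""
    return [p for p in objects if near <= -p[2] <= far]

def cull_for_panorama(objects, near, far):
    """Item 1 above: move the scene forward by f, cull, then move it back by f.
    The far limit is widened to 2*f during the shifted test so that objects
    originally at the far limit are not lost (an assumption of this sketch)."""
    shift = np.array([0.0, 0.0, -far])                   # 'forward' is -z here
    shifted = [p + shift for p in objects]               # move forward a distance f
    kept = simple_depth_cull(shifted, near, 2.0 * far)   # cull the shifted scene
    return [p - shift for p in kept]                     # move the survivors back

near, far = 0.1, 10.0
scene = [np.array([0.0, 0.0, -5.0]),    # in front of the camera
         np.array([0.0, 0.0,  5.0])]    # behind the camera, still part of the 720° view

print(len(simple_depth_cull(scene, near, far)))    # 1 -- the rear object is discarded
print(len(cull_for_panorama(scene, near, far)))    # 2 -- both objects survive
```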
Because the main control program of the virtual reality software runs on the CPU while some graphics operations run on the GPU, for compatibility with the graphics programming interface the aforementioned projection matrix is changed on the CPU through a virtual reality software script, and the 3D coordinates are changed on the GPU through a virtual reality software shader program. The above changes apply to the OpenGL graphics programming interface; if another graphics programming interface such as DirectX is used, the corresponding signs and parameters must be changed according to its coordinate definitions.
In addition, the present invention provides another embodiment which differs from the above embodiment in that two back-to-back cameras are set up to capture images of the two hemispheres respectively; the two images are then combined into one image and transmitted to the main virtual camera representing the eye, or the panorama can be displayed directly, after adjusting the parameters, through the dual-camera display function of the virtual reality software.
The present invention also provides a panoramic presentation apparatus, comprising:
a receiving unit, the receiving unit comprising a spherical screen 1;
a projection unit, the projection unit comprising a projector 2 equipped with a fisheye lens, the projector 2 being located at the edge of the spherical screen 1 with the fisheye lens facing the center of the spherical screen 1;
an image processing unit, the image processing unit being configured to convert an acquired 3D scene model into a panoramic image suited to the projection unit and the receiving unit and to project it onto the receiving unit through the projection unit, the image processing unit applying the panoramic presentation method of the present invention.
The panoramic presentation method of the present invention can also be applied to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the panoramic presentation method of the present invention.
In addition, the present invention can be applied to a landscape sphere. An existing landscape sphere is viewed from outside the sphere, and its image is a 360° panorama converted by a cylindrical projection method; the upper and lower ends of the sphere are greatly distorted, so it can only be used in applications with low distortion requirements, such as entertainment. Moreover, because the image conversion is slow, in general all frames of a video must be converted before it can be played, and even the few systems that can play in real time can only carry out extremely simple image interaction with people. A landscape sphere using the present invention can reduce image distortion and improve the ability of the landscape sphere to interact with complex images; it is only necessary to change the front-projection screen of the virtual reality device to the rear-projection screen of the landscape sphere.
The projection apparatus of the present invention can also be applied to dome-screen venues, experience halls or theaters. Compared with a flat screen or a curved wall screen, the spherical screen 1 covers a much wider field of view, and the picture has no obvious turns or seams. The combination of a single fisheye projector 2 and a complete spherical screen 1 in the present invention can provide a 720° projection environment with better viewing immersion, and can be used as a glass-floored venue, a flying theater or an aerial theater. The present invention places no requirement on the specific position of the projector 2; the projector 2 only needs to be at the edge of the spherical screen 1 and face the center of the sphere. Taking advantage of this feature, the projector 2 can be mounted on a curved guide rail to simulate the sun rising in the east and setting in the west; it is only necessary to set the orientation of the virtual camera in the software to match the orientation of the projector 2.
Thus, the present invention presents a panoramic image by using the combined structure of a single fisheye projector and a spherical screen; using the geometric relationship between the fisheye lens and the viewer's position together with the isometric projection transformation, the 180° fisheye projection presents 720° of environmental information to the viewer at the center of the spherical screen, greatly simplifies the image conversion process, achieves zero distortion, can restore the real environment, and gives the viewer an immersive sense of presence. The one-step 720° panorama generation method based on a single virtual camera does not require the complicated steps of capturing six images with multiple cameras, combining them into a cube map and converting it into a panorama; instead, the vertex positions of the scene are adjusted directly during the 3D-space-to-2D-image process and the final panorama is generated in one step, making it more than 6 times faster.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

  1. A panoramic presentation method, characterized by comprising the following steps:
    Step S1: constructing a projection structure;
    the projection structure comprises a projector and a spherical screen, the projector is equipped with a fisheye lens, the projector is arranged on the spherical screen, and the fisheye lens is tangent to the spherical screen and faces the center of the spherical screen;
    Step S2: modifying the content of the projection transformation in the virtual reality software according to the isometric imaging formula, and bringing all scene objects in the virtual space into the field of view of the virtual camera to generate a 720° panorama, the 720° panorama being a 2D panorama;
    Step S3: projecting the 2D panorama through the fisheye lens, the viewer being located at the center of the spherical screen, the 2D panorama forming a three-dimensional space on the spherical screen.
  2. The panoramic presentation method according to claim 1, characterized in that, in step S2, the modified content of the projection transformation comprises adjusting the scene coordinates and the projection matrix.
  3. The panoramic presentation method according to claim 1, characterized in that step S2 further comprises adjusting view frustum culling and changing the shadow transformation.
  4. The panoramic presentation method according to any one of claims 1-3, characterized in that the isometric imaging formula is d=n*θ, where d is the distance from the object's position on the 2D plane image to the center of the image, n is the distance from the imaging plane to the virtual camera, and θ is the vector angle of the object, that is, the angle between the line connecting the object to the virtual camera and the z-axis; the z-axis represents the direction of the virtual camera, the virtual camera represents the viewer's eye, and the direction of the virtual camera is the same as the mounting direction of the projector.
  5. The panoramic presentation method according to claim 2, characterized in that adjusting the scene coordinates comprises the following steps:
    x1=x0*θ*secθ;
    y1=y0*θ*secθ;
    z1=√(x0^2+y0^2+z0^2);
    where θ is the vector angle of the object, that is, the angle between the line connecting the object to the virtual camera and the z-axis, and the z-axis represents the direction of the virtual camera.
  6. The panoramic presentation method according to claim 2, characterized in that the projection matrix is:
    [1/(π*A),0,0,0,
    0,1/π,0,0,
    0,0,-(f+n)/(f-n),-2*n*f/(f-n),
    0,0,-1,0]
    where A is the image aspect ratio of the projector, n is the closest distance between a visible object and the virtual camera, and f is the farthest distance between a visible object and the virtual camera.
  7. The panoramic presentation method according to claim 3, characterized in that adjusting view frustum culling comprises the following steps:
    moving the scene a distance f toward the front of the virtual camera and performing view frustum culling;
    moving the scene that has passed view frustum culling a distance f back toward the virtual camera;
    f being the farthest distance between a visible object and the virtual camera.
  8. The panoramic presentation method according to claim 3, characterized in that the shadow transformation adjusts the scene coordinates according to the steps of claim 5 to generate shadows.
  9. A panoramic presentation apparatus, characterized by comprising:
    a receiving unit, the receiving unit comprising a spherical screen;
    a projection unit, the projection unit comprising a projector equipped with a fisheye lens, the projector being located at the edge of the spherical screen with the fisheye lens facing the center of the spherical screen;
    an image processing unit, the image processing unit being configured to convert an acquired 3D scene model in real time into a panoramic plane image suited to the projection unit and the receiving unit and to project it onto the receiving unit through the projection unit, the image processing unit applying the panoramic presentation method according to any one of claims 1-8.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the panoramic presentation method according to any one of claims 1-8.
PCT/CN2020/134040 2020-12-04 2020-12-04 Panoramic presentation method and apparatus therefor WO2022116194A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/134040 WO2022116194A1 (zh) 2020-12-04 2020-12-04 Panoramic presentation method and apparatus therefor
EP20956450.9A EP4044104A4 (en) 2020-12-04 2020-12-04 PANORAMIC PRESENTATION PROCESS AND ASSOCIATED DEVICE
US17/729,890 US20220253975A1 (en) 2020-12-04 2022-04-26 Panoramic presentation methods and apparatuses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/134040 WO2022116194A1 (zh) 2020-12-04 2020-12-04 Panoramic presentation method and apparatus therefor

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/729,890 Continuation US20220253975A1 (en) 2020-12-04 2022-04-26 Panoramic presentation methods and apparatuses

Publications (1)

Publication Number Publication Date
WO2022116194A1 true WO2022116194A1 (zh) 2022-06-09

Family

ID=81852840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134040 WO2022116194A1 (zh) 2020-12-04 2020-12-04 Panoramic presentation method and apparatus therefor

Country Status (3)

Country Link
US (1) US20220253975A1 (zh)
EP (1) EP4044104A4 (zh)
WO (1) WO2022116194A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563186A (zh) * 2023-05-12 2023-08-08 中山大学 Real-time panoramic perception system and method based on a dedicated AI perception chip

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020196538A1 (en) * 2001-06-06 2002-12-26 Lantz Edward J. Video-based immersive theater
CN109272478A (zh) * 2018-09-20 2019-01-25 华强方特(深圳)智能技术有限公司 Screen projection method and apparatus, and related device
WO2019155903A1 (ja) * 2018-02-08 2019-08-15 ソニー株式会社 Information processing device and method
CN110211028A (zh) * 2019-05-09 2019-09-06 东南大学 Fisheye image display method based on spherical panoramic display

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106237588B (zh) * 2016-07-28 2019-03-29 秦皇岛视听机械研究所有限公司 Multifunctional fitness system based on quadric-surface projection technology

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020196538A1 (en) * 2001-06-06 2002-12-26 Lantz Edward J. Video-based immersive theater
WO2019155903A1 (ja) * 2018-02-08 2019-08-15 ソニー株式会社 Information processing device and method
CN109272478A (zh) * 2018-09-20 2019-01-25 华强方特(深圳)智能技术有限公司 Screen projection method and apparatus, and related device
CN110211028A (zh) * 2019-05-09 2019-09-06 东南大学 Fisheye image display method based on spherical panoramic display

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4044104A4 *

Also Published As

Publication number Publication date
US20220253975A1 (en) 2022-08-11
EP4044104A4 (en) 2023-01-04
EP4044104A1 (en) 2022-08-17

Similar Documents

Publication Publication Date Title
JP6873096B2 (ja) Improvements in and relating to image formation
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US20100110069A1 (en) System for rendering virtual see-through scenes
WO2019041351A1 (zh) Method for real-time mixed rendering of a 3D VR video and a virtual three-dimensional scene
WO2018086295A1 (zh) Application interface display method and apparatus
CN107193124A (zh) Parameter design method for integrated-imaging high-density small-pitch LED display
CN101631257A (zh) Method and apparatus for realizing stereoscopic playback of a two-dimensional video stream
CN105137705B (zh) Method and apparatus for creating a virtual dome screen
WO2017128887A1 (zh) Corrected 3D display method, system and apparatus for panoramic images
JP2022542207A (ja) Generative latent texture proxies for object category modeling
WO2022076020A1 (en) Few-shot synthesis of talking heads
CN101968890A (zh) 360° panoramic simulation system based on spherical display
CN107810634A (zh) Display for stereoscopic augmented reality
JP2000122176A (ja) Information presentation method and apparatus
CN106780759A (zh) Method, apparatus and VR system for constructing a stereoscopic panorama of a scene from pictures
CN107005689B (zh) Digital video rendering
CN104581119A (zh) 3D image display method and head-mounted device
CN107562185B (zh) Light-field display system based on a head-mounted VR device and implementation method therefor
US20220253975A1 (en) Panoramic presentation methods and apparatuses
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
WO2009068942A1 (en) Method and system for processing of images
WO2019042028A1 (zh) Omnidirectional spherical light-field rendering method
CN110060349B (zh) Method for extending the field of view of an augmented reality head-mounted display device
CN106483814A (zh) Augmented-reality-based 3D holographic projection system and method of use thereof
US20230367386A1 (en) Systems and Methods for Providing Observation Scenes Corresponding to Extended Reality (XR) Content

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020956450

Country of ref document: EP

Effective date: 20220414


NENP Non-entry into the national phase

Ref country code: DE