WO2018232630A1 - Three-dimensional image preprocessing method and apparatus, and head-mounted display device - Google Patents

Three-dimensional image preprocessing method and apparatus, and head-mounted display device

Info

Publication number
WO2018232630A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
target
target image
dimensional
Prior art date
Application number
PCT/CN2017/089362
Other languages
English (en)
French (fr)
Inventor
黄政
赵越
谢俊
卢启栋
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司
Priority to PCT/CN2017/089362 (published as WO2018232630A1)
Priority to CN201780050816.7A
Publication of WO2018232630A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a method and a device for preprocessing a three-dimensional image and a head-mounted display device.
  • the shooting of three-dimensional images needs to be performed synchronously by two cameras, which realize three-dimensional capture by simulating the interpupillary distance of the human eyes.
  • at present, the axial spacing of the two cameras (hereinafter referred to as the dual-axis spacing) is usually adjusted in proportion to the object distance: a small dual-axis spacing is used when shooting a close-up, and a large dual-axis spacing is used when shooting a distant view.
  • the specific adjustment of the dual-axis spacing needs to be judged by the photographer based on experience.
  • however, if the dual-axis spacing is adjusted according to the photographer's experience, the imaging quality of the three-dimensional image is inevitably affected by that experience, and the stability of the imaging quality cannot be guaranteed.
  • at the same time, due to individual differences, the interpupillary distance varies from person to person, so that different people have markedly different perception experiences when viewing the same three-dimensional image; this causes a deviation in the sense of distance conveyed by the three-dimensional image and degrades the user's viewing experience.
  • in view of the above problems, embodiments of the present invention provide a three-dimensional image preprocessing method, a three-dimensional image preprocessing device, and a head-mounted display device, so as to reduce the deviation in the sense of distance presented to the user when the head-mounted display device displays a three-dimensional image.
  • a three-dimensional image preprocessing method includes: acquiring relative position information of a target object corresponding to each pixel point in a target image and color information of the pixel point, the relative position information including a distance and an angle of the target object with respect to the camera lens; reconstructing a three-dimensional model of the target image according to the relative position information and the color information; acquiring pupil distance information of a target user viewing the target image; and re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information.
  • a three-dimensional image preprocessing apparatus includes:
  • a relative position acquiring unit configured to acquire relative position information of a target object corresponding to each pixel point in the target image and color information of the pixel point, where the relative position information includes a distance and an angle of the target object with respect to the camera lens;
  • a three-dimensional model reconstruction unit configured to reconstruct a three-dimensional model of the target image according to relative position information of the target object corresponding to the pixel point and color information of the pixel point;
  • a pupil distance information obtaining unit configured to acquire pupil distance information of the target user who views the target image;
  • a three-dimensional image projection unit configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information.
  • a head-mounted display device comprising a processor and, electrically connected to the processor, a memory, a distance detecting module, a first display screen and a second display screen, wherein the memory is used for storing a target image and executable program code;
  • the processor is configured to read the target image and the executable program code from the memory, and to perform the following operations: acquiring relative position information of the target object corresponding to each pixel point in the target image and color information of the pixel point, the relative position information including a distance and an angle of the target object with respect to the camera lens; and reconstructing a three-dimensional model of the target image according to the relative position information and the color information;
  • the distance detecting module is configured to detect spacing information of the first display screen and the second display screen, and to acquire pupil distance information of the target user viewing the target image according to the spacing information;
  • the processor is further configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information, and then to display the left-eye two-dimensional image through the first display screen and the right-eye two-dimensional image through the second display screen.
  • the three-dimensional image preprocessing method acquires the relative position information of the target object corresponding to each pixel point in the target image and, combined with the color information of the pixel point, reconstructs the three-dimensional model of the target image; the three-dimensional model is then re-projected according to the pupil distance information of the particular user viewing the target image.
  • this ensures that every user viewing the target image obtains a projection effect matched to his or her own pupil distance, enhancing the user's three-dimensional viewing experience.
  • moreover, since three-dimensional model reconstruction and re-projection can be performed on the target image, there is no need to adjust the dual-axis spacing according to the distance of the target object when shooting; shooting can be done directly at a fixed dual-axis spacing, which removes the dependence on the photographer's experience and reduces the cost of shooting three-dimensional images.
  • FIG. 1 is a first schematic flow chart of a three-dimensional image preprocessing method according to an embodiment of the present invention;
  • FIG. 2 is a detailed schematic flow chart of step 101 shown in FIG. 1;
  • FIG. 3 is a schematic diagram of the imaging position relationship of a target image in a three-dimensional image preprocessing method according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the program modules of a three-dimensional image preprocessing apparatus according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a relative position acquiring unit of a three-dimensional image preprocessing apparatus according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of a three-dimensional model reconstruction unit of a three-dimensional image preprocessing apparatus according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of a pupil distance information acquiring unit of a three-dimensional image preprocessing apparatus according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a three-dimensional image projection unit of a three-dimensional image preprocessing apparatus according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present invention.
  • in one embodiment of the present invention, a three-dimensional image preprocessing method is provided, which can be applied to a head-mounted display device (for example, a 3D head-mounted theater, a virtual reality helmet, an augmented reality helmet, etc.) to preprocess and re-project the target three-dimensional image to be displayed according to the pupil distance information of different users, thereby reducing the deviation in the sense of distance presented to the user when the head-mounted display device displays the three-dimensional image and improving the user's viewing experience.
  • the three-dimensional image preprocessing method includes at least the following steps:
  • Step 101: acquire relative position information of a target object corresponding to each pixel point in the target image and color information of the pixel point, where the relative position information includes a distance and an angle of the target object with respect to the camera lens;
  • Step 102: reconstruct a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
  • Step 103: acquire pupil distance information of a target user who views the target image;
  • Step 104: re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information.
  • the target image is a three-dimensional image formed in advance by synchronous shooting with two cameras.
  • during the shooting of the target image, the axial spacing between the two cameras (hereinafter referred to as the dual-axis spacing) may be a fixed value.
  • for example, the dual-axis spacing can be set to 6.5 cm, a typical value of the human interpupillary distance. In this way, the imaging quality of the target image is not affected by the photographer's experience, and the cost of capturing the three-dimensional image is reduced.
  • the acquiring relative position information of a target object corresponding to each pixel point in the target image includes:
  • Step 201: obtain the dual-axis spacing information of the target image at the time of shooting and the focal length information corresponding to each pixel point, where the dual-axis spacing is kept fixed while the target image is shot;
  • Step 202: obtain the imaging position of each pixel point in the target image and the disparity information of the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point;
  • Step 203: calculate, according to the dual-axis spacing information, the focal length information and the disparity information, the distance of the target object corresponding to each pixel point in the target image with respect to the camera lens;
  • Step 204: calculate, according to the imaging position of each pixel point in the target image and the focal length information, the angle of the target object corresponding to each pixel point in the target image with respect to the camera lens.
  • the dual-axis spacing information is the distance between the lens optical axes of the two cameras that capture the target image, that is, the dual-axis spacing.
  • in this embodiment, the dual-axis spacing can be set to a fixed value, for example, 6.5 cm.
  • referring to FIG. 3, 301 and 302 are the left camera and the right camera, respectively; P is the target object; S1 and S2 are the photosensitive surfaces of the left and right cameras, respectively; and f is the lens focal length of both cameras.
  • P1 and P2 are the imaging points of the target object P on the photosensitive surface S1 of the left camera and the photosensitive surface S2 of the right camera, respectively.
  • x_l is the distance between the imaging point P1 and the lens optical axis of the left camera 301, and x_r is the distance between the imaging point P2 and the lens optical axis of the right camera 302.
  • O_l and O_r are the lens optical centers of the left camera 301 and the right camera 302, respectively.
  • the imaging position of each pixel point in the target image may include the imaging position of the pixel on the photosensitive surface S1 and its imaging position on the photosensitive surface S2.
  • in this embodiment, the imaging position of the pixel on the photosensitive surface S1 is represented by the distance x_l between the imaging point P1 and the lens optical axis of the left camera 301, and the imaging position of the pixel on the photosensitive surface S2 is represented by the distance x_r between the imaging point P2 and the lens optical axis of the right camera 302.
  • on this basis, the disparity information of the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point can be determined by the difference between the distance x_l and the distance x_r.
  • for ease of illustration, the photosensitive surfaces S1 and S2 are shown rotated 180 degrees about the lens optical centers O_l and O_r in FIG. 3. The lens optical axis of the left camera 301 is the straight line passing through O_l, and the lens optical axis of the right camera 302 is the straight line passing through O_r.
  • the distance from the target object P to the straight line through the optical centers O_l and O_r is Z; that is, the distance of the target object with respect to the camera lens is Z.
  • in this embodiment, the lens optical axes of the left camera 301 and the right camera 302 are parallel to each other; therefore, the spacing T between the lens optical center O_l of the left camera 301 and the lens optical center O_r of the right camera 302 is the dual-axis spacing.
  • two-dimensional coordinate systems are established with the lens optical center O_l of the left camera and the lens optical center O_r of the right camera as origins, with the positive and negative directions as indicated by the arrows in FIG. 3. The coordinates of the imaging point P1 are then (x_l, f), and the coordinates of the imaging point P2 are (x_r, f).
  • in this embodiment, x_l and x_r are signed values: they are negative when the imaging point lies to the left of the lens optical axis, and positive when it lies to the right.
  • therefore, in FIG. 3, x_l in the coordinates of the imaging point P1 is a positive value, that is, the distance between the imaging point P1 and the lens optical axis of the left camera 301 is x_l, while x_r in the coordinates of the imaging point P2 is a negative value, that is, the distance between the imaging point P2 and the lens optical axis of the right camera 302 is -x_r.
  • the disparity information of the corresponding pixel point in the target image between the left-eye two-dimensional image and the right-eye two-dimensional image can thus be calculated from the coordinates of the imaging points: the disparity between the imaging point P1 of the target object P on the photosensitive surface S1 and the imaging point P2 on the photosensitive surface S2 is d = x_l - x_r.
  • according to the principle of binocular ranging, the triangle formed by P, P1 and P2 and the triangle formed by P, O_l and O_r in FIG. 3 are similar triangles, which gives the following equation:
  • (T - (x_l - x_r)) / (Z - f) = T / Z    (1)
  • from equation (1), the following equation is obtained:
  • Z = f * T / (x_l - x_r) = f * T / d    (2)
  • that is, the distance Z of the target object with respect to the camera lens is determined by the dual-axis spacing T, the lens focal length f, and the imaging disparity d = x_l - x_r on the two photosensitive surfaces S1 and S2. Therefore, by obtaining the disparity information of each pixel point, the dual-axis spacing information at the time of shooting and the focal length information, the distance Z of the target object corresponding to each pixel point can be calculated according to equation (2).
  • the angle of the target object P with respect to the lens optical axis of the camera can also be obtained from the positional relationship shown in FIG. 3. Specifically, assuming that the angle of the target object P with respect to the lens optical axis of the left camera is θ, the following angle calculation equation is obtained from the similar triangles:
  • tan θ = x_l / f    (3)
  • that is, by obtaining the imaging position of each pixel point in the target image and the focal length information, the angle θ of the target object corresponding to each pixel point with respect to the camera lens can be calculated according to equation (3) above.
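As an illustration of equations (2) and (3), the following sketch computes the distance Z and angle θ of a target object from the signed imaging positions x_l and x_r. The numeric values (T = 6.5 cm, f = 5 cm, x_l = 1 cm, x_r = -1 cm) are hypothetical and chosen only to exercise the formulas; they are not taken from the patent.

```python
import math

def distance_and_angle(x_l, x_r, T, f):
    """Recover the distance Z (equation (2)) and angle theta (equation (3))
    of a target object from its signed imaging positions.

    x_l, x_r: signed distances of imaging points P1, P2 from the left/right
              lens optical axes (positive to the right of the axis).
    T: dual-axis spacing between the lens optical centers O_l and O_r.
    f: lens focal length (same length unit as T, x_l and x_r).
    """
    d = x_l - x_r                 # disparity d = x_l - x_r
    if d <= 0:
        raise ValueError("non-positive disparity: point not in front of both cameras")
    Z = f * T / d                 # equation (2): Z = f*T/d
    theta = math.atan(x_l / f)    # equation (3): tan(theta) = x_l/f
    return Z, theta

# Hypothetical example: baseline 6.5 cm, focal length 5 cm, imaging points
# 1 cm to the right / left of the respective optical axes.
Z, theta = distance_and_angle(0.01, -0.01, 0.065, 0.05)
```

With these values the disparity is d = 0.02 m, giving Z = 0.05 × 0.065 / 0.02 ≈ 0.1625 m.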
  • the reconstructing of the three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point includes: constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point; and coloring the corresponding pixel points on the three-dimensional contour according to the color information of each pixel point to obtain the three-dimensional model of the target image.
  • specifically, after the distance Z and the angle θ of the target object corresponding to each pixel point with respect to the camera lens have been calculated by the method provided in the embodiment shown in FIG. 3, the position of the point on the target object corresponding to each pixel point relative to the camera lens can be determined, and the three-dimensional contour of the target image is then constructed from the points on the target object corresponding to all the pixel points.
  • on this basis, combined with the color information of each pixel point, for example RGB values or grayscale values, the corresponding pixel points on the three-dimensional contour can be colored to obtain the three-dimensional model of the target image, thereby realizing the three-dimensional model reconstruction of the target image.
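The contour-construction and coloring steps can be sketched as building a colored point cloud. This is a minimal illustration under an assumed pinhole model in which a pixel's offset (u, v) from the optical axis at focal length f back-projects to X = u·Z/f and Y = v·Z/f; the function name and data layout are invented for the example.

```python
def reconstruct_model(pixels, f):
    """Build a colored 3-D point cloud (the "three-dimensional model")
    from per-pixel distance and color information.

    pixels: list of ((u, v), Z, color) tuples, where (u, v) is the pixel's
            imaging position relative to the optical axis, Z its computed
            distance, and color e.g. an (R, G, B) tuple.
    f: lens focal length, in the same unit as u, v and Z.
    """
    model = []
    for (u, v), Z, color in pixels:
        X = u * Z / f   # horizontal position: tan(theta) = u/f, so X = Z*tan(theta)
        Y = v * Z / f   # vertical position, by the same similar-triangle argument
        model.append(((X, Y, Z), color))
    return model

# One hypothetical pixel, 1 cm right of the optical axis, 0.1625 m away, colored red.
model = reconstruct_model([((0.01, 0.0), 0.1625, (255, 0, 0))], 0.05)
```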
  • the obtaining of the pupil distance information of the target user viewing the target image includes: detecting spacing information of the first display screen and the second display screen in the head-mounted display device through which the target user views the target image; and determining the pupil distance information of the target user according to the spacing information of the first display screen and the second display screen.
  • the user can view the target image through a head-mounted display device such as a 3D head-mounted theater, a virtual reality helmet, an augmented reality helmet, or the like.
  • the head-mounted display device may include a first display screen and a second display screen, and the spacing between the first display screen and the second display screen may be adjusted to accommodate the pupil distances of different users.
  • the head-mounted display device may further include a distance detecting module configured to detect the spacing between the first display screen and the second display screen. When the user adjusts the spacing between the first display screen and the second display screen, that spacing may be detected by the distance detecting module on the head-mounted display device.
  • it can be understood that the user adjusts the spacing between the first display screen and the second display screen according to his or her own pupil distance to obtain the best viewing experience; that is, when the best viewing experience is obtained, the spacing between the first display screen and the second display screen exactly matches the user's pupil distance. Therefore, by detecting the spacing between the first display screen and the second display screen, the user's pupil distance information can be determined.
  • in this embodiment, the spacing between the first display screen and the second display screen may be directly taken as the pupil distance of the target user.
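A minimal sketch of determining the pupil distance from the detected screen spacing. The source takes the spacing directly as the pupil distance; the plausible-range check (54–74 mm, a commonly cited adult interpupillary range) is an added assumption for robustness, not part of the source.

```python
def pupil_distance_from_spacing(spacing_m):
    """Return the target user's pupil distance, taken directly from the
    detected spacing between the first and second display screens."""
    # Sanity check against a commonly cited adult IPD range (assumption,
    # not from the source).
    if not 0.054 <= spacing_m <= 0.074:
        raise ValueError(f"implausible screen spacing: {spacing_m} m")
    return spacing_m

ipd = pupil_distance_from_spacing(0.065)  # 6.5 cm spacing -> 6.5 cm pupil distance
```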
  • the re-projecting of the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information includes: extracting, according to the pupil distance information, the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to each frame from the three-dimensional model of the target image; and projecting the left-eye two-dimensional image to the first display screen of the head-mounted display device and the right-eye two-dimensional image to the second display screen.
  • it can be understood that, according to the stereoscopic display principle of the head-mounted display device, the left-eye and right-eye two-dimensional images corresponding to each frame need to be extracted from the three-dimensional model of the target image and projected to the first display screen and the second display screen of the head-mounted display device, respectively, to achieve a realistic three-dimensional display effect.
  • in this embodiment, the left-eye two-dimensional image and the right-eye two-dimensional image are extracted according to the pupil distance of the target user and then projected to the corresponding first display screen and second display screen, respectively, so that the three-dimensional image composed of the left-eye and right-eye two-dimensional images better matches the pupil distance of the current target user, thereby achieving a more realistic three-dimensional display effect.
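The extraction of the left-eye and right-eye images can be sketched as a perspective re-projection of the reconstructed model for two virtual eyes separated by the measured pupil distance. The eye placement (at x = ∓ipd/2 on the baseline, looking along +Z) and the function name are assumptions for illustration, not details from the source.

```python
def reproject(model, ipd, f):
    """Re-project a colored 3-D model into left-eye and right-eye
    two-dimensional images for virtual eyes separated by `ipd`.

    model: list of ((X, Y, Z), color) points, with Z > 0 in front of the eyes.
    ipd: the target user's pupil distance.
    f: focal length of the virtual projection (same unit as ipd).
    """
    left, right = [], []
    for (X, Y, Z), color in model:
        v = f * Y / Z
        left.append(((f * (X + ipd / 2) / Z, v), color))   # eye at x = -ipd/2
        right.append(((f * (X - ipd / 2) / Z, v), color))  # eye at x = +ipd/2
    return left, right

# A single hypothetical model point straight ahead at 0.1625 m.
model = [((0.0, 0.0, 0.1625), (255, 0, 0))]
left_img, right_img = reproject(model, 0.065, 0.05)
```

Note that the disparity between corresponding points, u_l − u_r = f·ipd/Z, has the same form as equation (2), which is what keeps the re-projection consistent with the original stereo geometry while substituting the user's own pupil distance for the shooting baseline.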
  • referring to FIG. 4, in one embodiment of the present invention, a three-dimensional image preprocessing apparatus 400 is provided, including:
  • the relative position obtaining unit 410 is configured to acquire relative position information of a target object corresponding to each pixel point in the target image, where the relative position information includes a distance and an angle of the target object with respect to the camera lens;
  • the three-dimensional model reconstruction unit 430 is configured to reconstruct a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
  • the pupil distance information acquiring unit 450 is configured to acquire pupil distance information of the target user who views the target image;
  • the three-dimensional image projection unit 470 is configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information.
  • the relative position acquiring unit 410 includes:
  • the distance information acquisition sub-unit 411 is configured to acquire the dual-axis spacing information of the target image at the time of shooting and the focal length information corresponding to each pixel point, where the dual-axis spacing is kept fixed while the target image is shot;
  • the disparity information obtaining sub-unit 413 is configured to acquire an imaging position of each pixel in the target image and disparity information of the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel;
  • the target distance calculation sub-unit 415 is configured to calculate, according to the dual-axis spacing information, the focal length information and the disparity information, the distance of the target object corresponding to each pixel point in the target image with respect to the camera lens;
  • the imaging angle calculation sub-unit 417 is configured to calculate, according to the imaging position of each pixel point in the target image and the focal length information, the angle of the target object corresponding to each pixel point in the target image with respect to the camera lens.
  • the three-dimensional model reconstruction unit 430 includes:
  • a three-dimensional contour construction sub-unit 431, configured to construct a three-dimensional contour of the target image according to relative position information of the target object corresponding to each of the pixel points;
  • the three-dimensional contour coloring sub-unit 433 is configured to color corresponding pixel points on the three-dimensional contour of the target image according to color information of each of the pixel points to obtain a three-dimensional model of the target image.
  • the pupil distance information acquiring unit 450 includes:
  • a screen spacing detecting sub-unit 451 configured to detect spacing information of the first display screen and the second display screen in the head-mounted display device through which the target user views the target image;
  • the pupil distance information determining sub-unit 453 is configured to determine the pupil distance information of the target user according to the spacing information of the first display screen and the second display screen.
  • the three-dimensional image projection unit 470 includes:
  • the image extraction sub-unit 471 is configured to extract, according to the pupil distance information, the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to each frame from the three-dimensional model of the target image;
  • the image projection sub-unit 473 is configured to project the left-eye two-dimensional image to the first display screen of the head-mounted display device, and to project the right-eye two-dimensional image to the second display screen of the head-mounted display device.
  • referring to FIG. 9, in one embodiment of the present invention, a head-mounted display device 600 is provided, including a processor 610 and, electrically connected to the processor, a memory 630, a distance detecting module 650, a first display screen 670 and a second display screen 690. The memory 630 is configured to store a target image and executable program code, and the processor 610 is configured to read the target image and the executable program code from the memory 630 and to perform the following operations: acquiring relative position information of the target object corresponding to each pixel point in the target image, the relative position information including a distance and an angle of the target object with respect to the camera lens; and reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point.
  • the distance detecting module 650 is configured to detect spacing information of the first display screen 670 and the second display screen 690, and to acquire pupil distance information of the target user viewing the target image according to the spacing information;
  • the processor 610 is further configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information, and then to display the left-eye two-dimensional image through the first display screen 670 and the right-eye two-dimensional image through the second display screen 690.
  • in one implementation, the processor 610 is further configured to: obtain the dual-axis spacing information of the target image at the time of shooting and the focal length information corresponding to each pixel point, the dual-axis spacing being kept fixed while the target image is shot; obtain the imaging position of each pixel point in the target image and the disparity information of the corresponding left-eye and right-eye two-dimensional images; calculate, according to the dual-axis spacing information, the focal length information and the disparity information, the distance of the target object corresponding to each pixel point with respect to the camera lens; and calculate, according to the imaging position of each pixel point and the focal length information, the angle of the target object corresponding to each pixel point with respect to the camera lens.
  • in one implementation, the processor 610 is further configured to: construct a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point; and color the corresponding pixel points on the three-dimensional contour according to the color information of each pixel point to obtain the three-dimensional model of the target image.
  • in one implementation, the processor 610 is further configured to: extract, according to the pupil distance information, the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to each frame from the three-dimensional model of the target image; and project the left-eye two-dimensional image to the first display screen of the head-mounted display device and the right-eye two-dimensional image to the second display screen.
  • in summary, the three-dimensional image preprocessing method acquires the relative position information of the target object corresponding to each pixel point in the target image and, combined with the color information of the pixel point, reconstructs the three-dimensional model of the target image; the three-dimensional model is then re-projected according to the pupil distance information of the user viewing the target image, so that every user obtains a projection effect matched to his or her own pupil distance, improving the user's three-dimensional viewing experience.
  • at the same time, since three-dimensional model reconstruction and re-projection can be performed on the target image, there is no need to adjust the dual-axis spacing according to the distance of the target object when shooting; shooting can be done directly at a fixed dual-axis spacing, which removes the dependence on the photographer's experience and reduces the cost of shooting three-dimensional images.
  • the steps in the method of the embodiments of the present invention may be reordered, combined, and deleted according to actual needs.
  • the units in the apparatus of the embodiments of the present invention may be combined, divided, and deleted according to actual needs.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the units and steps of the examples described in the embodiments of the present invention can be implemented in electronic hardware, computer software, or a combination of the two; whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A three-dimensional image preprocessing method and apparatus, and a head-mounted display device. The three-dimensional image preprocessing method includes: acquiring relative position information of a target object corresponding to each pixel point in a target image, the relative position information including a distance and an angle of the target object with respect to the camera lens; reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and color information of the pixel point; acquiring pupil distance information of a target user viewing the target image; and re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information. The three-dimensional image preprocessing method can reduce the deviation in the sense of distance presented to the user when the head-mounted display device displays a three-dimensional image.

Description

三维影像预处理方法、装置及头戴显示设备 技术领域
本发明涉及影像处理技术领域,尤其涉及一种三维影像预处理方法、装置及头戴显示设备。
背景技术
三维影像的拍摄需要采用两部摄像机同步进行,两部摄像机之间通过模拟人类双眼瞳距来实现三维影像的拍摄。目前,在进行三维影像拍摄时,通常根据双摄像机的轴间距(以下简称双机轴间距)与物距成正比的关系,在拍摄近景时采用较小的双机轴间距,并在拍摄远景时采用较大的双机轴间距,具体的双机轴间距调节幅度需要由摄影师根据经验来判断。然而,如果根据摄像师的经验来调节双机轴间距,难免会导致三维影像的成像质量受到摄像师经验的影响,无法保证三维影像的成像质量的稳定性。同时,由于人类个体之间的差异,不同人的双眼瞳距可能存在差异,从而使得不同的人在观看同一三维影像时,在观感体验上存在较大的差异,进而导致三维影像给人的远近距离感存在偏差,影响用户的观感体验。
发明内容
鉴于现有技术中存在的上述问题,本发明实施例提供一种三维影像预处理方法、装置及头戴显示设备,以减小所述头戴显示设备在显示三维影像时呈现给用户的距离感偏差。
一种三维影像预处理方法,包括:
获取目标影像中每一个像素点对应的目标物体的相对位置信息以及所述像素点的色彩信息,所述相对位置信息包括所述目标物体相对于相机镜头的距离和角度;
根据所述像素点对应的目标物体的相对位置信息和所述像素点的色彩信息,重建所述目标影像的三维模型;
获取观看所述目标影像的目标用户的瞳距信息;
根据所述瞳距信息将所述目标影像的三维模型重新投影为左眼二维图像和右眼二维图像。
一种三维影像预处理装置,包括:
相对位置获取单元,用于获取目标影像中每一个像素点对应的目标物体的相对位置信息以及所述像素点的色彩信息,所述相对位置信息包括所述目标物体相对于相机镜头的距离和角度;
三维模型重建单元,用于根据所述像素点对应的目标物体的相对位置信息和所述像素点的色彩信息,重建所述目标影像的三维模型;
瞳距信息获取单元,用于获取观看所述目标影像的目标用户的瞳距信息;
三维影像投影单元,用于根据所述瞳距信息将所述目标影像的三维模型重新投影为左眼二维图像和右眼二维图像。
一种头戴显示设备,包括:处理器及与所述处理器电连接的存储器、距离检测模块、第一显示屏和第二显示屏,所述存储器用于存储目标影像及可执行程序代码,所述处理器用于从所述存储器中读取所述目标影像和所述可执行程序代码,并执行如下操作:
获取目标影像中每一个像素点对应的目标物体的相对位置信息以及所述像素点的色彩信息,所述相对位置信息包括所述目标物体相对于相机镜头的距离和角度;
根据所述像素点对应的目标物体的相对位置信息和所述像素点的色彩信息,重建所述目标影像的三维模型;
所述距离检测模块,用于检测所述第一显示屏和所述第二显示屏的间距信息,并根据所述间距信息获取观看所述目标影像的目标用户的瞳距信息;
所述处理器,还用于根据所述瞳距信息将所述目标影像的三维模型重新投影为左眼二维图像和右眼二维图像,进而通过所述第一显示屏显示所述左眼二维图像,并通过所述第二显示屏显示所述右眼二维图像。
所述三维影像预处理方法通过获取目标影像中每一个像素点对应的目标物体的相对位置信息,并结合该像素点对应的色彩信息,重建所述目标影像的三维模型,进而根据不同用户在观看所述目标影像时的瞳距信息,对所述目标影像的三维模型进行重新投影,从而可以保证不同用户在观看所述目标影像时 均可以获得与自身瞳距对应的投影效果,提升用户的三维影像观感体验。
同时,由于可以对所述目标影像进行三维模型重建和重新投影,从而使得在拍摄所述目标影像时,无需根据目标物体的远近来调节双机轴间距,直接以固定的双机轴间距进行拍摄即可,因而可以不受摄像师经验的影响,并可以降低三维影像的拍摄成本。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍。
图1是本发明实施例提供的三维影像预处理方法的第一流程示意图;
图2是图1所示的步骤101详细的流程示意图;
图3是本发明实施例提供的三维影像预处理方法的目标影像的成像位置关系示意图;
图4是本发明实施例提供的三维影像预处理装置的程序模块的示意图;
图5是本发明实施例提供的三维影像预处理装置的相对位置获取单元的结构示意图;
图6是本发明实施例提供的三维影像预处理装置的三维模型重建单元的结构示意图;
图7是本发明实施例提供的三维影像预处理装置的瞳距信息获取单元的结构示意图;
图8是本发明实施例提供的三维影像预处理装置的三维影像投影单元的结构示意图;
图9是本发明实施例提供的头戴显示设备的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
请参阅图1,在本发明一个实施例中,提供一种三维影像预处理方法,该方法可以应用于头戴显示设备(例如,3D头戴影院、虚拟现实头盔、增强现实头盔等)中,以实现根据不同用户的瞳距信息来对需要显示的目标三维影像进行预处理和重新投影,从而减小所述头戴显示设备在显示三维影像时呈现给用户的距离偏差,提升用户的观感体验。
在本实施例中,所述三维影像预处理方法至少包括如下步骤:
步骤101:获取目标影像中每一个像素点对应的目标物体的相对位置信息以及所述像素点的色彩信息,所述相对位置信息包括所述目标物体相对于相机镜头的距离和角度;
步骤102:根据所述像素点对应的目标物体的相对位置信息和所述像素点的色彩信息,重建所述目标影像的三维模型;
步骤103:获取观看所述目标影像的目标用户的瞳距信息;
步骤104:根据所述瞳距信息将所述目标影像的三维模型重新投影为左眼二维图像和右眼二维图像。
其中,所述目标影像为预先通过双摄像机同步拍摄形成的三维影像。在所述目标影像的拍摄过程中,双摄像机之间的轴间距(下称双机轴间距)可以为固定值。例如,可以将所述双机轴间距设置为人类双眼瞳距的典型值6.5厘米。如此,可以避免所述目标影像的成像质量受到摄像师经验的影响,并可以降低三维影像的拍摄成本。
请参阅图2,在一种实施方式中,所述获取目标影像中每一个像素点对应的目标物体的相对位置信息,包括:
步骤201:获取所述目标影像在拍摄时的双机轴间距信息和每一个像素点对应焦距信息,其中,所述目标影像在拍摄时双机轴间距保持固定;
步骤202:获取所述目标影像中每一个像素点的成像位置及所述像素点对应的左眼二维图像和右眼二维图像的视差信息;
步骤203:根据所述双机轴间距信息、所述焦距信息及所述视差信息,计算所述目标影像中每一个像素点对应的目标物体的相对于相机镜头的距离;
步骤204:根据所述目标影像中每一个像素点的成像位置及所述焦距信息,计算所述目标影像中每一个像素点对应的目标物体的相对于相机镜头的角度。
其中,所述双机轴间距信息即为拍摄所述目标影像的双摄像机的镜头光轴之间的距离,即双机轴间距。在本实施例中,所述双机轴间距可以设定为固定值,例如6.5厘米。请参阅图3,其中,301和302分别为左摄像机和右摄像机。P为目标物体。S1和S2分别为左摄像机和右摄像机的感光面。f为左摄像机和右摄像机的镜头焦距。P1和P2分别为目标物体P在左摄像机的感光面S1和右摄像机的感光面S2上的成像点。xl为成像点P1与左摄像机301的镜头光轴之间的距离,xr成像点P2与右摄像机303的镜头光轴之间的距离。Ol和Or分别为左摄像机301的镜头光心和右摄像机303的镜头光心。
可以理解,所述目标影像中每一个像素点的成像位置可以包括该像素点在感光面S1上的成像位置和该像素点在感光面S2上的成像位置。在本实施例中,该像素点在感光面S1上的成像位置由成像点P1与左摄像机301的镜头光轴之间的距离xl表征,该像素点在感光面S2上的成像位置由成像点P2与右摄像机303的镜头光轴之间的距离xr表征。在此基础上,该像素点对应的左眼二维图像和右眼二维图像的视差信息即可由距离xl与距离xr之差来确定。
可以理解,为方便说明上述各元素之间的位置关系,在本实施例中,将左摄像机301的感光面S1和右摄像机302的感光面S2分别绕所述镜头光心Ol和Or旋转180度,如图3所示。其中,左摄像机301的镜头光轴为通过Ol的直线,右摄像机303的镜头光轴为通过Or的直线。目标物体P距离左摄像机301的镜头光心Ol和右摄像机303的镜头光心Or所在直线的距离为Z,即目标物体的相对于相机镜头的距离为Z。在本实施例中,左摄像机301的镜头光轴与右摄像头303的镜头光轴相互平行,因此,左摄像机301的镜头光心Ol和右摄像机303的镜头光心Or的间距T即为双机轴间距。
根据双目测距原理,目标物体P在感光面S1上的成像点P1和在感光面S2上的成像点P2之间存在视差d=xl-xr。同时,由图3所示的位置关系可以看出,由P、P1、P2构成的三角形与由P、Ol、Or构成的三角形互为相似三角形。在本实施例中,分别以左摄像机的镜头光心Ol和右摄像机的镜头光心Or为原点建立二维坐标系,坐标系的正负方向如图3中的箭头所示,则成像点P1的坐标为(xl,f),成像点P2的坐标为(xr,f)。在本实施例中,xl和xr为带符号的数值,当成像点位于镜头光轴的左边时,xl和xr为负值,当成像点位 于镜头光轴的右边时,xl和xr为正值。因此,在图3中,成像点P1的坐标中的xl为正值,即成像点P1与左摄像机301的镜头光轴的距离为xl,而成像点P2的坐标中的xr为负值,即成像点P2与右摄像头303的镜头光轴的距离为-xr。如此,则可以根据成像点的坐标计算得到目标影像中对应像素点在左眼二维图像和右眼二维图像上的视差信息,即目标物体P在感光面S1上的成像点P1和在感光面S2上的成像点P2之间的视差d。基于上述三角形的相似关系可以得出如下等式:
Figure PCTCN2017089362-appb-000001
根据上述等式(1)可以得到如下等式:
Figure PCTCN2017089362-appb-000002
也就是说,目标物体的相对于相机镜头的距离为Z与双机轴间距信息T、镜头焦距f及目标物体在两个感光面S1、S2上的成像视差d=xl-xr相关。因此,通过获取所述目标影像中每一个像素点对应的左眼二维图像和右眼二维图像的视差信息、目标影像在拍摄时的双机轴间距信息和每一个像素点对应焦距信息,即可根据上述等式(2)计算出目标影像中每一个像素点对应的目标物体的相对于相机镜头的距离Z。
同时,由图3所示的位置关系还可以得到目标物体P相对于双摄像机的镜头光轴的角度信息。具体地,假设目标物体P相对于左摄像机的镜头光轴的角度为θ,根据图3所示的位置关系,可以得到如下角度计算等式:
Figure PCTCN2017089362-appb-000003
也就是说,通过获取所述目标影像中每一个像素点的成像位置及所述焦距信息,即可根据上述等式(3)计算所述目标影像中每一个像素点对应的目标物体的相对于相机镜头的角度θ。
在一种实施方式中,所述根据所述像素点对应的目标物体的相对位置信息和所述像素点的色彩信息,重建所述目标影像的三维模型,包括:
根据每一个所述像素点对应的目标物体的相对位置信息,构建所述目标影 像的三维轮廓;
根据每一个所述像素点的色彩信息,对所述目标影像的三维轮廓上对应的像素点上色,得到所述目标影像的三维模型。
具体地,在利用如图3所示实施例提供的方法计算得到目标影像中每一个像素点对应的目标物体的相对于相机镜头的距离Z及每一个像素点对应的目标物体的相对于相机镜头的角度θ之后,即可确定每一个像素点对应的目标物体上的点相对于相机镜头的位置,进而根据所有像素点对应的目标物体上的点构建所述目标影像的三维轮廓。在此基础上,结合每一个像素点的色彩信息,例如RGB值、灰度值等,即可以对所述三维轮廓上对应的像素点上色,得到所述目标影像的三维模型,从而实现对所述目标影像的三维模型重建。
在一种实施方式中,所述获取观看所述目标影像的目标用户的瞳距信息,包括:
检测目标用户观看所述目标影像的头戴显示设备中第一显示屏和第二显示屏的间距信息;
根据所述第一显示屏和所述第二显示屏的间距信息,确定所述目标用户的瞳距信息。
具体地,用户可以通过3D头戴影院、虚拟现实头盔、增强现实头盔等头戴显示设备来观看所述目标影像。在本实施例中,所述头戴显示设备可以包括第一显示屏和第二显示屏,所述第一显示屏和第二显示屏的间距可调节,以适应不同用户的瞳距。同时,所述头戴显示设备还可以包括距离检测模块,用于检测所述第一显示屏和第二显示屏的间距。当用户调节所述第一显示和第二显示屏的间距时,可以通过头戴显示设备上的距离检测模块检测所述第一显示屏和第二显示屏的间距。
可以理解,由于用户是根据自身的瞳距来调节所述第一显示屏和第二显示屏的间距以获得最佳的观感体验,也就是说,当获得最佳观感体验时,所述第一显示屏和第二显示屏的间距刚好与用户的瞳距相匹配。因此,通过检测所述第一显示屏和第二显示屏的间距,即可以确定用户的瞳距信息。在本实施例中,可以直接将所述第一显示屏和第二显示屏的间距确定为目标用户的瞳距。
在一种实施方式中,所述根据所述瞳距信息将所述目标影像的三维模型重 新投影为左眼二维图像和右眼二维图像,包括:
根据所述瞳距信息从所述目标影像的三维模型中提取出每一帧图像对应的左眼二维图像和右眼二维图像;
将所述左眼二维图像投影至所述头戴显示设备的第一显示屏,并将所述右眼二维图像投影至所述头戴显示设备的第二显示屏。
可以理解,根据头戴显示设备的三维立体影像显示原理,需要将每一帧图像对应的左眼二维图像和右眼二维图像从所述目标影像的三维模型中提取出来,并分别投影至头戴显示设备的第一显示屏和第二显示屏,从而实现逼真的三维显示效果。在本实施例中,通过根据目标用户的瞳距来提取出所述左眼二维图像和右眼二维图像,进而分别投影至对应的第一显示屏和第二显示屏,使得由所述左眼二维图像和右眼二维图像构成的三维图像可以更好地匹配当前目标用户的瞳距,从而获得更真实的三维显示效果。
请参阅图4,在本发明一个实施例中,提供一种三维影像预处理装置400,包括:
相对位置获取单元410,用于获取目标影像中每一个像素点对应的目标物体的相对位置信息,所述相对位置信息包括所述目标物体相对于相机镜头的距离和角度;
三维模型重建单元430,用于根据所述像素点对应的目标物体的相对位置信息和所述像素点的色彩信息,重建所述目标影像的三维模型;
瞳距信息获取单元450,用于获取观看所述目标影像的目标用户的瞳距信息;
三维影像投影单元470,用于根据所述瞳距信息将所述目标影像的三维模型重新投影为左眼二维图像和右眼二维图像。
Referring to FIG. 5, in one implementation, the relative position obtaining unit 410 includes:
a distance information obtaining subunit 411, configured to obtain the inter-axis distance information of the two cameras at the time the target image was captured and the focal length information corresponding to each pixel, wherein the inter-axis distance of the two cameras was kept fixed while the target image was captured;
a disparity information obtaining subunit 413, configured to obtain the imaging position of each pixel in the target image and the disparity information between the left-eye and right-eye two-dimensional images corresponding to each pixel;
a target distance calculation subunit 415, configured to calculate, according to the inter-axis distance information, the focal length information, and the disparity information, the distance relative to the camera lens of the target object corresponding to each pixel in the target image; and
an imaging angle calculation subunit 417, configured to calculate, according to the imaging position of each pixel in the target image and the focal length information, the angle relative to the camera lens of the target object corresponding to each pixel in the target image.
Referring to FIG. 6, in one implementation, the three-dimensional model reconstruction unit 430 includes:
a three-dimensional contour construction subunit 431, configured to construct the three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel; and
a three-dimensional contour coloring subunit 433, configured to color the corresponding pixels on the three-dimensional contour of the target image according to the color information of each pixel, to obtain the three-dimensional model of the target image.
Referring to FIG. 7, in one implementation, the interpupillary distance obtaining unit 450 includes:
a screen spacing detection subunit 451, configured to detect spacing information between the first display screen and the second display screen of the head-mounted display device through which the target user views the target image; and
an interpupillary distance determination subunit 453, configured to determine the interpupillary distance information of the target user according to the spacing information between the first display screen and the second display screen.
Referring to FIG. 8, in one implementation, the three-dimensional image projection unit 470 includes:
an image extraction subunit 471, configured to extract, according to the interpupillary distance information, the left-eye and right-eye two-dimensional images corresponding to each frame from the three-dimensional model of the target image; and
an image projection subunit 473, configured to project the left-eye two-dimensional image onto the first display screen of the head-mounted display device and project the right-eye two-dimensional image onto the second display screen of the head-mounted display device.
It will be appreciated that, for the functions of the units of the three-dimensional image preprocessing apparatus 400 of this embodiment and their specific implementations, reference may also be made to the relevant descriptions in the method embodiments shown in FIG. 1 to FIG. 3, which are not repeated here.
Referring to FIG. 9, in one embodiment of the present invention, a head-mounted display device 600 is provided, including a processor 610 and, electrically connected to the processor, a memory 630, a distance detection module 650, a first display screen 670, and a second display screen 690. The memory 630 is configured to store a target image and executable program code, and the processor 610 is configured to read the target image and the executable program code from the memory 630 and perform the following operations:
obtaining relative position information of the target object corresponding to each pixel in the target image, the relative position information including the distance and angle of the target object relative to the camera lens; and
reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to each pixel and the color information of each pixel.
The distance detection module is configured to detect spacing information between the first display screen and the second display screen, and to obtain, according to the spacing information, interpupillary distance information of a target user viewing the target image.
The processor is further configured to reproject the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information, and then display the left-eye two-dimensional image on the first display screen and the right-eye two-dimensional image on the second display screen.
In one implementation, the processor 610 is further configured to:
obtain the inter-axis distance information of the two cameras at the time the target image was captured and the focal length information corresponding to each pixel, wherein the inter-axis distance of the two cameras was kept fixed while the target image was captured;
obtain the imaging position of each pixel in the target image and the disparity information between the left-eye and right-eye two-dimensional images corresponding to each pixel;
calculate, according to the inter-axis distance information, the focal length information, and the disparity information, the distance relative to the camera lens of the target object corresponding to each pixel in the target image; and
calculate, according to the imaging position of each pixel in the target image and the focal length information, the angle relative to the camera lens of the target object corresponding to each pixel in the target image.
In one implementation, the processor 610 is further configured to:
construct the three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel; and
color the corresponding pixels on the three-dimensional contour of the target image according to the color information of each pixel, to obtain the three-dimensional model of the target image.
In one implementation, the processor 610 is further configured to:
extract, according to the interpupillary distance information, the left-eye and right-eye two-dimensional images corresponding to each frame from the three-dimensional model of the target image; and
project the left-eye two-dimensional image onto the first display screen of the head-mounted display device and project the right-eye two-dimensional image onto the second display screen of the head-mounted display device.
It will be appreciated that, for the steps of the operations performed by the processor 610 in this embodiment and their specific implementations, reference may also be made to the relevant descriptions in the method embodiments shown in FIG. 1 to FIG. 3, which are not repeated here.
The three-dimensional image preprocessing method obtains the relative position information of the target object corresponding to each pixel in the target image and, combined with the color information of each pixel, reconstructs a three-dimensional model of the target image; the three-dimensional model is then reprojected according to the interpupillary distance information of whichever user is viewing the target image. This ensures that every user viewing the target image obtains a projection effect matched to his or her own interpupillary distance, improving the user's three-dimensional viewing experience. Moreover, because the target image can be reconstructed into a three-dimensional model and reprojected, there is no need to adjust the inter-axis distance of the two cameras according to the distance of the target object when shooting the target image; shooting can be performed directly with a fixed inter-axis distance. The result therefore does not depend on the camera operator's experience, and the cost of shooting three-dimensional images can be reduced.
It will be appreciated that the units and steps of the examples described in the embodiments of the present invention can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In addition, the steps of the methods in the embodiments of the present invention may be reordered, combined, or deleted according to actual needs, and the units of the apparatuses in the embodiments of the present invention may be combined, divided, or deleted according to actual needs. The functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit.
What is disclosed above is merely preferred embodiments of the present invention and of course cannot limit the scope of the rights of the present invention. A person of ordinary skill in the art will understand all or part of the processes for implementing the above embodiments, and equivalent variations made in accordance with the claims of the present invention still fall within the scope of the invention.

Claims (14)

  1. A three-dimensional image preprocessing method, comprising:
    obtaining relative position information of a target object corresponding to each pixel in a target image and color information of each pixel, the relative position information including a distance and an angle of the target object relative to a camera lens;
    reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to each pixel and the color information of each pixel;
    obtaining interpupillary distance information of a target user viewing the target image; and
    reprojecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information.
  2. The method of claim 1, wherein obtaining the relative position information of the target object corresponding to each pixel in the target image comprises:
    obtaining inter-axis distance information of two cameras at the time the target image was captured and focal length information corresponding to each pixel, wherein the inter-axis distance of the two cameras was kept fixed while the target image was captured;
    obtaining an imaging position of each pixel in the target image and disparity information between the left-eye and right-eye two-dimensional images corresponding to each pixel;
    calculating, according to the inter-axis distance information, the focal length information, and the disparity information, the distance relative to the camera lens of the target object corresponding to each pixel in the target image; and
    calculating, according to the imaging position of each pixel in the target image and the focal length information, the angle relative to the camera lens of the target object corresponding to each pixel in the target image.
  3. The method of claim 1 or 2, wherein reconstructing the three-dimensional model of the target image according to the relative position information of the target object corresponding to each pixel and the color information of each pixel comprises:
    constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel; and
    coloring the corresponding pixels on the three-dimensional contour of the target image according to the color information of each pixel, to obtain the three-dimensional model of the target image.
  4. The method of claim 1 or 2, wherein obtaining the interpupillary distance information of the target user viewing the target image comprises:
    detecting spacing information between a first display screen and a second display screen of a head-mounted display device through which the target user views the target image; and
    determining the interpupillary distance information of the target user according to the spacing information between the first display screen and the second display screen.
  5. The method of claim 4, wherein reprojecting the three-dimensional model of the target image into the left-eye two-dimensional image and the right-eye two-dimensional image according to the interpupillary distance information comprises:
    extracting, according to the interpupillary distance information, the left-eye and right-eye two-dimensional images corresponding to each frame from the three-dimensional model of the target image; and
    projecting the left-eye two-dimensional image onto the first display screen of the head-mounted display device, and projecting the right-eye two-dimensional image onto the second display screen of the head-mounted display device.
  6. A three-dimensional image preprocessing apparatus, comprising:
    a relative position obtaining unit, configured to obtain relative position information of a target object corresponding to each pixel in a target image, the relative position information including a distance and an angle of the target object relative to a camera lens;
    a three-dimensional model reconstruction unit, configured to reconstruct a three-dimensional model of the target image according to the relative position information of the target object corresponding to each pixel and the color information of each pixel;
    an interpupillary distance obtaining unit, configured to obtain interpupillary distance information of a target user viewing the target image; and
    a three-dimensional image projection unit, configured to reproject the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information.
  7. The apparatus of claim 6, wherein the relative position obtaining unit comprises:
    a distance information obtaining subunit, configured to obtain inter-axis distance information of two cameras at the time the target image was captured and focal length information corresponding to each pixel, wherein the inter-axis distance of the two cameras was kept fixed while the target image was captured;
    a disparity information obtaining subunit, configured to obtain an imaging position of each pixel in the target image and disparity information between the left-eye and right-eye two-dimensional images corresponding to each pixel;
    a target distance calculation subunit, configured to calculate, according to the inter-axis distance information, the focal length information, and the disparity information, the distance relative to the camera lens of the target object corresponding to each pixel in the target image; and
    an imaging angle calculation subunit, configured to calculate, according to the imaging position of each pixel in the target image and the focal length information, the angle relative to the camera lens of the target object corresponding to each pixel in the target image.
  8. The apparatus of claim 6 or 7, wherein the three-dimensional model reconstruction unit comprises:
    a three-dimensional contour construction subunit, configured to construct a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel; and
    a three-dimensional contour coloring subunit, configured to color the corresponding pixels on the three-dimensional contour of the target image according to the color information of each pixel, to obtain the three-dimensional model of the target image.
  9. The apparatus of claim 6 or 7, wherein the interpupillary distance obtaining unit comprises:
    a screen spacing detection subunit, configured to detect spacing information between a first display screen and a second display screen of a head-mounted display device through which the target user views the target image; and
    an interpupillary distance determination subunit, configured to determine the interpupillary distance information of the target user according to the spacing information between the first display screen and the second display screen.
  10. The apparatus of claim 9, wherein the three-dimensional image projection unit comprises:
    an image extraction subunit, configured to extract, according to the interpupillary distance information, the left-eye and right-eye two-dimensional images corresponding to each frame from the three-dimensional model of the target image; and
    an image projection subunit, configured to project the left-eye two-dimensional image onto the first display screen of the head-mounted display device and project the right-eye two-dimensional image onto the second display screen of the head-mounted display device.
  11. A head-mounted display device, comprising: a processor and, electrically connected to the processor, a memory, a distance detection module, a first display screen, and a second display screen, wherein the memory is configured to store a target image and executable program code, and the processor is configured to read the target image and the executable program code from the memory and perform the following operations:
    obtaining relative position information of a target object corresponding to each pixel in the target image and color information of each pixel, the relative position information including a distance and an angle of the target object relative to a camera lens; and
    reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to each pixel and the color information of each pixel;
    wherein the distance detection module is configured to detect spacing information between the first display screen and the second display screen, and to obtain, according to the spacing information, interpupillary distance information of a target user viewing the target image; and
    the processor is further configured to reproject the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information, and then display the left-eye two-dimensional image on the first display screen and the right-eye two-dimensional image on the second display screen.
  12. The head-mounted display device of claim 11, wherein the processor is further configured to:
    obtain inter-axis distance information of two cameras at the time the target image was captured and focal length information corresponding to each pixel, wherein the inter-axis distance of the two cameras was kept fixed while the target image was captured;
    obtain an imaging position of each pixel in the target image and disparity information between the left-eye and right-eye two-dimensional images corresponding to each pixel;
    calculate, according to the inter-axis distance information, the focal length information, and the disparity information, the distance relative to the camera lens of the target object corresponding to each pixel in the target image; and
    calculate, according to the imaging position of each pixel in the target image and the focal length information, the angle relative to the camera lens of the target object corresponding to each pixel in the target image.
  13. The head-mounted display device of claim 11 or 12, wherein the processor is further configured to:
    construct a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel; and
    color the corresponding pixels on the three-dimensional contour of the target image according to the color information of each pixel, to obtain the three-dimensional model of the target image.
  14. The head-mounted display device of claim 11 or 12, wherein the processor is further configured to:
    extract, according to the interpupillary distance information, the left-eye and right-eye two-dimensional images corresponding to each frame from the three-dimensional model of the target image; and
    project the left-eye two-dimensional image onto the first display screen of the head-mounted display device and project the right-eye two-dimensional image onto the second display screen of the head-mounted display device.
PCT/CN2017/089362 2017-06-21 2017-06-21 Three-dimensional image preprocessing method and apparatus, and head-mounted display device WO2018232630A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/089362 WO2018232630A1 (zh) 2017-06-21 2017-06-21 Three-dimensional image preprocessing method and apparatus, and head-mounted display device
CN201780050816.7A CN109644259A (zh) 2017-06-21 2017-06-21 Three-dimensional image preprocessing method and apparatus, and head-mounted display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/089362 WO2018232630A1 (zh) 2017-06-21 2017-06-21 Three-dimensional image preprocessing method and apparatus, and head-mounted display device

Publications (1)

Publication Number Publication Date
WO2018232630A1 true WO2018232630A1 (zh) 2018-12-27

Family

ID=64737429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/089362 WO2018232630A1 (zh) 2017-06-21 2017-06-21 Three-dimensional image preprocessing method and apparatus, and head-mounted display device

Country Status (2)

Country Link
CN (1) CN109644259A (zh)
WO (1) WO2018232630A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114245096B (zh) * 2021-12-08 2023-09-15 安徽新华传媒股份有限公司 3D simulated imaging system for intelligent photography

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06167688A (ja) * 1992-11-30 1994-06-14 Sanyo Electric Co Ltd Stereoscopic color liquid crystal display device
CN103517060A (zh) * 2013-09-03 2014-01-15 展讯通信(上海)有限公司 Display control method and device for a terminal device
CN104333747A (zh) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Stereoscopic photographing method and stereoscopic photographing device
US20150185484A1 (en) * 2013-12-30 2015-07-02 Electronics And Telecommunications Research Institute Pupil tracking apparatus and method
CN104918035A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for obtaining a three-dimensional image of a target
CN105611278A (zh) * 2016-02-01 2016-05-25 欧洲电子有限公司 Image processing method, system, and display device for preventing dizziness in naked-eye 3D viewing
CN106199964A (zh) * 2015-01-21 2016-12-07 成都理想境界科技有限公司 Binocular AR head-mounted device capable of automatic depth-of-field adjustment and depth-of-field adjustment method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440036B (zh) * 2013-08-23 2018-04-17 Tcl集团股份有限公司 Method and device for displaying and interactively operating three-dimensional images


Also Published As

Publication number Publication date
CN109644259A (zh) 2019-04-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17914249

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17914249

Country of ref document: EP

Kind code of ref document: A1