WO2016107519A1 - 3D image display method and head-mounted device - Google Patents

3D image display method and head-mounted device

Info

Publication number
WO2016107519A1
WO2016107519A1 (PCT/CN2015/099194, CN2015099194W)
Authority
WO
WIPO (PCT)
Prior art keywords
video image
image
video
background model
display area
Prior art date
Application number
PCT/CN2015/099194
Other languages
English (en)
French (fr)
Inventor
尹琪
周宏伟
Original Assignee
青岛歌尔声学科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛歌尔声学科技有限公司
Priority to JP2017530067A (published as JP6384940B2)
Priority to US15/324,247 (published as US10104358B2)
Publication of WO2016107519A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/133: Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • H04N 13/279: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H04N 13/339: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spatial multiplexing
    • H04N 13/344: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N 13/383: Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Definitions

  • The present invention relates to the field of 3D image display technology, and in particular to a 3D image display method and a head-mounted device.
  • In recent years, head-mounted display devices have become increasingly popular, serving as a tool through which more and more users experience home theater.
  • However, conventional head-mounted display devices focus on displaying the movie content itself, and the user does not get the effect of sitting in a theater.
  • Traditional head-mounted devices process 3D video by blurring the edge portion of each frame of image, which leaves the edges of the 3D video image indistinct and also causes the problem of a narrow 3D image display angle.
  • The present invention provides a 3D image display method and a head-mounted device, for solving the problem of the narrow display angle of 3D images in existing head-mounted devices.
  • The present invention discloses a 3D image display method, the method comprising:
  • displaying, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
  • The method further includes:
  • displaying the first 3D video image and the second 3D video image after the image smoothing processing, respectively.
  • Performing image deformation processing on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye, includes:
  • placing the first video data into the first frame to generate the first 3D video image corresponding to the left eye; placing the second video data into the second frame to generate the second 3D video image corresponding to the right eye.
  • Performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye respectively includes:
  • The edge portion of the first 3D video image refers to the area whose distance from the center of the first 3D video image is greater than a preset value; the edge portion of the second 3D video image refers to the area whose distance from the center of the second 3D video image is greater than the preset value.
  • Performing image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by a convolution smoothing algorithm includes:
  • The method further includes: setting an eye observation position matrix.
  • The method further includes: acquiring a view translation parameter, and performing a translation operation on the 3D background model according to the view translation parameter to obtain a new 3D background model.
  • The 3D background model is a 3D cinema model,
  • in which the screen model of the 3D cinema model serves as the corresponding video image display area.
  • The present invention also discloses a head-mounted device, which includes: a background processing module, an acquisition processing module, an image processing module, and a display module;
  • the background processing module is configured to establish a 3D background model, and to set a video image display area in the 3D background model;
  • the acquisition processing module is connected to the background processing module, and is configured to acquire video image data, project the video image data into the video image display area of the 3D background model, collect the display parameters of the head-mounted device, and send the display parameters of the head-mounted device to the image processing module;
  • the image processing module is connected to the acquisition processing module, and is configured to perform image deformation processing, according to the display parameters, on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye;
  • the display module is connected to the image processing module, and is configured to display, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
  • The image processing module is further configured to generate a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; to perform image deformation processing on the 3D background model in which the video image is displayed, generating first video data corresponding to the left eye and second video data corresponding to the right eye; to place the first video data into the first frame, generating the first 3D video image corresponding to the left eye; to place the second video data into the second frame, generating the second 3D video image corresponding to the right eye; and to perform image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
  • The head-mounted device further includes: a view adjustment module, connected to the image processing module, configured to set an eye observation position matrix and, after a view offset angle is acquired, to modify the eye observation position matrix according to the view offset angle.
  • The image processing module is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
  • The first 3D video image and the second 3D video image are generated by setting up a 3D background model and performing image deformation processing on the 3D background model onto which the video image data is projected;
  • when the user views them through the lenses in the head-mounted device, a larger viewing angle can be seen and the user experience is improved, thereby solving the problems of the narrow viewing angle and blurred edges that existing head-mounted devices have when playing 3D video.
  • FIG. 1 is a flow chart of a 3D image display method in the present invention.
  • FIG. 2 is a detailed flow chart of a 3D image display method in the present invention.
  • FIG. 3 is a flow chart of the smoothing processing of a 3D video image in the present invention.
  • FIG. 4 is a flow chart of a view transformation for 3D image display in the present invention.
  • FIG. 5 is a flow chart of another view transformation for 3D image display in the present invention.
  • FIG. 6 is a schematic structural diagram of a head-mounted device in the present invention.
  • FIG. 7 is a detailed schematic structural diagram of a head-mounted device in the present invention.
  • FIG. 1 is a flow chart of a 3D image display method in the present invention. Referring to FIG. 1, the method includes the following steps.
  • Step 101: Establish a 3D background model, and set a video image display area in the 3D background model.
  • Step 102: Acquire video image data, and project the video image data into the video image display area of the 3D background model.
  • Step 103: Acquire display parameters of the head-mounted device and, according to the display parameters, perform image deformation processing on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye.
  • Step 104: In the video image display area, display the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
  • The 3D image display method disclosed in the present invention is suitable for a head-mounted device. In accordance with the different observation points of the human eyes, a video image display area is set in a 3D background model, the acquired video image data is projected into the video image display area of the 3D background model, and image deformation processing is performed on the 3D background model onto which the video image is projected, generating a first 3D video image and a second 3D video image respectively, so that the user sees a larger viewing angle when viewing through the lenses in the head-mounted device. Moreover, by setting up a 3D background model, the user gets an immersive feeling while watching the video, which improves the user experience.
  • FIG. 2 is a detailed flow chart of a 3D image display method in the present invention. Referring to FIG. 2, the method includes the following steps.
  • Step 201: Establish a 3D background model, and set a video image display area in the 3D background model.
  • The 3D background model may be a cinema 3D model, that is, a 3D model including a screen, seats, and the like.
  • The video image display area provided in the 3D background model corresponds to the screen model in the cinema 3D model.
  • Step 202: Acquire video image data, and project the video image data into the video image display area of the 3D background model.
  • In step 202, the video image data to be played is acquired and projected onto the screen model of the cinema 3D model; that is, when the user watches the video through the head-mounted device, the screen in the cinema 3D model can be seen, and the corresponding video is played on that screen, giving the immersive effect of watching the video in a theater.
  • Step 203: Acquire display parameters of the head-mounted device and, according to the display parameters, perform image deformation processing on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye.
  • The display parameters of the head-mounted device are the length value and the width value of the display area of the head-mounted device.
  • A first frame and a second frame are generated according to the length value and the width value of the display area of the head-mounted device; image deformation processing is performed on the 3D background model displaying the video image data, generating first video data corresponding to the left eye and second video data corresponding to the right eye; the first video data is placed into the first frame, generating the first 3D video image corresponding to the left eye, and the second video data is placed into the second frame, generating the second 3D video image corresponding to the right eye.
  • The first frame and the second frame are generated on the display screen of the head-mounted device, and the first 3D video image and the second 3D video image are displayed in the first frame and the second frame respectively, so that when the human eyes view, through the optical lenses, the cinema 3D model shown in the first 3D video image and in the second 3D video image, a larger viewing angle is obtained, that is, an IMAX-like viewing effect can be achieved.
  • Step 204: Perform image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image.
  • In step 204, after the image deformation processing of the cinema 3D model in which the video image data is displayed on the screen, the edge portion of the generated first 3D video image and the edge portion of the generated second 3D video image may be jagged, which would cause distortion in the 3D image that the human eyes see through the optical lenses. Therefore, image smoothing processing also needs to be performed on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
  • The edge portion of the first video display area and the edge portion of the second video display area may be subjected to image smoothing processing using a convolution smoothing algorithm.
  • Step 205: Display the first video display area and the second video display area after the image smoothing processing.
  • In step 205, the first 3D video image and the second 3D video image that have undergone image smoothing processing are shown, within the video image display area, on the display screen of the head-mounted device.
  • In step 203, according to the size of the display screen of the head-mounted device, image deformation processing is performed on the cinema 3D model in which the video image data is displayed on the screen; and on the display screen of the head-mounted device, a first frame (mesh) is generated for placing the first 3D video image and a second frame (mesh) is generated for placing the second 3D video image, corresponding to the left eye and the right eye respectively.
  • The cinema 3D model displayed in the first video display area and that displayed in the second video display area may differ, owing to the difference between the observation points of the left and right eyes of the human body.
  • The human brain can recognize the content respectively shown in the first 3D video image and in the second 3D video image, thereby producing the effect of a real cinema 3D model.
  • The method provided in the present invention can make the user feel as if in a movie theater after putting on the head-mounted device.
  • The user can also select different observation angles so that the movie is seen at different sizes and angles, while the 3D video content is displayed within a 3D background model, enhancing the sense of reality.
  • FIG. 3 is a flow chart of a smoothing process of a 3D video image in the present invention.
  • the method includes the following steps.
  • Step 301 Acquire a pixel point of an edge portion of the first 3D video image.
  • the edge portion of the first 3D video image refers to an area where the distance from the center of the first 3D video image is greater than a preset value.
  • Step 302 Acquire a pixel point of an edge portion of the second 3D video image.
  • the edge portion of the second 3D video image refers to an area where the distance from the center of the second 3D video image is greater than a preset value.
  • the preset value in step 301 and step 302 may be a distance between one-half of the farthest pixel point and the center point.
  • Step 303 For each pixel point in the edge portion of the first 3D video image and the edge portion of the second 3D video image, collecting pixel points around the pixel point to form a pixel neighborhood matrix.
  • step 303 8 pixel points around the target pixel point may be acquired to form a pixel neighborhood matrix of 3 ⁇ 3. In other embodiments of the invention, more pixel points may be acquired to form a larger pixel neighborhood matrix to produce a better image smoothing effect.
  • Step 304 Perform a weighted calculation on the pixel neighborhood matrix and the preset convolution weight matrix to obtain a new value.
  • the preset convolution weight matrix corresponds to the collected pixel neighborhood matrix, and the convolution weight matrix sets different weights for each pixel in the collected pixel neighborhood matrix, wherein The target pixel has the largest weight.
  • Step 305 replacing the original value of the pixel with the new value.
  • step 301 and step 302 are performed in a sequential manner, and may be performed simultaneously. Moreover, the convolution smoothing operation is performed only on pixels outside the center point more than half of the range, and processing the pixel points within less than half of the center point can improve the efficiency of the GPU processing, making the viewing of the 3D video smoother. .
  • the change of the angle of view can be realized by constructing an eye observation position matrix.
  • 4 is a flow chart showing a perspective change of a 3D image display in the present invention. Referring to Figure 4, the method includes the following steps.
  • step 401 a matrix of eye observation positions is set.
  • Step 402 Acquire a viewing angle offset angle, and modify an eye observation position matrix according to the viewing angle offset angle.
  • Step 403 Perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices.
  • Step 404 Perform coloring processing on the new vertex to generate a 3D background model corresponding to the perspective.
  • Fig. 5 is a flow chart showing the change of the angle of view of another 3D image display in the present invention. Referring to Figure 5, the method includes the following steps.
  • Step 501 Acquire a viewing angle translation parameter.
  • Step 502 Perform a translation operation on the 3D background model according to the viewing angle translation parameter to generate a 3D background model corresponding to the perspective.
  • the conversion of the 3D image viewing angle is not limited to the above technology, and the conversion of the 3D image viewing angle may be implemented in other ways. For example, moving the silver screen position to the z-axis of the world coordinate system and moving the eye observation position to the z-axis negatively is the same.
  • FIG. 6 is a schematic structural diagram of a head-mounted device according to the present invention.
  • The head-mounted device includes: a background processing module 601, an acquisition processing module 602, an image processing module 603, and a display module 604;
  • the background processing module 601 is configured to establish a 3D background model, and to set a video image display area in the 3D background model;
  • the acquisition processing module 602, coupled to the background processing module 601, is configured to acquire video image data, project the video image data into the video image display area of the 3D background model, collect the display parameters of the head-mounted device, and send the display parameters of the head-mounted device to the image processing module 603;
  • the image processing module 603, connected to the acquisition processing module 602, is configured to perform image deformation processing, according to the display parameters, on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye, which are sent to the display module 604;
  • the display module 604, connected to the image processing module 603, is configured to display, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
  • FIG. 7 is a detailed structural diagram of a head-mounted device in the present invention, as shown in FIG. 7.
  • The image processing module 603 generates a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; performs image deformation processing on the 3D background model displaying the video image data, generating first video data corresponding to the left eye and second video data corresponding to the right eye; places the first video data into the first frame, generating the first 3D video image corresponding to the left eye; and places the second video data into the second frame, generating the second 3D video image corresponding to the right eye.
  • The image processing module 603 performs image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
  • The image processing module 603 performs image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by a convolution smoothing algorithm.
  • The edge portion of the first 3D video image refers to the area whose distance from the center of the first 3D video image is greater than a preset value; the edge portion of the second 3D video image refers to the area whose distance from the center of the second 3D video image is greater than the preset value.
  • For each pixel in the edge portion of the first 3D video image and in the edge portion of the second 3D video image, the image processing module 603 collects the pixels around that pixel to form a pixel neighborhood matrix, obtains a new value by performing a weighted calculation on the pixel neighborhood matrix and the preset convolution weight matrix, and replaces the original value of the pixel with the new value.
  • The head-mounted device further includes: a view adjustment module 605;
  • the view adjustment module 605 is connected to the image processing module 603, and is configured to set an eye observation position matrix and, after a view offset angle is acquired, to modify the eye observation position matrix according to the view offset angle.
  • The image processing module 603 is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
  • The view adjustment module 605 is configured to acquire a view translation parameter, and to send the acquired view translation parameter to the image processing module 603.
  • The image processing module 603 performs a translation operation on the 3D background model according to the view translation parameter to obtain a new 3D background model.
  • The first 3D video image and the second 3D video image are generated by setting up a 3D background model and performing image deformation processing on the 3D background model onto which the video image is projected;
  • when the user views them through the lenses in the head-mounted device, a larger viewing angle can be seen and the user experience is improved, thereby solving the problem of the narrow viewing angle that existing head-mounted devices have when playing 3D video.
  • The user sees no jagged edges when watching the 3D video image, and the image at the edge portions of the 3D video image is kept clear.
  • The viewing angle of the 3D video image is further transformed by setting the eye observation position matrix, allowing the user to select different angles from which to watch the 3D video image and enhancing the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A 3D image display method and a head-mounted device. The method comprises: establishing a 3D background model, and setting a video image display area in the 3D background model; acquiring video image data, and projecting the video image data into the video image display area of the 3D background model; acquiring display parameters of the head-mounted device and, according to the display parameters, performing image deformation processing on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye; and displaying, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses. The above technical solution can solve the problem of the narrow 3D image display angle in existing head-mounted devices.

Description

3D image display method and head-mounted device
Technical Field
The present invention relates to the field of 3D image display technology, and in particular to a 3D image display method and a head-mounted device.
Background of the Invention
In recent years, head-mounted display devices have become increasingly popular, serving as a tool through which more and more users experience home theater. However, traditional head-mounted display devices focus on displaying the movie content itself, and the user does not get the effect of sitting in a theater.
Traditional head-mounted devices process 3D video by blurring the edge portion of each frame of image, which leaves the edges of the 3D video image indistinct and also causes the problem of a narrow 3D image display angle.
From the above it can be seen that existing head-mounted devices suffer from the problem of a narrow 3D image display angle.
Summary of the Invention
The present invention provides a 3D image display method and a head-mounted device, for solving the problem of the narrow 3D image display angle in existing head-mounted devices.
To achieve the above object, the technical solution of the present invention is realized as follows:
The present invention discloses a 3D image display method, the method comprising:
establishing a 3D background model, and setting a video image display area in the 3D background model;
acquiring video image data, and projecting the video image data into the video image display area of the 3D background model;
acquiring display parameters of the head-mounted device and, according to the display parameters, performing image deformation processing on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye;
displaying, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
Optionally, after generating the first video display area corresponding to the left eye and the second video display area corresponding to the right eye, the method further comprises:
performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye, respectively;
displaying the first 3D video image and the second 3D video image after the image smoothing processing, respectively.
Optionally, performing image deformation processing on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye, comprises:
generating a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device;
performing image deformation processing on the 3D background model in which the video image is displayed, to generate first video data corresponding to the left eye and second video data corresponding to the right eye;
placing the first video data into the first frame to generate the first 3D video image corresponding to the left eye; placing the second video data into the second frame to generate the second 3D video image corresponding to the right eye.
Optionally, performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye respectively comprises:
performing image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by means of a convolution smoothing algorithm;
wherein the edge portion of the first 3D video image refers to the area whose distance from the center of the first 3D video image is greater than a preset value; the edge portion of the second 3D video image refers to the area whose distance from the center of the second 3D video image is greater than the preset value.
Optionally, performing image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by means of a convolution smoothing algorithm comprises:
for each pixel in the edge portion of the first 3D video image and in the edge portion of the second 3D video image, collecting the pixels around that pixel to form a pixel neighborhood matrix,
performing a weighted calculation on the pixel neighborhood matrix and a preset convolution weight matrix to obtain a new value, and replacing the original value of the pixel with the new value.
Optionally, the method further comprises: setting an eye observation position matrix,
acquiring a view offset angle, and modifying the eye observation position matrix according to the view offset angle;
performing matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices; performing shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
Optionally, the method further comprises: acquiring a view translation parameter, and performing a translation operation on the 3D background model according to the view translation parameter to obtain a new 3D background model.
Optionally, the 3D background model is a 3D cinema model;
wherein the screen model in the 3D cinema model serves as the corresponding video image display area.
The present invention further discloses a head-mounted device, the head-mounted device comprising: a background processing module, an acquisition processing module, an image processing module, and a display module;
the background processing module is configured to establish a 3D background model and to set a video image display area in the 3D background model;
the acquisition processing module, connected to the background processing module, is configured to acquire video image data and project the video image data into the video image display area of the 3D background model, and to collect the display parameters of the head-mounted device and send the display parameters of the head-mounted device to the image processing module;
the image processing module, connected to the acquisition processing module, is configured to perform image deformation processing, according to the display parameters, on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye;
the display module, connected to the image processing module, is configured to display, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
Optionally, the image processing module is further configured to generate a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; to perform image deformation processing on the 3D background model in which the video image is displayed, generating first video data corresponding to the left eye and second video data corresponding to the right eye; to place the first video data into the first frame, generating the first 3D video image corresponding to the left eye; to place the second video data into the second frame, generating the second 3D video image corresponding to the right eye; and to perform image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
Optionally, the head-mounted device further comprises: a view adjustment module, connected to the image processing module, configured to set an eye observation position matrix and, after a view offset angle is acquired, to modify the eye observation position matrix according to the view offset angle.
The image processing module is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
In summary, in the technical solution provided by the present invention, a 3D background model is set up, and image deformation processing is performed on the 3D background model onto which the video image data is projected, generating a first 3D video image and a second 3D video image respectively; when the user views them through the lenses in the head-mounted device, a larger viewing angle can be seen and the user experience is improved, thereby solving the problems of a narrow viewing angle and blurred edges that existing head-mounted devices have when playing 3D video.
Brief Description of the Drawings
The drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the present invention and do not limit the present invention. In the drawings:
FIG. 1 is a flow chart of a 3D image display method in the present invention;
FIG. 2 is a detailed flow chart of a 3D image display method in the present invention;
FIG. 3 is a flow chart of the smoothing processing of a 3D video image in the present invention;
FIG. 4 is a flow chart of a view transformation for 3D image display in the present invention;
FIG. 5 is a flow chart of another view transformation for 3D image display in the present invention;
FIG. 6 is a schematic structural diagram of a head-mounted device in the present invention;
FIG. 7 is a detailed schematic structural diagram of a head-mounted device in the present invention.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described below in further detail with reference to the drawings.
FIG. 1 is a flow chart of a 3D image display method in the present invention. Referring to FIG. 1, the method comprises the following steps.
Step 101: establish a 3D background model, and set a video image display area in the 3D background model.
Step 102: acquire video image data, and project the video image data into the video image display area of the 3D background model.
Step 103: acquire display parameters of the head-mounted device and, according to the display parameters, perform image deformation processing on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye.
Step 104: in the video image display area, display the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
As can be seen from the above, the 3D image display method disclosed by the present invention is suitable for a head-mounted device. In accordance with the different observation points of the human eyes, a video image display area is set in a 3D background model, the acquired video image data is projected into the video image display area of the 3D background model, and image deformation processing is performed on the 3D background model onto which the video image is projected, generating a first 3D video image and a second 3D video image respectively, so that the user sees a larger viewing angle when viewing through the lenses in the head-mounted device. Moreover, by setting up a 3D background model, the user gets an immersive feeling while watching the video, which improves the user experience.
FIG. 2 is a detailed flow chart of a 3D image display method in the present invention. Referring to FIG. 2, the method comprises the following steps.
Step 201: establish a 3D background model, and set a video image display area in the 3D background model.
In one embodiment of the present invention, the 3D background model may be a cinema 3D model, that is, a 3D model including a screen, seats, and the like, in which the video image display area set in the 3D background model corresponds to the screen model in the cinema 3D model.
Step 202: acquire video image data, and project the video image data into the video image display area of the 3D background model.
In step 202, the video image data to be played is acquired and projected onto the screen model of the cinema 3D model; that is, when the user watches the video through the head-mounted device, the screen in the cinema 3D model can be seen, and the corresponding video is played on that screen, achieving the immersive effect of watching the video in a theater.
Step 203: acquire display parameters of the head-mounted device and, according to the display parameters, perform image deformation processing on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye.
In one embodiment of the present invention, the display parameters of the head-mounted device are the length value and the width value of the display area of the head-mounted device.
In step 203, a first frame and a second frame are generated according to the length value and the width value of the display area of the head-mounted device; image deformation processing is performed on the 3D background model in which the video image data is displayed, generating first video data corresponding to the left eye and second video data corresponding to the right eye; the first video data is placed into the first frame, generating the first 3D video image corresponding to the left eye, and the second video data is placed into the second frame, generating the second 3D video image corresponding to the right eye.
In the above embodiment of the present invention, the first frame and the second frame are generated on the display screen of the head-mounted device, and the first 3D video image and the second 3D video image are displayed in the first frame and the second frame respectively, so that when the human eyes view, through the optical lenses, the cinema 3D model shown in the first 3D video image and in the second 3D video image, a larger viewing angle is obtained, that is, an IMAX-like viewing effect can be achieved.
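As a hedged illustration only, the following Java sketch shows one way the first frame and the second frame could be derived from the length value and the width value of the display area; the Frame helper type, all names, and the side-by-side split are assumptions, not part of the patent:

```java
/** Hypothetical helper: an axis-aligned rectangle in display pixels. */
class Frame {
    final int x, y, width, height;

    Frame(int x, int y, int width, int height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }
}

class FrameLayoutSketch {
    /**
     * Splits the head-mounted device's display area into a first (left-eye)
     * frame and a second (right-eye) frame, based on the display parameters
     * collected from the device. Assumes a side-by-side panel layout.
     */
    static Frame[] buildEyeFrames(int displayWidth, int displayHeight) {
        int eyeWidth = displayWidth / 2;                     // half the panel per eye
        Frame first  = new Frame(0,        0, eyeWidth, displayHeight);
        Frame second = new Frame(eyeWidth, 0, eyeWidth, displayHeight);
        return new Frame[] { first, second };
    }
}
```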
Step 204: perform image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image.
In step 204, after the image deformation processing of the cinema 3D model in which the video image data is displayed on the screen, the edge portion of the generated first 3D video image and the edge portion of the generated second 3D video image may be jagged, which would cause distortion in the 3D image that the human eyes see through the optical lenses. Therefore, image smoothing processing also needs to be performed on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
In one embodiment of the present invention, a convolution smoothing algorithm may be used to perform image smoothing processing on the edge portion of the first video display area and on the edge portion of the second video display area.
Step 205: display the first video display area and the second video display area after the image smoothing processing.
In step 205, the first 3D video image and the second 3D video image that have undergone image smoothing processing are shown, within the video image display area, on the display screen of the head-mounted device.
In a specific embodiment of the present invention, in step 203, image deformation processing is performed on the cinema 3D model in which the video image data is displayed on the screen, according to the size of the display screen of the head-mounted device; and on the display screen of the head-mounted device, a first frame (mesh) is generated for placing the first 3D video image and a second frame (mesh) is generated for placing the second 3D video image, corresponding to the left eye and the right eye respectively. Because the observation points of the left and right eyes of the human body differ, the cinema 3D model displayed in the first video display area differs from that displayed in the second video display area. When the human eyes view, through the optical lenses, the cinema 3D model shown respectively in the first 3D video image and in the second 3D video image, the human brain can recognize the content respectively shown in the two images, thereby producing the effect of a real cinema 3D model.
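The patent does not give a concrete construction for the two differing observation points. As one illustrative sketch, assuming Android's android.opengl.Matrix API and an invented half interpupillary distance, a separate view matrix could be built per eye:

```java
import android.opengl.Matrix;

class StereoViewpoints {
    // Assumed half interpupillary distance in model units; the patent does not
    // specify how far apart the two observation points are.
    static final float HALF_IPD = 0.032f;

    /**
     * Builds one view matrix per eye by shifting the eye position horizontally,
     * so the cinema 3D model is rendered from two slightly different
     * observation points (left eye and right eye).
     */
    static float[][] eyeViewMatrices(float eyeY, float eyeZ,
                                     float cx, float cy, float cz) {
        float[] left = new float[16];
        float[] right = new float[16];
        Matrix.setLookAtM(left,  0, -HALF_IPD, eyeY, eyeZ, cx, cy, cz, 0f, 1f, 0f);
        Matrix.setLookAtM(right, 0, +HALF_IPD, eyeY, eyeZ, cx, cy, cz, 0f, 1f, 0f);
        return new float[][] { left, right };
    }
}
```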
The method provided in the present invention can make the user feel as if in a movie theater after putting on the head-mounted device; besides watching the movie, the user can also select different observation angles, so that the movie is seen at different sizes and angles, while the 3D video content is displayed inside the 3D background model, enhancing the sense of reality.
FIG. 3 is a flow chart of the smoothing processing of a 3D video image in the present invention. In a specific embodiment of the present invention, referring to FIG. 3, in order to better perform image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, the method comprises the following steps.
Step 301: acquire the pixels of the edge portion of the first 3D video image.
In step 301, the edge portion of the first 3D video image refers to the area whose distance from the center of the first 3D video image is greater than a preset value.
Step 302: acquire the pixels of the edge portion of the second 3D video image.
In step 302, the edge portion of the second 3D video image refers to the area whose distance from the center of the second 3D video image is greater than the preset value. In a preferred embodiment of the present invention, the preset value in step 301 and step 302 may be one half of the distance between the farthest pixel and the center point.
Step 303: for each pixel in the edge portion of the first 3D video image and in the edge portion of the second 3D video image, collect the pixels around that pixel to form a pixel neighborhood matrix.
In step 303, the 8 pixels around the target pixel may be collected to form a 3×3 pixel neighborhood matrix. In other embodiments of the present invention, more pixels may be collected to form a larger pixel neighborhood matrix, so as to produce a better image smoothing effect.
Step 304: perform a weighted calculation on the pixel neighborhood matrix and a preset convolution weight matrix to obtain a new value.
In step 304, the preset convolution weight matrix corresponds to the collected pixel neighborhood matrix; the convolution weight matrix assigns a different weight to each pixel in the collected pixel neighborhood matrix, with the target pixel having the largest weight.
Step 305: replace the original value of the pixel with the new value.
In one embodiment of the present invention, step 301 and step 302 are in no particular order and may be performed simultaneously. Moreover, performing the convolution smoothing operation only on the pixels beyond half the distance from the center point, and leaving the pixels within half that distance unprocessed, improves the efficiency of GPU processing and makes the viewing of the 3D video smoother.
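As an illustrative Java sketch of steps 301 to 305 (the concrete weight values are an assumption; the patent only requires that the target pixel carry the largest weight):

```java
class EdgeSmoother {
    // Assumed 3x3 convolution weight matrix; illustrative values only, with
    // the largest weight at the target (center) pixel. The weights sum to 1
    // so that overall brightness is preserved.
    static final float[][] W = {
            {1 / 16f, 2 / 16f, 1 / 16f},
            {2 / 16f, 4 / 16f, 2 / 16f},
            {1 / 16f, 2 / 16f, 1 / 16f},
    };

    /**
     * Smooths only the edge portion: pixels whose distance from the image
     * center exceeds half the center-to-farthest-pixel distance. Pixels in
     * the inner region are copied through unprocessed.
     */
    static float[][] smoothEdges(float[][] img) {
        int h = img.length, w = img[0].length;
        float cx = (w - 1) / 2f, cy = (h - 1) / 2f;
        double preset = Math.hypot(cx, cy) / 2;          // half the farthest distance
        float[][] out = new float[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (Math.hypot(x - cx, y - cy) <= preset) {
                    out[y][x] = img[y][x];               // inner region: untouched
                    continue;
                }
                float acc = 0f;                          // weighted neighborhood sum
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = Math.min(Math.max(y + dy, 0), h - 1); // clamp border
                        int xx = Math.min(Math.max(x + dx, 0), w - 1);
                        acc += W[dy + 1][dx + 1] * img[yy][xx];
                    }
                }
                out[y][x] = acc;                         // replace the original value
            }
        }
        return out;
    }
}
```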
In one embodiment of the present invention, to achieve the effect of viewing the 3D image from different angles, the view angle can be changed by constructing an eye observation position matrix. FIG. 4 is a flow chart of a view transformation for 3D image display in the present invention. Referring to FIG. 4, the method comprises the following steps.
Step 401: set an eye observation position matrix.
Step 402: acquire a view offset angle, and modify the eye observation position matrix according to the view offset angle.
Step 403: perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices.
Step 404: perform shading processing on the new vertices, generating the 3D background model corresponding to the view angle.
In one embodiment of the present invention, after the view offset angle is acquired, the eye observation position matrix can be modified by calling Matrix.setLookAtM, thereby changing the angle from which the user observes the 3D background model. Specifically, a matrix operation is applied with the eye observation position matrix M to each vertex V (a three-dimensional vector) of the 3D background model to obtain a new vertex V' (V' = M * V), and the vertex shader processes the new vertex V', for example: gl_Position = M * (V, 1.0).
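A minimal Java sketch of this step, assuming Android's android.opengl.Matrix API; the orbit radius and the mapping from the offset angle to an eye position are illustrative assumptions:

```java
import android.opengl.Matrix;

class ViewAngleSketch {
    /**
     * Rebuilds the eye observation position matrix M after a view offset angle
     * (degrees around the vertical axis) is acquired, then applies V' = M * V
     * to one model vertex, as the vertex shader would.
     */
    static float[] transformVertex(float offsetDeg, float[] vertexXYZ) {
        float r = 5f;                                    // assumed viewing distance
        double a = Math.toRadians(offsetDeg);
        float[] m = new float[16];
        Matrix.setLookAtM(m, 0,
                (float) (r * Math.sin(a)), 0f, (float) (r * Math.cos(a)), // eye position
                0f, 0f, 0f,                                               // look at origin
                0f, 1f, 0f);                                              // up vector
        float[] v = { vertexXYZ[0], vertexXYZ[1], vertexXYZ[2], 1f };
        float[] out = new float[4];
        Matrix.multiplyMV(out, 0, m, 0, v, 0);           // new vertex V' = M * V
        return out;                                      // handed to the vertex shader
    }
}
```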
In one embodiment of the present invention, to achieve the effect of viewing the 3D image from different angles, the view angle can also be changed by setting a movement matrix for the 3D background model. FIG. 5 is a flow chart of another view transformation for 3D image display in the present invention. Referring to FIG. 5, the method comprises the following steps.
Step 501: acquire a view translation parameter.
Step 502: perform a translation operation on the 3D background model according to the view translation parameter, generating the 3D background model corresponding to the view angle.
In one embodiment of the present invention, let an original vertex of the 3D background model be V = [x, y, z] and the view translation parameter be [x', y', z'], denoting movements of x', y', and z' along the axes of the world coordinate system respectively; then the translation operation on the 3D background model according to the view translation parameter yields V' = [x + x', y + y', z + z'].
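A minimal sketch of this translation operation in Java (the names are hypothetical):

```java
class ViewTranslationSketch {
    /**
     * Applies the view translation parameter [tx, ty, tz] to every vertex of
     * the 3D background model: V' = [x + tx, y + ty, z + tz].
     */
    static float[][] translateModel(float[][] vertices, float tx, float ty, float tz) {
        float[][] out = new float[vertices.length][3];
        for (int i = 0; i < vertices.length; i++) {
            out[i][0] = vertices[i][0] + tx;
            out[i][1] = vertices[i][1] + ty;
            out[i][2] = vertices[i][2] + tz;
        }
        // By the duality noted next: moving the model by +t is equivalent to
        // moving the eye observation position by -t.
        return out;
    }
}
```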
In other embodiments of the present invention, the implementation of the transformation of the 3D image viewing angle is not limited to the above techniques; owing to the duality of the transformations, other ways can equally be used to realize the transformation of the 3D image viewing angle. For example, moving the screen position in the positive z-axis direction of the world coordinate system is the same as moving the eye observation position in the negative z-axis direction.
The present invention further discloses a head-mounted device. FIG. 6 is a schematic structural diagram of a head-mounted device in the present invention. Referring to FIG. 6, the head-mounted device comprises: a background processing module 601, an acquisition processing module 602, an image processing module 603, and a display module 604;
the background processing module 601 is configured to establish a 3D background model and to set a video image display area in the 3D background model;
the acquisition processing module 602, connected to the background processing module 601, is configured to acquire video image data and project the video image data into the video image display area of the 3D background model, and to collect the display parameters of the head-mounted device and send the display parameters of the head-mounted device to the image processing module 603;
the image processing module 603, connected to the acquisition processing module 602, is configured to perform image deformation processing, according to the display parameters, on the 3D background model onto which the video image data is projected, generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye, which are sent to the display module 604;
the display module 604, connected to the image processing module 603, is configured to display, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
FIG. 7 is a detailed schematic structural diagram of a head-mounted device in the present invention; see FIG. 7.
In one embodiment of the present invention, the image processing module 603 generates a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; performs image deformation processing on the 3D background model in which the video image data is displayed, generating first video data corresponding to the left eye and second video data corresponding to the right eye; places the first video data into the first frame, generating the first 3D video image corresponding to the left eye; and places the second video data into the second frame, generating the second 3D video image corresponding to the right eye.
So that the human eyes achieve a better viewing effect when viewing the first 3D video image and the second 3D video image through the optical lenses, in one embodiment of the present invention the image processing module 603 performs image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
In another embodiment of the present invention, the image processing module 603 performs image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by means of a convolution smoothing algorithm;
wherein the edge portion of the first 3D video image refers to the area whose distance from the center of the first 3D video image is greater than a preset value, and the edge portion of the second 3D video image refers to the area whose distance from the center of the second 3D video image is greater than the preset value.
In one embodiment of the present invention, for each pixel in the edge portion of the first 3D video image and in the edge portion of the second 3D video image, the image processing module 603 collects the pixels around that pixel to form a pixel neighborhood matrix, obtains a new value by performing a weighted calculation on the pixel neighborhood matrix and the preset convolution weight matrix, and replaces the original value of the pixel with the new value.
To achieve the effect of watching the 3D video from different angles, in one embodiment of the present invention the head-mounted device further comprises: a view adjustment module 605;
the view adjustment module 605, connected to the image processing module 603, is configured to set an eye observation position matrix and, after a view offset angle is acquired, to modify the eye observation position matrix according to the view offset angle.
The image processing module 603 is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
In another embodiment of the present invention, the view adjustment module 605 is configured to acquire a view translation parameter and send the acquired view translation parameter to the image processing module 603;
the image processing module 603 performs a translation operation on the 3D background model according to the view translation parameter to obtain a new 3D background model.
As can be seen from the above, in the technical solution provided by the present invention, a 3D background model is set up and image deformation processing is performed on the 3D background model onto which the video image is projected, generating a first 3D video image and a second 3D video image respectively; when the user views them through the lenses in the head-mounted device, a larger viewing angle can be seen and the user experience is improved, thereby solving the problem of the narrow viewing angle that existing head-mounted devices have when playing 3D video. In addition, by smoothing the edge portions of the first 3D video image and the second 3D video image, the user sees no jagged edges when watching the 3D video image, and the image at the edge portions of the 3D video image is kept clear. Furthermore, in the present invention the viewing angle of the 3D video image is additionally transformed by setting the eye observation position matrix, allowing the user to select different angles from which to watch the 3D video image and enhancing the user experience.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (11)

  1. A 3D image display method, characterized in that the method comprises:
    establishing a 3D background model, and setting a video image display area in the 3D background model;
    acquiring video image data, and projecting the video image data into the video image display area of the 3D background model;
    acquiring display parameters of a head-mounted device and, according to the display parameters, performing image deformation processing on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye;
    displaying, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
  2. The method according to claim 1, characterized in that, after generating the first video display area corresponding to the left eye and the second video display area corresponding to the right eye, the method further comprises:
    performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye, respectively;
    displaying, in the video image display area, the first 3D video image and the second 3D video image after the image smoothing processing, respectively.
  3. The method according to claim 2, characterized in that performing image deformation processing on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye, comprises:
    generating a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device;
    performing image deformation processing on the 3D background model in which the video image data is displayed, to generate first video data corresponding to the left eye and second video data corresponding to the right eye;
    placing the first video data into the first frame to generate the first 3D video image corresponding to the left eye; placing the second video data into the second frame to generate the second 3D video image corresponding to the right eye.
  4. The method according to claim 2, characterized in that performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye respectively comprises:
    performing image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by means of a convolution smoothing algorithm;
    wherein the edge portion of the first 3D video image refers to the area whose distance from the center of the first 3D video image is greater than a preset value; the edge portion of the second 3D video image refers to the area whose distance from the center of the second 3D video image is greater than the preset value.
  5. The method according to claim 4, characterized in that performing image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image by means of a convolution smoothing algorithm comprises:
    for each pixel in the edge portion of the first 3D video image and in the edge portion of the second 3D video image, collecting the pixels around that pixel to form a pixel neighborhood matrix,
    performing a weighted calculation on the pixel neighborhood matrix and a preset convolution weight matrix to obtain a new value, and replacing the original value of the pixel with the new value.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    setting an eye observation position matrix,
    acquiring a view offset angle, and modifying the eye observation position matrix according to the view offset angle;
    performing matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices;
    performing shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
  7. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    acquiring a view translation parameter, and performing a translation operation on the 3D background model according to the view translation parameter to obtain a new 3D background model.
  8. The method according to any one of claims 1 to 5, characterized in that the 3D background model is a 3D cinema model;
    wherein the screen model in the 3D cinema model serves as the corresponding video image display area.
  9. A head-mounted device, characterized in that the head-mounted device comprises: a background processing module, an acquisition processing module, an image processing module, and a display module;
    the background processing module is configured to establish a 3D background model and set a video image display area in the 3D background model;
    the acquisition processing module, connected to the background processing module, is configured to acquire video image data and project the video image data into the video image display area of the 3D background model, and to collect display parameters of the head-mounted device and send the display parameters of the head-mounted device to the image processing module;
    the image processing module, connected to the acquisition processing module, is configured to perform image deformation processing, according to the display parameters, on the 3D background model onto which the video image data is projected, to generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye;
    the display module, connected to the image processing module, is configured to display, in the video image display area, the first 3D video image and the second 3D video image after each has been refracted through one of two lenses.
  10. The head-mounted device according to claim 9, characterized in that
    the image processing module is further configured to generate a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; to perform image deformation processing on the 3D background model in which the video image is displayed, generating first video data corresponding to the left eye and second video data corresponding to the right eye; to place the first video data into the first frame, generating the first 3D video image corresponding to the left eye; to place the second video data into the second frame, generating the second 3D video image corresponding to the right eye; and to perform image smoothing processing on the edge portion of the first 3D video image and on the edge portion of the second 3D video image, respectively.
  11. The head-mounted device according to claim 9, characterized in that the head-mounted device further comprises: a view adjustment module, connected to the image processing module, configured to set an eye observation position matrix and, after a view offset angle is acquired, to modify the eye observation position matrix according to the view offset angle;
    the image processing module is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform shading processing on the new vertices to generate the 3D background model corresponding to the view angle.
PCT/CN2015/099194 2014-12-31 2015-12-28 3D image display method and head-mounted device WO2016107519A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2017530067A JP6384940B2 (ja) 2014-12-31 2015-12-28 3D image display method and head-mounted device
US15/324,247 US10104358B2 (en) 2014-12-31 2015-12-28 3D image display method and head-mounted device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410854774.0 2014-12-31
CN201410854774.0A CN104581119B (zh) 2014-12-31 2014-12-31 3D image display method and head-mounted device

Publications (1)

Publication Number Publication Date
WO2016107519A1 (zh)

Family

ID=53096193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099194 WO2016107519A1 (zh) 2014-12-31 2015-12-28 3D image display method and head-mounted device

Country Status (4)

Country Link
US (1) US10104358B2 (zh)
JP (1) JP6384940B2 (zh)
CN (1) CN104581119B (zh)
WO (1) WO2016107519A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581119B (zh) 2014-12-31 2017-06-13 青岛歌尔声学科技有限公司 3D image display method and head-mounted device
CN104898280B (zh) 2015-05-04 2017-09-29 青岛歌尔声学科技有限公司 Display method for a head-mounted display, and head-mounted display
CN106680995A (zh) 2015-11-05 2017-05-17 丰唐物联技术(深圳)有限公司 Display control method and device
CN105915877A (zh) 2015-12-27 2016-08-31 乐视致新电子科技(天津)有限公司 Free viewing method and device for three-dimensional video
CN105657407B (zh) 2015-12-31 2018-11-23 深圳纳德光学有限公司 Head-mounted display and binocular 3D video display method and device thereof
CN105915885A (zh) 2016-03-02 2016-08-31 优势拓展(北京)科技有限公司 3D interactive display method and system for fisheye images
CN109996060B (zh) 2017-12-30 2021-09-03 深圳多哚新技术有限责任公司 Virtual reality cinema system and information processing method
WO2020146003A1 (en) * 2019-01-07 2020-07-16 Yutou Technology (Hangzhou) Co., Ltd. Mobile device integrated visual enhancement system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101309389A (zh) * 2008-06-19 2008-11-19 深圳华为通信技术有限公司 Method, apparatus and terminal for synthesizing visual images
CN101605271A (zh) * 2009-07-08 2009-12-16 无锡景象数字技术有限公司 2D-to-3D conversion method based on a single image
CN102238396A (zh) * 2010-04-28 2011-11-09 周修平 Stereoscopic vision image conversion method, imaging method and system
CN102438157A (zh) * 2010-08-18 2012-05-02 索尼公司 Image processing apparatus, method and program
US20140055353A1 (en) * 2011-04-28 2014-02-27 Sharp Kabushiki Kaisha Head-mounted display
WO2014199155A1 (en) * 2013-06-11 2014-12-18 Sony Computer Entertainment Europe Limited Head-mountable apparatus and systems
CN104581119A (zh) * 2014-12-31 2015-04-29 青岛歌尔声学科技有限公司 3D image display method and head-mounted device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2916076B2 (ja) * 1993-08-26 1999-07-05 シャープ株式会社 Image display device
US7907794B2 (en) * 2007-01-24 2011-03-15 Bluebeam Software, Inc. Method for aligning a modified document and an original document for comparison and difference highlighting
CN101533156A (zh) * 2009-04-23 2009-09-16 天津三维成像技术有限公司 Helmet-mounted stereoscopic display made with a single display device
CN101661163A (zh) * 2009-09-27 2010-03-03 合肥工业大学 Stereoscopic helmet-mounted display for an augmented reality system
US8446461B2 (en) * 2010-07-23 2013-05-21 Superd Co. Ltd. Three-dimensional (3D) display method and system
JP5874176B2 (ja) * 2011-03-06 2016-03-02 ソニー株式会社 Display device and relay device
JP2013050558A (ja) 2011-08-30 2013-03-14 Sony Corp Head-mounted display and display control method
CN103327352B (zh) * 2013-05-03 2015-08-12 四川虹视显示技术有限公司 Device and method for realizing dual-display-screen 3D display using serial processing

Also Published As

Publication number Publication date
US10104358B2 (en) 2018-10-16
CN104581119B (zh) 2017-06-13
JP6384940B2 (ja) 2018-09-05
JP2018505580A (ja) 2018-02-22
US20170201734A1 (en) 2017-07-13
CN104581119A (zh) 2015-04-29

Similar Documents

Publication Publication Date Title
WO2016107519A1 (zh) 3D image display method and head-mounted device
WO2019041351A1 (zh) Method for real-time mixed rendering of 3D VR video and a virtual three-dimensional scene
WO2016000309A1 (zh) Augmented reality method and system based on a wearable device
CN104811687A (zh) Circuit system of a virtual reality helmet, and virtual reality helmet
US20160269685A1 (en) Video interaction between physical locations
CN101877767A (zh) Method and system for generating three-dimensional panoramic continuous video from a six-channel video source
WO2020122488A1 (ko) Camera-based mixed reality glasses device and mixed reality display method
CN110866978A (zh) Camera synchronization method for real-time mixed reality video shooting
WO2019082794A1 (ja) Image generation device, image generation system, image generation method, and program
WO2016159444A1 (ko) Head-mounted display device and portable-terminal fixing frame for a head-mounted display device provided therein
WO2018129792A1 (zh) VR playback method, VR playback device, and VR playback system
WO2019098198A1 (ja) Image generation device, head-mounted display, image generation system, image generation method, and program
WO2018161817A1 (zh) Storage medium, and method and system for simulating photography in a virtual reality scene
CN108989784A (zh) Image display method, apparatus and device for a virtual reality device, and storage medium
CN116071471A (zh) Multi-camera-position rendering method and device based on Unreal Engine
WO2018092992A1 (ko) Lookup-table-based real-time panoramic image production system and real-time panoramic image production method using the same
JP2002232783A (ja) Image processing device, image processing method, and program storage medium
Gilson et al. High fidelity immersive virtual reality
CN111083368A (zh) Cloud-based simulated physical pan-tilt panoramic video display system
WO2013180442A1 (ko) Apparatus and camera for shooting stereoscopic video
CN108632538B (zh) Bullet-time shooting system and method combining CG animation with a camera array
WO2017008338A1 (zh) Three-dimensional image processing method and device
CN204517985U (zh) Circuit system of a virtual reality helmet, and virtual reality helmet
CN110244837A (zh) Augmented reality experience glasses with virtual image superposition, and imaging method thereof
CN107087153B (zh) 3D image generation method and device, and VR device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15875195

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15324247

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2017530067

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15875195

Country of ref document: EP

Kind code of ref document: A1