WO2016107519A1 - 3D image display method and head-mounted device - Google Patents
3D image display method and head-mounted device
- Publication number
- WO2016107519A1 (PCT/CN2015/099194)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video image
- image
- video
- background model
- display area
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
- H04N13/133—Equalising the characteristics of different image components, e.g. their average brightness or colour balance
- H04N13/279—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
- H04N13/339—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spatial multiplexing
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Definitions
- the present invention relates to the field of 3D image display technology, and in particular to a 3D image display method and a head-mounted device.
- head-mounted display devices have become more popular and are now a tool with which many users experience home theater.
- conventional head-mounted display devices focus on displaying the movie content itself; the user does not get the effect of sitting in a theater.
- traditional head-mounted devices process 3D video by blurring the edge portion of each frame image, but this blurs the edges of the 3D video image and also leaves the 3D display angle small.
- the invention provides a 3D image display method and a head-mounted device to solve the problem of the narrow display angle of 3D images in existing head-mounted devices.
- the invention discloses a method for displaying a 3D image, the method comprising:
- the first 3D video image and the second 3D video image are respectively refracted by two lenses in the video image display area and displayed.
- the method further includes:
- the first 3D video image and the second 3D video image after the image smoothing process are separately displayed.
- performing image deformation processing on the 3D background model on which the video image data is projected, and generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye respectively include:
- placing the first video data into the first frame to generate a first 3D video image corresponding to the left eye; placing the second video data into the second frame to generate a second 3D video image corresponding to the right eye.
- performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and the edge portion of the second 3D video image corresponding to the right eye respectively includes:
- the edge portion of the first 3D video image refers to the area where the distance from the center of the first 3D video image is greater than a preset value; the edge portion of the second 3D video image refers to the area where the distance from the center of the second 3D video image is greater than the preset value.
- the performing image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image by using a convolution smoothing algorithm includes:
- the method further includes: setting an eye observation position matrix,
- the method further includes: acquiring a viewing angle translation parameter, and performing a translation operation on the 3D background model according to the viewing angle translation parameter to obtain a new 3D background model.
- the 3D background model is a 3D cinema model
- the screen model in the 3D cinema model is a corresponding video image display area.
- the present invention also discloses a head-mounted device, which includes: a background processing module, a collection processing module, an image processing module, and a display module;
- the background processing module is configured to establish a 3D background model, and set a video image display area in the 3D background model;
- the collection processing module is connected to the background processing module, and is configured to acquire video image data, project the video image data into the video image display area of the 3D background model, collect display parameters of the head-mounted device, and transmit the display parameters of the head-mounted device to the image processing module;
- the image processing module is connected to the collection processing module, and is configured to perform image deformation processing on the 3D background model on which the video image data is projected according to the display parameters, and generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye;
- the display module is connected to the image processing module, and is configured to display the first 3D video image and the second 3D video image in the video image display area after each is refracted through its lens.
- the image processing module is further configured to generate a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; perform image deformation processing on the 3D background model displaying the video image to generate first video data corresponding to the left eye and second video data corresponding to the right eye; place the first video data into the first frame to generate a first 3D video image corresponding to the left eye; place the second video data into the second frame to generate a second 3D video image corresponding to the right eye; and perform image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image, respectively.
- the head-mounted device further includes: a viewing angle adjustment module, connected to the image processing module, configured to set an eye observation position matrix and, after acquiring a viewing angle offset angle, modify the eye observation position matrix according to the viewing angle offset angle.
- the image processing module is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform coloring processing on the new vertices to generate a 3D background model corresponding to the viewing angle.
- the first 3D video image and the second 3D video image are respectively generated by setting a 3D background model and performing image deformation processing on the 3D background model on which the video image data is projected;
- when the user views through the lenses in the head-mounted device, the user can see a larger viewing angle and the user experience is improved, thereby solving the problems of the narrow visual angle and blurred edges that existing head-mounted devices have when viewing 3D video.
- FIG. 1 is a flow chart of a 3D image display method in the present invention.
- FIG. 2 is a detailed flowchart of a 3D image display method in the present invention.
- FIG. 3 is a flow chart of the smoothing processing of a 3D video image in the present invention.
- FIG. 4 is a flow chart of a viewing angle change in a 3D image display in the present invention.
- FIG. 5 is a flow chart of another viewing angle conversion in a 3D image display in the present invention.
- FIG. 6 is a schematic structural view of a head-mounted device according to the present invention.
- FIG. 7 is a detailed structural view of a head-mounted device in the present invention.
- FIG. 1 is a flow chart showing a method of displaying a 3D image in the present invention. Referring to FIG. 1, the method includes the following steps.
- Step 101 Establish a 3D background model, and set a video image display area in the 3D background model.
- Step 102 Acquire video image data, and project the video image data into a video image display area of the 3D background model.
- Step 103 Acquire a display parameter of the head-mounted device, perform image deformation processing on the 3D background model on which the video image data is projected according to the display parameter, and generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye.
- Step 104 The first 3D video image and the second 3D video image are respectively refracted by two lenses in the video image display area and displayed.
- the 3D image display method disclosed in the present invention is applicable to a head-mounted device. A video image display area is set in a 3D background model according to the different observation points of the human eyes, the acquired video image data is projected into the video image display area of the 3D background model, and image deformation processing is performed on the 3D background model carrying the projected video image to generate a first 3D video image and a second 3D video image. When the user views through the lenses in the head-mounted device, a larger viewing angle can be seen, and the 3D background model gives the user an immersive feeling while watching the video, improving the user experience.
- FIG. 2 is a detailed flowchart of a method for displaying a 3D image in the present invention. Referring to FIG. 2, the method includes the following steps.
- Step 201 Establish a 3D background model, and set a video image display area in the 3D background model.
- the 3D background model may be a cinema 3D model, that is, a 3D model including a screen, a seat, and the like.
- the video image display area provided in the 3D background model corresponds to the screen model in the cinema 3D model.
- Step 202 Acquire video image data, and project the video image data into a video image display area of the 3D background model.
- in step 202, the video image data for playing is acquired and projected into the screen model of the theater 3D model; that is, when the user watches the video through the head-mounted device, the user sees the corresponding video played on the screen in the theater 3D model, so that watching the video feels immersive.
- Step 203 Acquire a display parameter of the head-mounted device, perform image deformation processing on the 3D background model on which the video image data is projected according to the display parameter, and generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye.
- the display parameter of the headset is the length value and the width value of the display area of the headset.
- a first frame and a second frame are generated according to the length value and the width value of the display area of the head-mounted device; the 3D background model displaying the video image data is subjected to image deformation processing to generate first video data corresponding to the left eye and second video data corresponding to the right eye.
- the first video data is placed into the first frame to generate a first 3D video image corresponding to the left eye; the second video data is placed into the second frame to generate a second 3D video image corresponding to the right eye.
- the first frame and the second frame are generated on the display screen of the head-mounted device, and the first 3D video image and the second 3D video image are displayed in the first frame and the second frame, respectively.
- when the human eye views the cinema 3D model displayed in the first 3D video image and the second 3D video image through the optical lenses, a larger viewing angle is obtained; that is, an IMAX-like viewing effect can be achieved.
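As a sketch of how the two frames might be derived from the display area's length and width values, assuming an equal side-by-side split of the display (the text only states that both frames come from those two values; the split ratio and all names here are illustrative, not the patent's definition):

```python
def make_eye_frames(display_length, display_width):
    """Generate the first frame (left eye) and second frame (right eye)
    from the head-mounted display's length and width values.

    Hypothetical sketch: an equal side-by-side split is assumed.
    """
    half = display_length // 2
    first_frame = {"x": 0, "y": 0, "w": half, "h": display_width}      # left eye
    second_frame = {"x": half, "y": 0, "w": half, "h": display_width}  # right eye
    return first_frame, second_frame

first, second = make_eye_frames(1920, 1080)
```

Each deformed eye image would then be placed into its frame before display, with one frame behind each lens.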
- Step 204 Perform image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image.
- in step 204, the image deformation processing of the cinema 3D model in which the video image data is displayed on the screen may leave the edge portion of the generated first 3D video image and the edge portion of the generated second 3D video image jagged, causing distortion in the 3D image the human eye sees through the optical lens. Therefore, image smoothing processing is also performed on the edge portion of the first 3D video image and the edge portion of the second 3D video image, respectively.
- the edge portion of the first 3D video image and the edge portion of the second 3D video image may be subjected to image smoothing processing using a convolution smoothing algorithm.
- Step 205 Display the first 3D video image and the second 3D video image after the image smoothing process.
- in step 205, the first 3D video image and the second 3D video image subjected to image smoothing processing are displayed in the video image display area on the display screen of the head-mounted device.
- in step 203, according to the size of the display screen of the head-mounted device, image deformation processing is performed on the cinema 3D model in which the video image data is displayed on the screen, and a first frame (mesh) for placing the first 3D video image and a second frame (mesh) for placing the second 3D video image are generated on the display screen, corresponding to the left eye and the right eye respectively.
- the cinema 3D model displayed in the first 3D video image and the second 3D video image may differ because of the difference between the human left-eye and right-eye viewing points.
- the human brain fuses the content displayed in the first 3D video image and the second 3D video image, producing the effect of a real cinema 3D model.
- the method provided in the present invention can make the user feel as if sitting in a movie theater while wearing the device.
- the user can also select different viewing angles to change the apparent size and angle of the movie; at the same time, the 3D video content is displayed within a 3D background model, giving an augmented-reality effect.
- FIG. 3 is a flow chart of a smoothing process of a 3D video image in the present invention.
- the method includes the following steps.
- Step 301 Acquire a pixel point of an edge portion of the first 3D video image.
- the edge portion of the first 3D video image refers to an area where the distance from the center of the first 3D video image is greater than a preset value.
- Step 302 Acquire a pixel point of an edge portion of the second 3D video image.
- the edge portion of the second 3D video image refers to an area where the distance from the center of the second 3D video image is greater than a preset value.
- the preset value in step 301 and step 302 may be one half of the distance between the farthest pixel point and the center point.
- Step 303 For each pixel point in the edge portion of the first 3D video image and the edge portion of the second 3D video image, collect the pixel points around the pixel point to form a pixel neighborhood matrix.
- in step 303, the 8 pixel points around the target pixel point may be acquired to form a 3×3 pixel neighborhood matrix. In other embodiments of the invention, more pixel points may be acquired to form a larger pixel neighborhood matrix, which produces a better image smoothing effect.
- Step 304 Perform a weighted calculation on the pixel neighborhood matrix and the preset convolution weight matrix to obtain a new value.
- the preset convolution weight matrix corresponds to the collected pixel neighborhood matrix and sets a different weight for each pixel in the collected pixel neighborhood matrix, wherein the target pixel has the largest weight.
- Step 305 Replace the original value of the pixel with the new value.
- step 301 and step 302 need not be performed sequentially and may be performed simultaneously. Moreover, the convolution smoothing operation is performed only on pixels beyond half of the range from the center point; skipping the pixel points within that range improves the efficiency of the GPU processing, making the viewing of the 3D video smoother.
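Steps 301 to 305 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 3×3 center-weighted kernel is an assumption (the text only requires the target pixel to carry the largest weight), the preset value is taken as half the distance to the farthest pixel as described above, and border pixels are left untouched for simplicity.

```python
import numpy as np

def smooth_edges(img, weights=None):
    """Convolution smoothing applied only to the edge region of an image.

    The edge region is where the distance from the image center exceeds
    the preset value (half the distance to the farthest pixel). The 3x3
    weight matrix is an illustrative center-weighted kernel.
    """
    if weights is None:
        weights = np.array([[1.0, 1.0, 1.0],
                            [1.0, 8.0, 1.0],
                            [1.0, 1.0, 1.0]])
    weights = weights / weights.sum()          # normalize so values keep their scale

    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0      # image center
    preset = np.hypot(cy, cx) / 2.0            # half the distance to the farthest pixel

    out = img.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if np.hypot(y - cy, x - cx) > preset:          # edge region only
                nb = img[y - 1:y + 2, x - 1:x + 2]         # 3x3 pixel neighborhood
                out[y, x] = float((nb * weights).sum())    # weighted new value
    return out
```

In practice this would run once per eye image, typically on the GPU; as the text notes, skipping the interior pixels is what keeps the processing cheap.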
- the change of the angle of view can be realized by constructing an eye observation position matrix.
- FIG. 4 is a flow chart showing a viewing angle change of a 3D image display in the present invention. Referring to FIG. 4, the method includes the following steps.
- Step 401 Set an eye observation position matrix.
- Step 402 Acquire a viewing angle offset angle, and modify an eye observation position matrix according to the viewing angle offset angle.
- Step 403 Perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices.
- Step 404 Perform coloring processing on the new vertex to generate a 3D background model corresponding to the perspective.
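Steps 401 to 403 can be sketched as below. The concrete form of the eye observation position matrix is not fixed by the text, so the 4×4 yaw-rotation matrix here is an assumption for illustration, and step 404's coloring (shading) is omitted:

```python
import numpy as np

def eye_observation_matrix(offset_angle):
    """A minimal 'eye observation position matrix' modified by a viewing
    angle offset (steps 401-402), modeled here as a yaw rotation about
    the y axis. The 4x4 form is an illustrative assumption."""
    c, s = np.cos(offset_angle), np.sin(offset_angle)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def transform_vertices(vertices, view):
    """Step 403: matrix operation on each model vertex to get new vertices."""
    homog = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
    return (homog @ view.T)[:, :3]

verts = np.array([[0.0, 0.0, -1.0]])       # a vertex straight ahead of the eye
new_verts = transform_vertices(verts, eye_observation_matrix(np.pi / 2))
```

The new vertices would then be shaded (step 404) to render the 3D background model for the new viewing angle.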
- FIG. 5 is a flow chart showing another viewing angle conversion of a 3D image display in the present invention. Referring to FIG. 5, the method includes the following steps.
- Step 501 Acquire a viewing angle translation parameter.
- Step 502 Perform a translation operation on the 3D background model according to the viewing angle translation parameter to generate a 3D background model corresponding to the perspective.
- the conversion of the 3D image viewing angle is not limited to the above techniques, and may be implemented in other ways. For example, moving the screen position along the positive z-axis of the world coordinate system has the same effect as moving the eye observation position along the negative z-axis.
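Steps 501 and 502, and the equivalence just noted, can be sketched as follows (the vertex values and the interpretation of the translation parameter as an xyz shift are illustrative assumptions):

```python
import numpy as np

def translate_model(vertices, shift):
    """Steps 501-502: translate every vertex of the 3D background model
    by the viewing angle translation parameter (here an xyz shift)."""
    return vertices + np.asarray(shift, dtype=float)

screen = np.array([[0.0, 0.0, -3.0]])   # hypothetical screen-model vertex
eye = np.zeros(3)                        # hypothetical eye observation position

# Moving the screen along +z gives the same screen-relative-to-eye
# geometry as moving the eye along -z, as the text observes.
moved_screen = translate_model(screen, [0, 0, 1.0]) - eye
moved_eye = screen - (eye + np.array([0, 0, -1.0]))
assert np.allclose(moved_screen, moved_eye)
```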
- FIG. 6 is a schematic structural view of a head-mounted device according to the present invention.
- the head-mounted device includes: a background processing module 601, a collection processing module 602, an image processing module 603, and a display module 604;
- the background processing module 601 is configured to establish a 3D background model and set a video image display area in the 3D background model;
- the collection processing module 602 is connected to the background processing module 601, and is configured to acquire video image data, project the video image data into the video image display area of the 3D background model, collect display parameters of the head-mounted device, and transmit the display parameters of the head-mounted device to the image processing module 603;
- the image processing module 603 is connected to the collection processing module 602, and is configured to perform image deformation processing on the 3D background model on which the video image data is projected according to the display parameters, generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye, and send them to the display module 604;
- the display module 604 is connected to the image processing module 603, and is configured to display the first 3D video image and the second 3D video image in the video image display area after each is refracted through its lens.
- FIG. 7 is a detailed structural diagram of a head-mounted device according to the present invention, as shown in FIG. 7.
- the image processing module 603 generates a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; performs image deformation processing on the 3D background model displaying the video image data to generate first video data corresponding to the left eye and second video data corresponding to the right eye; places the first video data into the first frame to generate a first 3D video image corresponding to the left eye; and places the second video data into the second frame to generate a second 3D video image corresponding to the right eye.
- the image processing module 603 performs image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image, respectively.
- the image processing module 603 performs image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image by a convolution smoothing algorithm.
- the edge portion of the first 3D video image refers to the area where the distance from the center of the first 3D video image is greater than a preset value; the edge portion of the second 3D video image refers to the area where the distance from the center of the second 3D video image is greater than the preset value.
- for each pixel point in the edge portion of the first 3D video image and the edge portion of the second 3D video image, the image processing module 603 collects the pixel points around the pixel point to form a pixel neighborhood matrix, performs a weighted calculation of the pixel neighborhood matrix with the preset convolution weight matrix to obtain a new value, and replaces the original value of the pixel with the new value.
- the headset further includes: a viewing angle adjustment module 605;
- the viewing angle adjustment module 605 is connected to the image processing module 603, and is configured to set an eye observation position matrix and, after acquiring a viewing angle offset angle, modify the eye observation position matrix according to the viewing angle offset angle.
- the image processing module 603 is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices; perform coloring processing on the new vertices to generate a 3D background model corresponding to the perspective.
- the viewing angle adjustment module 605 is configured to acquire a viewing angle translation parameter, and send the acquired viewing angle translation parameter to the image processing module 603.
- the image processing module 603 performs a translation operation on the 3D background model according to the viewing angle translation parameter to obtain a new 3D background model.
- the first 3D video image and the second 3D video image are generated by setting a 3D background model and performing image deformation processing on the 3D background model on which the video image is projected, so that a larger viewing angle can be seen and the user experience is improved, thereby solving the problem that existing head-mounted devices have a small visual angle when viewing 3D video.
- the user does not see aliasing at the edges of the 3D video image, and the image of the edge portion of the 3D video image remains clear.
- the viewing angle of the 3D video image can further be converted by setting the eye observation position matrix, allowing the user to select different angles from which to view the 3D video image and improving the user experience.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Claims (11)
- 1. A 3D image display method, characterized in that the method comprises: establishing a 3D background model, and setting a video image display area in the 3D background model; acquiring video image data, and projecting the video image data into the video image display area of the 3D background model; acquiring display parameters of a head-mounted device, performing image deformation processing on the 3D background model on which the video image data is projected according to the display parameters, and generating a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye; and displaying the first 3D video image and the second 3D video image in the video image display area after each is refracted through one of two lenses.
- 2. The method according to claim 1, characterized in that, after generating the first video display area corresponding to the left eye and the second video display area corresponding to the right eye, the method further comprises: performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye, respectively; and displaying the image-smoothed first 3D video image and second 3D video image in the video image display area, respectively.
- 3. The method according to claim 2, characterized in that performing image deformation processing on the 3D background model on which the video image data is projected to generate the first 3D video image corresponding to the left eye and the second 3D video image corresponding to the right eye comprises: generating a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; performing image deformation processing on the 3D background model displaying the video image data to generate first video data corresponding to the left eye and second video data corresponding to the right eye; placing the first video data into the first frame to generate the first 3D video image corresponding to the left eye; and placing the second video data into the second frame to generate the second 3D video image corresponding to the right eye.
- 4. The method according to claim 2, characterized in that performing image smoothing processing on the edge portion of the first 3D video image corresponding to the left eye and on the edge portion of the second 3D video image corresponding to the right eye respectively comprises: performing image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image by a convolution smoothing algorithm; wherein the edge portion of the first 3D video image refers to the area where the distance from the center of the first 3D video image is greater than a preset value, and the edge portion of the second 3D video image refers to the area where the distance from the center of the second 3D video image is greater than the preset value.
- 5. The method according to claim 4, characterized in that performing image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image by a convolution smoothing algorithm comprises: for each pixel point in the edge portion of the first 3D video image and the edge portion of the second 3D video image, collecting the pixel points around the pixel point to form a pixel neighborhood matrix, performing a weighted calculation of the pixel neighborhood matrix with a preset convolution weight matrix to obtain a new value, and replacing the original value of the pixel with the new value.
- 6. The method according to any one of claims 1 to 5, characterized in that the method further comprises: setting an eye observation position matrix; acquiring a viewing angle offset angle, and modifying the eye observation position matrix according to the viewing angle offset angle; performing matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices; and performing coloring processing on the new vertices to generate a 3D background model corresponding to the viewing angle.
- 7. The method according to any one of claims 1 to 5, characterized in that the method further comprises: acquiring a viewing angle translation parameter, and performing a translation operation on the 3D background model according to the viewing angle translation parameter to obtain a new 3D background model.
- 8. The method according to any one of claims 1 to 5, characterized in that the 3D background model is a 3D cinema model, wherein the screen model in the 3D cinema model is the corresponding video image display area.
- 9. A head-mounted device, characterized in that the head-mounted device comprises: a background processing module, a collection processing module, an image processing module, and a display module; the background processing module is configured to establish a 3D background model and set a video image display area in the 3D background model; the collection processing module, connected to the background processing module, is configured to acquire video image data, project the video image data into the video image display area of the 3D background model, collect display parameters of the head-mounted device, and send the display parameters of the head-mounted device to the image processing module; the image processing module, connected to the collection processing module, is configured to perform image deformation processing on the 3D background model on which the video image data is projected according to the display parameters, and generate a first 3D video image corresponding to the left eye and a second 3D video image corresponding to the right eye; the display module, connected to the image processing module, is configured to display the first 3D video image and the second 3D video image in the video image display area after each is refracted through one of two lenses.
- 10. The head-mounted device according to claim 9, characterized in that the image processing module is further configured to: generate a first frame and a second frame according to the length value and the width value of the display area of the head-mounted device; perform image deformation processing on the 3D background model displaying the video image to generate first video data corresponding to the left eye and second video data corresponding to the right eye; place the first video data into the first frame to generate a first 3D video image corresponding to the left eye; place the second video data into the second frame to generate a second 3D video image corresponding to the right eye; and perform image smoothing processing on the edge portion of the first 3D video image and the edge portion of the second 3D video image, respectively.
- 11. The head-mounted device according to claim 9, characterized in that the head-mounted device further comprises: a viewing angle adjustment module, connected to the image processing module, configured to set an eye observation position matrix and, after acquiring a viewing angle offset angle, modify the eye observation position matrix according to the viewing angle offset angle; the image processing module is further configured to perform matrix operations on the vertices of the 3D background model according to the eye observation position matrix to obtain new vertices, and to perform coloring processing on the new vertices to generate a 3D background model corresponding to the viewing angle.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017530067A JP6384940B2 (ja) | 2014-12-31 | 2015-12-28 | 3d画像の表示方法及びヘッドマウント機器 |
US15/324,247 US10104358B2 (en) | 2014-12-31 | 2015-12-28 | 3D image display method and head-mounted device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410854774.0 | 2014-12-31 | ||
CN201410854774.0A CN104581119B (zh) | 2014-12-31 | 2014-12-31 | 一种3d图像的显示方法和一种头戴设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016107519A1 true WO2016107519A1 (zh) | 2016-07-07 |
Family
ID=53096193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/099194 WO2016107519A1 (zh) | 2014-12-31 | 2015-12-28 | 一种3d图像的显示方法和一种头戴设备 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10104358B2 (zh) |
JP (1) | JP6384940B2 (zh) |
CN (1) | CN104581119B (zh) |
WO (1) | WO2016107519A1 (zh) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104581119B (zh) * | 2014-12-31 | 2017-06-13 | 青岛歌尔声学科技有限公司 | 一种3d图像的显示方法和一种头戴设备 |
CN104898280B (zh) * | 2015-05-04 | 2017-09-29 | 青岛歌尔声学科技有限公司 | 一种头戴式显示器的显示方法和头戴式显示器 |
CN106680995A (zh) * | 2015-11-05 | 2017-05-17 | 丰唐物联技术(深圳)有限公司 | 显示控制方法及装置 |
CN105915877A (zh) * | 2015-12-27 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | 一种三维视频的自由观影方法及设备 |
CN105657407B (zh) * | 2015-12-31 | 2018-11-23 | 深圳纳德光学有限公司 | 头戴显示器及其双目3d视频显示方法和装置 |
CN105915885A (zh) * | 2016-03-02 | 2016-08-31 | 优势拓展(北京)科技有限公司 | 鱼眼图像的3d交互显示方法和系统 |
CN109996060B (zh) * | 2017-12-30 | 2021-09-03 | 深圳多哚新技术有限责任公司 | 一种虚拟现实影院系统及信息处理方法 |
WO2020146003A1 (en) * | 2019-01-07 | 2020-07-16 | Yutou Technology (Hangzhou) Co., Ltd. | Mobile device integrated visual enhancement system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101309389A (zh) * | 2008-06-19 | 2008-11-19 | 深圳华为通信技术有限公司 | 一种合成可视图像的方法、装置和终端 |
CN101605271A (zh) * | 2009-07-08 | 2009-12-16 | 无锡景象数字技术有限公司 | 一种基于单幅图像的2d转3d方法 |
CN102238396A (zh) * | 2010-04-28 | 2011-11-09 | 周修平 | 立体视觉的影像转换方法、成像方法及系统 |
CN102438157A (zh) * | 2010-08-18 | 2012-05-02 | 索尼公司 | 图像处理装置、方法和程序 |
US20140055353A1 (en) * | 2011-04-28 | 2014-02-27 | Sharp Kabushiki Kaisha | Head-mounted display |
WO2014199155A1 (en) * | 2013-06-11 | 2014-12-18 | Sony Computer Entertainment Europe Limited | Head-mountable apparatus and systems |
CN104581119A (zh) * | 2014-12-31 | 2015-04-29 | 青岛歌尔声学科技有限公司 | 一种3d图像的显示方法和一种头戴设备 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2916076B2 (ja) * | 1993-08-26 | 1999-07-05 | シャープ株式会社 | 画像表示装置 |
US7907794B2 (en) * | 2007-01-24 | 2011-03-15 | Bluebeam Software, Inc. | Method for aligning a modified document and an original document for comparison and difference highlighting |
CN101533156A (zh) * | 2009-04-23 | 2009-09-16 | 天津三维成像技术有限公司 | 利用单片显示器件制作的头盔式立体显示器 |
CN101661163A (zh) * | 2009-09-27 | 2010-03-03 | 合肥工业大学 | 增强现实系统的立体头盔显示器 |
US8446461B2 (en) * | 2010-07-23 | 2013-05-21 | Superd Co. Ltd. | Three-dimensional (3D) display method and system |
JP5874176B2 (ja) * | 2011-03-06 | 2016-03-02 | ソニー株式会社 | 表示装置、並びに中継装置 |
JP2013050558A (ja) | 2011-08-30 | 2013-03-14 | Sony Corp | ヘッド・マウント・ディスプレイ及び表示制御方法 |
CN103327352B (zh) * | 2013-05-03 | 2015-08-12 | 四川虹视显示技术有限公司 | 采用串行处理方式实现双显示屏3d显示的装置及方法 |
- 2014-12-31: CN CN201410854774.0A, granted as CN104581119B (active)
- 2015-12-28: WO PCT/CN2015/099194, published as WO2016107519A1 (application filing)
- 2015-12-28: US US15/324,247, granted as US10104358B2 (active)
- 2015-12-28: JP JP2017530067A, granted as JP6384940B2 (active)
Also Published As
Publication number | Publication date |
---|---|
US10104358B2 (en) | 2018-10-16 |
CN104581119B (zh) | 2017-06-13 |
JP6384940B2 (ja) | 2018-09-05 |
JP2018505580A (ja) | 2018-02-22 |
US20170201734A1 (en) | 2017-07-13 |
CN104581119A (zh) | 2015-04-29 |
Legal Events
- 121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 15875195; country of ref document: EP; kind code of ref document: A1.
- WWE (WIPO information: entry into national phase): Ref document number: 15324247; country of ref document: US.
- ENP (Entry into the national phase): Ref document number: 2017530067; country of ref document: JP; kind code of ref document: A.
- NENP (Non-entry into the national phase): Ref country code: DE.
- 122 (EP): PCT application non-entry in European phase. Ref document number: 15875195; country of ref document: EP; kind code of ref document: A1.