WO2017107444A1 - Image playback method and apparatus for a virtual display device - Google Patents

Image playback method and apparatus for a virtual display device

Info

Publication number
WO2017107444A1
WO2017107444A1 PCT/CN2016/088677 CN2016088677W WO2017107444A1 WO 2017107444 A1 WO2017107444 A1 WO 2017107444A1 CN 2016088677 W CN2016088677 W CN 2016088677W WO 2017107444 A1 WO2017107444 A1 WO 2017107444A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display device
framing area
user
virtual display
Prior art date
Application number
PCT/CN2016/088677
Other languages
English (en)
French (fr)
Inventor
楚明磊
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司, 乐视致新电子科技(天津)有限公司 filed Critical 乐视控股(北京)有限公司
Priority to US15/237,671 priority Critical patent/US20170176934A1/en
Publication of WO2017107444A1 publication Critical patent/WO2017107444A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • the present invention relates to stereoscopic playback technology, and in particular to an omnidirectional stereoscopic image playback method and apparatus for a virtual display device.
  • a virtual helmet is a helmet that uses a helmet-mounted display to shut off the user's sight and hearing of the outside world and guide the user to feel present in a virtual environment.
  • its working principle is that the left-eye and right-eye screens inside the helmet respectively display left-eye and right-eye images with a difference in viewing angle, and after the human eyes acquire this image information with the viewing-angle difference, a stereoscopic effect is produced in the mind.
  • at present, a stereoscopic image is generally played by playing planar photos captured with a 360-degree scan.
  • this playback mode can only show part of the sides or a local portion of a thing, and cannot provide an omnidirectional stereoscopic display.
  • for the user, only a 360-degree ring-screen scene can be seen, and an omnidirectional scene cannot be viewed and experienced.
  • the present invention provides an image playback method for a virtual display device, which aims to overcome the defects in the prior art, thereby achieving 360-degree panoramic playback and improving the user's visual experience.
  • the virtual display device plays an image to the left eye of the user through the first screen, and the virtual display device plays the image to the right eye of the user through the second screen.
  • the image playing method includes the following steps.
  • Step S100 The image acquisition step is to acquire a panoramic image in the virtual display device, where the panoramic image includes a first image and a second image.
  • Step S200 the framing area initializing step, setting a first framing area on the first image, and setting a second framing area on the second image.
  • Step S300, the framing area adjusting step: in response to a triggering instruction of the user, adjusting the position of the first framing area within the range of the first image according to the triggering instruction, and at the same time adjusting the position of the second framing area within the range of the second image according to the triggering instruction.
  • Step S400, the image playing step: sending the image corresponding to the adjusted first framing area to the first screen for playing, and sending the image corresponding to the adjusted second framing area to the second screen for playing.
  • preferably, the framing area initialization step includes: step S210, calculating a parallax percentage range value of the first image and the second image; step S220, setting a parallax percentage range threshold, the parallax percentage range threshold being the percentage range of parallax that the user can tolerate when viewing the image; and step S230, setting the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
  • the framing area adjustment step includes the following steps: Step S311, receiving coordinate change data of a user's head; and step S312, changing data according to coordinates of the user's head. Adjusting a position of the first framing area in the first image range, and adjusting a position of the second framing area in the second image range according to coordinate change data of the user head.
  • the framing area adjustment step includes: step S321, receiving a voice instruction or an action instruction from a user; and step S322, according to the voice instruction or the action instruction Adjusting the position of the first framing area in the first image range, and adjusting the position of the second framing area in the second image range according to the voice command or the motion instruction.
  • preferably, the image playback method further includes, between the image capturing step and the framing area initializing step: step S500, an image preprocessing step of adjusting parameters of the first image and the second image to reduce a color difference between the first image and the second image.
  • the virtual display device plays an image to the left eye of the user through the first screen, and the virtual display device plays the image to the right eye of the user through the second screen.
  • the image playback device includes: an image acquisition module, configured to acquire a panoramic image in the virtual display device, the panoramic image including a first image and a second image; a framing area initializing module, configured to set a first framing area on the first image and a second framing area on the second image; a framing area adjusting module, configured to, in response to a triggering instruction of the user, adjust the position of the first framing area within the range of the first image according to the triggering instruction, and adjust the position of the second framing area within the range of the second image according to the triggering instruction; and an image playing module, configured to send the image corresponding to the adjusted first framing area to the first screen for playing, and send the image corresponding to the adjusted second framing area to the second screen for playing.
  • preferably, the framing area initialization module includes: a disparity percentage range value calculation sub-module, configured to calculate a disparity percentage range value of the first image and the second image; a disparity percentage range threshold setting sub-module, configured to set a disparity percentage range threshold, the disparity percentage range threshold being the percentage range of parallax that the user can tolerate when viewing the image; and a framing area setting sub-module, configured to set the first framing area on the first image according to the disparity percentage range value and the disparity percentage range threshold, while setting the second framing area on the second image according to the disparity percentage range value and the disparity percentage range threshold.
  • preferably, the framing area adjustment module includes: a data receiving sub-module for receiving coordinate change data of a user's head; and a first framing area adjusting sub-module for adjusting the position of the first framing area within the first image range according to the coordinate change data of the user's head, while adjusting the position of the second framing area within the second image range according to the coordinate change data of the user's head.
  • preferably, the framing area adjustment module includes: an instruction receiving submodule for receiving a voice instruction or an action instruction from a user; and a second framing area adjustment submodule for adjusting the position of the first framing area within the first image range according to the voice instruction or action instruction, while adjusting the position of the second framing area within the second image range according to the voice instruction or action instruction.
  • preferably, the image playback device further includes: an image preprocessing module, configured to adjust parameters of the first image and the second image to reduce the color difference between the first image and the second image.
  • the image playing method and device of the virtual display device provided by the invention realizes 360-degree stereo video or image playback through steps of image acquisition, framing area initialization, framing area adjustment and image playback.
  • the image playing method and device of the virtual display device provided by the present invention can automatically set a good initial parallax, and can also adjust the playing parallax according to relevant instructions in real time, thereby enabling the user to obtain an excellent visual experience.
  • FIG. 1 shows a preferred embodiment of an image playback method of a virtual display device of the present invention.
  • Figure 2 is a state diagram of the initialization of a viewing zone in a preferred embodiment of the present invention.
  • FIG. 3 is a schematic diagram showing a first setting process of a viewing area in a preferred embodiment of the present invention.
  • FIG. 4 is a schematic diagram showing a second setting process of a viewing area in a preferred embodiment of the present invention.
  • Figure 5 is a schematic illustration of user head coordinate changes in a preferred embodiment of the present invention.
  • Fig. 6 is a view showing a preferred embodiment of the video playback apparatus of the virtual display device of the present invention.
  • the present invention provides a video playback method and apparatus for a virtual display device, wherein the virtual display device can store a panoramic image, and the virtual display device plays an image to the left eye of the user through the first screen, the virtual The display device plays an image to the right eye of the user through the second screen.
  • the panoramic image can be divided into two parts, a first image and a second image, wherein an image played on the first screen to the left eye of the user is taken from the first image, and on the second screen. The image played to the user's right eye is taken from the second image.
  • the virtual display device can be understood as various devices capable of providing a stereoscopic visual experience for the user, such as a virtual helmet, smart glasses, and the like.
  • FIG. 1 is a preferred embodiment of a video playback method of a virtual display device of the present invention. As shown in FIG. 1 , the video playing method of the virtual display device provided by the present invention can be implemented by the following steps.
  • Step S100 an image acquisition step.
  • in the image acquisition step, a panoramic image in the virtual display device is acquired, where the panoramic image includes a first image and a second image.
  • the panoramic image may be pre-stored in the memory of the virtual display device.
  • the panoramic image may be any of various stereoscopic panoramic images captured by a 360-degree omnidirectional camera, or any of various stereoscopic images synthesized by post-production; for example, it can be realized by 360-degree left and right images or by a 360-degree left-right composite video.
  • the panoramic image can generally display the object in all directions, and the panoramic image is generally divided into the first image and the second image.
  • the image viewed by the left eye of the user is mainly taken from the first image
  • the image viewed by the right eye of the user is mainly taken from the second image.
  • Step S200 a framing area initialization step.
  • a first framing area is set on the first image
  • a second framing area is set on the second image.
  • the main purpose of the framing area initialization step is to determine the image range displayed to the left eye of the user on the first image and the image range displayed to the right eye of the user on the second image, and to set an initial position for these two ranges; in the various subsequent processing steps, adjustments can be made by using the initial position determined in this step as a reference.
  • the first viewing area may be set at a center position of the first image, and the second viewing area may also be set at a center position of the second image.
  • This design can effectively ensure that the two viewing areas have a large adjustable range on the image.
  • the initial positions of the first framing area and the second framing area may also be correspondingly changed according to specific needs.
  • the framing area initialization step may be implemented by: firstly calculating a disparity percentage range value of the first image and the second image; second, setting a disparity percentage range threshold,
  • the parallax percentage range threshold is a range of disparity percentages that the user can tolerate when viewing the image; third, setting the first view area on the first image according to the disparity percentage range value and the disparity percentage range threshold And setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
  • FIG. 2 is a state diagram of initializing a viewing zone in a preferred embodiment of the present invention
  • FIG. 3 is a schematic diagram of a first setting process of a viewing zone in a preferred embodiment of the present invention
  • FIG. 4 is a schematic diagram of a second setting process of a viewing zone in a preferred embodiment of the present invention. As shown in FIG. 2, the first viewfinder area 11 is set on the first image 1, and the second viewfinder area 21 is set on the second image 2.
  • the framing area initialization step can be implemented by the following process.
  • the parallax percentage calculation is performed on the panoramic content (which may include a panoramic image or video). Specifically, the parallax percentage can be calculated in various manners; for example, for a panoramic stereoscopic image, the objects having the largest and smallest parallax can be selected in the stereoscopic image, and the position PL of the object in the left image and the position PR of the object in the right image are measured manually, so that the parallax of the object is PL.x - PR.x (where PL.x represents the component of PL in the x direction, and PR.x represents the component of PR in the x direction); it is also possible to use image matching to match each pixel of the left and right images and thereby calculate the parallax of the stereoscopic image.
  • some video frames containing the largest and smallest disparity can be selected from the video, and processed by the above-mentioned processing of the image, thereby obtaining the approximate parallax of the video.
  • the calculation of the parallax is not limited to the above method, and calculation can be performed by other methods, which are not enumerated here.
  • the maximum and minimum parallaxes PMAX1 and PMIN1 of the panoramic image or video are recorded according to the calculation result, and the maximum and minimum parallax percentages PRMAX1 and PRMIN1 of the panoramic image or video are recorded, where PRMAX1 and PRMIN1 are the ratios of PMAX1 and PMIN1, respectively, to half of the frame width of the panoramic image or video.
  • at the same time, the maximum and minimum parallax percentages PRMAX0 and PRMIN0 that the viewing device can tolerate are set.
  • for a better stereoscopic viewing effect, the maximum parallax percentage needs to be adjusted; for example, it can be adjusted to about 80% of the maximum parallax percentage of the viewing device.
  • it should be noted that the maximum and minimum parallax percentages are obtained in order to characterize the parallax of the image or video; in addition, because the image or video frame is in a side-by-side (left-right) format, half of its width is used when computing the ratio, e.g. if the frame width is W, then PRMAX1 = PMAX1/(W/2) and PRMIN1 = PMIN1/(W/2).
  • partial regions are taken out on the first image 1 and the second image 2 as the first viewing zone 11 and the second viewing zone 21.
  • let the maximum parallax percentage of the first image 1 and the second image 2 be PRMAX1, and let both widths be W; the widths of the extracted first viewing zone 11 and second viewing zone 21 are both w.
  • when the parallax percentage is PRMAX1, the parallax width of the first framing area 11 and the second framing area 21 is X1 = PRMAX1 * w.
  • when the parallax percentage is 80% * PRMAX0, the parallax width of the first framing area 11 and the second framing area 21 is X0 = 80% * PRMAX0 * w; therefore, to place the initial positions of the two framing areas in a suitable range, each needs to be moved by X = (X0 - X1)/2.
  • when X > 0, the parallax in this case is relatively small and needs to be increased, so the positions of the first framing area and the second framing area each need to be shifted toward the middle line by a distance of X.
  • as shown in FIG. 3, for the case X > 0, the dotted lines indicate the original positions of the first framing area and the second framing area, and the solid lines indicate their positions after the initial setting.
  • when X < 0, the parallax in this case is relatively large and needs to be reduced, so the first framing area and the second framing area need to move away from the center line by a distance of X.
  • as shown in FIG. 4, for the case X < 0, the dotted lines indicate the original positions of the first framing area and the second framing area, and the solid lines indicate their positions after the initial setting.
  • Step S300 the framing area adjustment step.
  • in the framing area adjusting step, in response to the triggering instruction of the user, the position of the first framing area is adjusted within the range of the first image according to the triggering instruction, and at the same time the position of the second framing area is adjusted within the range of the second image according to the triggering instruction.
  • the main purpose of this step is to realize the change of the position of the finder according to the change of various viewing states of the user or various other instructions of the user, thereby realizing the dynamic change of the image and the adjustment of the stereo effect.
  • the triggering instruction may include, but is not limited to, a coordinate change of the user's head, a change in the position of the user's eyeball, a voice instruction of the user, an action instruction of the user, and the like.
  • the triggering command may be coordinate change data of the user's head.
  • the coordinate change data of the user's head as shown in FIG. 5 is received, including the amounts of coordinate change in the three directions shown.
  • pitch can be used to characterize the amount of rotation around the X axis (pitch head movement), i.e. the pitch angle; yaw can be used to characterize the amount of rotation around the Y axis (yaw head movement), i.e. the yaw angle; and roll can be used to characterize the amount of rotation around the Z axis (roll head movement), i.e. the roll angle.
  • the position of the first framing area is adjusted within the first image range according to the coordinate change data of the user's head, and the position of the second framing area is adjusted within the second image range according to the coordinate change data of the user's head.
  • in a specific implementation, the coordinate change data can be obtained by providing a corresponding sensing device.
  • for example, the coordinate change data of the user's head can be obtained by detecting the state data of a gyroscope.
  • the coordinate values corresponding to the initial position of the user's head can be set as reference coordinate values, and the corresponding reference coordinate values of the gyroscope are recorded as pitch0, yaw0 and roll0.
  • when the user's head rotates, the real-time gyroscope coordinates pitch, yaw and roll change accordingly.
  • as these values change, the framing areas also change, so that the image seen by the user changes in real time with the position of the head, giving a 360-degree viewing feeling.
  • taking the change of the yaw value as an example, let the initial coordinate value be yaw0 and the changed real-time coordinate value be yaw; the amount by which the framing area should move is then YAW = yaw - yaw0.
  • when YAW = 180 degrees, the framing area reaches the far left; when YAW = -180 degrees, the framing area reaches the far right; for other angles, the specific position can be calculated by linear interpolation and the framing area moved accordingly.
  • similarly, for the pitch value, let the initial coordinate value be pitch0 and the real-time coordinate value be pitch; the amount by which the framing area should move is PITCH = pitch - pitch0, where PITCH = 180 degrees corresponds to the top and PITCH = -180 degrees corresponds to the bottom.
  • the change of the roll value reflects the overall head movement and, when displaying, may be left unprocessed; it can be understood that this is only one of the embodiments, and the position of the framing areas can also be adjusted according to the change of the roll value.
  • it should be noted that pitch0, yaw0, roll0, pitch, yaw, roll, PITCH, YAW and the like mentioned in the text can be understood as variable symbols.
  • the triggering instruction may be a voice instruction or an action instruction or the like from a user.
  • Receiving a voice command or an action command from a user, and adjusting a position of the first viewfinder area in the first image range according to the voice command or the action command, and according to the voice command or the action instruction The position of the second framing area is adjusted within the second image range.
  • the adjustment of the stereoscopic effect is achieved by the adjustment of the disparity value.
  • a voice instruction can be recognized by a voice recognition technology, or a motion instruction of the user can be recognized by a sensor.
  • taking a voice command as an example, when the "convex" command issued by the user is recognized, the positions of the framing areas are adjusted accordingly to make the displayed scene bulge outward; when the user issues a "concave" command, the positions of the framing areas are adjusted accordingly to make the displayed scene recede inward.
  • Step S400 a video playing step.
  • the image corresponding to the adjusted first framing area is sent to the first screen for playing, and the image corresponding to the adjusted second framing area is sent to the second screen. Play.
  • the positions of the first viewfinder area and the second viewfinder area are adjusted and changed in real time, and correspondingly, in the image playback step, the images displayed on the first screen and the second screen are also adjusted and changed in real time.
  • the image playing method further includes: step S500, an image preprocessing step between the image capturing step and the framing area initializing step.
  • the image preprocessing step is mainly used to adjust parameters of the first image and the second image to reduce a color difference between the first image and the second image.
  • parameters such as color, brightness, and color saturation of the first image and the second image may be adjusted by an image preprocessing step to make the colors of the first image and the second image as close as possible.
  • the image playing method of the virtual display device provided by the invention can perform 360-degree stereoscopic video or image playback, can automatically set a good initial parallax percentage, and can also adjust the playback parallax percentage in real time according to relevant instructions, thereby enabling the user to obtain an excellent visual experience.
  • the present invention also provides a video playback device for a virtual display device.
  • the virtual display device plays an image to the left eye of the user through the first screen, and the virtual display device plays an image to the right eye of the user through the second screen.
  • Fig. 6 is a view showing a preferred embodiment of the video playback apparatus of the virtual display device of the present invention.
  • the video playback device of the virtual display device provided by the present invention includes an image acquisition module 10, a framing area initialization module 20, a framing area adjustment module 30, and a video playback module 40, and preferably includes an image preprocessing module 50. .
  • the image acquisition module 10 is configured to acquire a panoramic image in the virtual display device, where the panoramic image includes a first image and a second image.
  • the framing area initializing module 20 is configured to set a first framing area on the first image and a second framing area on the second image.
  • the framing area adjustment module 30 is configured to adjust a position of the first finder area within a range of the first image according to the triggering instruction according to the triggering instruction of the user, and according to the triggering instruction, in the The position of the second framing area is adjusted within the range of the two images.
  • the image playing module 40 is configured to send the image corresponding to the adjusted first framing area to the first screen for playing, and send the image corresponding to the adjusted second framing area to the second screen. Play.
  • the image pre-processing module 50 is configured to adjust, after the panoramic image is acquired, parameters of the first image and the second image to reduce the color difference between the first image and the second image.
  • the framing area initialization module further includes: a parallax percentage range value calculation sub-module, a parallax percentage range threshold value setting sub-module, and a framing area setting sub-module.
  • the parallax percentage range value calculation sub-module is configured to calculate a disparity percentage range value of the first image and the second image; the disparity percentage range threshold setting sub-module is configured to set a disparity percentage range threshold, the disparity percentage range threshold being the range of parallax percentages that the user can tolerate when viewing the image; the framing area setting sub-module is configured to set the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
  • the framing area adjustment module includes: a data receiving sub-module and a first finder adjustment sub-module.
  • the data receiving sub-module is configured to receive coordinate change data of a user's head; the first finder adjusting sub-module is configured to adjust the first image range according to coordinate change data of the user's head Positioning the framing area while adjusting the position of the second framing area in the second image range according to the coordinate change data of the user head.
  • preferably, the framing area adjustment module includes: an instruction receiving submodule and a second framing area adjustment submodule.
  • the instruction receiving submodule is configured to receive a voice instruction or an action instruction from a user; the second finder adjustment submodule is configured to adjust the first in the first image range according to the voice instruction or the action instruction Positioning the framing area, and adjusting the position of the second framing area in the second image range according to the voice command or the motion instruction.
  • the image playing method and device of the virtual display device provided by the invention realizes 360-degree stereo video or image playback through steps of image acquisition, framing area initialization, framing area adjustment and image playback.
  • the image playing method and device of the virtual display device provided by the present invention can automatically set a good initial parallax, and can also adjust the playing parallax according to relevant instructions in real time, thereby enabling the user to obtain an excellent visual experience.
  • An embodiment of the present invention provides a video playback device for a virtual display device.
  • the virtual display device plays an image to a left eye of a user through a first screen, and the virtual display device plays an image to a right eye of the user through a second screen, where the device includes: one or more processors; a memory; and one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following steps:
  • Step S100 an image acquisition step, acquiring a panoramic image in the virtual display device, the panoramic image includes a first image and a second image, in step S200, a framing area initializing step, setting a first framing area on the first image, Setting a second framing area on the second image, in step S300, the framing area adjusting step, in response to the triggering instruction of the user, adjusting the position of the first framing area within the range of the first image according to the triggering instruction, Adjusting, according to the triggering instruction, the position of the second framing area in the range of the second image, in step S400, the video playing step, sending the image corresponding to the adjusted first framing area to the first screen for playing, The image corresponding to the adjusted second viewing area is sent to the second screen for playing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention discloses an image playback method and apparatus for a virtual display device. The image playback method includes: acquiring a panoramic image in the virtual display device, the panoramic image comprising a first image and a second image; setting a first framing area on the first image and a second framing area on the second image; in response to a trigger instruction from the user, adjusting the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction; and sending the image corresponding to the adjusted first framing area to a first screen for playback and the image corresponding to the adjusted second framing area to a second screen for playback. The image playback apparatus includes an image acquisition module, a framing area initialization module, a framing area adjustment module and an image playback module that respectively perform the above steps. The image playback method and apparatus provided by the present invention enable panoramic playback with an excellent visual effect.

Description

Image playback method and apparatus for a virtual display device
This application claims priority to Chinese Patent Application No. 201510966519.X, entitled "Image playback method and apparatus for a virtual display device", filed with the Chinese Patent Office on December 21, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to stereoscopic playback technology, and in particular to an omnidirectional stereoscopic image playback method and apparatus for a virtual display device.
Background
In the process of implementing the present invention, the inventor found that, with the development of virtual display technology, more and more people view stereoscopic images through related virtual display devices, such as virtual helmets, so as to obtain an immersive visual experience. A virtual helmet is a helmet that uses a helmet-mounted display to shut off the user's sight and hearing of the outside world and guide the user to feel present in a virtual environment. Its working principle is that the left-eye and right-eye screens inside the helmet respectively display left-eye and right-eye images with a difference in viewing angle, and after the human eyes acquire this image information with the viewing-angle difference, a stereoscopic effect is produced in the mind.
At present, a conventional virtual display device generally plays stereoscopic images by playing planar photos captured with a 360-degree scan. This playback mode can only show part of the sides or a local portion of a thing and cannot provide an omnidirectional stereoscopic display. For the user, only a 360-degree ring-screen scene can be seen, and an omnidirectional scene cannot be viewed and experienced.
Therefore, it is highly necessary to design a new omnidirectional stereoscopic image playback method for a virtual display device, so as to overcome the above defects and achieve omnidirectional stereoscopic playback of images.
Technical Problem
In view of this, the present invention provides an image playback method for a virtual display device, which aims to overcome the defects of the prior art, thereby achieving 360-degree panoramic playback and improving the user's visual experience.
Technical Solution
In the image playback method for a virtual display device provided by the present invention, the virtual display device plays an image to the left eye of the user through a first screen, and plays an image to the right eye of the user through a second screen. The image playback method includes the following steps.
Step S100, an image acquisition step: acquiring a panoramic image in the virtual display device, the panoramic image including a first image and a second image.
Step S200, a framing area initialization step: setting a first framing area on the first image and a second framing area on the second image.
Step S300, a framing area adjustment step: in response to a trigger instruction from the user, adjusting the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction.
Step S400, an image playback step: sending the image corresponding to the adjusted first framing area to the first screen for playback, and sending the image corresponding to the adjusted second framing area to the second screen for playback.
In the image playback method of the virtual display device described above, preferably, the framing area initialization step includes: Step S210, calculating a parallax percentage range value of the first image and the second image; Step S220, setting a parallax percentage range threshold, the parallax percentage range threshold being the range of parallax percentages that the user can tolerate when viewing an image; and Step S230, setting the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
In the image playback method of the virtual display device described above, preferably, the framing area adjustment step includes the following steps: Step S311, receiving coordinate change data of the user's head; and Step S312, adjusting the position of the first framing area within the range of the first image according to the coordinate change data of the user's head, while adjusting the position of the second framing area within the range of the second image according to the coordinate change data of the user's head.
In the image playback method of the virtual display device described above, preferably, the framing area adjustment step includes: Step S321, receiving a voice instruction or an action instruction from the user; and Step S322, adjusting the position of the first framing area within the range of the first image according to the voice instruction or action instruction, while adjusting the position of the second framing area within the range of the second image according to the voice instruction or action instruction.
In the image playback method of the virtual display device described above, preferably, the image playback method further includes, between the image acquisition step and the framing area initialization step: Step S500, an image preprocessing step of adjusting parameters of the first image and the second image to reduce the color difference between the first image and the second image.
In the image playback apparatus for a virtual display device provided by the present invention, the virtual display device plays an image to the left eye of the user through a first screen, and plays an image to the right eye of the user through a second screen. The image playback apparatus includes: an image acquisition module, configured to acquire a panoramic image in the virtual display device, the panoramic image including a first image and a second image; a framing area initialization module, configured to set a first framing area on the first image and a second framing area on the second image; a framing area adjustment module, configured to, in response to a trigger instruction from the user, adjust the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction; and an image playback module, configured to send the image corresponding to the adjusted first framing area to the first screen for playback, and send the image corresponding to the adjusted second framing area to the second screen for playback.
In the image playback apparatus of the virtual display device described above, preferably, the framing area initialization module includes: a parallax percentage range value calculation sub-module, configured to calculate a parallax percentage range value of the first image and the second image; a parallax percentage range threshold setting sub-module, configured to set a parallax percentage range threshold, the parallax percentage range threshold being the range of parallax percentages that the user can tolerate when viewing an image; and a framing area setting sub-module, configured to set the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
In the image playback apparatus of the virtual display device described above, preferably, the framing area adjustment module includes: a data receiving sub-module, configured to receive coordinate change data of the user's head; and a first framing area adjustment sub-module, configured to adjust the position of the first framing area within the range of the first image according to the coordinate change data of the user's head, while adjusting the position of the second framing area within the range of the second image according to the coordinate change data of the user's head.
In the image playback apparatus of the virtual display device described above, preferably, the framing area adjustment module includes: an instruction receiving sub-module, configured to receive a voice instruction or an action instruction from the user; and a second framing area adjustment sub-module, configured to adjust the position of the first framing area within the range of the first image according to the voice instruction or action instruction, while adjusting the position of the second framing area within the range of the second image according to the voice instruction or action instruction.
In the image playback apparatus of the virtual display device described above, preferably, the image playback apparatus further includes: an image preprocessing module, configured to adjust parameters of the first image and the second image to reduce the color difference between the first image and the second image.
Beneficial Effects
With the image playback method and apparatus for a virtual display device provided by the present invention, 360-degree stereoscopic video or image playback is achieved through the steps of image acquisition, framing area initialization, framing area adjustment and image playback. At the same time, the image playback method and apparatus provided by the present invention can automatically set a good initial parallax and can also adjust the playback parallax in real time according to relevant instructions, so that the user obtains an excellent visual experience.
Brief Description of the Drawings
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, which will help in understanding the objectives and advantages of the present invention, in which:
FIG. 1 shows a preferred embodiment of the image playback method of the virtual display device of the present invention.
FIG. 2 is a state diagram of framing area initialization in a preferred embodiment of the present invention.
FIG. 3 is a schematic diagram of a first setting process of the framing areas in a preferred embodiment of the present invention.
FIG. 4 is a schematic diagram of a second setting process of the framing areas in a preferred embodiment of the present invention.
FIG. 5 is a schematic diagram of changes in the user's head coordinates in a preferred embodiment of the present invention.
FIG. 6 shows a preferred embodiment of the image playback apparatus of the virtual display device of the present invention.
Embodiments of the Invention
The present invention is described in detail below with reference to the embodiments. It should be noted that the words "front", "rear", "left", "right", "up" and "down" used in the following description refer to directions in the accompanying drawings.
The present invention provides an image playback method and apparatus for a virtual display device, wherein the virtual display device may store a panoramic image, the virtual display device plays an image to the left eye of the user through a first screen, and the virtual display device plays an image to the right eye of the user through a second screen. Preferably, the panoramic image may be divided into two parts, a first image and a second image, wherein the image played to the user's left eye on the first screen is taken from the first image, and the image played to the user's right eye on the second screen is taken from the second image.
In the image playback method and apparatus for a virtual display device provided by the present invention, the virtual display device may be understood as any of various devices capable of providing a stereoscopic visual experience for the user, for example a virtual helmet, smart glasses, and the like.
FIG. 1 shows a preferred embodiment of the image playback method of the virtual display device of the present invention. As shown in FIG. 1, the image playback method for a virtual display device provided by the present invention can be implemented through the following steps.
Step S100, the image acquisition step.
In the image acquisition step, a panoramic image in the virtual display device is acquired, the panoramic image including a first image and a second image. The panoramic image may be stored in advance in the memory of the virtual display device. In a preferred embodiment, the panoramic image may be any of various stereoscopic panoramic images captured by a 360-degree omnidirectional camera, or any of various stereoscopic images synthesized in post-production; for example, it can be realized by 360-degree left and right images, or by a 360-degree left-right composite video.
To enable the user to experience a stereoscopic effect, the panoramic image can generally present an object in all directions, and the panoramic image is usually divided into the first image and the second image. In a preferred embodiment, the image viewed by the user's left eye is taken mainly from the first image, while the image viewed by the user's right eye is taken mainly from the second image.
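By way of illustration, the following minimal Python sketch shows how a single side-by-side panoramic frame could be split into the first image and the second image; the function name and the use of NumPy arrays are assumptions made for the example, not part of the original disclosure.

```python
import numpy as np

def split_panorama(frame: np.ndarray):
    """Split a side-by-side panoramic frame of shape (H, W, C) into the
    first image (left half, source of the left-eye view) and the second
    image (right half, source of the right-eye view)."""
    width = frame.shape[1]
    half = width // 2
    first_image = frame[:, :half]    # taken for the user's left eye
    second_image = frame[:, half:]   # taken for the user's right eye
    return first_image, second_image
```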
Step S200, the framing area initialization step.
In the framing area initialization step, a first framing area is set on the first image, and a second framing area is set on the second image. The main purpose of the framing area initialization step is to determine, on the first image, the image range to be shown to the user's left eye and, on the second image, the image range to be shown to the user's right eye, and to set an initial position for these two ranges; in the various subsequent processing steps, adjustments can be made using the initial positions determined in this step as a reference.
Preferably, the first framing area may be set at the center of the first image, and the second framing area may likewise be set at the center of the second image. This design effectively ensures that the two framing areas have a large adjustable range on the images. In addition, the initial positions of the first framing area and the second framing area may also be changed according to specific needs.
In a preferred embodiment, the framing area initialization step may be implemented as follows: first, calculating a parallax percentage range value of the first image and the second image; second, setting a parallax percentage range threshold, the parallax percentage range threshold being the range of parallax percentages that the user can tolerate when viewing an image; third, setting the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
Specifically, FIG. 2 is a state diagram of framing area initialization in a preferred embodiment of the present invention; FIG. 3 is a schematic diagram of a first setting process of the framing areas in a preferred embodiment of the present invention; FIG. 4 is a schematic diagram of a second setting process of the framing areas in a preferred embodiment of the present invention. As shown in FIG. 2, the first framing area 11 is set on the first image 1, and the second framing area 21 is set on the second image 2.
With reference to FIG. 2, FIG. 3 and FIG. 4, in a preferred embodiment the framing area initialization step can be implemented through the following process.
A parallax percentage calculation is performed on the panoramic content (which may include a panoramic image or a video). Specifically, the parallax percentage can be calculated in various ways. For example, for a panoramic stereoscopic image, the objects with the largest and smallest parallax can be selected in the stereoscopic image, and the position PL of such an object in the left image and its position PR in the right image can be measured manually; the parallax of the object is then PL.x - PR.x (where PL.x denotes the component of PL in the x direction and PR.x denotes the component of PR in the x direction). Alternatively, image matching can be used to match every pixel of the left and right images and thereby compute the parallax of the stereoscopic image. For a video, some video frames containing the largest and smallest parallax can be selected from the video and processed in the same way as an image, thereby obtaining the approximate parallax of the video. It should be understood that the calculation of the parallax is not limited to the above methods; other methods may also be used and are not enumerated here. Then, according to the calculation results, the maximum and minimum parallax PMAX1 and PMIN1 of the panoramic image or video are recorded, and the maximum and minimum parallax percentages PRMAX1 and PRMIN1 of the panoramic image or video are recorded, where PRMAX1 and PRMIN1 are the ratios of PMAX1 and PMIN1, respectively, to half of the frame width of the panoramic image or video. At the same time, the maximum and minimum parallax percentages PRMAX0 and PRMIN0 that the viewing device can tolerate are set. For a better stereoscopic viewing effect, the maximum parallax percentage needs to be adjusted; for example, it can be adjusted to about 80% of the maximum parallax percentage of the viewing device. It should be noted that in the above embodiment, the maximum and minimum parallax percentages are obtained in order to characterize the parallax of the image or video. In addition, because the image or video frame is in a side-by-side (left-right) format, half of its width is used when computing the ratios; for example, if the width of the image or video frame is W, then PRMAX1 = PMAX1/(W/2) and PRMIN1 = PMIN1/(W/2).
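A minimal sketch of the percentage calculation described above, assuming the matched x-coordinates of the reference objects have already been obtained manually or by image matching; the function and parameter names are illustrative only.

```python
def parallax_percentages(left_xs, right_xs, frame_width):
    """Compute PMAX1, PMIN1 and the percentages PRMAX1, PRMIN1.

    left_xs / right_xs -- x-coordinates (PL.x, PR.x) of the same objects
                          measured in the left and right images
    frame_width        -- width W of the side-by-side frame; half of it
                          is used as the denominator, PR = P / (W / 2)."""
    parallaxes = [pl_x - pr_x for pl_x, pr_x in zip(left_xs, right_xs)]
    pmax1, pmin1 = max(parallaxes), min(parallaxes)
    half_width = frame_width / 2.0
    return pmax1, pmin1, pmax1 / half_width, pmin1 / half_width
```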
During initialization, a partial region is taken from each of the first image 1 and the second image 2 to serve as the first framing area 11 and the second framing area 21. Let the maximum parallax percentage of the first image 1 and the second image 2 be PRMAX1 and let the width of each be W; the widths of the extracted first framing area 11 and second framing area 21 are both w. Then, when the parallax percentage is PRMAX1, the parallax width of the first framing area 11 and the second framing area 21 is X1 = PRMAX1 * w. When the parallax percentage is 80% * PRMAX0, the parallax width of the first framing area 11 and the second framing area 21 is X0 = 80% * PRMAX0 * w. It follows that, to place the initial positions of the first framing area 11 and the second framing area 21 in a suitable range, each of them needs to be moved by X = (X0 - X1)/2, where "*" denotes multiplication.
During the adjustment, when X > 0, the parallax in this case is relatively small and needs to be increased, so the positions of the first framing area and the second framing area each need to be translated toward the middle line by a distance of X. As shown in FIG. 3, for the case X > 0, the dash-dotted lines indicate the original positions of the first framing area and the second framing area, and the solid lines indicate their positions after the initial setting.
In the other case, when X < 0, the parallax is relatively large and needs to be reduced, so the first framing area and the second framing area need to move away from the center line by a distance of X. As shown in FIG. 4, for the case X < 0, the dash-dotted lines indicate the original positions of the first framing area and the second framing area, and the solid lines indicate their positions after the initial setting.
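The initialization shift can be summarized in a short sketch; the sign convention (positive X meaning a move toward the middle line) follows the description above, while the names are illustrative.

```python
def initial_framing_shift(prmax1, prmax0, w, target_ratio=0.8):
    """Return the shift X applied to each framing area at initialization.

    prmax1 -- maximum parallax percentage of the panoramic content
    prmax0 -- maximum parallax percentage the viewing device can tolerate
    w      -- width of each framing area

    X > 0: parallax is too small, move both framing areas toward the
    middle line by X; X < 0: parallax is too large, move both framing
    areas away from the center line."""
    x1 = prmax1 * w                  # parallax width at percentage PRMAX1
    x0 = target_ratio * prmax0 * w   # target parallax width (80% * PRMAX0)
    return (x0 - x1) / 2.0
```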
Step S300, the framing area adjustment step.
In the framing area adjustment step, in response to a trigger instruction from the user, the position of the first framing area is adjusted within the range of the first image according to the trigger instruction, while the position of the second framing area is adjusted within the range of the second image according to the trigger instruction. The main purpose of this step is to change the positions of the framing areas in accordance with changes in the user's viewing state or various other instructions from the user, thereby achieving dynamic changes of the displayed image and adjustment of the stereoscopic effect.
The trigger instruction may include, but is not limited to, a change in the coordinates of the user's head, a change in the position of the user's eyeballs, a voice instruction from the user, an action instruction from the user, and the like.
In a preferred embodiment, the trigger instruction may be coordinate change data of the user's head. Specifically, the coordinate change data of the user's head shown in FIG. 5 is received, including the amounts of coordinate change in the three directions shown. Specifically, pitch can be used to characterize the amount of rotation around the X axis (pitch head movement), i.e. the pitch angle; yaw can be used to characterize the amount of rotation around the Y axis (yaw head movement), i.e. the yaw angle; and roll can be used to characterize the amount of rotation around the Z axis (roll head movement), i.e. the roll angle. The position of the first framing area is adjusted within the range of the first image according to the coordinate change data of the user's head, while the position of the second framing area is adjusted within the range of the second image according to the coordinate change data of the user's head.
In a specific implementation, the coordinate change data can be obtained by providing a corresponding sensing device; for example, the coordinate change data of the user's head can be obtained by detecting the state data of a gyroscope. As shown in FIG. 5, the coordinate values corresponding to the initial position of the user's head can be set as reference coordinate values, and the corresponding reference coordinate values of the gyroscope are recorded as pitch0, yaw0 and roll0. When the user's head rotates, the real-time gyroscope coordinates pitch, yaw and roll change accordingly. As the pitch, yaw and roll values change, the framing areas change with them, so that the image seen by the user changes in real time with the position of the head, giving a feeling of 360-degree viewing.
Taking the change of the yaw value as an example, when the yaw value in the user's head coordinate data changes, let the initial coordinate value be yaw0 and the changed real-time coordinate value be yaw; the amount by which the framing area should move at this time is YAW = yaw - yaw0. When YAW = 180 degrees, the framing area reaches the far left. When YAW = -180 degrees, the framing area reaches the far right. For other angles by which the framing area should move, the specific position data can be calculated by linear interpolation and the framing area moved accordingly.
Similarly, taking the change of the pitch value as an example, let the initial coordinate value be pitch0 and the changed real-time coordinate value be pitch; the amount by which the framing area should move at this time is PITCH = pitch - pitch0. When PITCH = 180 degrees, the framing area reaches the top. When PITCH = -180 degrees, the framing area reaches the bottom. For other angles by which the framing area should move, the specific position data can be calculated by linear interpolation and the framing area moved accordingly.
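A sketch of this head-tracking mapping follows. The linear interpolation between the extreme positions is taken directly from the description; the choice of a top-left pixel origin and the clamping to ±180 degrees are assumptions made for the example.

```python
def framing_position(yaw, yaw0, pitch, pitch0,
                     image_w, image_h, window_w, window_h):
    """Map head-rotation deltas to a framing-area position.

    YAW = yaw - yaw0:       +180 deg -> far left,  -180 deg -> far right.
    PITCH = pitch - pitch0: +180 deg -> top,       -180 deg -> bottom.
    Intermediate angles are linearly interpolated."""
    d_yaw = max(-180.0, min(180.0, yaw - yaw0))
    d_pitch = max(-180.0, min(180.0, pitch - pitch0))
    max_x = image_w - window_w
    max_y = image_h - window_h
    # fraction 1.0 -> far left / top, 0.0 -> far right / bottom
    fx = (d_yaw + 180.0) / 360.0
    fy = (d_pitch + 180.0) / 360.0
    x = max_x * (1.0 - fx)   # x measured from the left edge of the image
    y = max_y * (1.0 - fy)   # y measured from the top edge of the image
    return x, y
```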
In a specific implementation, as shown in FIG. 5, the change of the roll value reflects the overall head movement and may be left unprocessed for display purposes. It should be understood that this is only one embodiment; in a specific implementation, the positions of the framing areas may also be adjusted according to the change of the roll value. It should be noted that pitch0, yaw0, roll0, pitch, yaw, roll, PITCH, YAW and the like mentioned in this text can be understood as variable symbols.
In another preferred embodiment, the trigger instruction may be a voice instruction or an action instruction from the user. A voice instruction or an action instruction from the user is received, and the position of the first framing area is adjusted within the range of the first image according to the voice instruction or action instruction, while the position of the second framing area is adjusted within the range of the second image according to the voice instruction or action instruction. In this embodiment, the adjustment of the stereoscopic effect is achieved by adjusting the parallax value.
Specifically, when the stereoscopic effect needs to be adjusted, a signal carrying a stereoscopic adjustment instruction is obtained from the user; for example, a voice instruction can be recognized by voice recognition technology, or an action instruction of the user can be recognized by a sensor. Taking a voice instruction as an example, when a "convex" instruction issued by the user is recognized, the positions of the framing areas are adjusted accordingly so that the displayed scene bulges outward; when a "concave" instruction issued by the user is recognized, the positions of the framing areas are adjusted accordingly so that the displayed scene recedes inward.
Specifically, this can be implemented through the following process. When a "convex" signal is received, the framing region of the left image is translated rightward by M pixels, while the framing region of the right image is moved leftward by M pixels. When N such signals have been received, the regions have moved by N*M pixels; once N*M exceeds X0/2, the regions no longer move even if further signals are received. Likewise, when a "concave" signal is received, the framing region of the left image is translated leftward by M pixels, while the framing region of the right image is moved rightward by M pixels. When N such signals have been received, the regions have moved by N*M pixels; once N*M exceeds X0/2, the regions no longer move even if further signals are received. It should be understood that the "convex" and "concave" signals mentioned above are only specific examples of voice instructions and may be changed accordingly in a specific implementation.
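A simplified sketch of this signal handling is shown below. It accumulates a single signed shift for the left framing region (the right framing region mirrors it) and stops moving once the accumulated magnitude would exceed X0/2; the class and attribute names are illustrative.

```python
class ConvexConcaveAdjuster:
    """Accumulate framing-area shifts driven by "convex"/"concave" signals.

    Each "convex" signal moves the left framing region right by step_m
    pixels and the right framing region left by step_m pixels; "concave"
    does the opposite.  Movement stops once the accumulated shift would
    exceed x0 / 2."""

    def __init__(self, step_m, x0):
        self.step_m = step_m
        self.x0 = x0
        self.left_shift = 0  # shift of the left framing region (+ = rightward)

    def on_signal(self, signal):
        delta = self.step_m if signal == "convex" else -self.step_m
        candidate = self.left_shift + delta
        if abs(candidate) > self.x0 / 2:
            return self.left_shift, -self.left_shift  # cap reached, no move
        self.left_shift = candidate
        # the right framing region always moves by the opposite amount
        return self.left_shift, -self.left_shift
```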
Step S400, the image playback step.
In the image playback step, the image corresponding to the adjusted first framing area is sent to the first screen for playback, and the image corresponding to the adjusted second framing area is sent to the second screen for playback. As described above, in a specific implementation the positions of the first framing area and the second framing area are adjusted and changed in real time; correspondingly, in the image playback step, the images displayed on the first screen and the second screen are also adjusted and changed in real time.
In a preferred embodiment, the image playback method further includes, between the image acquisition step and the framing area initialization step: Step S500, an image preprocessing step.
The image preprocessing step is mainly used to adjust parameters of the first image and the second image so as to reduce the color difference between the first image and the second image. For example, parameters such as the color, brightness and color saturation of the first image and the second image can be adjusted in the image preprocessing step so that the colors of the first image and the second image are as close as possible.
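The disclosure does not prescribe a particular preprocessing algorithm; as one possible approach, the sketch below matches the per-channel mean and standard deviation of the second image to those of the first image to reduce the color difference (all names are illustrative).

```python
import numpy as np

def reduce_color_gap(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    """Return a copy of second_image whose per-channel brightness and
    contrast statistics are matched to first_image."""
    ref = first_image.astype(np.float32)
    src = second_image.astype(np.float32)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean
    return np.clip(out, 0, 255).astype(second_image.dtype)
```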
The image playback method for a virtual display device provided by the present invention can play 360-degree stereoscopic video or images, can automatically set a good initial parallax percentage, and can also adjust the playback parallax percentage in real time according to relevant instructions, so that the user obtains an excellent visual experience.
Correspondingly, the present invention also provides an image playback apparatus for a virtual display device. The virtual display device plays an image to the left eye of the user through a first screen, and plays an image to the right eye of the user through a second screen. FIG. 6 shows a preferred embodiment of the image playback apparatus of the virtual display device of the present invention. As shown in FIG. 6, the image playback apparatus for a virtual display device provided by the present invention includes an image acquisition module 10, a framing area initialization module 20, a framing area adjustment module 30 and an image playback module 40, and preferably further includes an image preprocessing module 50.
The image acquisition module 10 is configured to acquire a panoramic image in the virtual display device, the panoramic image including a first image and a second image. The framing area initialization module 20 is configured to set a first framing area on the first image and a second framing area on the second image. The framing area adjustment module 30 is configured to, in response to a trigger instruction from the user, adjust the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction. The image playback module 40 is configured to send the image corresponding to the adjusted first framing area to the first screen for playback, and send the image corresponding to the adjusted second framing area to the second screen for playback. The image preprocessing module 50 is configured to adjust, after the panoramic image is acquired, parameters of the first image and the second image so as to reduce the color difference between the first image and the second image.
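For orientation only, the following structural sketch shows one way the modules described above could be wired together; the class and method names are assumptions and are not prescribed by the disclosure.

```python
class ImagePlaybackDevice:
    """Composes the modules of the image playback apparatus: acquisition
    (module 10), optional preprocessing (module 50), framing area
    initialization (module 20), framing area adjustment (module 30) and
    image playback (module 40)."""

    def __init__(self, acquisition, initializer, adjuster, player, preprocessor=None):
        self.adjuster = adjuster
        self.player = player
        # acquire the panoramic image and split it into the two views
        self.first, self.second = acquisition.get_panorama()
        if preprocessor is not None:
            self.first, self.second = preprocessor.reduce_color_gap(self.first, self.second)
        # set the initial framing areas
        self.area1, self.area2 = initializer.init_framing_areas(self.first, self.second)

    def on_trigger(self, trigger):
        """Adjust both framing areas for a trigger instruction and play."""
        self.area1, self.area2 = self.adjuster.adjust(self.area1, self.area2, trigger)
        self.player.play(self.first, self.area1, self.second, self.area2)
```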
In the image playback apparatus of the virtual display device described above, preferably, the framing area initialization module further includes: a parallax percentage range value calculation sub-module, a parallax percentage range threshold setting sub-module, and a framing area setting sub-module.
The parallax percentage range value calculation sub-module is configured to calculate a parallax percentage range value of the first image and the second image; the parallax percentage range threshold setting sub-module is configured to set a parallax percentage range threshold, the parallax percentage range threshold being the range of parallax percentages that the user can tolerate when viewing an image; the framing area setting sub-module is configured to set the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
In the image playback apparatus of the virtual display device described above, preferably, the framing area adjustment module includes: a data receiving sub-module and a first framing area adjustment sub-module.
The data receiving sub-module is configured to receive coordinate change data of the user's head; the first framing area adjustment sub-module is configured to adjust the position of the first framing area within the range of the first image according to the coordinate change data of the user's head, while adjusting the position of the second framing area within the range of the second image according to the coordinate change data of the user's head.
In the image playback apparatus of the virtual display device described above, preferably, the framing area adjustment module includes: an instruction receiving sub-module and a second framing area adjustment sub-module.
The instruction receiving sub-module is configured to receive a voice instruction or an action instruction from the user; the second framing area adjustment sub-module is configured to adjust the position of the first framing area within the range of the first image according to the voice instruction or action instruction, while adjusting the position of the second framing area within the range of the second image according to the voice instruction or action instruction.
With the image playback method and apparatus for a virtual display device provided by the present invention, 360-degree stereoscopic video or image playback is achieved through the steps of image acquisition, framing area initialization, framing area adjustment and image playback. At the same time, the image playback method and apparatus provided by the present invention can automatically set a good initial parallax and can also adjust the playback parallax in real time according to relevant instructions, so that the user obtains an excellent visual experience.
An embodiment of the present invention provides an image playback apparatus for a virtual display device. The virtual display device plays an image to the left eye of the user through a first screen, and plays an image to the right eye of the user through a second screen. The apparatus includes: one or more processors;
a memory;
and one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing:
Step S100, an image acquisition step: acquiring a panoramic image in the virtual display device, the panoramic image including a first image and a second image; Step S200, a framing area initialization step: setting a first framing area on the first image and a second framing area on the second image; Step S300, a framing area adjustment step: in response to a trigger instruction from the user, adjusting the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction; Step S400, an image playback step: sending the image corresponding to the adjusted first framing area to the first screen for playback, and sending the image corresponding to the adjusted second framing area to the second screen for playback.
For details not described in the embodiments of the present invention, please refer to the descriptions of the foregoing embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

  1. An image playback method for a virtual display device, the virtual display device playing an image to the left eye of a user through a first screen and playing an image to the right eye of the user through a second screen, characterized in that the image playback method comprises the following steps:
    Step S100, an image acquisition step: acquiring a panoramic image in the virtual display device, the panoramic image comprising a first image and a second image;
    Step S200, a framing area initialization step: setting a first framing area on the first image and setting a second framing area on the second image;
    Step S300, a framing area adjustment step: in response to a trigger instruction from the user, adjusting the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction;
    Step S400, an image playback step: sending the image corresponding to the adjusted first framing area to the first screen for playback, and sending the image corresponding to the adjusted second framing area to the second screen for playback.
  2. The image playback method for a virtual display device according to claim 1, characterized in that the framing area initialization step comprises:
    Step S210, calculating a parallax percentage range value of the first image and the second image;
    Step S220, setting a parallax percentage range threshold, the parallax percentage range threshold being the range of parallax percentages that the user can tolerate when viewing an image;
    Step S230, setting the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
  3. The image playback method for a virtual display device according to claim 1, characterized in that the framing area adjustment step comprises:
    Step S311, receiving coordinate change data of the user's head;
    Step S312, adjusting the position of the first framing area within the range of the first image according to the coordinate change data of the user's head, while adjusting the position of the second framing area within the range of the second image according to the coordinate change data of the user's head.
  4. The image playback method for a virtual display device according to claim 1, characterized in that the framing area adjustment step comprises:
    Step S321, receiving a voice instruction or an action instruction from the user;
    Step S322, adjusting the position of the first framing area within the range of the first image according to the voice instruction or action instruction, while adjusting the position of the second framing area within the range of the second image according to the voice instruction or action instruction.
  5. The image playback method for a virtual display device according to any one of claims 1 to 4, characterized in that the image playback method further comprises, between the image acquisition step and the framing area initialization step:
    Step S500, an image preprocessing step: adjusting parameters of the first image and the second image to reduce the color difference between the first image and the second image.
  6. An image playback apparatus for a virtual display device, the virtual display device playing an image to the left eye of a user through a first screen and playing an image to the right eye of the user through a second screen, characterized in that the image playback apparatus comprises:
    an image acquisition module, configured to acquire a panoramic image in the virtual display device, the panoramic image comprising a first image and a second image;
    a framing area initialization module, configured to set a first framing area on the first image and set a second framing area on the second image;
    a framing area adjustment module, configured to, in response to a trigger instruction from the user, adjust the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction;
    an image playback module, configured to send the image corresponding to the adjusted first framing area to the first screen for playback, and send the image corresponding to the adjusted second framing area to the second screen for playback.
  7. The image playback apparatus for a virtual display device according to claim 6, characterized in that the framing area initialization module comprises:
    a parallax percentage range value calculation sub-module, configured to calculate a parallax percentage range value of the first image and the second image;
    a parallax percentage range threshold setting sub-module, configured to set a parallax percentage range threshold, the parallax percentage range threshold being the range of parallax percentages that the user can tolerate when viewing an image;
    a framing area setting sub-module, configured to set the first framing area on the first image according to the parallax percentage range value and the parallax percentage range threshold, while setting the second framing area on the second image according to the parallax percentage range value and the parallax percentage range threshold.
  8. The image playback apparatus for a virtual display device according to claim 6, characterized in that the framing area adjustment module comprises:
    a data receiving sub-module, configured to receive coordinate change data of the user's head;
    a first framing area adjustment sub-module, configured to adjust the position of the first framing area within the range of the first image according to the coordinate change data of the user's head, while adjusting the position of the second framing area within the range of the second image according to the coordinate change data of the user's head.
  9. The image playback apparatus for a virtual display device according to claim 6, characterized in that the framing area adjustment module comprises:
    an instruction receiving sub-module, configured to receive a voice instruction or an action instruction from the user;
    a second framing area adjustment sub-module, configured to adjust the position of the first framing area within the range of the first image according to the voice instruction or action instruction, while adjusting the position of the second framing area within the range of the second image according to the voice instruction or action instruction.
  10. The image playback apparatus for a virtual display device according to any one of claims 6 to 9, characterized in that the image playback apparatus further comprises:
    an image preprocessing module, configured to adjust parameters of the first image and the second image to reduce the color difference between the first image and the second image.
  11. An image playback apparatus for a virtual display device, the virtual display device playing an image to the left eye of a user through a first screen and playing an image to the right eye of the user through a second screen, characterized in that the apparatus comprises: one or more processors;
    a memory;
    one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing:
    Step S100, an image acquisition step: acquiring a panoramic image in the virtual display device, the panoramic image comprising a first image and a second image;
    Step S200, a framing area initialization step: setting a first framing area on the first image and setting a second framing area on the second image;
    Step S300, a framing area adjustment step: in response to a trigger instruction from the user, adjusting the position of the first framing area within the range of the first image according to the trigger instruction, while adjusting the position of the second framing area within the range of the second image according to the trigger instruction;
    Step S400, an image playback step: sending the image corresponding to the adjusted first framing area to the first screen for playback, and sending the image corresponding to the adjusted second framing area to the second screen for playback.
PCT/CN2016/088677 2015-12-21 2016-07-05 虚拟显示设备的影像播放方法和装置 WO2017107444A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/237,671 US20170176934A1 (en) 2015-12-21 2016-08-16 Image playing method and electronic device for virtual reality device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510966519.X 2015-12-21
CN201510966519.XA CN105898285A (zh) 2015-12-21 2015-12-21 虚拟显示设备的影像播放方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/237,671 Continuation US20170176934A1 (en) 2015-12-21 2016-08-16 Image playing method and electronic device for virtual reality device

Publications (1)

Publication Number Publication Date
WO2017107444A1 true WO2017107444A1 (zh) 2017-06-29

Family

ID=57002469

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088677 WO2017107444A1 (zh) 2015-12-21 2016-07-05 虚拟显示设备的影像播放方法和装置

Country Status (2)

Country Link
CN (1) CN105898285A (zh)
WO (1) WO2017107444A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988534B (zh) * 2020-07-23 2021-08-20 首都医科大学附属北京朝阳医院 一种基于多摄像头的画面拼接方法和装置

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5486841A (en) * 1992-06-17 1996-01-23 Sony Corporation Glasses type display apparatus
US5905525A (en) * 1995-07-13 1999-05-18 Minolta Co., Ltd. Image display apparatus having a display controlled by user's head movement
CN101253778A (zh) * 2005-09-29 2008-08-27 株式会社东芝 三维图像显示设备、三维图像显示方法、及用于三维图像显示的计算机程序产品
CN101594549A (zh) * 2009-06-22 2009-12-02 华东师范大学 一种可裸眼观看的立体显示器
CN102497563A (zh) * 2011-12-02 2012-06-13 深圳超多维光电子有限公司 跟踪式裸眼立体显示控制方法、显示控制装置和显示系统
CN102540464A (zh) * 2010-11-18 2012-07-04 微软公司 提供环绕视频的头戴式显示设备
CN103416072A (zh) * 2011-03-06 2013-11-27 索尼公司 显示系统、显示设备、以及中继设备
CN104253989A (zh) * 2014-06-09 2014-12-31 黄石 全视角图像显示装置
CN204291252U (zh) * 2014-11-14 2015-04-22 西安中科微光医疗技术有限公司 一种基于虚拟现实头盔的全景地图显示系统
CN104781873A (zh) * 2012-11-13 2015-07-15 索尼公司 图像显示装置、图像显示方法、移动装置、图像显示系统、以及计算机程序
CN104867175A (zh) * 2015-06-02 2015-08-26 孟君乐 一种虚拟效果图实景展示装置及其实现方法
CN104883561A (zh) * 2015-06-06 2015-09-02 深圳市虚拟现实科技有限公司 三维全景显示方法和头戴式显示设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102033414B (zh) * 2009-09-27 2012-06-20 深圳市掌网立体时代视讯技术有限公司 一种立体数码成像会聚装置及方法
CN103543831A (zh) * 2013-10-25 2014-01-29 梁权富 头戴式全景播放装置
JP6353214B2 (ja) * 2013-11-11 2018-07-04 株式会社ソニー・インタラクティブエンタテインメント 画像生成装置および画像生成方法

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5486841A (en) * 1992-06-17 1996-01-23 Sony Corporation Glasses type display apparatus
US5905525A (en) * 1995-07-13 1999-05-18 Minolta Co., Ltd. Image display apparatus having a display controlled by user's head movement
CN101253778A (zh) * 2005-09-29 2008-08-27 株式会社东芝 三维图像显示设备、三维图像显示方法、及用于三维图像显示的计算机程序产品
CN101594549A (zh) * 2009-06-22 2009-12-02 华东师范大学 一种可裸眼观看的立体显示器
CN102540464A (zh) * 2010-11-18 2012-07-04 微软公司 提供环绕视频的头戴式显示设备
CN103416072A (zh) * 2011-03-06 2013-11-27 索尼公司 显示系统、显示设备、以及中继设备
CN102497563A (zh) * 2011-12-02 2012-06-13 深圳超多维光电子有限公司 跟踪式裸眼立体显示控制方法、显示控制装置和显示系统
CN104781873A (zh) * 2012-11-13 2015-07-15 索尼公司 图像显示装置、图像显示方法、移动装置、图像显示系统、以及计算机程序
CN104253989A (zh) * 2014-06-09 2014-12-31 黄石 全视角图像显示装置
CN204291252U (zh) * 2014-11-14 2015-04-22 西安中科微光医疗技术有限公司 一种基于虚拟现实头盔的全景地图显示系统
CN104867175A (zh) * 2015-06-02 2015-08-26 孟君乐 一种虚拟效果图实景展示装置及其实现方法
CN104883561A (zh) * 2015-06-06 2015-09-02 深圳市虚拟现实科技有限公司 三维全景显示方法和头戴式显示设备

Also Published As

Publication number Publication date
CN105898285A (zh) 2016-08-24

Similar Documents

Publication Publication Date Title
TWI503786B (zh) 用於生成全景視頻的移動設備和系統
CN106165415B (zh) 立体观看
WO2017215295A1 (zh) 一种摄像机参数调整方法、导播摄像机及系统
US10966017B2 (en) Microphone pattern based on selected image of dual lens image capture device
US20070182812A1 (en) Panoramic image-based virtual reality/telepresence audio-visual system and method
EP2046032A1 (en) A method and an apparatus for obtaining acoustic source location information and a multimedia communication system
CN103760980A (zh) 根据双眼位置进行动态调整的显示方法、系统及显示设备
WO2019082794A1 (ja) 画像生成装置、画像生成システム、画像生成方法、およびプログラム
EP2352290A1 (en) Method and apparatus for matching audio and video signals during a videoconference
US11871111B2 (en) Method and apparatus for active reduction of mechanically coupled vibration in microphone signals
JP7134060B2 (ja) 画像生成装置および画像生成方法
US20230328432A1 (en) Method and apparatus for dynamic reduction of camera body acoustic shadowing in wind noise processing
WO2018161817A1 (zh) 一种存储介质、在虚拟现实场景中模拟摄影的方法及系统
US20230262385A1 (en) Beamforming for wind noise optimized microphone placements
WO2017107444A1 (zh) 虚拟显示设备的影像播放方法和装置
WO2018018357A1 (zh) Vr图像拍摄装置及其基于移动终端的vr图像拍摄系统
JPH10322725A (ja) 立体撮影像位置決め装置
JP5899918B2 (ja) 画像処理装置、画像処理方法
JP2019074881A (ja) 画像生成装置および画像生成方法
EP4175291A1 (en) Automatically determining the proper framing and spacing for a moving presenter
JP2005110160A (ja) 撮像装置
RU2782312C1 (ru) Способ обработки изображения и устройство отображения, устанавливаемое на голове
US11937057B2 (en) Face detection guided sound source localization pan angle post processing for smart camera talker tracking and framing
JP2004159061A (ja) 撮像機能付き画像表示装置
CN117676097B (zh) 基于虚拟isp的三目摄像头拼接显示装置及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16877256

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16877256

Country of ref document: EP

Kind code of ref document: A1