WO2018107781A1 - Virtual reality implementation method and system - Google Patents

Virtual reality implementation method and system

Info

Publication number
WO2018107781A1
Authority
WO
WIPO (PCT)
Prior art keywords
glasses
frame
virtual reality
information
viewing angle
Prior art date
Application number
PCT/CN2017/096494
Other languages
English (en)
French (fr)
Inventor
胡庆荣
Original Assignee
威创集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 威创集团股份有限公司
Publication of WO2018107781A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 - Image reproducers
    • H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H04N 13/167 - Synchronising or controlling image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/194 - Transmission of image signals
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G02B 2027/0178 - Eyeglass type

Definitions

  • The present invention relates to the field of 3D display technologies, and in particular to a virtual reality implementation method and system.
  • Virtual reality technology is a computer simulation system that can create and experience virtual worlds. It uses a computer to generate a simulated environment and immerses users in that environment through interactive three-dimensional dynamic vision and physical-behavior simulation based on multi-source information fusion.
  • In the prior art, large-screen virtual reality is generally implemented by positioning the viewing angle of a pair of 3D glasses and positioning an operating device. In cooperation with a virtual reality server, the angle of the displayed scene changes with the viewing angle of the 3D glasses, and parts of the scene change with the actions of the operating device, achieving first-person immersive virtual reality.
  • However, this method of implementing large-screen virtual reality provides only a single first-person perspective: other users wearing 3D glasses can only passively follow that perspective and cannot actively interact with the large-screen virtual reality, which is a significant limitation.
  • To this end, the present invention provides a virtual reality implementation method comprising the following steps: acquiring viewing-angle information of each pair of 3D glasses in real time; outputting the left-eye frame and right-eye frame of each pair of 3D glasses according to the viewing-angle information, a preset frame output order, and a field sync pulse signal; and, according to the preset frame output order and the field sync pulse signal, sending to the corresponding 3D glasses a switch synchronization command that controls the left and right lenses of those glasses.
  • The present invention also provides a virtual reality implementation system comprising: an acquiring device for acquiring the viewing-angle information of each pair of 3D glasses in real time; and an output device that outputs the left-eye frame and right-eye frame of each pair of 3D glasses according to the viewing-angle information, a preset frame output order, and a field sync pulse signal, and that sends a switch synchronization command to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal.
  • The virtual reality implementation method and system allocate the frames output by the virtual reality display system to each pair of 3D glasses in a preset frame output order, so that the left-eye frame and right-eye frame of each pair are output alternately under the control of the field sync pulse signal. Viewing-angle information of each pair of 3D glasses is acquired in real time, and the left-eye and right-eye frames of each pair are output according to the acquired viewing-angle information, the preset frame output order, and the field sync pulse signal. When the viewing angle of any pair of 3D glasses changes, the method and system adjust the output of the corresponding left-eye and right-eye frames accordingly, so that multiple people can each interact in the first person in front of the virtual reality screen, further enhancing the virtual reality experience.
  • FIG. 1 is a schematic flow chart of a virtual reality implementation method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of synchronization of a virtual reality implementation method in one embodiment of the present invention.
  • FIG. 3 is a schematic diagram of synchronization of a virtual reality implementation method according to another embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a virtual reality implementation system according to an embodiment of the present invention.
  • The present invention provides a virtual reality implementation method comprising the following steps:
  • Step S100: acquiring viewing-angle information of each pair of 3D glasses in real time.
  • Step S200: outputting the left-eye frame and right-eye frame of each pair of 3D glasses according to the viewing-angle information, a preset frame output order, and a field sync pulse signal, and, according to the preset frame output order and the field sync pulse signal, transmitting to the corresponding 3D glasses a switch synchronization command that controls the switching of the left and right lenses of those glasses.
  • First, the viewing-angle information of each pair of 3D glasses is acquired. The viewing angle of a pair of 3D glasses can be determined by fixing reflective balls on the glasses and capturing them with a camera fixed on the virtual reality display system; from these captures, the viewing-angle information of the 3D glasses is obtained.
  • Then, the left-eye and right-eye frames of each pair of 3D glasses are output according to the obtained viewing-angle information, the preset frame output order, and the field sync pulse signal, and a switch synchronization command is sent to the corresponding 3D glasses according to that order and signal.
  • The switch synchronization command controls the switching of the left and right lenses of the corresponding 3D glasses. For example, it can be sent through an infrared or wireless transmitter, so that the left and right lenses of a pair of 3D glasses are open only while the virtual reality display system is displaying that pair's left-eye or right-eye frame, and remain closed while the left-eye and right-eye frames of other 3D glasses are displayed.
  • In this way, each pair of 3D glasses views a continuous picture from a first-person perspective. The field period of the field sync pulse signal (equivalently, the frame rate of the virtual reality display system) can be calculated from the persistence time of human vision and the number of 3D glasses in use. Within the response-time range of the 3D glasses, the higher the frame rate, the smoother the picture seen through each first-person pair and the weaker the flicker.
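The relationship between field period, frame rate, and the number of glasses can be made concrete with a small calculation (a sketch only; the 60 Hz per-eye refresh rate is an assumed conventional value, not specified by the patent):

```python
def required_frame_rate(num_glasses, per_eye_hz=60):
    """Total display frame rate needed so every pair of glasses still
    receives per_eye_hz left-eye frames and per_eye_hz right-eye frames:
    2 eyes x num_glasses x per_eye_hz."""
    return 2 * num_glasses * per_eye_hz

def field_period_ms(num_glasses, per_eye_hz=60):
    """Field-sync period: one frame slot per field pulse."""
    return 1000.0 / required_frame_rate(num_glasses, per_eye_hz)

print(required_frame_rate(3))        # prints 360 (three pairs of glasses)
print(round(field_period_ms(3), 2))  # prints 2.78 (ms per frame slot)
```

Adding a pair of glasses shrinks each pair's share of the display's refresh budget, which is why the achievable number of first-person viewers is tied to the display frame rate and the glasses' response time.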
  • The virtual reality implementation method provided in this embodiment allocates the frames output by the virtual reality display system to each pair of 3D glasses in a preset frame output order, so that the left-eye and right-eye frames of each pair are output alternately under the control of the field sync pulse signal; viewing-angle information of each pair is acquired in real time, and the frames are output according to the acquired viewing-angle information, the preset frame output order, and the field sync pulse signal. When the viewing angle of any pair of 3D glasses changes, the method adjusts the corresponding left-eye and right-eye frames accordingly, so that multiple people can each interact in the first person in front of the virtual reality screen, further enhancing the virtual reality experience.
  • The preset frame output order of the present invention may output the left-eye and right-eye frames of each pair of 3D glasses in turn, so that each pair's left-eye and right-eye frames are adjacent in the time series. Alternatively, the preset frame output order may first output one left-eye frame from the frame sequence of each pair of 3D glasses in turn, and then output one right-eye frame from the frame sequence of each pair in turn (or output the right-eye frames first and then the left-eye frames).
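The two alternative orderings can be sketched as list generators (an illustrative sketch; the function names and the (glasses, eye) tuple representation are invented for this example):

```python
def interleaved_order(glasses):
    """Ordering 1: each pair's left-eye and right-eye frames are adjacent."""
    return [(g, eye) for g in glasses for eye in ("L", "R")]

def grouped_order(glasses):
    """Ordering 2: one left-eye frame of every pair first, then one
    right-eye frame of every pair (or vice versa)."""
    return [(g, "L") for g in glasses] + [(g, "R") for g in glasses]

print(interleaved_order(["A", "B", "C"]))
# prints [('A', 'L'), ('A', 'R'), ('B', 'L'), ('B', 'R'), ('C', 'L'), ('C', 'R')]
print(grouped_order(["A", "B", "C"]))
# prints [('A', 'L'), ('B', 'L'), ('C', 'L'), ('A', 'R'), ('B', 'R'), ('C', 'R')]
```

Either sequence repeats every 2N frame periods for N pairs of glasses; only the position of each pair's two frames within the cycle differs.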
  • The preset frame output order of this embodiment is described in detail below using three pairs of 3D glasses (referred to as 3D glasses A, 3D glasses B, and 3D glasses C) as an example; the implementation is shown in FIG. 2.
  • The method uses the field sync pulse signal to synchronize frame output with the lens switching of the 3D glasses. The preset frame output order is: 3D glasses A left-eye frame, 3D glasses B left-eye frame, 3D glasses C left-eye frame, 3D glasses A right-eye frame, 3D glasses B right-eye frame, 3D glasses C right-eye frame. That is, one left-eye frame from the frame sequence of each of 3D glasses A, B, and C is output in turn, followed by one right-eye frame from each, forming one cycle every six frame periods. The left-eye and right-eye frames of each pair are output in this order while, in synchronization with the field sync pulse signal, a switch synchronization command is sent to 3D glasses A, 3D glasses B, and 3D glasses C to control each pair's left and right lens switches, as shown in FIG. 2.
  • When the left-eye frame of 3D glasses A is output under the control of the field sync pulse signal, the switch synchronization command opens the left lens of 3D glasses A while its right lens remains closed, and both lenses of the other 3D glasses are closed. Similarly, when the right-eye frame of 3D glasses A is output, the switch synchronization command opens the right lens of 3D glasses A while its left lens remains closed, and both lenses of the other 3D glasses likewise remain closed. The lens-switching modes of 3D glasses B and 3D glasses C are similar to those of 3D glasses A and are not repeated here.
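The lens switching implied by this six-frame cycle can be simulated to check that exactly one lens is open per field pulse (a minimal sketch of the behavior described above; the data representation is invented):

```python
def shutter_states(order):
    """For each frame slot in the output order, report which single lens is
    open; both lenses of every other pair stay closed."""
    glasses = sorted({g for g, _ in order})
    states = []
    for g, eye in order:
        state = {(h, e): (h == g and e == eye)
                 for h in glasses for e in ("L", "R")}
        states.append(state)
    return states

# One cycle of this embodiment: all left-eye frames, then all right-eye frames.
cycle = [("A", "L"), ("B", "L"), ("C", "L"),
         ("A", "R"), ("B", "R"), ("C", "R")]
for slot, state in zip(cycle, shutter_states(cycle)):
    assert sum(state.values()) == 1  # exactly one lens open per slot
    assert state[slot]               # and it belongs to the displayed frame
```

Each of the six lenses is thus open for one slot in six, which is what makes the persistence-of-vision and frame-rate considerations above essential.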
  • This embodiment provides two specific implementations of the preset frame output order. With either of them, a pair of glasses is closed whenever the frame on the virtual reality display is not its own first-person frame; owing to the persistence of vision of the human eye, each first-person viewer still sees a continuous picture provided the frame rate meets the requirements. Multiple pairs of 3D glasses can thus interact from first-person perspectives simultaneously while the viewing quality of the 3D glasses is preserved.
  • In one embodiment, before the step of acquiring the viewing-angle information of each pair of 3D glasses in real time, the method further includes: detecting, in real time, the viewing-angle positioning graphic information of a plurality of reflective balls disposed on each pair of 3D glasses, this graphic information being the shape or size information of the figure formed by the reflective balls, where the number or shape of the reflective balls differs between pairs of glasses; and generating the viewing-angle information of the corresponding 3D glasses from the viewing-angle positioning graphic information.
  • Specifically, each pair of 3D glasses is provided with a plurality of reflective balls for positioning its viewing angle. Each reflective ball is fixedly connected to one end of a support rod whose other end is fixed to the frame of the glasses, with the support rods at different angles to the frame. Together the reflective balls form a figure, and either the shape of this figure or the number of reflective balls differs between pairs of glasses, which makes different pairs easy to identify.
  • The viewing-angle positioning graphic information of the reflective balls on each pair of 3D glasses is detected in real time; it may include the shape or size of the figure formed by the balls. From this graphic information, the viewing-angle information of the corresponding 3D glasses is generated, including the angle and height of the change in the glasses' viewing angle.
  • In a specific implementation, a camera fixed at the top of the virtual reality display screen captures images of each pair of 3D glasses in real time; image recognition then extracts the shape of the figure formed by each pair's reflective balls, and the change of that figure between adjacent captures is compared to calculate the angle and height changes of the glasses, finally producing the viewing-angle information of the glasses.
  • Positioning the viewing angle of the 3D glasses with reflective balls mounted on the glasses yields the viewing-angle information of each pair; the method offers high positioning accuracy, is easy to implement, and is low in cost.
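One plausible way to turn the captured marker figure into viewing-angle information is to compare the marker centroid between successive captures (an illustrative sketch only; a real system would use a calibrated camera model, and the pixels-per-degree scale here is an invented assumption):

```python
def marker_centroid(points):
    """Centroid of the reflective-ball image coordinates (pixels)."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def view_change(prev_points, curr_points, pixels_per_degree=10.0):
    """Approximate yaw and height change of one pair of glasses from the
    shift of its marker pattern between two captures."""
    px, py = marker_centroid(prev_points)
    cx, cy = marker_centroid(curr_points)
    yaw_deg = (cx - px) / pixels_per_degree  # horizontal shift -> angle
    dy = cy - py                             # vertical shift -> height cue
    return yaw_deg, dy

prev = [(100, 50), (120, 50), (110, 70)]
curr = [(130, 60), (150, 60), (140, 80)]
yaw, height = view_change(prev, curr)
print(round(yaw, 2), round(height, 2))  # prints 3.0 10.0
```

The distinct ball counts or figure shapes per pair of glasses serve only to decide which pair a detected pattern belongs to before this computation runs.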
  • In one embodiment, the method further includes: detecting the action information of the virtual reality operating device of the corresponding 3D glasses according to the switch synchronization command; and updating the left-eye and right-eye frames of the 3D glasses to which the operating device belongs according to the action information.
  • Virtual reality involves not only simulation of the environment (visual perception) but also motion perception and the like; that is, the wearer of a first-person pair of 3D glasses can change the frames on the virtual reality screen by operating a device. In this embodiment, the virtual reality display system sends the switch synchronization command to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal, and a 3D picture output server calculates and updates the left-eye and right-eye frames of the 3D glasses to which a virtual reality operating device belongs according to the detected action information; the updated frames are then output under the control of the field sync pulse signal.
  • The steps of this embodiment are described in detail below using three pairs of 3D glasses (referred to as 3D glasses A, 3D glasses B, and 3D glasses C) together with their virtual reality operating devices (referred to as virtual operating device A, virtual operating device B, and virtual operating device C) as an example, as illustrated in the corresponding figure.
  • Under the control of the switch synchronization command, each pair of 3D glasses opens its left and right lenses in turn. Taking 3D glasses A as an example: when the left lens of glasses A is open, the action information of virtual reality operating device A is detected while that of operating devices B and C is ignored; when the right lens of glasses A is open, the action information of operating device A is again detected and that of devices B and C is again ignored. Detection for the operating devices of the other 3D glasses proceeds analogously.
  • Thus, while the display system shows the frames of 3D glasses A, only the actions of operating device A take effect; the actions of operating device B of 3D glasses B and operating device C of 3D glasses C are ignored, and are executed instead when the virtual reality display system displays the left-eye or right-eye frames of 3D glasses B and 3D glasses C, respectively.
  • In this way, multiple wearers of 3D glasses can each change their own virtual reality display through their operating devices without affecting one another, greatly improving the efficiency and experience of virtual reality.
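The per-pair gating described above can be sketched as a simple filter keyed by whichever pair of glasses currently has a lens open (names and event format are invented for illustration):

```python
def gate_actions(active_glasses, device_events):
    """Keep only events from the operating device paired with the glasses
    whose lens is currently open; all other events are ignored here and
    handled in their own frame slots."""
    return [evt for dev, evt in device_events if dev == active_glasses]

events = [("A", "rotate +5"), ("B", "move left"), ("C", "zoom in")]
print(gate_actions("A", events))  # prints ['rotate +5']
```

Because the gate follows the switch synchronization command, each wearer's actions are applied exactly when that wearer's frames are on screen, which is what keeps the interactions independent.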
  • In one embodiment, detecting the action information of the virtual reality operating device of the corresponding 3D glasses according to the switch synchronization command includes: detecting, according to the command, the motion-positioning graphic information of a plurality of reflective balls disposed on the operating device, this graphic information being the shape or size of the figure formed by the balls, where the number or shape of the reflective balls differs between operating devices; and generating the action information of the operating device from the motion-positioning graphic information.
  • Specifically, the virtual reality operating device of each pair of 3D glasses is provided with a plurality of reflective balls for positioning the device's motion. Each reflective ball is fixed to one end of a support rod whose other end is fixedly connected to the operating device, with the support rods at different angles to the device. Together the reflective balls form a figure, and either the shape of this figure or the number of reflective balls differs between operating devices, making the different devices easy to identify. In particular, the motion-positioning graphic information of the reflective balls on each operating device is detected according to the switch synchronization command; that is, the operating device of a pair of 3D glasses is detected when that pair's left or right lens is open.
  • Positioning the motion of the operating device with reflective balls mounted on it yields the action information of each pair's operating device; the method offers high positioning accuracy, is easy to implement, and is low in cost.
  • The present invention also provides a virtual reality implementation system, as shown in FIG. 4. The system includes:
  • an acquiring device 100, configured to acquire the viewing-angle information of each pair of 3D glasses in real time; and
  • an output device 200, configured to output the left-eye frame and right-eye frame of each pair of 3D glasses according to the viewing-angle information, a preset frame output order, and a field sync pulse signal, and to send a switch synchronization command to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal.
  • The acquiring device 100 first acquires the viewing-angle information of each pair of 3D glasses; for example, it can determine the viewing angle of a pair of 3D glasses from reflective balls fixed on the glasses, captured by a camera fixed on the virtual reality display system. The acquiring device 100 transmits the viewing-angle information to the output device 200, which then outputs the left-eye and right-eye frames of each pair of 3D glasses according to the acquired viewing-angle information, the preset frame output order, and the field sync pulse signal, and sends a switch synchronization command to the corresponding 3D glasses according to that order and signal. The acquiring device 100 and the output device 200 may be peripherals connected to the virtual reality display system, as shown in FIG. 4, or may be integrated into it.
  • The switch synchronization command controls the switching of the left and right lenses of the corresponding 3D glasses. For example, the output device 200 can send it through an infrared or wireless transmitter, so that the left and right lenses of a pair of 3D glasses are open only while the virtual reality display system displays that pair's left-eye or right-eye frame, and remain closed while the frames of other 3D glasses are displayed.
  • In this way, each pair of 3D glasses views a continuous picture from a first-person perspective. The field period of the field sync pulse signal (equivalently, the frame rate of the virtual reality display system) can be calculated from the persistence time of human vision and the number of 3D glasses in use; within the response-time range of the 3D glasses, the higher the frame rate, the smoother the picture seen through each first-person pair and the weaker the flicker.
  • The virtual reality implementation system of this embodiment allocates the frames of the virtual reality display system to each pair of 3D glasses in a preset frame output order, so that the left-eye and right-eye frames of each pair are output alternately under the control of the field sync pulse signal; viewing-angle information of each pair is acquired in real time, and the frames are output according to the acquired viewing-angle information, the preset frame output order, and the field sync pulse signal. When the viewing angle of any pair of 3D glasses changes, the system adjusts the corresponding left-eye and right-eye frames accordingly, so that multiple people can each interact in the first person in front of the virtual reality screen, further improving the virtual reality experience.
  • The preset frame output order of the present invention may output the left-eye and right-eye frames of each pair of 3D glasses in turn, so that each pair's left-eye and right-eye frames are adjacent in the time series. Alternatively, it may first output one left-eye frame from the frame sequence of each pair of 3D glasses in turn and then one right-eye frame from each pair in turn, or output the right-eye frames first and then the left-eye frames.
  • For the preset frame output order in the virtual reality implementation system of the present invention, reference may be made to the foregoing method embodiments; details are not repeated here.
  • In one embodiment, the virtual reality implementation system further includes:
  • a viewing-angle information detecting device, configured to detect in real time the viewing-angle positioning graphic information of the plurality of reflective balls disposed on each pair of 3D glasses, this graphic information being the shape or size information of the figure formed by the balls, where the number or shape of the reflective balls differs between pairs of glasses; and
  • a viewing-angle information generating device, configured to generate the viewing-angle information of the corresponding 3D glasses from the viewing-angle positioning graphic information.
  • Specifically, each pair of 3D glasses is provided with a plurality of reflective balls for positioning its viewing angle. Each reflective ball is fixedly connected to one end of a support rod whose other end is fixed to the frame of the glasses, with the support rods at different angles to the frame. Together the reflective balls form a figure, and either the shape of this figure or the number of reflective balls differs between pairs of glasses, making different pairs easy to identify.
  • The viewing-angle information detecting device detects in real time the viewing-angle positioning graphic information of the reflective balls on each pair of 3D glasses, which may include the shape or size of the figure they form. The viewing-angle information generating device then generates the viewing-angle information of the corresponding 3D glasses from this graphic information, including the angle and height of the change in the glasses' viewing angle.
  • In a specific implementation, the viewing-angle information detecting device captures images of each pair of 3D glasses in real time with a camera fixed on the virtual reality display screen; the viewing-angle information generating device then extracts the shape of the figure formed by each pair's reflective balls using image recognition, and compares the change of the figure between captures to calculate the angle and height changes of the glasses and generate the viewing-angle information.
  • The viewing-angle information detecting device positions the viewing angle of the 3D glasses using the reflective balls mounted on them, obtaining the viewing-angle information of each pair; the approach offers high positioning accuracy, is easy to implement, and is low in cost.
  • In one embodiment, the virtual reality implementation system further includes:
  • an action information detecting device, configured to detect the action information of the virtual reality operating device of the corresponding 3D glasses according to the switch synchronization command; and
  • an updating device, configured to update, according to the action information, the left-eye and right-eye frames of the 3D glasses to which the virtual reality operating device belongs.
  • Virtual reality involves not only simulation of the environment (visual perception) but also motion perception and the like; that is, the wearer of a first-person pair of 3D glasses can change the frames on the virtual reality screen by operating a device.
  • In this embodiment, the output device sends the switch synchronization command to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal, so that the switching of each pair's left and right lenses is synchronized with the output of the left-eye and right-eye frames of the virtual reality system. When the left or right lens of a pair of 3D glasses opens under the control of the switch synchronization command, the action information detecting device detects the action information of that pair's virtual reality operating device, which includes deflection-angle information and the like.
  • The updating device updates the left-eye and right-eye frames of the 3D glasses to which the operating device belongs according to the detected action information. The updating device can be implemented with the 3D picture output server, which calculates and updates those frames from the detected action information; the updated frames are then output under the control of the field sync pulse signal.
  • the motion information detecting apparatus includes a detection module and a generation module. The detection module is configured to detect, according to the shutter synchronization command, the motion positioning graphic information of the several reflective balls arranged on the virtual reality operating device of the corresponding 3D glasses; the motion positioning graphic information includes shape information or size information of the graphic formed by the reflective balls, and the number of reflective balls, or the shape the balls form, differs between operating devices. The generation module is configured to generate the motion information of the virtual reality operating device according to the motion positioning graphic information.
  • similarly to the viewing-angle positioning of the 3D glasses, the virtual reality operating device of each pair of glasses is provided with several reflective balls for locating the device's movement. Each reflective ball is fixedly connected to one end of a support rod whose other end is fixedly connected to the operating device, and together the reflective balls form a graphic. Either the shape of the graphic formed by each device's reflective balls differs, or the number of reflective balls fixed on each device differs, which makes it easy to identify the virtual reality operating devices of different 3D glasses.
  • the detection module detects the motion positioning graphic information of the several reflective balls arranged on each operating device according to the shutter synchronization command: when the left or right lens of a pair of 3D glasses opens, it detects the motion positioning graphic information of that pair's virtual reality operating device, which may be the shape or size information of the graphic formed by the reflective balls. The generation module then generates the motion information of the corresponding device, including its deflection-angle and movement-distance information. For example, when the left or right lens of the 3D glasses opens, the detection module captures an image of the device with a camera fixed on the virtual reality display system; the generation module uses image recognition to extract the shape of the graphic formed by the device's reflective balls and compares the corresponding graphics at adjacent moments, thereby calculating the device's change in deflection angle and movement distance and finally generating the motion information of that pair's virtual reality operating device. In this embodiment, locating the device's motion with reflective balls mounted on it yields each device's motion information with high positioning accuracy, ease of implementation and low cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention relates to a virtual reality implementation method and system. The method comprises: acquiring viewing-angle information of each pair of 3D glasses in real time; outputting the left-eye frame and the right-eye frame of each pair of 3D glasses according to the viewing-angle information, a preset frame output order and a field synchronization pulse signal, and sending, according to the preset frame output order and the field sync pulse signal, a shutter synchronization instruction to the corresponding 3D glasses to control the opening and closing of the glasses' left and right lenses. By acquiring the viewing-angle information of each pair of 3D glasses in real time and outputting each pair's left-eye and right-eye frames according to that information, the preset frame output order and the field sync pulse signal, the method outputs the corresponding left-eye and right-eye frames whenever the viewing angle of any pair of 3D glasses changes, thereby enabling multiple users to interact in a first-person role in front of a virtual reality screen and further improving the virtual reality experience.

Description

Virtual reality implementation method and system

Technical Field
The present invention relates to the field of 3D display technology, and in particular to a virtual reality implementation method and system.
Background Art
Virtual reality technology is a computer simulation system that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment and immerses the user in that environment through interactive three-dimensional dynamic scenes based on multi-source information fusion and through system simulation of entity behavior. Current large-screen virtual reality implementations generally locate the viewing angle of a pair of 3D glasses and the motion of an operating device and, in cooperation with a virtual reality server, change the angle of the displayed scene according to changes in the glasses' viewing angle and change part of the scene content according to changes in the operating device's actions, so as to achieve first-person immersive virtual reality. However, such large-screen implementations support only a single first-person viewing angle; other users wearing 3D glasses can only passively follow that first-person view and cannot actively participate in the large-screen interaction, which is a significant limitation.
Summary of the Invention
In view of this, it is necessary to provide a virtual reality implementation method and system to address the problem that existing virtual reality implementations support only one first-person viewing angle.
To achieve the above object, the present invention adopts the following technical solution:
A virtual reality implementation method, comprising the following steps:
acquiring viewing-angle information of each pair of 3D glasses in real time;
outputting the left-eye frame and the right-eye frame of each pair of the 3D glasses according to the viewing-angle information, a preset frame output order and a field synchronization pulse signal, and sending, according to the preset frame output order and the field sync pulse signal, to the corresponding 3D glasses a shutter synchronization instruction that controls the opening and closing of the glasses' left and right lenses.
Correspondingly, the present invention further provides a virtual reality implementation system, comprising:
an acquisition device, configured to acquire viewing-angle information of each pair of 3D glasses in real time;
an output device, configured to output the left-eye frame and the right-eye frame of each pair of the 3D glasses according to the viewing-angle information, the preset frame output order and the field sync pulse signal, and to send a shutter synchronization instruction to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal.
The above virtual reality implementation method and system allocate the picture frames output by the virtual reality display system to each pair of 3D glasses in a time-division manner according to the preset frame output order, so that the left-eye and right-eye frames of each pair are output alternately under the control of the field sync pulse signal. By acquiring each pair's viewing-angle information in real time and outputting its left-eye and right-eye frames according to that information, the preset frame output order and the field sync pulse signal, the method and system can adjust the corresponding output frames whenever the viewing angle of any pair of 3D glasses changes, thereby enabling multiple users to interact in a first-person role in front of the virtual reality screen and further improving the virtual reality experience.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a virtual reality implementation method in one embodiment of the present invention;
FIG. 2 is a synchronization diagram of a virtual reality implementation method in one specific implementation of the present invention;
FIG. 3 is a synchronization diagram of a virtual reality implementation method in another specific implementation of the present invention;
FIG. 4 is a schematic structural diagram of a virtual reality implementation system in one embodiment of the present invention.
Detailed Description of the Embodiments
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and preferred embodiments.
In one embodiment, as shown in FIG. 1, the present invention provides a virtual reality implementation method comprising the following steps:
Step S100: acquiring viewing-angle information of each pair of 3D glasses in real time;
Step S200: outputting the left-eye frame and the right-eye frame of each pair of the 3D glasses according to the viewing-angle information, the preset frame output order and the field sync pulse signal, and sending, according to the preset frame output order and the field sync pulse signal, to the corresponding 3D glasses a shutter synchronization instruction that controls the opening and closing of the glasses' left and right lenses.
Specifically, the viewing-angle information of each pair of 3D glasses is first acquired. For example, reflective balls fixed on the 3D glasses can be filmed by a camera fixed on the virtual reality display system to determine the glasses' viewing angle; when the viewing angle changes, the new viewing-angle information is acquired. Then, the left-eye and right-eye frames of each pair of 3D glasses are output according to the acquired viewing-angle information, the preset frame output order and the field sync pulse signal, and a shutter synchronization instruction is sent to the corresponding glasses according to the preset frame output order and the field sync pulse signal. The shutter synchronization instruction controls the opening and closing of the left and right lenses of the corresponding 3D glasses; it can be sent, for example, via an infrared or wireless transmitter, so that a pair's left and right lenses open only when the virtual reality display system is displaying that pair's left-eye or right-eye frame and remain closed while the frames of other glasses are displayed. Thus, under the control of the field sync pulse signal, every pair of 3D glasses can watch a continuous picture from its own first-person viewing angle. The field period of the sync signal, or equivalently the frame rate of the virtual reality display system, can be calculated from the persistence-of-vision time of the human eye and the number of 3D glasses in use; within the response-time limits of the glasses, the higher the frame rate, the better the continuity of the picture seen by each first-person pair and the weaker the perceived flicker.
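The text says the field period or display frame rate can be calculated from persistence of vision and the number of glasses, but gives no formula. A minimal sketch of one plausible reading, assuming each pair of glasses needs both a left-eye and a right-eye frame per per-eye refresh, with the hypothetical `per_eye_rate_hz` standing in for the persistence-of-vision requirement:

```python
def required_display_rate(num_glasses: int, per_eye_rate_hz: float = 60.0) -> float:
    """Total display frame rate for time-division multiplexing: every pair of
    glasses consumes one left-eye and one right-eye frame per per-eye refresh."""
    frames_per_cycle = 2 * num_glasses  # left + right frame for each pair
    return frames_per_cycle * per_eye_rate_hz

# Three pairs of glasses at a 60 Hz per-eye refresh need a 360 Hz display.
```

This illustrates why the text notes that, within the glasses' response time, higher display frame rates reduce flicker as more viewers share the screen.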
The virtual reality implementation method proposed in this embodiment allocates the picture frames output by the virtual reality display system to each pair of 3D glasses in a time-division manner according to the preset frame output order, so that the left-eye and right-eye frames of each pair are output alternately under the control of the field sync pulse signal. By acquiring each pair's viewing-angle information in real time and outputting its left-eye and right-eye frames according to that information, the preset frame output order and the field sync pulse signal, the method can adjust the corresponding output frames whenever the viewing angle of any pair of 3D glasses changes, thereby enabling multiple users to interact in a first-person role in front of the virtual reality screen and further improving the virtual reality experience.
The preset frame output order in the present invention may output the left-eye and right-eye frames of each pair of 3D glasses in turn, with each pair's left-eye and right-eye frames adjacent in the time sequence. Alternatively, as a specific implementation, the preset frame output order may output one left-eye frame from each pair's frame sequence in turn and then correspondingly output one right-eye frame from each pair's frame sequence in the same order, or output one right-eye frame from each pair in turn and then correspondingly output one left-eye frame from each pair in the same order.
Specifically, taking three pairs of 3D glasses (denoted glasses A, glasses B and glasses C) as an example, the preset frame output order of this implementation is described in detail. As shown in FIG. 2, this implementation uses the field sync pulse signal to synchronize frame output with the lens shutters of the 3D glasses. Within the field period of the sync signal, frames are output in the preset order: glasses A left-eye frame, glasses B left-eye frame, glasses C left-eye frame, glasses A right-eye frame, glasses B right-eye frame, glasses C right-eye frame. That is, one left-eye frame from the frame sequence of glasses A, one from glasses B and one from glasses C are output in turn, followed by one right-eye frame from each in the same order, with every six frame periods forming one cycle. While frames are output in this order under the control of the field sync pulse signal, shutter synchronization instructions are sent to glasses A, B and C according to the preset frame output order and the field sync pulse signal to control each pair's left and right lens shutters. As shown in FIG. 2, when the left-eye frame of glasses A is output under the field sync pulse signal, the shutter synchronization instruction opens the left lens of glasses A while its right lens stays closed, and both lenses of the other glasses stay closed; likewise, when the right-eye frame of glasses A is output, the instruction opens the right lens of glasses A while its left lens stays closed, and both lenses of the other glasses again stay closed. Glasses B and glasses C are switched in the same way as glasses A, and the details are not repeated here. This implementation gives two concrete realizations of the preset frame output order. Because a pair of glasses is closed whenever the displayed frame is not its own first-person frame, the persistence of human vision ensures that, as long as the frame rate is sufficient, each first-person viewer sees a continuous picture; multiple pairs of 3D glasses can therefore interact simultaneously from first-person viewpoints while the viewing quality of the glasses is preserved.
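The six-frame cycle and lens switching described above can be sketched as follows; the function names and the `(pair, eye)` tuple representation are illustrative, not from the patent:

```python
def frame_schedule(glasses):
    """One cycle of the preset order: one left-eye frame per pair in turn,
    then one right-eye frame per pair in the same order."""
    return [(g, "L") for g in glasses] + [(g, "R") for g in glasses]

def shutter_states(slot, glasses):
    """Map each pair to (left_open, right_open): only the lens matching the
    currently displayed frame opens; every other lens stays closed."""
    active, eye = slot
    return {g: (g == active and eye == "L", g == active and eye == "R")
            for g in glasses}
```

For glasses A, B and C this reproduces the FIG. 2 order — three left-eye frames, then three right-eye frames, repeating every six frame periods — and, for each slot, exactly one lens of one pair is open.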
As a specific implementation, before the step of acquiring the viewing-angle information of each pair of 3D glasses in real time, the method further includes the following steps: detecting in real time the viewing-angle positioning graphic information of several reflective balls arranged on each pair of 3D glasses, the viewing-angle positioning graphic information including shape information or size information of the graphic formed by the reflective balls, with the number of reflective balls arranged on each pair of glasses, or the shape the balls form, differing between pairs; and generating the viewing-angle information of the corresponding 3D glasses according to the viewing-angle positioning graphic information.
In this implementation, several reflective balls for locating the viewing angle are arranged on the 3D glasses. Each reflective ball is fixedly connected to one end of a support rod whose other end is fixedly connected to the frame of the glasses, with the support rods set at different angles to the frame. Together the reflective balls form a graphic; either the shape of the graphic formed by each pair's balls differs, or the number of balls fixed on each pair differs, which makes it easy to identify different 3D glasses. Specifically, the viewing-angle positioning graphic information of the reflective balls on each pair of glasses is detected in real time; it may include the shape or size information of the graphic the balls form, and from it the viewing-angle information of the corresponding glasses is generated, including the angle and height by which the glasses' viewing angle has changed. For example, a camera fixed above the virtual reality display screen films each pair of 3D glasses in real time; image recognition then extracts the shape of the graphic formed by each pair's reflective balls and compares the corresponding graphics at adjacent moments, from which the change in angle and height of the glasses is calculated and the viewing-angle information is finally generated. Locating the viewing angle with reflective balls mounted on the glasses yields the viewing-angle information of each pair with high positioning accuracy, and is easy to implement at low cost.
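A toy sketch of the two steps described above — telling pairs of glasses apart by how many reflective balls each carries, and estimating the change in viewing angle from the marker graphic in consecutive camera images. The `ball_counts` registry scheme and the two-point angle estimate are illustrative assumptions, not the patent's actual image-recognition method:

```python
import math

def identify_glasses(marker_points, ball_counts):
    """ball_counts maps a glasses ID to the number of reflective balls fixed
    on that pair (assumed distinct per pair, as the text suggests)."""
    for glasses_id, count in ball_counts.items():
        if count == len(marker_points):
            return glasses_id
    return None

def view_angle_change_deg(prev_points, curr_points):
    """Toy estimate: rotation of the line through the first two marker
    centroids between two consecutive images."""
    def line_angle(points):
        (x0, y0), (x1, y1) = points[0], points[1]
        return math.degrees(math.atan2(y1 - y0, x1 - x0))
    return line_angle(curr_points) - line_angle(prev_points)
```

A real pipeline would segment the bright reflective balls in the camera image first; the comparison of "corresponding graphics at adjacent moments" then reduces to a difference of the graphic's pose, as above.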
As a specific implementation, after the step of sending the shutter synchronization instruction to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal, the method further includes the following steps: detecting the action information of the virtual reality operating device of the corresponding 3D glasses according to the shutter synchronization instruction; and updating the left-eye and right-eye frames of the 3D glasses to which the operating device belongs according to the action information. Virtual reality involves not only simulation of the environment, i.e. visual perception, but also motion perception and the like; that is, the wearer of first-person 3D glasses can change the picture frames on the virtual reality screen through an operating device. In this implementation, the virtual reality display system sends shutter synchronization instructions to the corresponding glasses according to the preset frame output order and the field sync pulse signal, so that the opening and closing of each pair's left and right lenses is synchronized with the left-eye and right-eye frames output by the system. When the left or right lens of a certain pair of glasses is opened under the control of the shutter synchronization instruction, the action information of that pair's virtual reality operating device is detected; this action information includes deflection-angle information, movement-distance information and the like. The left-eye and right-eye frames of the glasses to which the operating device belongs are then updated according to the detected action information; a 3D picture output server can be used to calculate and update those frames from the detected action information, and the updated frames are output under the control of the field sync pulse signal.
Specifically, still taking the three pairs of 3D glasses above (glasses A, B and C) and their virtual operating devices (devices A, B and C) as an example, the steps of this implementation are described in detail. As shown in FIG. 3, under the control of the shutter synchronization instruction the glasses open their left and right lenses in turn. Taking glasses A as an example, when the left lens of glasses A is open, the action information of operating device A is detected while the action information of devices B and C is ignored; when the right lens of glasses A is open, the action information of device A is likewise detected while that of devices B and C is ignored, and the detection for the operating devices of the other glasses proceeds by analogy. In this implementation, when the virtual reality display system displays the frames of glasses A, the operating actions of device A take effect, while the actions of device B (of glasses B) and device C (of glasses C) are ignored; the actions of devices B and C take effect when the system displays the left-eye or right-eye frames of glasses B and C, respectively. This implementation not only realizes multiple first-person viewing angles for 3D glasses, but also lets multiple wearers change their own current virtual reality display through their operating devices without affecting one another, greatly improving the efficiency and experience of virtual reality.
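The gating rule illustrated above with glasses A, B and C — process only the operating device whose glasses' frame is currently displayed, and leave other devices' actions queued for their own slot — can be sketched as follows (names illustrative):

```python
def process_frame_slot(slot, action_queues):
    """slot = (glasses_id, eye). Drain and return only the active pair's
    queued device actions; the other pairs' queues wait for their own slot."""
    glasses_id, _eye = slot
    applied = action_queues.get(glasses_id, [])
    action_queues[glasses_id] = []
    return applied
```

Ignored actions are not lost here — they stay queued until the display reaches that pair's left-eye or right-eye frame, matching the behaviour the text describes for devices B and C while glasses A's frames are shown.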
As a specific implementation, the process of detecting the action information of the virtual reality operating device of the corresponding 3D glasses according to the shutter synchronization instruction includes the following steps: detecting, according to the shutter synchronization instruction, the action positioning graphic information of several reflective balls arranged on the virtual reality operating device of the corresponding glasses, the action positioning graphic information being the shape or size of the graphic formed by the balls, with the number of balls arranged on each operating device, or the shape the balls form, differing between devices; and generating the action information of the virtual reality operating device according to the action positioning graphic information.
In this implementation, similarly to the viewing-angle positioning of the 3D glasses, several reflective balls for locating the device's motion are arranged on the virtual reality operating device of each pair of glasses. Each ball is fixedly connected to one end of a support rod whose other end is fixedly connected to the operating device, with the rods set at different angles to the device. Together the balls form a graphic; either the shape of the graphic formed by each device's balls differs, or the number of balls fixed on each device differs, which makes it easy to identify the operating devices of different 3D glasses. Specifically, the action positioning graphic information of the reflective balls on each device is detected according to the shutter synchronization instruction: when the left or right lens of a pair of glasses opens, the action positioning graphic information of that pair's operating device is detected. This information may be the shape or size of the graphic formed by the balls, and from it the action information of the corresponding device is generated, including the device's deflection-angle and movement-distance information. For example, when the left or right lens of the glasses opens, a camera fixed above the virtual reality display system films the device; image recognition then extracts the shape of the graphic formed by the device's reflective balls and compares the corresponding graphics at adjacent moments, from which the device's change in deflection angle and movement distance is calculated and its action information is finally generated. Locating the device's motion with reflective balls mounted on it yields each device's action information with high positioning accuracy, and is easy to implement at low cost.
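The text derives movement distance from changes in the marker graphic's apparent size. One plausible geometric reading, under a pinhole-camera assumption that the patent does not state explicitly — apparent size inversely proportional to distance from the camera:

```python
def movement_distance_m(prev_size_px: float, curr_size_px: float,
                        prev_distance_m: float) -> float:
    """Change in camera-to-device distance implied by a change in the apparent
    size of the marker graphic (pinhole model: size is proportional to 1/distance)."""
    curr_distance_m = prev_distance_m * prev_size_px / curr_size_px
    return curr_distance_m - prev_distance_m

# The graphic shrinking to half size implies the device moved twice as far away.
```

This only recovers motion along the camera axis; lateral motion would come from the shift of the graphic's centroid in the image, and the deflection angle from its rotation, as in the viewing-angle sketch.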
Meanwhile, the present invention further provides a virtual reality implementation system, as shown in FIG. 4, comprising:
an acquisition device 100, configured to acquire viewing-angle information of each pair of 3D glasses in real time;
an output device 200, configured to output the left-eye frame and the right-eye frame of each pair of the 3D glasses according to the viewing-angle information, the preset frame output order and the field sync pulse signal, and to send a shutter synchronization instruction to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal.
Specifically, the acquisition device 100 first acquires the viewing-angle information of each pair of 3D glasses. For example, it can use reflective balls fixed on the glasses and a camera fixed on the virtual reality display system to film the balls and determine the glasses' viewing angle; when the viewing angle changes, the acquisition device 100 obtains the new viewing-angle information and sends it to the output device 200. The output device 200 then outputs the left-eye and right-eye frames of each pair of glasses according to the acquired viewing-angle information, the preset frame output order and the field sync pulse signal, and sends shutter synchronization instructions to the corresponding glasses according to the preset frame output order and the field sync pulse signal. In this embodiment, the acquisition device 100 and the output device 200 may be external peripherals of the virtual reality display system, as shown in FIG. 4, or may be integrated into it. The shutter synchronization instruction controls the opening and closing of the left and right lenses of the corresponding glasses; for example, the output device 200 may send it via an infrared or wireless transmitter, so that a pair's left and right lenses open only when the display system is showing that pair's left-eye or right-eye frame and remain closed while other pairs' frames are shown. Thus, under the control of the field sync pulse signal, every pair of 3D glasses can watch a continuous picture from its own first-person viewing angle. The field period of the sync signal, or the frame rate of the display system, can be calculated from the persistence-of-vision time of the human eye and the number of 3D glasses in use; within the glasses' response time, the higher the frame rate, the better the continuity of the picture seen by each first-person pair and the weaker the perceived flicker.
The virtual reality implementation system proposed in this embodiment allocates the picture frames output by the virtual reality display system to each pair of 3D glasses in a time-division manner according to the preset frame output order, so that each pair's left-eye and right-eye frames are output alternately under the control of the field sync pulse signal. By acquiring each pair's viewing-angle information in real time and outputting its left-eye and right-eye frames according to that information, the preset frame output order and the field sync pulse signal, the system can adjust the corresponding output frames whenever the viewing angle of any pair of glasses changes, enabling multiple users to interact in a first-person role in front of the virtual reality screen and further improving the virtual reality experience.
The preset frame output order in the present invention may output each pair's left-eye and right-eye frames in turn, with each pair's two frames adjacent in the time sequence; alternatively, as a specific implementation, it may output one left-eye frame from each pair's frame sequence in turn and then correspondingly output one right-eye frame from each pair in the same order, or one right-eye frame from each pair in turn followed by one left-eye frame from each pair in the same order. For the concrete realization of the preset frame output order in the virtual reality implementation system, reference may be made to the implementation described in the method embodiments above, which is not repeated here.
As a specific implementation, the virtual reality implementation system further includes: a viewing-angle information detecting device, configured to detect in real time the viewing-angle positioning graphic information of the several reflective balls arranged on each pair of 3D glasses, the viewing-angle positioning graphic information including shape information or size information of the graphic formed by the reflective balls, with the number of reflective balls arranged on each pair of glasses, or the shape the balls form, differing between pairs; and a viewing-angle information generating device, configured to generate the viewing-angle information of the corresponding 3D glasses according to the viewing-angle positioning graphic information.
In this implementation, several reflective balls for locating the viewing angle are arranged on the 3D glasses. Each ball is fixedly connected to one end of a support rod whose other end is fixedly connected to the frame of the glasses, with the rods set at different angles to the frame. Together the balls form a graphic; either the shape of the graphic formed by each pair's balls differs, or the number of balls fixed on each pair differs, which makes it easy to identify different 3D glasses. Specifically, the viewing-angle information detecting device detects in real time the viewing-angle positioning graphic information of the balls on each pair of glasses, which may be the shape or size information of the graphic the balls form; the viewing-angle information generating device then generates the viewing-angle information of the corresponding glasses according to the viewing-angle positioning graphic information, including the angle and height by which the glasses' viewing angle has changed. For example, the viewing-angle information detecting device uses a camera fixed above the virtual reality display screen to film each pair of glasses in real time; the viewing-angle information generating device then uses image recognition to extract the shape of the graphic formed by each pair's reflective balls and compares the corresponding graphics at adjacent moments, from which the change in angle and height of the glasses is calculated and the viewing-angle information is finally generated. In this implementation, the viewing-angle information detecting device locates the glasses' viewing angle with reflective balls mounted on them, obtaining each pair's viewing-angle information with high positioning accuracy, ease of implementation and low cost.
As a specific implementation, the virtual reality implementation system further includes: an action information detecting device, configured to detect the action information of the virtual reality operating device of the corresponding 3D glasses according to the shutter synchronization instruction; and an updating device, configured to update the left-eye and right-eye frames of the glasses to which the operating device belongs according to the action information. Virtual reality involves not only simulation of the environment, i.e. visual perception, but also motion perception and the like; that is, the wearer of first-person 3D glasses can change the picture frames on the virtual reality screen through an operating device. In this implementation, the output device sends shutter synchronization instructions to the corresponding glasses according to the preset frame output order and the field sync pulse signal, so that the opening and closing of each pair's left and right lenses is synchronized with the left-eye and right-eye frames output by the virtual reality system. When the left or right lens of a certain pair of glasses is opened under the control of the shutter synchronization instruction, the action information detecting device detects the action information of that pair's operating device, including its deflection-angle and movement-distance information; the updating device then updates the left-eye and right-eye frames of the glasses to which the operating device belongs according to the detected action information. The updating device can be implemented with a 3D picture output server, which calculates and updates those frames from the detected action information; the updated frames are output under the control of the field sync pulse signal. For the concrete implementation of the action information detecting device and the updating device, reference may be made to the method embodiments above, which is not repeated here.
As a specific implementation, the action information detecting device includes a detection module and a generation module. The detection module is configured to detect, according to the shutter synchronization instruction, the action positioning graphic information of the several reflective balls arranged on the virtual reality operating device of the corresponding glasses; the action positioning graphic information includes shape information or size information of the graphic formed by the balls, with the number of balls arranged on each operating device, or the shape the balls form, differing between devices. The generation module is configured to generate the action information of the virtual reality operating device according to the action positioning graphic information.
In this implementation, similarly to the viewing-angle positioning of the 3D glasses, several reflective balls for locating the device's motion are arranged on the virtual reality operating device of each pair of glasses. Each ball is fixedly connected to one end of a support rod whose other end is fixedly connected to the operating device, with the rods set at different angles to the device. Together the balls form a graphic; either the shape of the graphic formed by each device's balls differs, or the number of balls fixed on each device differs, which makes it easy to identify the operating devices of different 3D glasses. Specifically, the detection module detects, according to the shutter synchronization instruction, the action positioning graphic information of the balls on each operating device: when the left or right lens of a pair of glasses opens, it detects the action positioning graphic information of that pair's operating device, which may be the shape or size information of the graphic the balls form. The generation module then generates the action information of the corresponding operating device according to the action positioning graphic information, including its deflection-angle and movement-distance information. For example, when the left or right lens of the glasses opens, the detection module uses a camera fixed above the virtual reality display system to capture an image of that pair's operating device; the generation module then uses image recognition to extract the shape of the graphic formed by the device's reflective balls and compares the corresponding graphics at adjacent moments, calculating the device's change in deflection angle and movement distance and finally generating its action information. In this implementation, the detection module locates the device's motion with reflective balls mounted on it, obtaining each device's action information with high positioning accuracy, ease of implementation and low cost.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; however, as long as a combination of these technical features involves no contradiction, it shall be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, all of which fall within the scope of protection of the present invention. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (10)

  1. A virtual reality implementation method, characterized by comprising the following steps:
    acquiring viewing-angle information of each pair of 3D glasses in real time;
    outputting a left-eye frame and a right-eye frame of each pair of the 3D glasses according to the viewing-angle information, a preset frame output order and a field synchronization pulse signal, and sending, according to the preset frame output order and the field sync pulse signal, to the corresponding 3D glasses a shutter synchronization instruction that controls the opening and closing of the left lens and the right lens of the 3D glasses.
  2. The virtual reality implementation method according to claim 1, characterized in that, before the step of acquiring the viewing-angle information of each pair of 3D glasses in real time, the method further comprises the following steps:
    detecting in real time viewing-angle positioning graphic information of several reflective balls arranged on each pair of 3D glasses; the viewing-angle positioning graphic information comprising shape information or size information of the graphic formed by the several reflective balls, and the number of the reflective balls arranged on each pair of the 3D glasses, or the shape of the graphic the reflective balls form, being different;
    generating viewing-angle information of the corresponding 3D glasses according to the viewing-angle positioning graphic information.
  3. The virtual reality implementation method according to claim 1 or 2, characterized in that, after the step of sending the shutter synchronization instruction to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal, the method further comprises the following steps:
    detecting action information of a virtual reality operating device of the corresponding 3D glasses according to the shutter synchronization instruction;
    updating the left-eye frame and the right-eye frame of the 3D glasses to which the virtual reality operating device belongs according to the action information.
  4. The virtual reality implementation method according to claim 3, characterized in that the process of detecting the action information of the virtual reality operating device of the corresponding 3D glasses according to the shutter synchronization instruction comprises the following steps:
    detecting, according to the shutter synchronization instruction, action positioning graphic information of several reflective balls arranged on the virtual reality operating device of the corresponding 3D glasses; the action positioning graphic information comprising shape information or size information of the graphic formed by the several reflective balls, and the number of the reflective balls arranged on each virtual reality operating device, or the shape the reflective balls form, being different;
    generating the action information of the virtual reality operating device according to the action positioning graphic information.
  5. The virtual reality implementation method according to claim 1 or 2, characterized in that
    the preset frame output order is to output one left-eye frame from the frame sequence of each pair of the 3D glasses in turn and then correspondingly output one right-eye frame from the frame sequence of each pair of the 3D glasses in turn,
    or
    the preset frame output order is to output one right-eye frame from the frame sequence of each pair of the 3D glasses in turn and then correspondingly output one left-eye frame from the frame sequence of each pair of the 3D glasses in turn.
  6. A virtual reality implementation system, characterized by comprising:
    an acquisition device, configured to acquire viewing-angle information of each pair of 3D glasses in real time;
    an output device, configured to output a left-eye frame and a right-eye frame of each pair of the 3D glasses according to the viewing-angle information, a preset frame output order and a field synchronization pulse signal, and to send a shutter synchronization instruction to the corresponding 3D glasses according to the preset frame output order and the field sync pulse signal.
  7. The virtual reality implementation system according to claim 6, characterized by further comprising:
    a viewing-angle information detecting device, configured to detect in real time viewing-angle positioning graphic information of several reflective balls arranged on each pair of 3D glasses; the viewing-angle positioning graphic information comprising shape information or size information of the graphic formed by the several reflective balls, and the number of the reflective balls arranged on each pair of the 3D glasses, or the shape of the graphic the reflective balls form, being different;
    a viewing-angle information generating device, configured to generate viewing-angle information of the corresponding 3D glasses according to the viewing-angle positioning graphic information.
  8. The virtual reality implementation system according to claim 6 or 7, characterized by further comprising:
    an action information detecting device, configured to detect action information of a virtual reality operating device of the corresponding 3D glasses according to the shutter synchronization instruction;
    an updating device, configured to update the left-eye frame and the right-eye frame of the 3D glasses to which the virtual reality operating device belongs according to the action information.
  9. The virtual reality implementation system according to claim 8, characterized in that the action information detecting device comprises a detection module and a generation module,
    the detection module being configured to detect, according to the shutter synchronization instruction, action positioning graphic information of several reflective balls arranged on the virtual reality operating device of the corresponding 3D glasses; the action positioning graphic information comprising shape information or size information of the graphic formed by the several reflective balls, and the number of the reflective balls arranged on each virtual reality operating device, or the shape the reflective balls form, being different;
    the generation module being configured to generate the action information of the virtual reality operating device according to the action positioning graphic information.
  10. The virtual reality implementation system according to claim 6 or 7, characterized in that
    the preset frame output order is to output one left-eye frame from the frame sequence of each pair of the 3D glasses in turn and then correspondingly output one right-eye frame from the frame sequence of each pair of the 3D glasses in turn,
    or
    the preset frame output order is to output one right-eye frame from the frame sequence of each pair of the 3D glasses in turn and then correspondingly output one left-eye frame from the frame sequence of each pair of the 3D glasses in turn.
PCT/CN2017/096494 2016-12-16 2017-08-08 Virtual reality implementation method and system WO2018107781A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611169109.3 2016-12-16
CN201611169109.3A CN106791769A (zh) 2016-12-16 2016-12-16 Virtual reality implementation method and system

Publications (1)

Publication Number Publication Date
WO2018107781A1 true WO2018107781A1 (zh) 2018-06-21

Family

ID=58893251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/096494 WO2018107781A1 (zh) 2016-12-16 2017-08-08 Virtual reality implementation method and system

Country Status (2)

Country Link
CN (1) CN106791769A (zh)
WO (1) WO2018107781A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11290573B2 (en) * 2018-02-14 2022-03-29 Alibaba Group Holding Limited Method and apparatus for synchronizing viewing angles in virtual reality live streaming
WO2022156671A1 (zh) * 2021-01-22 2022-07-28 上海曼恒数字技术股份有限公司 Multi-view virtual display signal processing method and system, computer-readable storage medium, and electronic device
CN117499614A (zh) * 2023-11-21 2024-02-02 北京视睿讯科技有限公司 3D display method, apparatus, device, and storage medium
US11989351B2 (en) 2019-05-07 2024-05-21 Adam Farley Virtual, augmented and mixed reality systems with physical feedback

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791769A (zh) * 2016-12-16 2017-05-31 广东威创视讯科技股份有限公司 虚拟现实实现方法及系统
CN108785988B (zh) * 2017-06-09 2020-07-31 台州市路桥琦萍眼镜配件股份有限公司 一种vr眼镜
CN107050757B (zh) * 2017-06-09 2019-03-01 温岭市恒芃信博机械科技有限公司 一种基于vr的健身车装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008021857A2 (en) * 2006-08-08 2008-02-21 Texas Instruments Incorporated Method and system for multi-channel viewing applications
CN102577401A (zh) * 2009-08-06 2012-07-11 Sony Corporation Method and device for stereoscopic multi-user display
US8711062B2 (en) * 1995-10-05 2014-04-29 Semiconductor Energy Laboratory Co., Ltd. Display unit and display method
CN104581108A (zh) * 2013-10-24 2015-04-29 西安群丰电子信息科技有限公司 3D image display system and method
CN204990186U (zh) * 2015-07-28 2016-01-20 深圳市中视典数字科技有限公司 Desktop stereoscopic display interaction system
CN106791769A (zh) * 2016-12-16 2017-05-31 广东威创视讯科技股份有限公司 Virtual reality implementation method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2562581A4 (en) * 2010-04-21 2014-11-12 Panasonic Ip Corp America 3D VIDEO DISPLAY AND 3D VIDEO DISPLAY
CN102378016A (zh) 2010-08-20 2012-03-14 Wistron Corporation Method for playing corresponding stereoscopic images according to different viewing angles and processing system thereof
US20120212589A1 (en) * 2011-02-20 2012-08-23 Min-Hung Chien Playback methods and playback apparatuses for processing multi-view content
CN105159448A (zh) 2015-08-19 2015-12-16 天津先驱领域科技有限公司 Immersive virtual reality device for multiple users in the same scene

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8711062B2 (en) * 1995-10-05 2014-04-29 Semiconductor Energy Laboratory Co., Ltd. Display unit and display method
WO2008021857A2 (en) * 2006-08-08 2008-02-21 Texas Instruments Incorporated Method and system for multi-channel viewing applications
CN102577401A (zh) * 2009-08-06 2012-07-11 Sony Corporation Method and device for stereoscopic multi-user display
CN104581108A (zh) * 2013-10-24 2015-04-29 西安群丰电子信息科技有限公司 3D image display system and method
CN204990186U (zh) * 2015-07-28 2016-01-20 深圳市中视典数字科技有限公司 Desktop stereoscopic display interaction system
CN106791769A (zh) * 2016-12-16 2017-05-31 广东威创视讯科技股份有限公司 Virtual reality implementation method and system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11290573B2 (en) * 2018-02-14 2022-03-29 Alibaba Group Holding Limited Method and apparatus for synchronizing viewing angles in virtual reality live streaming
US11989351B2 (en) 2019-05-07 2024-05-21 Adam Farley Virtual, augmented and mixed reality systems with physical feedback
WO2022156671A1 (zh) * 2021-01-22 2022-07-28 上海曼恒数字技术股份有限公司 Multi-view virtual display signal processing method and system, computer-readable storage medium, and electronic device
CN117499614A (zh) * 2023-11-21 2024-02-02 北京视睿讯科技有限公司 3D display method, apparatus, device, and storage medium
CN117499614B (zh) * 2023-11-21 2024-04-26 北京视睿讯科技有限公司 3D display method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN106791769A (zh) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2018107781A1 (zh) Virtual reality implementation method and system
US9842433B2 (en) Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality
US10701344B2 (en) Information processing device, information processing system, control method of an information processing device, and parameter setting method
WO2019123729A1 (ja) Image processing device, image processing method, and program
CN102647606B (zh) Stereoscopic image processor, stereoscopic image interaction system, and stereoscopic image display method
CN104808340B (zh) Head-mounted display device and control method thereof
CN105183147A (zh) Head-mounted smart device and method thereof for modeling a three-dimensional virtual limb
CN110045832B (zh) AR-interaction-based immersive safety education and training system and method
US20160249043A1 (en) Three dimensional (3d) glasses, 3d display system and 3d display method
CN110537208A (zh) Head-mounted display and method
JP2018505580A (ja) 3D image display method and head-mounted device
JPWO2013161662A1 (ja) Motion guide presentation method, system therefor, and motion guide presentation device
CN204496117U (zh) 3D glasses
WO2020017435A1 (ja) Information processing device, information processing method, and program
CN105721857A (zh) Helmet with dual cameras
TWI589150B (zh) 3D autofocus display method and system thereof
US20190281280A1 (en) Parallax Display using Head-Tracking and Light-Field Display
JP6682624B2 (ja) Image processing device
KR20130091819A (ko) Three-dimensional imaging system and image reproduction method thereof
US20210400234A1 (en) Information processing apparatus, information processing method, and program
WO2017076231A1 (zh) Display method and display control system for a head-mounted virtual reality display
KR20180099756A (ko) Display device for stereoscopic imaging of two-dimensional images
WO2018062251A1 (ja) Information processing device
JPWO2018074419A1 (ja) Information processing device
KR101230344B1 (ko) Image capturing method of stereo vision system and device therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17881122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/10/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17881122

Country of ref document: EP

Kind code of ref document: A1