CN107396086A - Method for playing video based on a VR head-mounted device, and VR head-mounted device - Google Patents
Method for playing video based on a VR head-mounted device, and VR head-mounted device
- Publication number
- CN107396086A CN107396086A CN201710631060.7A CN201710631060A CN107396086A CN 107396086 A CN107396086 A CN 107396086A CN 201710631060 A CN201710631060 A CN 201710631060A CN 107396086 A CN107396086 A CN 107396086A
- Authority
- CN
- China
- Prior art keywords
- video
- user
- scenes
- focus
- video playback
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47202—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The present application provides a method for playing video based on a VR head-mounted device, and a VR head-mounted device. The method includes: establishing a communication bridge between a VR scene and a decoder through a dynamic link library, so that the dynamic link library can be called from within the VR scene; decoding a video file, using the decoder, into video data adapted to the VR scene; and then rendering video pictures in the VR scene according to that adapted video data. Video can thereby be played inside the VR scene, meeting the user's need to watch video in a VR scene.
Description
Technical field
The present application relates to the field of three-dimensional display technologies, and in particular to a method for playing video based on a virtual reality (VR) head-mounted device, and to a VR head-mounted device.
Background art
Unity3D is a game engine capable of creating three-dimensional video games, real-time three-dimensional animation, and the like. Because of its powerful features, VR developers increasingly tend to build three-dimensional virtual scenes with Unity3D. In many VR head-mounted devices, such as helmet-mounted display (HMD) devices, the three-dimensional virtual scene is built with Unity3D.
Because 3D video heightens the user's sense of realism, playing audio and video in a VR scene through a VR head-mounted device such as an HMD is attracting growing attention. A solution that can play video in a VR scene is therefore needed.
Summary of the invention
Some embodiments of the present application provide a method for playing video based on a VR head-mounted device, including:
displaying, according to a request message by which a user requests to watch video, a video information interface in a VR scene, the video information interface containing several video files;
determining, in response to an operation of selecting a target video file from the several video files, the target video file to be played;
calling a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene;
displaying a video playback interface in the VR scene, and rendering the video data into the video playback interface to obtain video pictures.
Optionally, before the video information interface is displayed in the VR scene, the method further includes:
displaying, in the VR scene, at least one VR function area, the at least one VR function area including a video playback area;
judging, according to the user's gaze point in the VR scene, whether the user selects the video playback area;
when the user selects the video playback area, determining that the user requests to watch video.
Optionally, judging, according to the user's gaze point in the VR scene, whether the user selects the video playback area includes:
tracking the user's gaze point based on eye-tracking technology;
when the tracked gaze point is located in the video playback area, recording a first time length for which the gaze point stays in the video playback area;
when the first time length exceeds a set first duration threshold, determining that the user selects the video playback area.
Optionally, the VR scene further includes a selection interaction area associated with the video playback area, and judging, according to the user's gaze point in the VR scene, whether the user selects the video playback area includes:
tracking the user's gaze point based on eye-tracking technology;
when the tracked gaze point is located in the selection interaction area, recording a second time length for which the gaze point stays in the selection interaction area;
when the second time length exceeds a set second duration threshold, determining that the user selects the video playback area.
Optionally, determining, in response to an operation of selecting a target video file from the several video files, the target video file to be played includes:
tracking the user's gaze point on the video information interface based on eye-tracking technology;
when the tracked gaze point corresponds to a video file, recording a third time length for which the gaze point stays on that video file;
when the third time length exceeds a set third duration threshold, determining that the video file corresponding to the gaze point is the target video file.
Optionally, the VR scene is a 360-degree panoramic view, and displaying a video playback interface in the VR scene includes:
displaying a screen, as the video playback interface, on the arc-shaped area of the 360-degree panoramic view corresponding to the center of the user's view angle.
Optionally, the VR scene further includes a control interaction area associated with the video playback interface, the control interaction area including at least one kind of playback control; after the video pictures are obtained, the method further includes:
determining, according to the user's gaze point in the VR scene, a target playback control selected by the user from the at least one kind of playback control;
controlling the playback state of the video pictures according to the control event associated with the target playback control.
Optionally, the decoder is an XBMC decoder.
Optionally, calling a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene includes:
creating an OpenGL texture object, and passing the texture object into the dynamic link library;
creating an OpenGL frame buffer object in the dynamic link library, and attaching the texture object to the frame buffer object;
calling the decoder to decode the target video file frame by frame and outputting each decoded frame of video data to the frame buffer object, whereby each frame of video data is automatically output onto the texture object.
Optionally, rendering the video data into the video playback interface to obtain video pictures includes:
passing the texture object into a rectangular window corresponding to the video playback interface;
rendering, by calling OpenGL, the texture object in the rectangular window of the VR scene, so as to output the video pictures in the video playback interface.
Some embodiments of the present application also provide an apparatus for playing video based on a VR head-mounted device, including:
a display module, configured to display, according to a request message by which a user requests to watch video, a video information interface in a VR scene, the video information interface containing several video files;
a determining module, configured to determine, in response to an operation of selecting a target video file from the several video files, the target video file to be played;
a calling module, configured to call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene;
an output module, configured to display a video playback interface in the VR scene and render the video data into the video playback interface to obtain video pictures.
Optionally, the apparatus further includes a judging module;
the display module is further configured to display, in the VR scene, at least one VR function area, the at least one VR function area including a video playback area;
the judging module is configured to judge, according to the user's gaze point in the VR scene, whether the user selects the video playback area;
the determining module is further configured to determine, when the user selects the video playback area, that the user requests to watch video.
Optionally, the judging module is specifically configured to: track the user's gaze point based on eye-tracking technology; when the tracked gaze point is located in the video playback area, record a first time length for which the gaze point stays in the video playback area; and when the first time length exceeds a set first duration threshold, determine that the user selects the video playback area.
Optionally, the VR scene further includes a selection interaction area associated with the video playback area, and the judging module is specifically configured to: track the user's gaze point based on eye-tracking technology; when the tracked gaze point is located in the selection interaction area, record a second time length for which the gaze point stays in the selection interaction area; and when the second time length exceeds a set second duration threshold, determine that the user selects the video playback area.
Optionally, the determining module is specifically configured to: track the user's gaze point on the video information interface based on eye-tracking technology; when the tracked gaze point corresponds to a video file, record a third time length for which the gaze point stays on that video file; and when the third time length exceeds a set third duration threshold, determine that the video file corresponding to the gaze point is the target video file.
Some embodiments of the present application also provide a VR head-mounted device, including a memory and a processor. The memory is configured to store a program, and the processor is configured to execute the program in the memory, so as to:
display, according to a request message by which a user requests to watch video, a video information interface in a VR scene, the video information interface containing several video files;
determine, in response to an operation of selecting a target video file from the several video files, the target video file to be played;
call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene;
display a video playback interface in the VR scene, and render the video data into the video playback interface to obtain video pictures.
Optionally, the processor is further configured to:
display, in the VR scene, at least one VR function area, the at least one VR function area including a video playback area;
judge, according to the user's gaze point in the VR scene, whether the user selects the video playback area;
when the user selects the video playback area, determine that the user requests to watch video.
Optionally, when judging whether the user selects the video playback area, the processor is specifically configured to:
track the user's gaze point based on eye-tracking technology;
when the tracked gaze point is located in the video playback area, record a first time length for which the gaze point stays in the video playback area;
when the first time length exceeds a set first duration threshold, determine that the user selects the video playback area.
Optionally, the decoder is an XBMC decoder.
Optionally, when calling the decoder, the processor is specifically configured to:
create an OpenGL texture object, and pass the texture object into the dynamic link library;
create an OpenGL frame buffer object in the dynamic link library, and attach the texture object to the frame buffer object;
call the decoder to decode the target video file frame by frame and output each decoded frame of video data to the frame buffer object, whereby each frame of video data is automatically output onto the texture object.
In the embodiments of the present application, a communication bridge between the VR scene and the decoder is established through a dynamic link library; the decoder decodes a video file into video data adapted to the VR scene, and video pictures can then be rendered in the VR scene according to that adapted video data, so that video can be played in the VR scene, meeting the user's need to watch video in a VR scene.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present application and form a part of it. The illustrative embodiments of the application and their description are used to explain the application and do not constitute an improper limitation of it. In the drawings:
Fig. 1a is a flow chart of a method for playing video based on a VR head-mounted device provided by some embodiments of the application;
Fig. 1b is a flow chart of a method for playing video based on a VR head-mounted device provided by other embodiments of the application;
Fig. 2a is a flow chart of a method for playing video based on a VR head-mounted device provided by still other embodiments of the application;
Fig. 2b is a schematic diagram of one arrangement of at least one VR function area provided by other embodiments of the application;
Fig. 2c is a schematic diagram of another arrangement of at least one VR function area provided by other embodiments of the application;
Fig. 3 is a schematic structural diagram of an apparatus for playing video based on a VR head-mounted device provided by other embodiments of the application;
Fig. 4 is a schematic structural diagram of an apparatus for playing video based on a VR head-mounted device provided by other embodiments of the application;
Fig. 5 is a schematic structural diagram of a VR head-mounted device provided by other embodiments of the application.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in some embodiments of the application are described below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only a part of the embodiments of the application, not all of them. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments in the application and without creative work, fall within the scope of protection of the application.
To meet the user's need to watch video in a VR scene, some embodiments of the application provide a solution that can play video in a VR scene. Its basic principle is as follows: a communication bridge between a VR scene developed with Unity3D and a decoder is established through a dynamic link library (DLL); the dynamic link library can be called in the Unity3D-developed VR scene, and the decoder is used to decode a video file into video data adapted to the VR scene; video pictures can then be rendered in the VR scene according to that adapted video data, so that video can be played in the VR scene, meeting the user's need to watch video in a VR scene.
In some embodiments, the VR scene can be developed in advance, before video is played in it. For example, the VR scene may be developed with Unity3D. In the course of developing the VR scene with Unity3D, a dynamic link library is created, the decoder is implemented inside the dynamic link library, and a method for calling the decoder is implemented, so that the decoder can be accessed from the VR scene through the dynamic link library; the dynamic link library thereby becomes the communication bridge between the VR scene and the decoder. When video is played in the Unity3D-developed VR scene, the decoder can be called through the dynamic link library to decode the video file into video data adapted to the VR scene, so that the video can be played in the VR scene. The method implemented in the dynamic link library for calling the decoder can be realized in many ways, and any implementation capable of calling the decoder is applicable to some embodiments of the application. One method logic for calling the decoder implemented in a dynamic link library will be explained in some subsequent embodiments and is not repeated here.
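The bridge arrangement above, in which the scene never touches the decoder directly but only calls an entry point exported by the dynamic link library, can be sketched as follows. This is a minimal plain-Python model under stated assumptions (a real Unity3D project would load a native plugin and call it via P/Invoke); every class and method name here is illustrative, not from the patent.

```python
class Decoder:
    """Stand-in for a native decoder (e.g. XBMC) compiled into the DLL."""
    def decode(self, video_file):
        # Pretend each character of the filename is one decoded frame.
        return [f"frame<{c}>" for c in video_file]

class DynamicLinkLibrary:
    """Models the DLL that bridges the VR scene and the decoder:
    the decoder lives inside the library, and the scene may only call
    the library's exported function."""
    def __init__(self):
        self._decoder = Decoder()

    def decode_for_scene(self, video_file):
        # Exported entry point the VR scene is allowed to call.
        return self._decoder.decode(video_file)

class VRScene:
    def __init__(self, dll):
        self.dll = dll  # the scene holds only a handle to the bridge

    def play(self, video_file):
        frames = self.dll.decode_for_scene(video_file)
        return len(frames)  # number of frames handed over for rendering

scene = VRScene(DynamicLinkLibrary())
print(scene.play("abc"))  # → 3
```

The point of the shape is isolation: swapping in a different decoder only changes what the library wraps, not the scene code.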
Fig. 1a is a flow chart of a method for playing video based on a VR head-mounted device provided by some embodiments of the application. As shown in Fig. 1a, the method includes:
10a, displaying, according to a request message by which a user requests to watch video, a video information interface in a VR scene, the video information interface containing several video files.
10b, determining, in response to an operation of selecting a target video file from the several video files, the target video file to be played.
10c, calling a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene.
10d, displaying a video playback interface in the VR scene, and rendering the video data into the video playback interface to obtain video pictures.
In step 10a, the user wears the VR head-mounted device, and the device displays a VR scene to the user. According to a request message by which the user requests to watch video, the VR head-mounted device can display a video information interface to the user in the VR scene; the video information interface contains several video files, so that the user can select from them the target video file to watch.
In step 10b, the user can select the target video file to watch from the several video files. The VR head-mounted device, in response to the operation of selecting a target video file from the several video files, can determine the target video file to be played.
In step 10c, after the target video file is determined, the VR head-mounted device can call the decoder in the dynamic link library, and the decoder decodes the target video file into video data adapted to the VR scene.
Then, in step 10d, the VR head-mounted device displays a video playback interface in the VR scene and renders the video data decoded by the decoder into the video playback interface to obtain video pictures. The user can watch the video pictures being played in the video playback interface in the VR scene, achieving the purpose of watching video in a VR scene.
The embodiments of the present application do not limit the type of decoder; any decoder can be used, such as a MediaPlayer decoder, an MPC decoder, or an XBMC decoder. Different decoders support different video formats, and the decoder can be selected according to the video formats the VR scene needs to support.
XBMC is a free and open-source (GPL) media center software. It can play nearly all popular audio and audio/video formats, supports various network media protocols for playing network media, supports hardware decoding, and supports multiple platforms. Accordingly, some embodiments of the application use an XBMC decoder, drawing on XBMC's support for a wide variety of audio/video formats, so that video files of multiple formats can be played in the Unity3D-developed VR scene. For example, the embodiment shown in Fig. 1b uses an XBMC decoder.
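The selection rule just described, choosing the decoder by the formats the scene must support, can be sketched as a lookup. The format sets below are illustrative assumptions, not an authoritative list of what each decoder supports.

```python
# Illustrative (assumed) format support per decoder, not authoritative.
SUPPORTED = {
    "MediaPlayer": {"mp4", "3gp"},
    "XBMC": {"mp4", "3gp", "rmvb", "rm", "mov"},
}

def pick_decoder(required_formats):
    """Return the first decoder whose supported set covers every
    format the VR scene needs, or None if no decoder qualifies."""
    for name, formats in SUPPORTED.items():
        if required_formats <= formats:
            return name
    return None

print(pick_decoder({"mp4", "rmvb"}))  # → XBMC
```

Under these assumed sets, a scene that only needs Android-native formats could use the platform decoder, while a scene that must also cover rmvb-style formats ends up on XBMC, matching the reasoning in the text.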
Fig. 1b is a flow chart of a method for playing video based on a VR head-mounted device provided by other embodiments of the application. As shown in Fig. 1b, the method includes:
101, displaying, according to a request message by which a user requests to watch video, a video information interface in a VR scene, the video information interface containing several video files.
102, determining, in response to an operation of selecting a target video file from the several video files, the target video file to be played.
103, calling an XBMC decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene.
104, displaying a video playback interface in the VR scene, and rendering the video data into the video playback interface to obtain video pictures.
In step 101, the user wears the VR head-mounted device, and the device displays a VR scene to the user. It is worth noting that the VR scenes in some embodiments of the application mainly refer to VR scenes realized with Unity3D; to simplify the description, some passages refer to them simply as VR scenes. The user can request to watch video in the VR scene. According to the request message by which the user requests to watch video, the VR head-mounted device can display a video information interface to the user in the VR scene; the video information interface contains several video files, so that the user can select from them the target video file to watch.
In some embodiments, the video files contained on the video information interface may be local video files, or video files shared over network protocols such as Server Message Block (SMB), Universal Plug and Play (UPnP) and File Transfer Protocol (FTP). The video formats involved in the several video files include video formats supported by the Android platform, such as mp4 and 3gp, and may also include video formats not supported by the Android platform, such as wov, rmvb, rm and tb. Because the Unity3D-developed VR scene is combined with XBMC technology, the user can choose to play video files of various video formats.
In step 102, the user can select the video file to watch from the video files of various video formats. The VR head-mounted device, in response to the operation of selecting a video file from the video files of various video formats, can determine the target video file to be played. The target video file is the video file selected from the video files of various video formats. This embodiment does not limit the operation of selecting a video file from the video files of various video formats; any operation mode recognizable by the VR head-mounted device is applicable to the application.
In step 103, after the target video file is determined, the VR head-mounted device can call the XBMC decoder in the dynamic link library, and the XBMC decoder decodes the target video file into video data adapted to the VR scene.
Then, in step 104, the VR head-mounted device displays a video playback interface in the VR scene and renders the video data decoded by the XBMC decoder into the video playback interface to obtain video pictures. The user can watch the video pictures being played in the video playback interface in the VR scene, achieving the purpose of watching video in a VR scene.
It can be seen that, in this embodiment, a communication bridge between the Unity3D-developed VR scene and the XBMC decoder is established through a dynamic link library; the XBMC decoder decodes the video file into video data adapted to the VR scene, and video pictures can then be rendered in the VR scene according to that adapted video data, so that video can be played in the Unity3D-developed VR scene, meeting the user's need to watch video in a VR scene.
In some embodiments, when the user requests to watch video in the VR scene, the VR head-mounted device displays the video information interface to the user in the VR scene. Before that, it can first be judged whether the user requests to watch video in the VR scene. On this basis, the method for playing video based on a VR head-mounted device provided by still other embodiments of the application, as shown in Fig. 2a, includes the following steps:
200, displaying, in a VR scene, at least one VR function area, the at least one VR function area including a video playback area.
The user wears the VR head-mounted device, and the device displays a VR scene to the user. The VR scene contains at least one VR function area. A VR function area corresponds to a VR function that the VR head-mounted device can provide to the user. For example, the video playback area among the at least one VR function area indicates that the VR head-mounted device can provide the user with a video playback function. For another example, a VR game area among the at least one VR function area indicates that the VR head-mounted device can provide the user with a game function. In addition, the at least one VR function area may also include a function area for urban planning, a function area for interior design, a function area for industrial simulation, a function area for education and training, and the like.
In some embodiments, the at least one VR function area can be arranged in sequence from left to right in the user's field of view, as shown in Fig. 2b. Alternatively, the at least one VR function area can be arranged in list style in the user's field of view, as shown in Fig. 2c. The arrangement of the at least one VR function area is not limited to those shown in Figs. 2b and 2c. In addition, the styles of the VR scenes shown in Figs. 2b and 2c are only examples and are not limiting.
201, judging, according to the user's gaze point in the VR scene, whether the user selects the video playback area; if the judgment result is yes, performing step 202; if the judgment result is no, ending this operation or entering another VR function processing flow. The other VR function processing flows here refer to the processing flows of other VR functions, i.e. the VR functions corresponding to the VR function areas other than the video playback area.
In some embodiments, the user can select the VR function he or she needs from the at least one VR function area by turning the head and/or the eyes. When selecting a VR function, the user can focus the eyes on the VR function area corresponding to that function; the point on which the user's eyes focus in the VR scene is the gaze point.
The user's gaze point in the VR scene can be tracked by eye-tracking technology. Eye-tracking technology can extract eye features according to changes in the characteristics of the eyeball and its surroundings or changes in the iris angle, or by actively projecting light beams such as infrared rays onto the iris, and then determine the user's gaze point in the VR scene according to the eye features and the VR scene.
In some embodiments, the implementation of step 201 includes: tracking the user's gaze point based on eye-tracking technology; when the tracked gaze point is located in the video playback area, recording a first time length for which the gaze point stays in the video playback area; then judging whether the first time length exceeds a set first duration threshold; and if it does, determining that the user selects the video playback area. Put simply, the user can gaze at the video playback area for a certain time, thereby sending the VR head-mounted device an instruction to select the video playback area, and the VR head-mounted device can determine from this behaviour that the user selects the video playback area.
In some embodiments, besides the at least one VR function area, the VR scene also contains a selection interaction area corresponding to each VR function area. The user can select a VR function area through these selection interaction areas. For the video playback area, the user can select it through the selection interaction area associated with the video playback area in the VR scene. On this basis, the implementation of step 201 includes: tracking the user's gaze point based on eye-tracking technology; when the tracked gaze point is located in the selection interaction area, recording a second time length for which the gaze point stays in the selection interaction area; then judging whether the second time length exceeds a set second duration threshold; and if it does, determining that the user selects the video playback area.
202. Determine that the user requests to watch a video in the VR scene, and proceed to step 203.
203. Display a video information interface in the VR scene, the video information interface containing several video files.
204. In response to an operation of selecting a target video file from the several video files, determine the target video file to be played.
205. Call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene.
206. Display a video playback interface in the VR scene, and render the video data into the video playback interface to obtain a video picture.
In some embodiments, the VR scene is a 360-degree panoramic view. Based on this, in step 203 the video information interface may be displayed on the arc-shaped area of the 360-degree panoramic view that corresponds to the center of the user's viewing angle. In this way, the user can see the video information interface without turning, or without substantially turning, the head, making it easy to select the target video file quickly and thereby improving video playback efficiency.
In some embodiments, the video information interface may include multiple regions. For example, it may include a list region for displaying the title of each video file, or a thumbnail region for displaying a thumbnail of each video file. It may also include a search field through which the user can search for video files; the search terms supported by the search field may include information such as the video format, genre, title, actor names and release date of the video files.
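A search field of this kind can be sketched as a simple match over the supported fields. The VideoFile record and the substring-matching rule below are illustrative assumptions, not taken from the patent, which leaves the matching logic unspecified.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical record for one entry on the video information interface.
struct VideoFile {
    std::string title;
    std::string format;  // e.g. "mp4", "mkv"
    std::string actors;  // actor names
};

// Stand-in search: a file matches when the term appears as a substring in
// any of the fields the search field supports (title, format, actor names).
std::vector<VideoFile> searchVideos(const std::vector<VideoFile>& files,
                                    const std::string& term) {
    std::vector<VideoFile> hits;
    for (const auto& f : files) {
        if (f.title.find(term) != std::string::npos ||
            f.format.find(term) != std::string::npos ||
            f.actors.find(term) != std::string::npos) {
            hits.push_back(f);
        }
    }
    return hits;
}
```

A fuller implementation would normalize case and also index genre and release date, as the text suggests.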
In some embodiments, in step 10d, step 104 or step 206, a screen may likewise be displayed on the arc-shaped area of the 360-degree panoramic view that corresponds to the center of the user's viewing angle, to serve as the video playback interface. In this way, the user can see the video picture without turning, or without substantially turning, the head, which helps improve the viewing experience.
In some embodiments, in step 10b, step 102 or step 204, the target video file may be determined according to the user's focus point. For example, the user's focus point on the video information interface may be tracked based on the eye-tracking technique; when the tracked focus point corresponds to a video file, a third time length for which the user's focus point stays on that video file is recorded; and when the third time length is greater than a set third duration threshold, the video file corresponding to the user's focus point is determined to be the target video file.
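Unlike the single-area dwell check earlier, this selection has to track which file the gaze rests on and restart the timer whenever the gaze moves to a different file. One way that per-file behavior might look, with an illustrative index-based interface not taken from the patent:

```cpp
#include <cassert>

// Hypothetical picker for gaze-based file selection: each eye-tracking
// sample reports which video file (by index, -1 for none) the focus point
// currently corresponds to. Moving the gaze to a different file restarts
// the third time length; a file becomes the target once the focus has
// stayed on it longer than the third duration threshold.
class GazeFilePicker {
public:
    explicit GazeFilePicker(long threshold_ms) : threshold_ms_(threshold_ms) {}

    // Returns the index of the chosen target file, or -1 if none yet.
    int update(long now_ms, int focused_file) {
        if (focused_file != current_) {   // gaze moved: restart the timer
            current_ = focused_file;
            start_ms_ = now_ms;
        }
        if (current_ >= 0 && now_ms - start_ms_ > threshold_ms_)
            return current_;              // third time length exceeded
        return -1;
    }

private:
    long threshold_ms_;
    long start_ms_ = 0;
    int current_ = -1;
};
```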
In some embodiments, the VR headset is equipped with a touch pad. In step 10b, step 102 or step 204, the user can then select the target video file to be watched from the video files by turning the head and/or eyes, and send an instruction confirming the selection through the touch pad. On its side, the VR headset can track the user's focus point on the video information interface based on the eye-tracking technique; when the tracked focus point corresponds to a video file, wait to receive the confirmation instruction sent by the user through the touch pad; and, upon receiving the confirmation instruction, determine that the user has chosen the video file corresponding to the focus point as the target video file.
In some embodiments, the VR headset is equipped with a handle. In step 10b, step 102 or step 204, the user can then use the handle to select the desired target video file from the video files and to issue an instruction confirming the selection. The VR headset can detect the operation of the handle and recognize the instruction it sends, thereby determining the video file chosen by the user as the target video file.
In some embodiments, one implementation of step 10c, step 103 or step 205 is in fact the method logic, implemented in the dynamic link library, for calling the decoder (such as an XBMC decoder), and proceeds as follows.
After the target video file is determined, an OpenGL texture (Texture) object is first created and passed into the dynamic link library. The texture object is mainly used to store the texture data required by each frame of the video, for example information such as the pixel data of the texture, the texture size and the texture parameters. The dynamic link library is created during the development of the VR scene based on Unity3D.
Then, an OpenGL frame buffer object (Frame Buffer Object, FBO) is created in the dynamic link library, and the texture object is attached to the frame buffer object. The frame buffer object allows the XBMC decoder to output its decoding result to the frame buffer object rather than to the screen. Moreover, once the texture object is attached to the frame buffer object, the frame buffer object can also output the decoding result of the XBMC decoder to the texture object, providing the basis for rendering the video picture in the VR scene.
Afterwards, the decoder (such as the XBMC decoder) is called to decode the target video file frame by frame and to output each decoded frame of video data to the frame buffer object, from which each frame of video data is automatically output onto the texture object. Here, each frame of video data mainly refers to the texture data required by the corresponding frame of the video, for example information such as the pixel data of the texture, the texture size and the texture parameters. In this embodiment, the decoder is implemented in the dynamic link library, so its entire life cycle (e.g. initialization, looping and termination) can be monitored in the dynamic link library, making it easy to control the operating rhythm of the decoder.
Optionally, the dynamic link library and the decoder may be written in the C++ language, but this is not limiting.
Further, based on the texture object created above, in step 10d, step 104 or step 206 the texture object may be sent into a rectangular window corresponding to the video playback interface, and OpenGL may be called to render the texture object in the rectangular window in the VR scene, so as to output the video picture in the video playback interface. The rectangular window is the embodiment of the video playback interface on the screen of the VR headset; that is, a rectangular window is created on the screen of the VR headset, and rendering it yields the video playback interface in the VR scene.
In the above process, whatever the format of the target video file, each frame of the video is decoded onto the texture object by the decoder, and by calling OpenGL to render the texture object frame by frame, the changing video picture can be seen in the VR scene, achieving the effect of watching the video in the VR scene.
In some embodiments, the VR scene further includes a control interaction area associated with the video playback interface. Optionally, the control interaction area is located in some region of the video playback interface, such as its upper-left, upper-right, lower-left or lower-right corner. Alternatively, the control interaction area is located in a region of the VR scene other than the video playback interface, such as the region below, to the left of, to the right of or above the video playback interface.
The control interaction area includes at least one kind of playback control. Through these playback controls, the user can control the playback state of the video picture in the video playback interface. For example, the at least one kind of playback control may include, but is not limited to, a fast-forward control, a pause control, a stop control, a volume-adjustment control and a small-window playback control. Based on this, while a video is playing in the VR scene, the VR headset may also track the user's focus point using the eye-tracking technique, determine from the user's focus point in the VR scene the target playback control that the user selects from the at least one kind of playback control, and then control the playback state of the video picture according to the control event associated with the target playback control.
For example, when the user selects the fast-forward control, the VR headset may speed up the playback of the video in the video playback interface. When the user selects the pause control, the VR headset may pause the video being played in the video playback interface. When the user selects the volume-adjustment control, the VR headset may raise or lower the playback volume of the video in the video playback interface. In this embodiment, by providing at least one kind of playback control through the control interaction area, the user can flexibly control the playback state of the video picture according to his or her own viewing needs, which helps improve the user experience.
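Dispatching the control event of the selected playback control might look like the sketch below. The PlaybackState fields, the enum names and the concrete effects (doubling speed, 10-point volume steps) are illustrative assumptions; the patent only says the headset applies the control event associated with the target playback control.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative playback state of the video playback interface.
struct PlaybackState {
    bool paused = false;
    double speed = 1.0;  // 1.0 = normal, >1.0 = fast-forward
    int volume = 50;     // 0..100
};

// Kinds of playback controls in the control interaction area (the patent
// lists fast-forward, pause, stop, volume and small-window controls).
enum class Control { FastForward, Pause, VolumeUp, VolumeDown };

// Apply the control event associated with the target playback control
// that the user's focus point selected.
void applyControl(PlaybackState& s, Control c) {
    switch (c) {
        case Control::FastForward: s.speed *= 2.0; break;
        case Control::Pause:       s.paused = !s.paused; break;
        case Control::VolumeUp:    s.volume = std::min(100, s.volume + 10); break;
        case Control::VolumeDown:  s.volume = std::max(0, s.volume - 10); break;
    }
}
```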
Fig. 3 is a schematic structural diagram of an apparatus for playing video based on a VR headset provided by other embodiments of the application. As shown in Fig. 3, the apparatus includes a display module 31, a determining module 32, a calling module 33 and an output module 34.
The display module 31 is configured to display, according to a request message by which a user requests to watch a video, a video information interface in a VR scene, the video information interface containing several video files. Optionally, the VR scene may be a VR scene developed with Unity3D technology.
The determining module 32 is configured to determine the target video file to be played in response to an operation of selecting a target video file from the several video files.
The calling module 33 is configured to call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene.
The output module 34 is configured to display a video playback interface in the VR scene and to render the video data into the video playback interface to obtain a video picture.
In an optional embodiment, as shown in Fig. 4, the apparatus further includes a judging module 35.
The display module 31 is further configured to display at least one VR function area in the VR scene before the video information interface is displayed, the at least one VR function area including a video playback area. Correspondingly, the judging module 35 is configured to judge, according to the user's focus point in the VR scene, whether the user selects the video playback area. The determining module 32 is further configured to determine that the user requests to watch a video when the judging module 35 judges that the user selects the video playback area, and then to trigger the display module 31 to display the video information interface in the VR scene.
Still further optionally, the judging module 35 is specifically configured to: track the user's focus point based on the eye-tracking technique; when the tracked focus point of the user is located in the video playback area, record a first time length for which the user's focus point stays in the video playback area; and when the first time length is greater than a set first duration threshold, determine that the user selects the video playback area.
Still further optionally, the VR scene further includes a selection interaction area associated with the video playback area. Based on this, the judging module 35 is specifically configured to: track the user's focus point based on the eye-tracking technique; when the tracked focus point of the user is located in the selection interaction area, record a second time length for which the user's focus point stays in the selection interaction area; and when the second time length is greater than a set second duration threshold, determine that the user selects the video playback area.
In an optional embodiment, when determining the target video file to be played in response to an operation of selecting a target video file from the several video files, the determining module 32 is specifically configured to: track the user's focus point on the video information interface based on the eye-tracking technique; when the tracked focus point of the user corresponds to a video file, record a third time length for which the user's focus point stays on that video file; and when the third time length is greater than a set third duration threshold, determine that the video file corresponding to the user's focus point is the target video file.
In an optional embodiment, the VR scene is a 360-degree panoramic view, and when displaying a video playback interface in the VR scene, the output module 34 is specifically configured to display a screen on the arc-shaped area of the 360-degree panoramic view that corresponds to the center of the user's viewing angle, to serve as the video playback interface.
In an optional embodiment, the decoder may be an XBMC decoder, but is not limited thereto.
In an optional embodiment, the calling module 33 is specifically configured to: create an OpenGL texture object and pass the texture object into the dynamic link library; create an OpenGL frame buffer object in the dynamic link library and attach the texture object to the frame buffer object; and call the decoder to decode the target video file frame by frame and to output each decoded frame of video data to the frame buffer object, from which each frame of video data can be automatically output onto the texture object.
In an optional embodiment, when rendering the video data into the video playback interface to obtain the video picture, the output module 34 is specifically configured to: send the texture object into the rectangular window corresponding to the video playback interface; and call OpenGL to render the texture object in the rectangular window in the VR scene, so as to output the video picture in the video playback interface.
In an optional embodiment, the VR scene further includes a control interaction area associated with the video playback interface, the control interaction area including at least one kind of playback control. Based on this, after rendering the video data into the video playback interface to obtain the video picture, the output module 34 is further configured to: determine, according to the user's focus point in the VR scene, the target playback control that the user selects from the at least one kind of playback control; and control the playback state of the video picture according to the control event associated with the target playback control.
The apparatus provided by this embodiment can be used to execute the flow of the method for playing video based on a VR headset provided by the above embodiments; its specific working principle is not repeated here, and reference may be made to the description of the method embodiments.
With the apparatus for playing video based on a VR headset provided by this embodiment, a communication bridge between the VR scene and the decoder is established through the dynamic link library, the decoder is used to decode the video file into video data adapted to the VR scene, and the video picture can then be rendered in the VR scene according to that video data, so that video can be played in the VR scene, meeting the user's need to watch video in the VR scene.
As shown in Fig. 5, some embodiments of the application provide a VR headset. The VR headset may include a memory 51 and a processor 52.
The memory 51 is mainly used to store a program. Besides the program, the memory 51 may also be configured to store various other data to support the operation of the VR headset; examples of such data include program data, messages, pictures and the like.
The memory 51 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The processor 52 is configured to execute the program stored in the memory 51, so as to:
display, according to a request message by which a user requests to watch a video, a video information interface in a VR scene, the video information interface containing several video files;
determine, in response to an operation of selecting a target video file from the several video files, the target video file to be played;
call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene; and
display a video playback interface in the VR scene, and render the video data into the video playback interface to obtain a video picture.
In an optional embodiment, before displaying the video information interface in the VR scene, the processor 52 is further configured to:
display at least one VR function area in the VR scene, the at least one VR function area including a video playback area;
judge, according to the user's focus point in the VR scene, whether the user selects the video playback area; and
when the user selects the video playback area, determine that the user requests to watch a video.
In an optional embodiment, when judging, according to the user's focus point in the VR scene, whether the user selects the video playback area, the processor 52 is specifically configured to:
track the user's focus point based on the eye-tracking technique;
when the tracked focus point of the user is located in the video playback area, record a first time length for which the user's focus point stays in the video playback area; and
when the first time length is greater than a set first duration threshold, determine that the user selects the video playback area.
In an optional embodiment, the VR scene further includes a selection interaction area associated with the video playback area. Based on this, when judging, according to the user's focus point in the VR scene, whether the user selects the video playback area, the processor 52 is specifically configured to:
track the user's focus point based on the eye-tracking technique;
when the tracked focus point of the user is located in the selection interaction area, record a second time length for which the user's focus point stays in the selection interaction area; and
when the second time length is greater than a set second duration threshold, determine that the user selects the video playback area.
In an optional embodiment, when determining the target video file to be played in response to an operation of selecting a target video file from the several video files, the processor 52 is specifically configured to:
track the user's focus point on the video information interface based on the eye-tracking technique;
when the tracked focus point of the user corresponds to a video file, record a third time length for which the user's focus point stays on that video file; and
when the third time length is greater than a set third duration threshold, determine that the video file corresponding to the user's focus point is the target video file.
In an optional embodiment, the VR scene is a 360-degree panoramic view. Based on this, when displaying a video playback interface in the VR scene, the processor 52 is specifically configured to display a screen on the arc-shaped area of the 360-degree panoramic view that corresponds to the center of the user's viewing angle, to serve as the video playback interface.
In an optional embodiment, the VR scene further includes a control interaction area associated with the video playback interface, the control interaction area including at least one kind of playback control. Based on this, after the video picture is obtained, the processor 52 is further configured to: determine, according to the user's focus point in the VR scene, the target playback control that the user selects from the at least one kind of playback control; and control the playback state of the video picture according to the control event associated with the target playback control.
In an optional embodiment, the decoder is an XBMC decoder, but it is not limited thereto.
In an optional embodiment, when calling the decoder in the dynamic link library to decode the target video file into the video data adapted to the VR scene, the processor 52 is specifically configured to:
create an OpenGL texture object and pass the texture object into the dynamic link library;
create an OpenGL frame buffer object in the dynamic link library and attach the texture object to the frame buffer object; and
call the decoder to decode the target video file frame by frame and to output each decoded frame of video data to the frame buffer object, from which each frame of video data can be automatically output onto the texture object.
In an optional embodiment, when rendering the video data into the video playback interface to obtain the video picture, the processor 52 is specifically configured to:
send the texture object into the rectangular window corresponding to the video playback interface; and
call OpenGL to render the texture object in the rectangular window in the VR scene, so as to output the video picture in the video playback interface.
As shown in Fig. 5, besides the memory 51 and the processor 52, the VR headset may further include components such as a display screen 53, binocular lenses (Lens) 54, an inertial measurement unit (IMU) 55 and a loudspeaker 56. Fig. 5 shows only some of the components of the VR headset, not all of them; those skilled in the art will appreciate that the VR headset may further include components other than those shown in Fig. 5, and may of course also omit some of the components shown in Fig. 5.
Optionally, the display screen 53 may be an LED display, but is not limited thereto. The IMU 55 may include a gyroscope, a magnetometer, an accelerometer and so on.
With the VR headset provided by this embodiment, video can be played in the VR scene, meeting the user's need to watch video in the VR scene.
Some embodiments of the application also provide a computer storage medium storing one or more computer instructions, the one or more computer instructions being adapted to be loaded and executed by a processor to:
display, according to a request message by which a user requests to watch a video, a video information interface in a VR scene, the video information interface containing several video files;
determine, in response to an operation of selecting a target video file from the several video files, the target video file to be played;
call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene; and
display a video playback interface in the VR scene, and render the video data into the video playback interface to obtain a video picture.
In some embodiments, the computer storage medium further includes other computer instructions adapted to be loaded and executed by the processor to:
display at least one VR function area in the VR scene, the at least one VR function area including a video playback area;
judge, according to the user's focus point in the VR scene, whether the user selects the video playback area; and
when the user selects the video playback area, determine that the user requests to watch a video.
In some embodiments, the above computer instructions loaded and executed by the processor for judging, according to the user's focus point in the VR scene, whether the user selects the video playback area include instructions for:
tracking the user's focus point based on the eye-tracking technique;
when the tracked focus point of the user is located in the video playback area, recording a first time length for which the user's focus point stays in the video playback area; and
when the first time length is greater than a set first duration threshold, determining that the user selects the video playback area.
In some embodiments, the VR scene further includes a selection interaction area associated with the video playback area. Based on this, the above computer instructions loaded and executed by the processor for judging, according to the user's focus point in the VR scene, whether the user selects the video playback area include instructions for:
tracking the user's focus point based on the eye-tracking technique;
when the tracked focus point of the user is located in the selection interaction area, recording a second time length for which the user's focus point stays in the selection interaction area; and
when the second time length is greater than a set second duration threshold, determining that the user selects the video playback area.
In some embodiments, the above computer instructions loaded and executed by the processor for determining, in response to an operation of selecting a target video file from the several video files, the target video file to be played include instructions for:
tracking the user's focus point on the video information interface based on the eye-tracking technique;
when the tracked focus point of the user corresponds to a video file, recording a third time length for which the user's focus point stays on that video file; and
when the third time length is greater than a set third duration threshold, determining that the video file corresponding to the user's focus point is the target video file.
In some embodiments, the VR scene is a 360-degree panoramic view. Based on this, the above computer instructions loaded and executed by the processor for displaying a video playback interface in the VR scene include instructions for displaying a screen on the arc-shaped area of the 360-degree panoramic view that corresponds to the center of the user's viewing angle, to serve as the video playback interface.
In some embodiments, the VR scene further includes a control interaction area associated with the video playback interface, the control interaction area including at least one kind of playback control. Based on this, the computer storage medium further includes other computer instructions adapted to be loaded and executed by the processor to:
determine, according to the user's focus point in the VR scene, the target playback control that the user selects from the at least one kind of playback control; and
control the playback state of the video picture according to the control event associated with the target playback control.
In some embodiments, the decoder is an XBMC decoder.
In some embodiments, the above computer instructions loaded and executed by the processor for calling the decoder in the dynamic link library to decode the target video file into the video data adapted to the VR scene include instructions for:
creating an OpenGL texture object and passing the texture object into the dynamic link library;
creating an OpenGL frame buffer object in the dynamic link library and attaching the texture object to the frame buffer object; and
calling the decoder to decode the target video file frame by frame and to output each decoded frame of video data to the frame buffer object, from which each frame of video data can be automatically output onto the texture object.
In some embodiments, the above computer instructions loaded and executed by the processor for rendering the video data into the video playback interface to obtain the video picture include instructions for:
sending the texture object into the rectangular window corresponding to the video playback interface; and
calling OpenGL to render the texture object in the rectangular window in the VR scene, so as to output the video picture in the video playback interface.
By executing the computer instructions in the computer storage medium provided by this embodiment, a VR headset can play video in the VR scene, meeting the user's need to watch video in the VR scene.
It should be understood by those skilled in the art that the embodiments of the application may be provided as a method, a system or a computer program product. Accordingly, the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical memory) containing computer-usable program code.
The application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface and memory.
The memory may include computer-readable media in the form of volatile memory, random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprising", "including", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The foregoing describes merely embodiments of the present application and is not intended to limit the present application. For those skilled in the art, the present application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.
Claims (15)
- 1. A method for playing video based on a VR head-mounted device, characterized by comprising: displaying a video information interface in a VR scene according to a request message by which a user requests to watch a video, the video information interface including several video files; determining the target video file to be played in response to an operation of selecting a target video file from the several video files; calling a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene; and displaying a video playback interface in the VR scene and rendering the video data in the video playback interface to obtain a video picture.
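The playback flow of claim 1 (show a file list, resolve the selected target, decode via a decoder loaded from a dynamic link library, render into an in-scene playback interface) can be sketched in plain Python. All names here (`StubDecoder`, `play_video`, the `rendered(...)` tagging) are hypothetical stand-ins for illustration, not the patent's actual implementation, which would call a native decoder and a VR renderer.

```python
# Schematic sketch of the claim-1 pipeline; names are illustrative only.

class StubDecoder:
    """Stand-in for the decoder loaded from a dynamic link library."""
    def decode(self, video_file):
        # Pretend each character of the file name is one decoded frame.
        return [f"frame:{c}" for c in video_file]

def play_video(requested, video_files, decoder):
    """Return the rendered frames for the user's chosen file."""
    if requested not in video_files:            # the info interface lists
        raise ValueError("no such video file")  # the files the user may pick
    video_data = decoder.decode(requested)      # decode to scene-adapted data
    # "Render" each frame into the playback interface (here: tag it).
    return [f"rendered({frame})" for frame in video_data]

frames = play_video("ab", ["ab", "cd"], StubDecoder())
```

The stub keeps the four claimed steps (list, select, decode, render) distinct so each could be swapped for a real component.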
- 2. The method according to claim 1, characterized in that, before the video information interface is displayed in the VR scene, the method further comprises: displaying at least one VR functional area in the VR scene, the at least one VR functional area including a video playback area; judging, according to a focus point of the user in the VR scene, whether the user selects the video playback area; and when the user selects the video playback area, determining that the user requests to watch a video.
- 3. The method according to claim 2, characterized in that the judging, according to the focus point of the user in the VR scene, whether the user selects the video playback area comprises: tracking the focus point of the user based on an eyeball tracking technology; when the tracked focus point of the user is located in the video playback area, recording a first time length during which the focus point of the user is located in the video playback area; and when the first time length is greater than a set first duration threshold, determining that the user selects the video playback area.
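The dwell-time test of claim 3 (select the playback area once the tracked gaze has rested on it longer than a threshold) reduces to a small timer over gaze samples. The sample format, region names, and the 1.5-second threshold below are assumptions for illustration; the claim leaves the threshold unspecified.

```python
def gaze_select(samples, region, dwell_threshold):
    """Return True once consecutive gaze samples inside `region`
    span more than `dwell_threshold` seconds.

    `samples` is a list of (timestamp_seconds, region_name) pairs,
    as might come from an eye tracker; leaving the region resets
    the dwell timer.
    """
    dwell_start = None
    for t, r in samples:
        if r == region:
            if dwell_start is None:
                dwell_start = t              # gaze entered the region
            if t - dwell_start > dwell_threshold:
                return True                  # first duration threshold exceeded
        else:
            dwell_start = None               # gaze left: reset the timer
    return False

samples = [(0.0, "menu"), (0.2, "play_area"), (0.8, "play_area"),
           (1.9, "play_area")]
selected = gaze_select(samples, "play_area", 1.5)
```

The same loop, with different regions and thresholds, covers the second and third time lengths of claims 4 and 5.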
- 4. The method according to claim 2, characterized in that the VR scene further includes a selection interactive area associated with the video playback area, and the judging, according to the focus point of the user in the VR scene, whether the user selects the video playback area comprises: tracking the focus point of the user based on an eyeball tracking technology; when the tracked focus point of the user is located in the selection interactive area, recording a second time length during which the focus point of the user is located in the selection interactive area; and when the second time length is greater than a set second duration threshold, determining that the user selects the video playback area.
- 5. The method according to claim 1, characterized in that the determining the target video file to be played in response to an operation of selecting a target video file from the several video files comprises: tracking a focus point of the user on the video information interface based on an eyeball tracking technology; when the tracked focus point of the user corresponds to a video file, recording a third time length during which the focus point of the user corresponds to the video file; and when the third time length is greater than a set third duration threshold, determining that the video file corresponding to the focus point of the user is the target video file.
- 6. The method according to claim 1, characterized in that the VR scene is a 360-degree panoramic view, and displaying a video playback interface in the VR scene comprises: displaying a screen, as the video playback interface, on an arc-shaped area of the 360-degree panoramic view corresponding to the center of the user's viewing angle.
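Placing the playback screen on the arc of the panoramic view centred on the user's viewing direction, as claim 6 describes, amounts to positioning the screen at a fixed radius along the gaze vector. The spherical-to-Cartesian conversion below is a standard geometric sketch, not the patent's own geometry; the yaw/pitch convention and the radius of 5 units are assumptions.

```python
import math

def screen_center_on_sphere(yaw_deg, pitch_deg, radius):
    """Point at `radius` along the user's view direction, i.e. the
    centre of the arc-shaped area where the playback screen is shown.
    Yaw rotates about the vertical axis; pitch is above the horizon."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

center = screen_center_on_sphere(0.0, 0.0, 5.0)  # looking straight ahead
```

Re-evaluating this as the head pose changes would keep the screen centred in the user's field of view.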
- 7. The method according to claim 1, characterized in that the VR scene further includes a control interactive area associated with the video playback interface, the control interactive area including at least one type of playback control, and, after the video picture is obtained, the method further comprises: determining, according to a focus point of the user in the VR scene, a target playback control selected by the user from the at least one type of playback control; and controlling a playback state of the video picture according to a control event associated with the target playback control.
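Claim 7 maps each gaze-selected playback control to an associated control event that changes the playback state; a dictionary dispatch captures the idea. The control names and the minimal `Player` state machine are assumptions for illustration only.

```python
class Player:
    """Minimal playback-state holder for the sketch."""
    def __init__(self):
        self.state = "playing"

    def pause(self):  self.state = "paused"
    def resume(self): self.state = "playing"
    def stop(self):   self.state = "stopped"

def handle_gaze_control(player, selected_control):
    """Fire the control event associated with the target playback
    control the user selected by gaze."""
    events = {"pause": player.pause,
              "play": player.resume,
              "stop": player.stop}
    events[selected_control]()     # unknown control names raise KeyError
    return player.state

p = Player()
state = handle_gaze_control(p, "pause")
```

Which control the gaze selected would itself come from the dwell-time test of claim 3.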
- 8. The method according to claim 1, characterized in that the decoder is an XBMC decoder.
- 9. The method according to any one of claims 1-8, characterized in that the calling a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene comprises: creating an OpenGL texture object and passing the texture object into the dynamic link library; creating an OpenGL frame buffer object in the dynamic link library and attaching the texture object to the frame buffer object; and calling the decoder to decode the target video file frame by frame and output each decoded frame of video data to the frame buffer object, wherein each frame of video data is automatically output onto the texture object.
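The render-to-texture arrangement of claim 9 (attach a texture to a frame buffer object so that frames written to the FBO land on the texture automatically) can be modelled without a GPU. The classes below only mimic that data flow; real code would use OpenGL calls such as glGenTextures and glFramebufferTexture2D, and everything here is a simplified illustration.

```python
class Texture:
    """Stand-in for an OpenGL texture object."""
    def __init__(self):
        self.pixels = None

class FrameBuffer:
    """Stand-in for an OpenGL frame buffer object: anything written
    to the FBO lands on its attached texture, which is the behaviour
    claim 9 relies on."""
    def __init__(self):
        self.attachment = None

    def attach(self, texture):
        self.attachment = texture

    def write(self, frame):
        self.attachment.pixels = frame   # frame appears on the texture

def decode_into_texture(frames, fbo):
    """Output each decoded frame to the FBO; return what the attached
    texture holds after the last frame."""
    for frame in frames:
        fbo.write(frame)
    return fbo.attachment.pixels

tex = Texture()
fbo = FrameBuffer()
fbo.attach(tex)
last = decode_into_texture(["f0", "f1", "f2"], fbo)
```

The point of the indirection is that the renderer (claim 10) only ever samples the texture; the decoder never touches the playback interface directly.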
- 10. The method according to claim 9, characterized in that the rendering the video data in the video playback interface to obtain a video picture comprises: sending the texture object into a rectangular window corresponding to the video playback interface; and rendering the texture object in the rectangular window in the VR scene by calling OpenGL, so as to output the video picture in the video playback interface.
- 11. A VR head-mounted device, characterized by comprising a memory and a processor, wherein: the memory is configured to store a program; and the processor is configured to execute the program in the memory so as to: display a video information interface in a VR scene according to a request message by which a user requests to watch a video, the video information interface including several video files; determine the target video file to be played in response to an operation of selecting a target video file from the several video files; call a decoder in a dynamic link library to decode the target video file into video data adapted to the VR scene; and display a video playback interface in the VR scene and render the video data in the video playback interface to obtain a video picture.
- 12. The VR head-mounted device according to claim 11, characterized in that the processor is further configured to: display at least one VR functional area in the VR scene, the at least one VR functional area including a video playback area; judge, according to a focus point of the user in the VR scene, whether the user selects the video playback area; and when the user selects the video playback area, determine that the user requests to watch a video.
- 13. The VR head-mounted device according to claim 12, characterized in that, in judging whether the user selects the video playback area, the processor is specifically configured to: track the focus point of the user based on an eyeball tracking technology; when the tracked focus point of the user is located in the video playback area, record a first time length during which the focus point of the user is located in the video playback area; and when the first time length is greater than a set first duration threshold, determine that the user selects the video playback area.
- 14. The VR head-mounted device according to claim 11, characterized in that the decoder is an XBMC decoder.
- 15. The VR head-mounted device according to any one of claims 11-14, characterized in that, in calling the decoder, the processor is specifically configured to: create an OpenGL texture object and pass the texture object into the dynamic link library; create an OpenGL frame buffer object in the dynamic link library and attach the texture object to the frame buffer object; and call the decoder to decode the target video file frame by frame and output each decoded frame of video data to the frame buffer object, wherein each frame of video data is automatically output onto the texture object.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710631060.7A CN107396086A (en) | 2017-07-28 | 2017-07-28 | Method for playing video based on VR head-mounted device, and VR head-mounted device |
PCT/CN2017/098115 WO2019019232A1 (en) | 2017-07-28 | 2017-08-18 | Method for playing video based on vr head-mounted device, and vr head-mounted device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710631060.7A CN107396086A (en) | 2017-07-28 | 2017-07-28 | Method for playing video based on VR head-mounted device, and VR head-mounted device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107396086A true CN107396086A (en) | 2017-11-24 |
Family
ID=60341383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710631060.7A Pending CN107396086A (en) | 2017-07-28 | 2017-07-28 | The method and VR helmets of video are played based on VR helmets |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107396086A (en) |
WO (1) | WO2019019232A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109491508A (en) * | 2018-11-27 | 2019-03-19 | 北京七鑫易维信息技术有限公司 | Method and apparatus for determining a gazed object |
CN110290409A (en) * | 2019-07-26 | 2019-09-27 | 浙江开奇科技有限公司 | Data processing method, VR equipment and system |
CN108668168B (en) * | 2018-05-28 | 2020-10-09 | 烽火通信科技股份有限公司 | Android VR video player based on Unity3D and design method thereof |
CN112567759A (en) * | 2018-04-11 | 2021-03-26 | 阿尔卡鲁兹公司 | Digital media system |
CN114615528A (en) * | 2020-12-03 | 2022-06-10 | 中移(成都)信息通信科技有限公司 | VR video playing method, system, device and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103502981A (en) * | 2011-05-09 | 2014-01-08 | 谷歌公司 | Contextual video browsing |
US20140016908A1 (en) * | 2011-04-04 | 2014-01-16 | Hitachi Maxell, Ltd. | Video display system, display apparatus, and display method |
CN106020461A (en) * | 2016-05-13 | 2016-10-12 | 陈盛胜 | Video interaction method based on eyeball tracking technology |
CN106131539A (en) * | 2016-06-30 | 2016-11-16 | 乐视控股(北京)有限公司 | Virtual reality device and video playback method thereof |
CN106527857A (en) * | 2016-10-10 | 2017-03-22 | 成都斯斐德科技有限公司 | Virtual reality-based panoramic video interaction method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3143774A4 (en) * | 2014-05-13 | 2018-04-25 | PCP VR Inc. | Method, system and apparatus for generation and playback of virtual reality multimedia |
CN104596523B (en) * | 2014-06-05 | 2019-05-07 | 腾讯科技(深圳)有限公司 | Street view destination guidance method and device |
KR20160033376A (en) * | 2014-09-18 | 2016-03-28 | (주)에프엑스기어 | Head-mounted display controlled by line of sight, method for controlling the same and computer program for controlling the same |
CN105872515A (en) * | 2015-01-23 | 2016-08-17 | 上海乐相科技有限公司 | Video playing control method and device |
JP6831840B2 (en) * | 2015-10-20 | 2021-02-17 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | Selection of virtual objects in 3D space |
2017
- 2017-07-28 CN CN201710631060.7A patent/CN107396086A/en active Pending
- 2017-08-18 WO PCT/CN2017/098115 patent/WO2019019232A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140016908A1 (en) * | 2011-04-04 | 2014-01-16 | Hitachi Maxell, Ltd. | Video display system, display apparatus, and display method |
CN103502981A (en) * | 2011-05-09 | 2014-01-08 | 谷歌公司 | Contextual video browsing |
CN106020461A (en) * | 2016-05-13 | 2016-10-12 | 陈盛胜 | Video interaction method based on eyeball tracking technology |
CN106131539A (en) * | 2016-06-30 | 2016-11-16 | 乐视控股(北京)有限公司 | Virtual reality device and video playback method thereof |
CN106527857A (en) * | 2016-10-10 | 2017-03-22 | 成都斯斐德科技有限公司 | Virtual reality-based panoramic video interaction method |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112567759A (en) * | 2018-04-11 | 2021-03-26 | 阿尔卡鲁兹公司 | Digital media system |
US11589110B2 (en) | 2018-04-11 | 2023-02-21 | Alcacruz Inc. | Digital media system |
CN112567759B (en) * | 2018-04-11 | 2023-09-29 | 阿尔卡鲁兹公司 | Digital media system supporting multiple features regarding virtual reality content |
CN108668168B (en) * | 2018-05-28 | 2020-10-09 | 烽火通信科技股份有限公司 | Android VR video player based on Unity3D and design method thereof |
CN109491508A (en) * | 2018-11-27 | 2019-03-19 | 北京七鑫易维信息技术有限公司 | The method and apparatus that object is watched in a kind of determination attentively |
CN109491508B (en) * | 2018-11-27 | 2022-08-26 | 北京七鑫易维信息技术有限公司 | Method and device for determining gazing object |
CN110290409A (en) * | 2019-07-26 | 2019-09-27 | 浙江开奇科技有限公司 | Data processing method, VR equipment and system |
CN114615528A (en) * | 2020-12-03 | 2022-06-10 | 中移(成都)信息通信科技有限公司 | VR video playing method, system, device and medium |
CN114615528B (en) * | 2020-12-03 | 2024-04-19 | 中移(成都)信息通信科技有限公司 | VR video playing method, system, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2019019232A1 (en) | 2019-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107396086A (en) | Method for playing video based on VR head-mounted device, and VR head-mounted device | |
JP6469313B2 (en) | Information processing method, terminal, and computer storage medium | |
CN102308589B (en) | Playback device, playback method and program | |
Belton | If film is dead, what is cinema? | |
JP7088878B2 (en) | Device, method and computer-readable recording medium for playing interactive audiovisual movies | |
CN106257930A (en) | Generating dynamic temporal versions of content | |
US20170337949A1 (en) | Mobile device video personalization | |
Pillai et al. | Grammar of VR storytelling: visual cues | |
US20150135071A1 (en) | Method and apparatus for distribution and presentation of audio visual data enhancements | |
US20200104030A1 (en) | User interface elements for content selection in 360 video narrative presentations | |
CN104683766A (en) | Digital video monitoring system | |
CN110069230A (en) | Extend content display method, device and storage medium | |
Rico Garcia et al. | Seamless multithread films in virtual reality | |
WO2021052130A1 (en) | Video processing method, apparatus and device, and computer-readable storage medium | |
CN115150555B (en) | Video recording method, device, equipment and medium | |
CN103179415B (en) | Time code display device and time code display method | |
CN105556952A (en) | Method for the reproduction of a film | |
Rao et al. | Shoot360: Normal View Video Creation from City Panorama Footage | |
CN107330960A (en) | Method and device for generating and playing images | |
KR102066857B1 (en) | object image tracking streaming system and method using the same | |
US11285388B2 (en) | Systems and methods for determining story path based on audience interest | |
CN108574838A (en) | Anti-shake method and device for 3D display of mobile terminal, and mobile terminal | |
Robins | Film, Gaming, and Medium: Diagnosing the Limitations of Narrative Art | |
Cohen | Media Production in the Age of Internet Media: Digitisation, Mediation, Co-creation | |
KR20230056093A (en) | A system and method for generating a ghost image including at least one image identified for each sequence for video comparison analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20171124 |