WO2018076939A1 - Method and apparatus for processing video files - Google Patents

Method and apparatus for processing video files

Info

Publication number
WO2018076939A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual reality
reality device
video
terminal
video file
Prior art date
Application number
PCT/CN2017/100970
Other languages
English (en)
French (fr)
Inventor
沈晓斌
罗谷才
王宇
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2018076939A1 publication Critical patent/WO2018076939A1/zh
Priority to US16/293,391 priority Critical patent/US10798363B2/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/356Image reproducers having separate monoscopic and stereoscopic modes

Definitions

  • the present application relates to the field of computers, and in particular to the processing of video files.
  • VR technology is a simulation technology for creating and experiencing virtual worlds. It uses a computer to generate a simulated environment and, through multi-source information fusion, interactive 3D dynamic vision, and simulated physical behavior, immerses users in that environment. Its good interactivity and multi-sensory perception make VR widely used in entertainment, for example in panoramic video and VR games.
  • Compared with 2D (two-dimensional) multimedia content, VR content is scarce, and most of it is Professionally Generated Content (PGC), which cannot meet the individual needs of users.
  • the embodiment of the present application provides a method and an apparatus for processing a video file, so as to at least solve the technical problem that the related technology cannot record the video displayed in the virtual reality device.
  • a method of processing a video file includes: detecting a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; detecting an application of the first virtual reality device according to the type of the first virtual reality device; acquiring, in the application of the first virtual reality device, media data of the target video; and encoding the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is the same as that of the target video.
  • a method of processing a video file includes: detecting, by a terminal, a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; detecting, by the terminal according to the type of the first virtual reality device, an application of the first virtual reality device; acquiring, by the terminal in the application of the first virtual reality device, media data of the target video; and encoding, by the terminal, the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is the same as that of the target video.
  • a processing apparatus for a video file includes: a first detecting unit, configured to detect a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; a second detecting unit, configured to detect an application of the first virtual reality device according to the type of the first virtual reality device; an acquiring unit, configured to acquire media data of the target video in the application of the first virtual reality device; and an encoding unit, configured to encode the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is the same as that of the target video.
  • a terminal includes: a memory for storing program code and transmitting the program code to a processor; and the processor for invoking instructions in the memory to perform the above processing method of the video file.
  • a storage medium is also provided.
  • the storage medium is for storing program code for executing a processing method of the above video file.
  • a computer program product comprising instructions which, when run on a terminal, cause the terminal to perform a processing method of the video file described above.
  • the type of the first virtual reality device is detected, where the first virtual reality device is configured to display the target video to be recorded; an application of the first virtual reality device is detected according to the type of the first virtual reality device; media data of the target video is acquired in the application of the first virtual reality device; and the media data is encoded to obtain a recorded video file of the target video, where the video content of the recorded video file is the same as that of the target video. This achieves the technical effect of recording the video displayed in the virtual reality device, thereby solving the technical problem that the related technology cannot record the video displayed in the virtual reality device.
  • FIG. 1 is a schematic diagram of an OBS recording system framework according to the related art
  • FIG. 2 is a schematic diagram of an OBS recording operation interface according to the related art
  • FIG. 3 is a schematic diagram of a hardware environment of a method for processing a video file according to an embodiment of the present application
  • FIG. 4 is a flowchart of a method for processing a video file according to an embodiment of the present application
  • FIG. 5 is a flowchart of a method for encoding media data according to an embodiment of the present application.
  • FIG. 6 is a flowchart of another method for processing a video file according to an embodiment of the present application.
  • FIG. 7 is a flowchart of a method for detecting an application of a first virtual reality device according to a type of a first virtual reality device according to an embodiment of the present application
  • FIG. 8 is a flowchart of a method for detecting a type of a first virtual reality device according to an embodiment of the present application
  • FIG. 9 is a flowchart of another method for processing a video file according to an embodiment of the present application.
  • FIG. 10 is a flowchart of a method for acquiring media data of a target video according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of video recording according to an embodiment of the present application.
  • FIG. 12 is a flowchart of a method for detecting a VR device according to an embodiment of the present application.
  • FIG. 13 is a flowchart of a method for detecting a VR application according to an embodiment of the present application.
  • FIG. 14 is a flowchart of another method for processing a video file according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a frame of a video player according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a video recording interface according to an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a video playing interface according to an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a processing apparatus of a video file according to an embodiment of the present application.
  • FIG. 19 is a structural block diagram of a terminal according to an embodiment of the present application.
  • OBS: Open Broadcaster Software
  • FIG. 1 is a schematic diagram of an OBS recording system framework according to the related art.
  • the OBS recording system includes an image frame capture module, a recording source plug-in system, a video encoding module, and an audio encoding module, where the recording source plug-in system can record a Windows window image and implement capture via the Direct3D interface and the open-source Open Graphics Library (OpenGL).
  • the OBS recording system can record desktop application windows and game screens on the Windows platform, but cannot record the screen of the newer PC-platform VR head-mounted display device.
  • FIG. 2 is a schematic diagram of an OBS recording operation interface according to the related art. As shown in Figure 2, the user needs to start recording by clicking “Start Streaming”, “Start Recording” or “Preview Streaming”. In terms of interface friendliness and product experience this is difficult for many ordinary users; the threshold is high, which is not conducive to use of the product.
  • the embodiment of the present application provides a method and a device for processing a video file, so as to at least solve the technical problem that the related technology cannot record the video displayed in the virtual reality device.
  • the various embodiments of the present application will be described in detail below.
  • an embodiment of a method for processing a video file is provided.
  • the processing method of the video file may be applied to the terminal 301, and the terminal may be a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, and the like.
  • the terminal can be connected to the first virtual reality device 302.
  • FIG. 4 is a flowchart of a method for processing a video file according to an embodiment of the present application. As shown in FIG. 4, the processing method of the video file may include the following steps:
  • Step S402 detecting a type of the first virtual reality device.
  • the first virtual reality device is configured to display the target video to be recorded.
  • a virtual reality device is a device that uses virtual reality technology to immerse users in a virtual world.
  • Virtual reality technology creates a computer simulation system in which a virtual world can be experienced: a computer generates an interactive 3D dynamic view with multi-source information fusion and a simulation of physical behavior, so that users can immerse themselves in the virtual world environment.
  • a virtual reality device is, for example, a VR head-mounted display device (HMD) on a PC platform.
  • the first virtual reality device may be detected according to the platform software development kit (Platform SDK) of the Windows operating system corresponding to the virtual reality device.
  • the detection is performed by loading the detection plug-in system.
  • the detection plug-in system returns the type of the first virtual reality device; the type may be an Oculus type or an HTC Vive type, that is, the first virtual reality device may be an Oculus device or an HTC Vive device.
  • the first virtual reality device is configured to display the target video to be recorded; the target video is played on the first virtual reality device and is the video that the user wants to record while watching. The target video may be an entire video played on the first virtual reality device, or a certain segment of such a video.
  • Step S404 detecting an application of the first virtual reality device according to the type of the first virtual reality device.
  • the detection logic may be executed according to the detected type of the first virtual reality device.
  • the type of the first virtual reality device is different, and the corresponding detection logic is also different.
  • the specific implementation of detecting the application of the first virtual reality device according to its type may include: in the case that the type of the first virtual reality device is the first type, starting a preset process to detect the process of the application of the first virtual reality device and the window information of the application; or, in the case that the type is the second type, invoking the software development kit (Software Development Kit, SDK for short) of the first virtual reality device to obtain a process ID (identity), and obtaining, according to the process ID, the process of the application of the first virtual reality device and the window information of the application; then saving the process of the application and the window information of the application, and loading the application according to them.
  • the preset process is a preset independent process for detecting the first virtual reality device type. It can be used to detect the application's process and application window information to load the application based on the above information.
  • the first type and the second type of the first virtual reality device may be differentiated according to whether the software development kit provides a function related to application detection.
  • if the software development kit does not provide a function related to application detection, the type of the first virtual reality device is the first type.
  • in this case, an independent process needs to be started for detection, to obtain the process name of the application and the window information of the application, and to save the process name and the window information;
  • for example, if the type of the first virtual reality device is the HTC Vive type, the SDK of the HTC Vive device provides a function related to application detection, so the type is the second type. In that case it is only necessary to call the SDK provided by the HTC Vive device to obtain the process ID, then obtain the process name of the application and the window information of the application according to the process ID, and finally save the process name of the application and the window information of the application.
  • after detecting the application of the first virtual reality device according to its type, the application may also be loaded according to the process and window information of the application.
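The type-dependent detection flow described above can be sketched as follows. This is a hedged illustration: the device-type labels, the `sdk` object, and the `scan_processes` helper are assumptions for the sketch, not the actual Oculus or HTC Vive SDK APIs.

```python
# Hypothetical sketch of the type-dependent application detection described
# above. The "first" type has no SDK detection helper, so an independent
# process-scanning callable stands in; the "second" type asks the SDK for a
# process ID and resolves it to a process name and window information.
def detect_vr_application(device_type, sdk=None, scan_processes=None):
    """Return (process_name, window_info) for the running VR application."""
    if device_type == "first":
        # No SDK support: run an independent scan of running processes.
        process_name, window_info = scan_processes()
    elif device_type == "second":
        # SDK support: obtain the process ID, then resolve name and window.
        pid = sdk.get_application_pid()
        process_name, window_info = sdk.resolve(pid)
    else:
        raise ValueError("unknown device type: %r" % (device_type,))
    # The caller would save this information and load the application from it.
    return process_name, window_info
```

The two branches mirror the two cases in the description: a preset independent process for devices whose SDK lacks detection support, and a direct SDK call for those that provide it.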
  • Step S406 in the application of the first virtual reality device, acquiring media data of the target video.
  • the recording of the target video is started.
  • the recording process of the target video includes a capture thread, an audio encoding thread, a video encoding thread, and the like.
  • when recording of the target video starts, a VR source capture module is loaded into the recording process. The VR source capture module is configured to capture the video image of the target video and obtain the image data of the target video from that video image.
  • the image data is then copied from the target process to the recording main program; this image data is part of the media data of the target video.
  • audio data of the target video may also be acquired; the audio data is likewise part of the media data of the target video.
  • Step S408 encoding the media data to obtain a recorded video file of the target video.
  • the video content of the recorded video file is the same as the video content of the target video.
  • the media data is encoded.
  • the image data and the audio data can be separately encoded to obtain a recorded video file.
  • the media data may be encoded by the encoding threads of the application's main program: the video encoding thread in the application video-encodes the image data in the media data, and the audio encoding thread in the application audio-encodes the audio data in the media data, to obtain the recorded video file, thereby achieving the purpose of recording the target video in the first virtual reality device.
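A minimal sketch of the two-thread encoding step described above, assuming stand-in encoder callables (a real implementation would wrap actual video and audio codecs and mux into a container format):

```python
import threading

def encode_stream(frames, encode_one, out):
    # Encode frames in order and append the encoded units to `out`.
    for frame in frames:
        out.append(encode_one(frame))

def encode_media(video_frames, audio_frames):
    """Encode image and audio data on separate threads, then combine them."""
    video_out, audio_out = [], []
    t_video = threading.Thread(
        target=encode_stream,
        args=(video_frames, lambda f: ("video", f), video_out))
    t_audio = threading.Thread(
        target=encode_stream,
        args=(audio_frames, lambda f: ("audio", f), audio_out))
    t_video.start(); t_audio.start()
    t_video.join(); t_audio.join()
    # Combine ("mux") the two encoded streams into one recorded-file stand-in.
    return video_out + audio_out
```

The point of the sketch is the structure only: video and audio encoding run concurrently, and the recorded file is assembled after both threads finish.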
  • the recorded video file may also be played.
  • the video playing terminal is automatically adapted according to different types of the first virtual reality device, and the video playing terminal is used to play the recorded video file.
  • the recorded video file can be played on a head-mounted display device that matches the type of the first virtual reality device. Head-mounted display devices include PC-platform and mobile-platform devices and can be used in VR or Augmented Reality (AR) technology, where AR technology calculates the position and angle of the camera in real time and superimposes the corresponding images, so as to present the virtual world on the screen.
  • the recorded video file may also be played back on the first virtual reality device, thereby reproducing the target video and giving the user visual immersion, experiencing the video in the first person.
  • through the foregoing steps S402 to S408, the type of the first virtual reality device, which is configured to display the target video to be recorded, is detected; the application of the first virtual reality device is detected according to that type; the media data of the target video is acquired in the application; and the media data is encoded into a recorded video file whose video content is the same as that of the target video. This solves the technical problem that the related technology cannot record the video displayed in a virtual reality device, thereby achieving the technical effect of recording that video.
  • in step S406, acquiring media data of the target video includes: capturing a left-eye video image of the target video and a right-eye video image of the target video; acquiring left-eye image data of the target video according to the left-eye video image and right-eye image data of the target video according to the right-eye video image; and splicing the left-eye image data and the right-eye image data to obtain image data of the target video.
  • FIG. 5 is a flowchart of a method of encoding media data according to an embodiment of the present application. As shown in FIG. 5, the method for encoding media data includes the following steps:
  • Step S501 acquiring left eye image data and right eye image data of the target video, respectively.
  • the left-eye video image of the target video and the right-eye video image of the target video may be captured; the left-eye image data of the target video is acquired according to the left-eye video image, and the right-eye image data of the target video is acquired according to the right-eye video image.
  • the target video includes a left-eye video picture viewed through the left eye and a right-eye video picture viewed through the right eye.
  • the left eye image data may be used to display a left eye video image
  • the right eye image data may be used to display a right eye video image
  • left eye image data and right eye image data may be acquired according to the left eye video image and the right eye video image, respectively.
  • Step S502 performing splicing processing on the left eye image data and the right eye image data to obtain image data of the target video.
  • Encoding the media data to obtain the recorded video file of the target video includes: encoding the image data, obtaining a complete recorded video file, and saving the recorded video file.
  • the left-eye video file of the recorded video file may be generated according to the left-eye image data and saved, thereby encoding the left-eye image data; through the left eye, the user can view the left-eye picture of the target video corresponding to the left-eye video file.
  • after acquiring the right-eye image data, a right-eye video file of the recorded video file is generated according to the right-eye image data and saved, thereby encoding the right-eye image data; through the right eye, the user can view the right-eye picture of the target video corresponding to the right-eye video file.
  • the left-eye image data and the right-eye image data of the target video are respectively acquired and spliced to obtain the image data of the target video, thereby achieving the goal of acquiring the media data of the target video.
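The splicing step above can be illustrated as a side-by-side join of the two eye images. This is a sketch under the assumption that each image is a list of pixel rows; real frames would be texture buffers captured from the VR compositor.

```python
def splice_stereo(left_rows, right_rows):
    """Join each left-eye row with the matching right-eye row side by side."""
    if len(left_rows) != len(right_rows):
        raise ValueError("left- and right-eye images must have the same height")
    # Row-wise concatenation yields one double-width stereo frame.
    return [left + right for left, right in zip(left_rows, right_rows)]
```

A left-right layout is only one possible splice; a top-bottom layout would concatenate the row lists instead of the rows themselves.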
  • a first play instruction for instructing to play the left-eye video picture may be received, and the left-eye video picture is played according to it; and/or a second play instruction for instructing to play the right-eye video picture may be received, and the right-eye video picture is played according to it.
  • when the recorded video file is played, that is, when the target video is reproduced, the left-eye video picture and the right-eye video picture may both be played, or only the left-eye video picture or only the right-eye video picture may be played. This increases the flexibility of playing recorded video files.
  • the left-eye video file and the right-eye video file may be played simultaneously to display the left-eye and right-eye video images: the first play instruction is received and the left-eye video file is played according to it, and the second play instruction is received and the right-eye video file is played according to it. Alternatively, only the first play instruction may be received, playing the left-eye video file to display the left-eye video image, or only the second play instruction, playing the right-eye video file to display the right-eye video image, thereby improving the flexibility of playing the recorded video file.
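The instruction handling above amounts to a small dispatch. In this sketch the instruction names and file names are illustrative assumptions, not identifiers from the application:

```python
def select_eye_files(instructions,
                     left_file="left_eye.mp4", right_file="right_eye.mp4"):
    """Return the video files to play for the received play instruction(s)."""
    files = []
    if "first" in instructions:   # first play instruction -> left-eye picture
        files.append(left_file)
    if "second" in instructions:  # second play instruction -> right-eye picture
        files.append(right_file)
    return files
```

Passing both instructions plays both eye files; passing one plays only that eye's file, matching the flexibility described above.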
  • a video playing terminal adapted to the first virtual reality device may be determined according to the type of the first virtual reality device, where the video playing terminal is configured to connect to the first virtual reality device to acquire and play the recorded video file.
  • the video playing terminal that plays the recorded video file is adapted to the first virtual reality device, and the different types of first virtual reality devices can be adapted to different video playing terminals.
  • the video playing terminal adapted to the first virtual reality device is determined automatically; the video playing terminal is connected to the first virtual reality device to obtain the recorded video file and then plays it, that is, reproduces the target video.
  • after the first virtual reality device is detected, the recorded video file can be played on a normal flat screen through the flat play plug-in, or on a VR head-mounted display device, for example through the Oculus head-mounted play plug-in or the HTC head-mounted play plug-in. A video decoding module is also required to decode the recorded video file during playback.
  • determining, according to the type of the first virtual reality device, the video playing terminal adapted to it includes: determining a flat video playing terminal adapted to the first virtual reality device, where the flat video playing terminal is configured to play the recorded video file in two-dimensional form; or determining a second virtual reality device adapted to the first virtual reality device, where the second virtual reality device is configured to play the recorded video file in three-dimensional form. The video playing terminal thus comprises the flat video playing terminal or the second virtual reality device.
  • the video playing terminal is configured to play the recorded video file; its type is adapted to the first virtual reality device, and it includes a flat video playing terminal and a second virtual reality device.
  • the flat video playing terminal, that is, a planar two-dimensional (2D) video player, is used to play the recorded video file in two dimensions and can play through the flat play plug-in.
  • the video playing terminal for playing the recorded video file may be a second virtual reality device, for example a head-mounted display device (which may be a mobile head-mounted display device), which plays back the recorded video file in three-dimensional form through an Oculus head-mounted playback plug-in or an HTC head-mounted playback plug-in.
  • the head-mounted display device can be rendered through the Direct3D11 rendering process, through Oculus rendering, or through HTC rendering.
  • Oculus rendering includes rendering through the Oculus DK2 Plugin and through the Oculus CV1 Plugin.
  • the above playback modes of the recorded video file are only preferred embodiments of the present application, and the embodiments of the present application are not limited to them. Any manner in which the recorded video file is played falls within the protection scope of the embodiments of the present application and is not enumerated here.
  • the implementation of determining, according to the type of the first virtual reality device, the second virtual reality device adapted to it may include: when the recorded video file is saved in the terminal, determining, according to the type of the first virtual reality device, a fixed virtual reality device adapted to the first virtual reality device, the second virtual reality device including the fixed virtual reality device.
  • the fixed virtual reality device adapted to the first virtual reality device is determined according to the type of the first virtual reality device. For example, the first virtual reality device is a PC-side VR head-mounted display: while the user watches VR content or panoramic video, or experiences a VR application or VR game through the PC VR head-mounted display, the target video is recorded and the recorded video file is saved on the PC.
  • the second virtual reality device, a fixed virtual reality device, can also be a PC-side VR head-mounted display, through which the target video corresponding to the recorded video file is viewed, so that the user watches the immersive recorded video on the PC-side VR device. This implements determining, according to the type of the first virtual reality device, the second virtual reality device adapted to it.
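The adaptation described above is essentially a lookup from device type to a playback target. A sketch under assumed names (the plug-in strings follow the description but are illustrative, not product identifiers):

```python
# Assumed mapping from detected device type to the adapted 3D playback
# plug-in; a 2D fallback covers the flat video playing terminal case.
HEADSET_PLUGINS = {
    "Oculus": "Oculus head-mounted play plug-in",
    "HTC Vive": "HTC head-mounted play plug-in",
}

def choose_play_plugin(device_type, three_dimensional=True):
    """Pick a playback plug-in adapted to the first virtual reality device."""
    if not three_dimensional:
        # Any device type can fall back to the flat (2D) play plug-in.
        return "flat play plug-in"
    return HEADSET_PLUGINS[device_type]
```

An unknown device type raises `KeyError` here; a real implementation would handle that case, for example by defaulting to flat playback.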
  • in some embodiments, the recorded video file is processed to obtain processed data, and the processed data is sent to a preset website. Determining, according to the type of the first virtual reality device, the second virtual reality device adapted to it may then include: determining a mobile virtual reality device adapted to the first virtual reality device according to its type, where the mobile virtual reality device is configured to play the recorded video file according to the processed data through the preset website.
  • FIG. 6 is a flowchart of another method for processing a video file according to an embodiment of the present application. As shown in FIG. 6, the processing method of the video file includes the following steps:
  • Step S601 processing the recorded video file to obtain processed data.
  • the recorded video file can be processed to obtain the processed data.
  • Step S602 sending the processed data to the preset website.
  • the processed data is sent to a third-party online video website, wherein the preset website includes a third-party online video website, thereby achieving the purpose of sharing the recorded video file.
  • Step S603 determining a mobile virtual reality device that is adapted to the first virtual reality device according to the type of the first virtual reality device.
  • the mobile virtual reality device is configured to play the recorded video file according to the processed data through the preset website; the second virtual reality device includes the mobile virtual reality device.
  • the mobile virtual reality device adapted to the type of the first virtual reality device may obtain the processed data from the third-party online video website; the mobile virtual reality device may be mobile VR glasses, which play the recorded video file through the processed data, thereby achieving the purpose of playing the recorded video file.
  • the processed data obtained from the recorded video file is sent to the preset website, and the mobile virtual reality device adapted to the type of the first virtual reality device plays the recorded video file according to the processed data through the preset website, achieving the purpose of sharing and playing the recorded video file.
  • Optionally, the second virtual reality device may include a head-mounted display device, where the head-mounted display device plays the recorded video file in three-dimensional form through a plug-in.
  • For example, the second virtual reality device includes a head-mounted display device such as an Oculus head-mounted display device or an HTC head-mounted display device, where the Oculus head-mounted display device plays the recorded video file in three-dimensional form via an Oculus DK2 Plugin or an Oculus CV1 Plugin.
  • FIG. 7 is a flowchart of a method for detecting an application of a first virtual reality device according to a type of a first virtual reality device according to an embodiment of the present application. As shown in FIG. 7, the method for detecting an application of a first virtual reality device according to a type of the first virtual reality device includes the following steps:
  • Step S701: Start a preset process to detect the process of the application and the window information of the application.
  • In step S701 of this embodiment, in a case where the type of the first virtual reality device adapted to the second virtual reality device is the first type, the preset process is started to detect the process of the application and the window information of the application.
  • For example, when the type of the first virtual reality device adapted to the second virtual reality device is the Oculus type, the SDK of the Oculus-type device does not provide a function related to detecting the application, so a preset process is started; the preset process is an independent process for detecting the process of the application of the first virtual reality device.
  • Step S702: Invoke the software development kit of the first virtual reality device to acquire the process ID, and acquire the process of the application and the window information of the application according to the process ID.
  • In step S702 of this embodiment, if the type of the first virtual reality device adapted to the second virtual reality device is the second type, the SDK of the first virtual reality device is invoked to obtain the process ID, and the process of the application and the window information of the application are obtained according to the process ID.
  • For example, when the type of the first virtual reality device adapted to the second virtual reality device is the HTC Vive type, only the SDK of the first virtual reality device needs to be called to acquire the process ID, and the process of the application and the window information are obtained according to the process ID.
  • Step S703: Save the process of the application and the window information of the application, and load the application according to the process and the window information.
  • The process of the application can be saved by saving the process name of the application, and the window information of the application includes information such as the window title of the application.
  • The application is loaded according to the process and the window information of the application, thereby achieving the purpose of detecting the application of the first virtual reality device.
  • In this embodiment, if the type of the first virtual reality device adapted to the second virtual reality device is the first type, a preset process is started to detect the process of the application and the window information of the application; or, if the type is the second type, the software development kit of the first virtual reality device is invoked to obtain the process ID, and the process of the application and the window information of the application are acquired according to the process ID.
  • The process and the window information of the application are saved, and the application is loaded according to them, achieving the purpose of detecting the application of the first virtual reality device according to the type of the first virtual reality device.
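The two detection paths of steps S701–S702 and the bookkeeping of step S703 can be sketched as a dispatch on the device type. The stub functions below are placeholders for the vendor SDK calls and the independent detection process, which the patent names but does not specify in code.

```python
# Sketch of steps S701-S703: choose the detection path by device type, then
# save the process name and window title. The stubs are placeholders for the
# vendor SDKs / the independent detection process, not real APIs.

def scan_with_preset_process():
    """Stub for the independent preset process used for Oculus-type devices."""
    return ("vr_app.exe", "VR Demo Window")

def sdk_get_process_id():
    """Stub for the HTC Vive SDK call that returns the application's PID."""
    return 4242

def lookup_process_and_window(pid):
    """Stub: resolve a PID to (process name, window title)."""
    return ("vr_app.exe", "VR Demo Window")

def detect_application(device_type):
    if device_type == "Oculus":
        # The Oculus-type SDK provides no detection function, so a separate
        # preset process scans processes and windows (step S701).
        name, title = scan_with_preset_process()
    elif device_type == "HTC Vive":
        # The Vive-type SDK exposes the process ID directly (step S702).
        name, title = lookup_process_and_window(sdk_get_process_id())
    else:
        raise ValueError("unknown device type: %s" % device_type)
    # Step S703: save the process and window information for loading later.
    return {"process_name": name, "window_title": title}
```

Either path ends in the same saved record, which is what makes the later loading step independent of the device type.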
  • Optionally, detecting the type of the first virtual reality device may include: displaying a first preset interface; determining whether a start recording instruction is received through the first preset interface, where the start recording instruction is used to instruct starting the recording of the target video; and, if it is determined that the start recording instruction is received through the first preset interface, detecting the type of the first virtual reality device in response to the start recording instruction.
  • FIG. 8 is a flowchart of a method of detecting a type of a first virtual reality device according to an embodiment of the present application. As shown in FIG. 8, the method for detecting the type of the first virtual reality device includes the following steps:
  • Step S801: Display a first preset interface.
  • The first preset interface is an interface for receiving a start recording instruction for starting the recording of the target video, and may include an interface command button.
  • Step S802: Determine whether a start recording instruction is received through the first preset interface.
  • After the first preset interface is displayed, the user may touch the first preset interface to generate a start recording instruction on the first preset interface.
  • Step S803: Detect the type of the first virtual reality device in response to the start recording instruction.
  • After determining whether the start recording instruction is received through the first preset interface, that is, if the user touches the first preset interface and generates a start recording instruction, the type of the first virtual reality device is detected in response to the start recording instruction.
  • In this embodiment, a first preset interface is displayed; whether a start recording instruction is received through the first preset interface is determined; and if it is determined that the start recording instruction is received, the type of the first virtual reality device is detected in response to the start recording instruction, achieving the purpose of detecting the type of the first virtual reality device.
  • Optionally, in step S802, determining whether the start recording instruction is received through the first preset interface includes one of the following: determining whether a start recording instruction generated by touching a preset button of the first preset interface is received; determining whether a start recording instruction generated by pressing a keyboard shortcut of the first preset interface is received; and determining whether a voice command corresponding to the start recording instruction is received through the first preset interface.
  • The operation of the first preset interface is relatively simple and can include multiple ways of starting the recording of the target video.
  • For example, the first preset interface may include a preset button, where the preset button is an interface command button used to generate a start recording instruction when touched; it is then determined whether the start recording instruction generated by touching the preset button of the first preset interface is received.
  • The first preset interface may also correspond to a keyboard shortcut used to generate a start recording instruction when pressed; it is then determined whether the start recording instruction generated by the keyboard shortcut of the first preset interface is received.
  • The first preset interface can also recognize the start recording instruction through voice command input; it is then determined whether the voice command corresponding to the start recording instruction is received through the first preset interface.
  • It should be noted that the above ways of starting the recording of the target video are only preferred implementations of this embodiment; the ways of starting the recording in this embodiment are not limited to the above, and any way that can start the recording of the target video is within the protection scope of this embodiment, and they are not enumerated here.
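The three input paths above can be sketched as one predicate over UI events. The event shapes and the `Ctrl+R` binding are assumptions for illustration; the patent names the mechanisms (button, shortcut, voice) but no concrete bindings.

```python
# Sketch of the three input paths for the start recording instruction.
# Event dictionary shapes and the "Ctrl+R" binding are assumed, not specified.

def is_start_recording(event):
    kind = event.get("kind")
    if kind == "button":                       # touch on the preset button
        return event.get("id") == "start_recording"
    if kind == "key":                          # keyboard shortcut
        return event.get("combo") == "Ctrl+R"
    if kind == "voice":                        # recognized voice command
        return event.get("text", "").lower() == "start recording"
    return False
```

Collapsing all three sources into one predicate is what lets the recording logic stay independent of how the instruction arrived.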
  • Optionally, when it is determined that the start recording instruction is received through the first preset interface, a recording mark indicating that the target video is being recorded is displayed, where the recording mark includes a recording mark displayed in animated form and/or a recording mark displayed in time form.
  • The recording mark used to indicate that the target video is being recorded gives the user an immersive feeling, and can be displayed inside the head-mounted display.
  • After judging whether a start recording instruction for starting the recording of the target video is received through the first preset interface, when it is determined that the start recording instruction is received, the recording mark may be displayed in animated form, for example, a rotating graphic indicating that recording of the target video has started, and/or in time form, for example, a timer showing the elapsed recording time.
  • Optionally, after the media data is encoded to obtain the recorded video file of the target video, if an end recording instruction is received through a second preset interface, the recording of the target video is ended, and the recorded video file is saved in response to a save instruction.
  • FIG. 9 is a flowchart of another method for processing a video file according to an embodiment of the present application. As shown in FIG. 9, the processing method of the video file further includes the following steps:
  • Step S901: Display a second preset interface.
  • The second preset interface is an interface for receiving an end recording instruction for ending the recording of the target video, and may include an interface command button.
  • Step S902: Determine whether an end recording instruction is received through the second preset interface, where the end recording instruction is used to instruct ending the recording of the target video.
  • After the second preset interface is displayed, the user can touch the second preset interface to generate an end recording instruction on the second preset interface.
  • Step S903: End the recording of the target video in response to the end recording instruction.
  • After determining whether the end recording instruction is received through the second preset interface, that is, if the user touches the second preset interface and generates an end recording instruction, the recording of the target video is ended in response to the end recording instruction.
  • Step S904: Display a third preset interface.
  • The third preset interface is an interface for receiving a save instruction for saving the target video, and may include an interface command button.
  • Step S905: Determine whether a save instruction is received through the third preset interface, where the save instruction is used to instruct saving the recorded video file.
  • After the third preset interface is displayed, the user may touch the third preset interface to generate a save instruction on the third preset interface.
  • Step S906: Save the recorded video file in response to the save instruction.
  • In this embodiment, the second preset interface is displayed; whether the end recording instruction is received through the second preset interface is determined; if it is determined that the end recording instruction is received, the recording of the target video is ended in response to the end recording instruction; the third preset interface is displayed; whether the save instruction is received through the third preset interface is determined; and if it is determined that the save instruction is received, the recorded video file is saved in response to the save instruction, simplifying the operation of recording the video displayed by the first virtual reality device.
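Steps S901–S906 amount to a small recording lifecycle, which can be sketched as a state machine: recording is ended via the second preset interface, then saved via the third. The class and state names are illustrative, not part of the patent.

```python
# Sketch of the S901-S906 lifecycle. A save instruction only takes effect
# after an end recording instruction, mirroring the order of the steps.
# Names ("recording", "stopped", "saved") are illustrative.

class RecordingSession:
    def __init__(self):
        self.state = "recording"     # recording already started (FIG. 8 flow)
        self.saved = []

    def end_recording(self):         # end recording instruction (S902-S903)
        if self.state == "recording":
            self.state = "stopped"

    def save(self, filename):        # save instruction (S905-S906)
        if self.state == "stopped":
            self.saved.append(filename)
            self.state = "saved"
```

Making each interface advance one state is what keeps the three preset interfaces simple: each only needs to know whether its own instruction arrived.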
  • Optionally, in step S902, determining whether the end recording instruction is received through the second preset interface includes one of the following: determining whether an end recording instruction generated by touching a preset button of the second preset interface is received; determining whether an end recording instruction generated by pressing a keyboard shortcut of the second preset interface is received; and determining whether a voice command corresponding to the end recording instruction is received through the second preset interface.
  • The operation of the second preset interface is relatively simple and can include multiple ways of ending the recording of the target video.
  • For example, the second preset interface may include a preset button, where the preset button is an interface command button used to generate an end recording instruction when touched; it is then determined whether the end recording instruction generated by touching the preset button of the second preset interface is received.
  • The second preset interface may also correspond to a keyboard shortcut used to generate an end recording instruction when pressed; it is then determined whether the end recording instruction generated by the keyboard shortcut of the second preset interface is received.
  • The second preset interface can also recognize the end recording instruction through voice command input; it is then determined whether the voice command corresponding to the end recording instruction is received through the second preset interface.
  • It should be noted that the above ways of ending the recording of the video file are only preferred implementations of this embodiment; the ways of ending the recording in this embodiment are not limited to the above, and any way that can end the recording of the video file is within the protection scope of this embodiment, and they are not enumerated here.
  • Optionally, determining whether the save instruction is received through the third preset interface includes one of the following: determining whether a save instruction generated by touching a preset button of the third preset interface is received; determining whether a save instruction generated by pressing a keyboard shortcut of the third preset interface is received; and determining whether a voice command corresponding to the save instruction is received through the third preset interface.
  • The operation of the third preset interface is relatively simple and can include multiple ways of saving the recorded video file.
  • For example, the third preset interface may include a preset button, where the preset button is an interface command button used to generate a save instruction when touched; it is then determined whether the save instruction generated by touching the preset button of the third preset interface is received.
  • The third preset interface may also correspond to a keyboard shortcut used to generate a save instruction when pressed; it is then determined whether the save instruction generated by the keyboard shortcut of the third preset interface is received.
  • The third preset interface can also recognize the save instruction through voice command input; it is then determined whether the voice command corresponding to the save instruction is received through the third preset interface.
  • It should be noted that the above ways of saving the recorded video file are only preferred implementations of this embodiment; the ways of saving the recorded video file in this embodiment are not limited to the above, and any way that can save the recorded video file is within the protection scope of this embodiment, and they are not enumerated here.
  • Optionally, in step S406, acquiring the media data of the target video includes: capturing a video picture of the target video and acquiring image data of the target video according to the video picture; and acquiring audio data of the target video.
  • FIG. 10 is a flowchart of a method for acquiring media data of a target video according to an embodiment of the present application. As shown in FIG. 10, the method includes the following steps:
  • Step S1001: Capture a video picture of the target video, and acquire image data of the target video according to the video picture.
  • The video picture of the target video is captured, and the image data of the target video is acquired according to the video picture of the target video.
  • Step S1002: Acquire audio data of the target video.
  • The target video carries audio data for playing sound. After the application of the first virtual reality device is detected, the audio data of the target video is acquired.
  • It should be noted that there is no fixed order between step S1001 and step S1002: they may be performed at the same time, step S1001 may be performed first, or step S1002 may be performed first.
  • The image data and the audio data are then encoded separately to obtain the recorded video file, thereby achieving the purpose of encoding the media data to obtain the recorded video file.
  • Optionally, the recorded video file may be saved in the software development kit of the first virtual reality device; or the recorded video file is saved in the software development kit of the game client; or the recorded video file is saved in the software development kit of the game engine.
  • That is, the function of saving the first-person VR video can be built into the SDK of the engine, into the game, or into the hardware display SDK.
  • This embodiment implements a VR video solution that records left-eye and right-eye VR pictures and can perfectly restore the first-person sense of immersion, meeting the need of VR players to record their experience of VR applications and games.
  • This embodiment can also serve as a content generation platform for users, in which user generated content (UGC) supplements the currently limited VR content to a certain extent, achieving the technical effect of recording the video displayed in the virtual reality device.
  • Optionally, the technical framework of this embodiment is divided into two parts: the first part is the recording process, and the second part is the playing process.
  • FIG. 11 is a schematic structural diagram of video recording according to an embodiment of the present application.
  • As shown in FIG. 11, recording the target video on the virtual reality device involves VR device detection, VR application detection, image frame capture, a recording source plug-in system, a video encoding module, an audio encoding module, and a distortion module.
  • The VR device detection is used to detect the type of the VR device.
  • The VR application detection is used to detect the application of the VR device according to the type of the VR device.
  • The image frame capture is used to capture image frames.
  • The recording source plug-in system is used to load and manage the recording source plug-ins.
  • The video encoding module is configured to encode the image data of the target video.
  • The audio encoding module is configured to encode the audio data of the target video.
  • The distortion module is configured to process the recorded video file to obtain the processed data; the distorted data can be shared to third-party online video websites for viewing with mobile VR glasses.
  • FIG. 12 is a flowchart of a method for detecting a VR device according to an embodiment of the present application. As shown in FIG. 12, the method for detecting a VR device includes the following steps:
  • Step S1201: Load the detection plug-in system.
  • The first step is the detection of the VR device.
  • The main principle of VR device detection is that different VR hardware devices rely on the Platform SDK provided by their vendors to implement the detection function.
  • The loaded detection plug-in system includes an Oculus plug-in and an HTC plug-in.
  • Step S1202: Load the Oculus plug-in.
  • If the VR device is an Oculus device, the Oculus plug-in is loaded and detection is performed with the Oculus plug-in.
  • Step S1203: Load the HTC Vive plug-in.
  • If the VR device is an HTC Vive device, the HTC Vive plug-in is loaded and detection is performed with the HTC Vive plug-in.
  • Step S1204: Summarize the device type and the number of devices.
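Steps S1201–S1204 can be sketched as a plug-in loop: each plug-in wraps one vendor's Platform SDK and reports its devices, and the results are summarized by type and count. The plug-in classes and their hard-coded results are invented for illustration.

```python
# Sketch of steps S1201-S1204: each plug-in stands in for one vendor's
# Platform SDK. The detect() results are hard-coded for illustration.

class OculusPlugin:
    name = "Oculus"
    def detect(self):
        return ["Rift CV1"]          # pretend one Oculus headset is attached

class HTCVivePlugin:
    name = "HTC Vive"
    def detect(self):
        return []                    # pretend no Vive headset is attached

def summarize_devices(plugins):
    """Step S1204: summarize the device type and the number of devices."""
    summary = {}
    for plugin in plugins:           # steps S1202-S1203: run each plug-in
        devices = plugin.detect()
        if devices:
            summary[plugin.name] = len(devices)
    return summary
```

The plug-in shape means a new vendor only requires a new plug-in class, not a change to the summarizing loop.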
  • FIG. 13 is a flowchart of a method for detecting a VR application according to an embodiment of the present application. As shown in FIG. 13, the detection method of the VR application includes the following steps:
  • Step S1301: Determine the device type for the VR application.
  • Step S1302: When the VR device is of the HTC Vive type, call the SDK to acquire the process ID.
  • Step S1303: When the VR device is of the Oculus type, start independent process detection.
  • Step S1304: Save information such as the process name and the window title.
  • FIG. 14 is a flowchart of another method for processing a video file according to an embodiment of the present application. As shown in FIG. 14, the processing method of the video file includes the following steps:
  • Step S1401: Inject the data for recording the video into the target process from the recording main program.
  • Step S1402: Detect the VR module in the target process.
  • Step S1403: Capture data through the VR capture management module in the target process.
  • Step S1404: Process the captured data through the Oculus hook in the target process.
  • Step S1405: Process the captured data through the HTC hook in the target process.
  • Step S1406: Capture the video picture in the target process.
  • The video picture is captured from the processed data to obtain image data.
  • Step S1407: Copy through the graphics processor in the target process.
  • The image data is copied to the recording main program by the graphics processor.
  • Step S1408: Acquire the captured picture in the recording main program.
  • Step S1409: Perform audio and video encoding in the recording main program.
  • Audio and video encoding is performed on the video file corresponding to the captured picture.
  • Step S1410: Generate a recorded video file in the recording main program.
  • The process of recording video is relatively complex and is mainly divided into a capture thread, an audio encoding thread, and a video encoding thread.
  • The corresponding VR source capture module is injected into the recorded process and is responsible for capturing the left-eye and right-eye pictures of the VR; the image data is then copied from the target process to the recording main program by GPU texture copying, and the encoding threads of the main program perform audio and video encoding to generate a video file, realizing the technical effect of recording the video displayed in the virtual reality device.
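The capture-thread/encoding-thread split can be sketched with Python's `threading` module and a queue standing in for the frame buffer copied from the target process. The real pipeline copies textures on the GPU and uses hardware codecs; here "encoding" is just a string transform.

```python
import queue
import threading

# Sketch of the capture thread / video-encoding thread split. The queue
# stands in for the GPU-copied frame buffer; encoding is a string transform.

def record(n_frames):
    frames = queue.Queue()
    encoded = []

    def capture():                       # capture thread: left/right eye pairs
        for i in range(n_frames):
            frames.put(("L%d" % i, "R%d" % i))
        frames.put(None)                 # end-of-stream marker

    def encode_video():                  # encoding thread: drains the queue
        while True:
            item = frames.get()
            if item is None:
                break
            encoded.append("enc(%s|%s)" % item)

    threads = [threading.Thread(target=capture),
               threading.Thread(target=encode_video)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return encoded
```

Decoupling capture from encoding through a queue is what keeps the VR application's frame rate insulated from encoder latency.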
  • After the recorded video file is generated, it is automatically adapted during playback according to the VR hardware and played to the corresponding head-mounted display.
  • FIG. 15 is a schematic diagram of a framework of a video player according to an embodiment of the present application.
  • As shown in FIG. 15, the video player implements VR device detection, normal flat playback, VR HMD device playback, and a video decoding module.
  • The VR HMD device playback targets the head-mounted display and is mainly divided into three plug-in modules: a flat playback plug-in module, an Oculus head-mounted playback plug-in module, and an HTC head-mounted playback plug-in module. It is not limited to these three head-mounted plug-ins; subsequent hardware head-mounted displays can also use this framework.
  • The VR HMD playback can implement Direct3D rendering, Oculus rendering, and HTC rendering, where the Oculus rendering includes the Oculus DK2 Plugin and the Oculus CV1 Plugin.
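The playback-side adaptation can be sketched as a lookup from the detected device to a rendering plug-in, falling back to flat playback when no head-mounted display is found. The table entries echo the plug-ins named above, but the function itself is illustrative.

```python
# Sketch of playback adaptation: map the detected device to a rendering
# plug-in, defaulting to flat playback. Entries and names are illustrative.

RENDER_PLUGINS = {
    "Oculus DK2": "Oculus DK2 Plugin",      # Oculus rendering
    "Oculus CV1": "Oculus CV1 Plugin",
    "HTC Vive": "HTC head-mounted playback plug-in",
}

def choose_playback_plugin(detected_device):
    # None (no headset) or an unknown device falls back to flat 2D playback.
    return RENDER_PLUGINS.get(detected_device, "flat playback plug-in")
```

Because the table is open-ended, a later head-mounted display only needs a new entry, matching the text's note that the framework is not limited to these three plug-ins.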
  • This embodiment implements a VR video solution that records left-eye and right-eye VR pictures and can perfectly restore the first-person sense of immersion, meeting the need of VR players to record their experience of VR applications and games.
  • This embodiment can also be used as a content generation platform, with UGC as the main content, increasing the VR content available at this stage to a certain extent.
  • A game developer, game engine developer, or VR hardware developer may build the function of saving the first-person VR video into the engine's SDK, into the game, or into the hardware display SDK.
  • It should be noted that the application environment of this embodiment may be, but is not limited to, the application environment in the foregoing embodiments.
  • This embodiment provides an optional specific application for implementing the processing method of the foregoing video file.
  • The embodiments of the present application are applicable to multiple VR experience scenarios, including but not limited to the following:
  • A user who has a PC VR head-mounted display can use the method of this embodiment to record the entire process of watching VR or panoramic videos, experiencing VR applications, and playing VR games.
  • A user who has a mobile VR head-mounted display can use the method of this embodiment to view the immersive experience videos recorded by other users.
  • The recorded video can also be played directly using the method of this embodiment; the left-eye and right-eye video pictures can be viewed simultaneously, or only the left-eye or right-eye picture can be viewed.
  • This embodiment can record the picture in the PC VR head-mounted display, save it in a left-and-right-eye video format, and replay the recorded video into the head-mounted display to restore the first-person immersive perspective.
  • FIG. 16 is a schematic diagram of a video recording interface according to an embodiment of the present application.
  • The recording and playback operations of the product are simple; recording can be started and ended through various operations, including but not limited to interface command buttons, keyboard shortcuts, and voice command input recognition.
  • The interface command button is the start recording button in the upper right corner of FIG. 16.
  • After the start recording button is operated, the target video is recorded.
  • During recording, an immersive recording mark is displayed inside the head-mounted display, including but not limited to a recording animation, a time display, and the like.
  • The video recording interface displays the recorded video files, such as a first video file, a second video file, and a third video file; the right side of the recording interface includes a detection panel, the device, the VR application, recording of PC audio, recording of the microphone, the recording time, and the like, thereby realizing the technical effect of recording the video displayed in the virtual reality device.
  • FIG. 17 is a schematic diagram of a video playing interface according to an embodiment of the present application.
  • The video player can play the video as a flat 2D video, or play the VR video to the head-mounted display.
  • A user who does not have a VR head-mounted display can play the recorded video directly by using the method of this embodiment.
  • The video playing interface displays the playing progress and the playing time; the four interface buttons below the video playing interface correspond to the exit, backward, play/pause, and forward operations during playback, thereby realizing the playback of the recorded video.
  • Through the description of the above embodiments, it can be understood that the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the embodiments of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the various embodiments of the present application.
  • FIG. 18 is a schematic diagram of a processing apparatus of a video file according to an embodiment of the present application.
  • the processing device of the video file may include: a first detecting unit 10, a second detecting unit 20, an obtaining unit 30, and an encoding unit 40.
  • the first detecting unit 10 is configured to detect a type of the first virtual reality device, where the first virtual reality device is configured to display the target video to be recorded.
  • the second detecting unit 20 is configured to detect an application of the first virtual reality device according to the type of the first virtual reality device.
  • the obtaining unit 30 is configured to acquire media data of the target video in the application of the first virtual reality device.
  • the encoding unit 40 is configured to encode the media data to obtain a recorded video file of the target video, wherein the video content of the recorded video file is the same as the video content of the target video.
  • the acquiring unit 30 may include: a capturing module and a first acquiring module.
  • a capture module configured to capture a video image of the target video, and acquire image data of the target video according to the video image; and a first acquiring module, configured to acquire audio data of the target video.
  • the coding unit is specifically configured to: separately encode the image data and the audio data to obtain a recorded video file.
  • the acquiring unit 30 may include: a second acquiring module and a splicing module.
  • a second acquiring module configured to capture a left eye video image of the target video and a right eye video image of the target video, acquire left eye image data of the target video according to the left eye video image, and acquire a right eye image of the target video according to the right eye video image Data;
  • a splicing module for splicing the left eye image data and the right eye image data to obtain image data of the target video;
  • the coding unit is specifically configured to: encode the image data to obtain a recorded video file.
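The splicing module's stitch of left-eye and right-eye image data can be sketched on rows of pixels. Whether the real implementation stitches horizontally or vertically is not stated, so the side-by-side layout and the frame representation here are assumptions.

```python
# Sketch of the splicing module: stitch left-eye and right-eye frames side
# by side, row by row. Horizontal stitching is an assumption; each frame is
# represented as a list of pixel rows.

def splice_side_by_side(left_frame, right_frame):
    if len(left_frame) != len(right_frame):
        raise ValueError("eye frames must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left_frame, right_frame)]
```

Splicing before encoding means a single encoder pass produces one file that still preserves both eye views for later stereoscopic playback.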
  • the processing device of the video file may further include: a first playing unit and/or a second playing unit.
  • The first playing unit is configured to receive a first play instruction for instructing playing of the left-eye video picture, and play the left-eye video picture according to the first play instruction; the second playing unit is configured to receive a second play instruction for instructing playing of the right-eye video picture, and play the right-eye video picture according to the second play instruction.
  • the processing device of the video file may further include: a determining unit, configured to: after the media data is encoded to obtain the recorded video file of the target video, determine, according to the type of the first virtual reality device, a video playing terminal that is adapted to the first virtual reality device, wherein the video playing terminal is configured to connect with the first virtual reality device to acquire and play the recorded video file.
  • the determining unit is specifically configured to determine, according to the type of the first virtual reality device, a flat video playing terminal that is adapted to the first virtual reality device, wherein the flat video playing terminal is used to play the recorded video file in a two-dimensional form, and the video playing terminal includes the flat video playing terminal.
  • the determining unit is specifically configured to determine, according to the type of the first virtual reality device, a second virtual reality device that is adapted to the first virtual reality device, wherein the second virtual reality device is configured to play the recorded video file in a three-dimensional form, and the video playing terminal includes the second virtual reality device.
  • the determining unit is specifically configured to: when the recorded video file is saved in the terminal, determine, according to the type of the first virtual reality device, a fixed virtual reality device that is adapted to the first virtual reality device, wherein the second virtual reality device includes the fixed virtual reality device.
  • the processing device of the video file may further include: a processing unit, configured to: after the media data is encoded to obtain the recorded video file of the target video, process the recorded video file to obtain processed data, and send the processed data to a preset website.
  • the determining unit is specifically configured to determine, according to the type of the first virtual reality device, a mobile virtual reality device that is adapted to the first virtual reality device, wherein the mobile virtual reality device is configured to play the recorded video file according to the processed data through the preset website, and the second virtual reality device includes the mobile virtual reality device.
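Taken together, the determining-unit variants above amount to a small decision rule; the sketch below uses invented labels to show how the device type and the storage location could select the adapted playback terminal.

```python
def pick_playback_terminal(device_type, saved_locally):
    """Return the kind of terminal adapted to play the recorded file.

    The device_type value and the returned names are illustrative labels,
    not identifiers defined by the patent.
    """
    if device_type == "flat-compatible":
        return "flat video playing terminal"   # plays the file in 2D form
    if saved_locally:
        return "fixed virtual reality device"  # plays the local file in 3D
    return "mobile virtual reality device"     # plays via the preset website
```

For example, a recording kept on the terminal maps to a fixed virtual reality device, while one processed and sent to the preset website maps to a mobile virtual reality device.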
  • the second virtual reality device includes a head mounted display device, wherein the head mounted display device is configured to play the recorded video file in a three-dimensional form through the plug-in.
  • the foregoing second detecting unit 20 may include: a detecting module or a calling module, and a first saving module.
  • the detecting module is configured to: when the type of the first virtual reality device is a first type, start a preset process to detect the process of the application of the first virtual reality device and the window information of the application; or, the calling module is configured to: when the type of the first virtual reality device is a second type, invoke the software development kit of the first virtual reality device to acquire a process ID, and acquire the process of the application of the first virtual reality device and the window information of the application according to the process ID.
  • the first saving module is configured to save the process of the application and the window information of the application, and load the application according to the process and window information of the application.
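The two detection paths above (scanning running processes versus asking the device's software development kit for a process ID) can be mocked as follows; the process table and the SDK callback are invented stand-ins for a real OS API and vendor SDK.

```python
# Mock process table: pid -> process name and application window title.
PROCESS_TABLE = {
    1001: {"name": "vr_runtime_a", "window": "VR App A"},
    1002: {"name": "vr_runtime_b", "window": "VR App B"},
}

def detect_by_scanning(app_name):
    """First-type path: a preset process scans running processes by name."""
    for pid, info in PROCESS_TABLE.items():
        if info["name"] == app_name:
            return pid, info["window"]
    return None

def detect_by_sdk(get_pid_from_sdk):
    """Second-type path: the device SDK returns the process ID directly."""
    pid = get_pid_from_sdk()
    info = PROCESS_TABLE.get(pid)
    return (pid, info["window"]) if info else None
```

Either path yields the same pair of process and window information, which the saving module above then stores so the application can be loaded later.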
  • the first detecting unit 10 may include: a first display module, a determining module, and a response module.
  • the first display module is configured to display a first preset interface
  • the determining module is configured to determine whether a start recording instruction is received through the first preset interface, the start recording instruction being used to instruct to start recording the target video; the response module is configured to: if it is determined that the start recording instruction is received through the first preset interface, detect the type of the first virtual reality device in response to the start recording instruction.
  • the determining module is specifically configured to: determine whether a start recording instruction generated by a touch on a preset button of the first preset interface is received; or determine whether a start recording instruction generated by a touch on a keyboard shortcut of the first preset interface is received; or determine whether a voice command corresponding to the start recording instruction is received through the first preset interface.
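The three instruction sources above (preset button, keyboard shortcut, voice command) reduce to a small dispatch check; the event shapes, the button ID, the key combination, and the voice phrase below are all invented for illustration.

```python
def is_start_recording(event):
    """Return True if a UI event counts as a start recording instruction."""
    if event["kind"] == "button" and event["id"] == "start_record":
        return True                                    # touched preset button
    if event["kind"] == "shortcut" and event["keys"] == "ctrl+r":
        return True                                    # keyboard shortcut
    if event["kind"] == "voice" and event["text"].lower() == "start recording":
        return True                                    # matched voice command
    return False
```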
  • the first detecting unit 10 may further include: a second display module, configured to: after determining whether a start recording instruction for instructing to start recording the target video is received through the first preset interface, if it is determined that the start recording instruction is received through the first preset interface, display a recording mark indicating that the target video is being recorded, wherein the recording mark includes a recording mark displayed in an animated form and/or a recording mark displayed in a time form.
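The time-form recording mark mentioned above might be rendered as a running MM:SS counter; the exact on-screen format is an assumption, not one specified by the patent.

```python
def recording_mark(elapsed_seconds):
    """Format elapsed recording time as a 'REC MM:SS' on-screen mark."""
    minutes, seconds = divmod(int(elapsed_seconds), 60)
    return f"REC {minutes:02d}:{seconds:02d}"

# After 75 seconds of recording the mark reads "REC 01:15".
```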
  • the processing device of the video file further includes: a first display unit, a first determining unit, and a first response unit.
  • the first display unit is configured to display a second preset interface after the media data is encoded to obtain the recorded video file of the target video; the first determining unit is configured to determine whether an end recording instruction is received through the second preset interface, the end recording instruction being used to end the recording of the target video; the first response unit is configured to end the recording of the target video in response to the end recording instruction if it is determined that the end recording instruction is received through the second preset interface.
  • the first determining unit is specifically configured to: determine whether an end recording instruction generated by a touch on a preset button of the second preset interface is received; or determine whether an end recording instruction generated by a touch on a keyboard shortcut of the second preset interface is received; or determine whether a voice command corresponding to the end recording instruction is received through the second preset interface.
  • the processing device of the video file further includes: a second display unit, a second determining unit, and a second response unit.
  • the second display unit is configured to display a third preset interface after the media data is encoded to obtain the recorded video file of the target video; the second determining unit is configured to determine whether a save instruction is received through the third preset interface, the save instruction being used to indicate that the recorded video file is to be saved; the second response unit is configured to save the recorded video file in response to the save instruction if it is determined that the save instruction is received through the third preset interface.
  • the second determining unit is specifically configured to: determine whether a save instruction generated by a touch on a preset button of the third preset interface is received; or determine whether a save instruction generated by a touch on a keyboard shortcut of the third preset interface is received; or determine whether a voice command corresponding to the save instruction is received through the third preset interface.
  • the processing device of the video file may further include: a first saving unit, or a second saving unit, or a third saving unit.
  • the first saving unit is configured to: after encoding the media data to obtain the recorded video file of the target video, save the recorded video file to the software development package of the first virtual reality device; and the second saving unit is configured to be in the pair After the media data is encoded to obtain the recorded video file of the target video, the recorded video file is saved to the software development package of the game client; and the third saving unit is configured to encode the media data to obtain the recorded video file of the target video. Save the recorded video file to the software development kit of the game engine.
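The three saving options above can be sketched as a lookup from the target software development kit to a destination path; all paths and keys here are placeholders, not locations defined by the patent.

```python
from pathlib import PurePosixPath

# Placeholder recording folders for each software development kit target.
SDK_DIRS = {
    "vr_device":   PurePosixPath("/sdk/vr_device/recordings"),
    "game_client": PurePosixPath("/sdk/game_client/recordings"),
    "game_engine": PurePosixPath("/sdk/game_engine/recordings"),
}

def save_target(kind, filename):
    """Return the path the recorded video file would be saved to."""
    return SDK_DIRS[kind] / filename
```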
  • first detecting unit 10 in this embodiment may be used to perform step S402 in the embodiment of the present application.
  • the second detecting unit 20 in this embodiment may be used to perform step S404 in the embodiment of the present application.
  • the obtaining unit 30 in this embodiment may be used to perform step S406 in the embodiment of the present application.
  • the encoding unit 40 in this embodiment may be used to perform step S408 in the embodiment of the present application.
  • the above units and modules share the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in the above embodiments. It should be noted that the above modules may run as part of the device in the terminal in the hardware environment shown in FIG. 3, and may be implemented by software or by hardware.
  • the first detecting unit 10 detects the type of the first virtual reality device, wherein the first virtual reality device is configured to display the target video to be recorded; the second detecting unit 20 detects the application of the first virtual reality device according to the type of the first virtual reality device; the obtaining unit 30 acquires the media data of the target video in the application of the first virtual reality device; and the encoding unit 40 encodes the media data to obtain the recorded video file of the target video, wherein the video content of the recorded video file is the same as the video content of the target video. This solves the technical problem that the related technology cannot record the video displayed in a virtual reality device, thereby achieving the technical effect of recording the video displayed in the virtual reality device.
  • an embodiment of the present application further provides a server or terminal for implementing the processing method of the above video file.
  • FIG. 19 is a structural block diagram of a terminal according to an embodiment of the present application.
  • the terminal may include one or more (only one shown in the figure) processor 201, memory 203, and transmission device 205.
  • the terminal may further include an input/output device 207.
  • the memory 203 can be used to store software programs and modules, such as the program instructions/modules corresponding to the video file processing method and device in the embodiment of the present application. The processor 201 runs the software programs and modules stored in the memory 203 to perform various functional applications and data processing, that is, to implement the above-described video file processing method.
  • Memory 203 can include high speed random access memory, and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 203 can further include memory remotely located relative to processor 201, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the above described transmission device 205 is used to receive or transmit data via a network, and can also be used for data transmission between the processor and the memory. Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 205 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
  • the transmission device 205 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 203 is used to store an application.
  • the processor 201 can call the application stored in the memory 203 through the transmission device 205 to perform the following steps:
  • detecting a type of the first virtual reality device, wherein the first virtual reality device is configured to display a target video to be recorded; detecting an application of the first virtual reality device according to the type of the first virtual reality device; acquiring media data of the target video in the application of the first virtual reality device; and encoding the media data to obtain a recorded video file of the target video, wherein the video content of the recorded video file is the same as the video content of the target video.
  • the processor 201 may be further configured to: capture a video image of the target video, and acquire image data of the target video according to the video image; acquire audio data of the target video; and encode the image data and the audio data, respectively, to obtain a recorded video file.
  • the processor 201 may be further configured to: capture a left eye video image of the target video and a right eye video image of the target video, acquire left eye image data of the target video according to the left eye video image, and acquire a target according to the right eye video image. Right eye image data of the video; splicing the left eye image data and the right eye image data to obtain image data of the target video; encoding the image data to obtain a recorded video file.
  • the processor 201 is further configured to perform the following steps: after encoding the image data to obtain a recorded video file, receiving a first play instruction for instructing to play the left-eye video picture, and playing the left-eye video picture according to the first play instruction; and/or receiving a second play instruction for instructing to play the right-eye video picture, and playing the right-eye video picture according to the second play instruction.
  • the processor 201 may be further configured to: after encoding the media data to obtain a recorded video file of the target video, determining a video playing terminal adapted to the first virtual reality device according to the type of the first virtual reality device The video playing terminal is configured to connect to the first virtual reality device to acquire and play the recorded video file.
  • the processor 201 may be further configured to: determine, according to the type of the first virtual reality device, a flat video playing terminal adapted to the first virtual reality device, wherein the flat video playing terminal is configured to play the recorded video file in a two-dimensional form, and the video playing terminal includes the flat video playing terminal.
  • the processor 201 may be further configured to: determine, according to the type of the first virtual reality device, a second virtual reality device that is adapted to the first virtual reality device, wherein the second virtual reality device is configured to play the recorded video file in a three-dimensional form, and the video playing terminal includes the second virtual reality device.
  • the processor 201 may be further configured to: when the recorded video file is saved in the terminal, determine, according to the type of the first virtual reality device, a fixed virtual reality device that is adapted to the first virtual reality device, wherein the second virtual reality device includes the fixed virtual reality device.
  • the processor 201 is further configured to perform the following steps: after encoding the media data to obtain a recorded video file of the target video, processing the recorded video file to obtain processed data; sending the processed data to a preset website; and determining, according to the type of the first virtual reality device, a mobile virtual reality device that is adapted to the first virtual reality device, wherein the mobile virtual reality device is configured to play the recorded video file according to the processed data through the preset website, and the second virtual reality device includes the mobile virtual reality device.
  • the processor 201 is further configured to: when the type of the first virtual reality device is the first type, initiate a preset process to detect the process of the application of the first virtual reality device and the window information of the application; or, in a case where the type of the first virtual reality device is the second type, invoke the software development kit of the first virtual reality device to acquire a process ID, and acquire the process of the application of the first virtual reality device and the window information of the application according to the process ID; save the process and window information of the application; and load the application according to the process and window information of the application.
  • the processor 201 is further configured to perform the following steps: displaying a first preset interface; determining whether a start recording instruction is received through the first preset interface, the start recording instruction being used to instruct to start recording the target video; and if it is determined that the start recording instruction is received through the first preset interface, detecting the type of the first virtual reality device in response to the start recording instruction.
  • the processor 201 is further configured to: determine whether a start recording instruction generated by a touch on a preset button of the first preset interface is received; or determine whether a start recording instruction generated by a touch on a keyboard shortcut of the first preset interface is received; or determine whether a voice command corresponding to the start recording instruction is received through the first preset interface.
  • the processor 201 is further configured to perform the following steps: after determining whether a start recording instruction for instructing to start recording the target video is received through the first preset interface, if it is determined that the start recording instruction is received through the first preset interface, displaying a recording mark indicating that the target video is being recorded, wherein the recording mark includes a recording mark displayed in an animated form and/or a recording mark displayed in a time form.
  • the processor 201 is further configured to perform the following steps: after encoding the media data to obtain a recorded video file of the target video, displaying a second preset interface; determining whether an end recording instruction is received through the second preset interface, the end recording instruction being used to indicate that the recording of the target video is ended; and if it is determined that the end recording instruction is received through the second preset interface, ending the recording of the target video in response to the end recording instruction.
  • the processor 201 is further configured to: determine whether an end recording instruction generated by a touch on a preset button of the second preset interface is received; or determine whether an end recording instruction generated by a touch on a keyboard shortcut of the second preset interface is received; or determine whether a voice command corresponding to the end recording instruction is received through the second preset interface.
  • the processor 201 is further configured to perform the following steps: after encoding the media data to obtain a recorded video file of the target video, displaying a third preset interface; determining whether a save instruction is received through the third preset interface, the save instruction being used to indicate that the recorded video file is to be saved; and if it is determined that the save instruction is received through the third preset interface, saving the recorded video file in response to the save instruction.
  • the processor 201 is further configured to perform the following steps: determining whether a save instruction generated by a touch on a preset button of the third preset interface is received; or determining whether a save instruction generated by a touch on a keyboard shortcut of the third preset interface is received; or determining whether a voice command corresponding to the save instruction is received through the third preset interface.
  • the processor 201 is further configured to perform the following steps: after encoding the media data to obtain the recorded video file of the target video, saving the recorded video file to the software development package of the first virtual reality device; or saving the recorded video file Go to the software development kit of the game client; or save the recorded video file to the software development kit of the game engine.
  • the embodiment of the present application provides a solution for a video file processing method.
  • in the embodiment of the present application, a type of a first virtual reality device is detected, wherein the first virtual reality device is configured to display a target video to be recorded; an application of the first virtual reality device is detected according to the type of the first virtual reality device; media data of the target video is acquired in the application of the first virtual reality device; and the media data is encoded to obtain a recorded video file of the target video, wherein the video content of the recorded video file is the same as the video content of the target video. This achieves the technical effect of recording the video displayed in the virtual reality device, thereby solving the technical problem that the related technology cannot record the video displayed in the virtual reality device.
  • the terminal can be a smart phone (such as an Android mobile phone or an iOS mobile phone), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, or other terminal equipment.
  • the structure shown in Fig. 19 does not limit the structure of the above electronic device.
  • the terminal may also include more or less components (such as a network interface, display device, etc.) than shown in FIG. 19, or have a different configuration than that shown in FIG.
  • Embodiments of the present application also provide a storage medium.
  • the foregoing storage medium may be used to execute program code of a processing method of a video file.
  • the foregoing storage medium may be located on at least one of the plurality of network devices in the network shown in the foregoing embodiment.
  • the storage medium may be arranged to store program code for performing the following steps:
  • detecting a type of the first virtual reality device, wherein the first virtual reality device is configured to display a target video to be recorded; detecting an application of the first virtual reality device according to the type of the first virtual reality device; acquiring media data of the target video in the application of the first virtual reality device; and encoding the media data to obtain a recorded video file of the target video, wherein the video content of the recorded video file is the same as the video content of the target video.
  • the storage medium may be further configured to store program code for performing the following steps: capturing a video image of the target video, and acquiring image data of the target video according to the video image; acquiring audio data of the target video; and encoding the image data and the audio data, respectively, to obtain a recorded video file.
  • the storage medium may be further configured to store program code for performing the following steps: capturing a left eye video picture of the target video and a right eye video picture of the target video, according to the left eye Obtaining, by the video image, the left eye image data of the target video, acquiring the right eye image data of the target video according to the right eye video image; performing splicing processing on the left eye image data and the right eye image data to obtain image data of the target video; and performing image data on the image data Encode to get the recorded video file.
  • the storage medium may be further configured to store program code for performing the following steps: after encoding the image data to obtain a recorded video file, receiving the left eye for indicating playback a first play instruction of the video picture, playing a left eye video picture according to the first play instruction; and/or receiving a second play instruction for instructing playing of the right eye video picture, and playing the right eye video picture according to the second play instruction.
  • the storage medium may be further configured to store program code for performing the following steps: after encoding the media data to obtain a recorded video file of the target video, determining, according to the type of the first virtual reality device, a video playing terminal that is adapted to the first virtual reality device, wherein the video playing terminal is configured to connect to the first virtual reality device to acquire and play the recorded video file.
  • the storage medium may be further configured to store program code for performing the following steps: determining, according to the type of the first virtual reality device, a flat video playing terminal that is adapted to the first virtual reality device, wherein the flat video playing terminal is configured to play the recorded video file in a two-dimensional form, and the video playing terminal includes the flat video playing terminal.
  • the storage medium may be further configured to store program code for performing the following steps: determining, according to the type of the first virtual reality device, a second virtual reality device that is adapted to the first virtual reality device, wherein the second virtual reality device is configured to play the recorded video file in a three-dimensional form, and the video playing terminal includes the second virtual reality device.
  • the storage medium may be further configured to store program code for performing the following steps: in a case where the recorded video file is saved in the terminal, determining, according to the type of the first virtual reality device, a fixed virtual reality device that is adapted to the first virtual reality device, wherein the second virtual reality device includes the fixed virtual reality device.
  • the storage medium may be further configured to store program code for performing the following steps: after encoding the media data to obtain a recorded video file of the target video, processing the recorded video file to obtain processed data; sending the processed data to a preset website; and determining, according to the type of the first virtual reality device, a mobile virtual reality device that is adapted to the first virtual reality device, wherein the mobile virtual reality device is configured to play the recorded video file according to the processed data through the preset website, and the second virtual reality device includes the mobile virtual reality device.
  • the storage medium may be further configured to store program code for performing the following steps: in a case where the type of the first virtual reality device is the first type, starting a preset process to detect the process of the application of the first virtual reality device and the window information of the application; or, in a case where the type of the first virtual reality device is the second type, invoking the software development kit of the first virtual reality device to acquire a process ID, and acquiring the process of the application of the first virtual reality device and the window information of the application according to the process ID; saving the process and window information of the application; and loading the application according to the process and window information of the application.
  • the storage medium may be further configured to store program code for performing the following steps: displaying a first preset interface; determining whether a start recording instruction is received through the first preset interface The start recording instruction is used to instruct to start recording the target video; if it is determined that the start recording instruction is received through the first preset interface, the type of the first virtual reality device is detected in response to the start recording instruction.
  • the storage medium may be further configured to store program code for performing the following steps: determining whether a start recording instruction generated by a touch on a preset button of the first preset interface is received; or determining whether a start recording instruction generated by a touch on a keyboard shortcut of the first preset interface is received; or determining whether a voice command corresponding to the start recording instruction is received through the first preset interface.
  • the storage medium may be further configured to store program code for performing the following steps: after determining whether a start recording instruction for instructing to start recording the target video is received through the first preset interface, if it is determined that the start recording instruction is received through the first preset interface, displaying a recording mark indicating that the target video is being recorded, wherein the recording mark includes a recording mark displayed in an animated form and/or a recording mark displayed in a time form.
  • the storage medium may be further configured to store program code for performing the following steps: after encoding the media data to obtain a recorded video file of the target video, displaying a second preset interface; determining whether an end recording instruction is received through the second preset interface, the end recording instruction being used to instruct to end the recording of the target video; and if it is determined that the end recording instruction is received through the second preset interface, ending the recording of the target video in response to the end recording instruction.
  • the storage medium may be further configured to store program code for performing the following steps: determining whether an end recording instruction generated by a touch on a preset button of the second preset interface is received; or determining whether an end recording instruction generated by a touch on a keyboard shortcut of the second preset interface is received; or determining whether a voice command corresponding to the end recording instruction is received through the second preset interface.
  • the storage medium may be further configured to store program code for performing the following steps: after encoding the media data to obtain a recorded video file of the target video, displaying the third pre- Setting an interface; determining whether a save command is received through the third preset interface, the save command is used to instruct to save the recorded video file; if it is determined that the save command is received through the third preset interface, the recorded video file is saved in response to the save command .
  • the storage medium may be further configured to store program code for performing the following steps: determining whether a save instruction generated by a touch on a preset button of the third preset interface is received; or determining whether a save instruction generated by a touch on a keyboard shortcut of the third preset interface is received; or determining whether a voice command corresponding to the save instruction is received through the third preset interface.
  • the storage medium may be further configured to store program code for performing the following steps: after encoding the media data to obtain a recorded video file of the target video, saving the recorded video file To the software development kit of the first virtual reality device; or, save the recorded video file to the software development package of the game client; or save the recorded video file to the software development kit of the game engine.
  • the foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • an embodiment of the present application further provides a computer program product including instructions which, when run on a terminal, cause the terminal to perform the video file processing method provided in the foregoing embodiments.
  • if the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in the above computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • the software product includes a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
  • the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is merely a division by logical function; other division manners are possible in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A video file processing method and apparatus for recording a video displayed in a virtual reality device. The method includes: detecting a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; detecting an application of the first virtual reality device according to the type of the first virtual reality device; acquiring, in the application of the first virtual reality device, media data of the target video; and encoding the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is identical to the video content of the target video.

Description

Video file processing method and apparatus
This application claims priority to Chinese Patent Application No. 201610950464.8, filed with the Chinese Patent Office on October 26, 2016 and entitled "Video file processing method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of computers, and specifically to the processing of video files.
Background
At present, virtual reality (VR) technology is on the rise. VR is a simulation technology with which a virtual world can be created and experienced: a computer generates a simulated environment, and multi-source information fusion, interactive three-dimensional dynamic scenes, and entity behavior immerse the user in that environment. Its good interactivity and multi-sensory nature have made VR widely used in the entertainment field, for example in panoramic video and VR games. Compared with 2D (two-dimensional) multimedia content, VR content is still scarce, and most of it is professionally generated content (PGC), which cannot satisfy users' personalized needs. For example, when a user experiences a VR video, conventional recording techniques cannot fully record what the user experiences.
For the above problem that conventional techniques cannot record the video displayed in a virtual reality device, no effective solution has yet been proposed.
Summary
Embodiments of the present application provide a video file processing method and apparatus, so as to at least solve the technical problem that the related art cannot record the video displayed in a virtual reality device.
According to one aspect of the embodiments of the present application, a video file processing method is provided. The method includes: detecting a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; detecting an application of the first virtual reality device according to the type of the first virtual reality device; acquiring, in the application of the first virtual reality device, media data of the target video; and encoding the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is identical to that of the target video.
According to another aspect of the embodiments of the present application, another video file processing method is provided. The method includes: detecting, by a terminal, a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; detecting, by the terminal, an application of the first virtual reality device according to the type of the first virtual reality device; acquiring, by the terminal in the application of the first virtual reality device, media data of the target video; and encoding, by the terminal, the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is identical to that of the target video.
According to another aspect of the embodiments of the present application, a video file processing apparatus is further provided. The apparatus includes: a first detection unit configured to detect a type of a first virtual reality device, where the first virtual reality device is configured to display a target video to be recorded; a second detection unit configured to detect an application of the first virtual reality device according to the type of the first virtual reality device; an acquisition unit configured to acquire, in the application of the first virtual reality device, media data of the target video; and an encoding unit configured to encode the media data to obtain a recorded video file of the target video, where the video content of the recorded video file is identical to that of the target video.
According to another aspect of the embodiments of the present application, a terminal is further provided. The terminal includes a processor and a memory, where the memory is configured to store program code and transmit the program code to the processor, and the processor is configured to call instructions in the memory to perform the above video file processing method.
According to another aspect of the embodiments of the present application, a storage medium is further provided. The storage medium is configured to store program code for performing the above video file processing method.
According to another aspect of the embodiments of the present application, a computer program product including instructions is further provided which, when run on a terminal, causes the terminal to perform the above video file processing method.
In the embodiments of the present application, the type of a first virtual reality device is detected, where the first virtual reality device is configured to display a target video to be recorded; an application of the first virtual reality device is detected according to the type of the first virtual reality device; media data of the target video is acquired in the application of the first virtual reality device; and the media data is encoded to obtain a recorded video file of the target video whose video content is identical to that of the target video. This achieves the technical effect of recording the video displayed in a virtual reality device, thereby solving the technical problem that the related art cannot record such video.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the embodiments of the present application and constitute a part of the present application. The illustrative embodiments and their descriptions are used to explain the embodiments of the present application and do not constitute an improper limitation thereon. In the drawings:
FIG. 1 is a schematic diagram of an OBS recording system framework in the related art;
FIG. 2 is a schematic diagram of an OBS recording operation interface in the related art;
FIG. 3 is a schematic diagram of a hardware environment of a video file processing method according to an embodiment of the present application;
FIG. 4 is a flowchart of a video file processing method according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for encoding media data according to an embodiment of the present application;
FIG. 6 is a flowchart of another video file processing method according to an embodiment of the present application;
FIG. 7 is a flowchart of a method for detecting an application of a first virtual reality device according to the type of the first virtual reality device, according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for detecting the type of a first virtual reality device according to an embodiment of the present application;
FIG. 9 is a flowchart of another video file processing method according to an embodiment of the present application;
FIG. 10 is a flowchart of a method for acquiring media data of a target video according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of video recording according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for detecting a VR device according to an embodiment of the present application;
FIG. 13 is a flowchart of a method for detecting a VR application according to an embodiment of the present application;
FIG. 14 is a flowchart of another video file processing method according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a framework of a video player according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a video playback interface according to an embodiment of the present application; and
FIG. 18 is a schematic diagram of a video file processing apparatus according to an embodiment of the present application; and
FIG. 19 is a structural block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solutions of the embodiments of the present application, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and so on in the specification, claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variants thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
Conventional video recording can be implemented with Open Broadcaster Software (OBS). FIG. 1 is a schematic diagram of an OBS recording system framework in the related art. As shown in FIG. 1, the OBS recording system includes an image frame capture module, a recording-source plugin system, a video encoding module, and an audio encoding module, where the recording-source plugin system can record a Windows image and perform Direct3D capture and Open Graphics Library (OpenGL) capture. The OBS recording system can record desktop application windows and game frames on the Windows platform, but it cannot record the frames inside the newer PC VR head-mounted displays (HMDs). Moreover, screen-recording software such as OBS generally suffers from a cumbersome interface and complicated operation. FIG. 2 is a schematic diagram of an OBS recording operation interface in the related art. As shown in FIG. 2, recording is started by clicking "Start Streaming", "Start Recording", or "Preview Stream"; in terms of interface friendliness and product experience this is difficult for many ordinary users, raising the barrier to use and hindering adoption of the product.
Therefore, the embodiments of the present application provide a video file processing method and apparatus, so as to at least solve the technical problem that the related art cannot record the video displayed in a virtual reality device. The embodiments of the present application are described in detail below.
According to an embodiment of the present application, an embodiment of a video file processing method is provided.
In this embodiment, as shown in FIG. 3, the above video file processing method may be applied in a terminal 301, which may be a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, or the like. The terminal may be connected to a first virtual reality device 302.
FIG. 4 is a flowchart of a video file processing method according to an embodiment of the present application. As shown in FIG. 4, the video file processing method may include the following steps:
Step S402: detect the type of a first virtual reality device.
In this embodiment of the present application, the first virtual reality device is configured to display a target video to be recorded.
A virtual reality device is a device that uses virtual reality technology to immerse the user in a virtual world. Virtual reality technology can create a computer simulation system for experiencing a virtual world: a computer generates an interactive, multi-source-information-fused simulation of three-dimensional dynamic scenes and entity behavior that immerses the user in the environment of that virtual world. An example of a virtual reality device is a VR head-mounted display (HMD) on the PC platform.
The first virtual reality device may be detected on the basis of the application development package (Platform SDK) for the Windows operating system corresponding to the virtual reality device. When detecting the first virtual reality device, a detection plugin system is loaded to perform the detection. For example, an Oculus plugin (Oculus being a kind of virtual reality device) or an HTC Vive plugin (HTC Vive being a virtual reality device produced by HTC) may be loaded for detection, so as to obtain the type of the first virtual reality device. The detection plugin system then returns the type of the first virtual reality device, which may be the Oculus type or the HTC Vive type; that is, the first virtual reality device may be an Oculus device or an HTC Vive device.
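As an illustration of the plugin-based detection described above, the following is a minimal Python sketch. The `probe_oculus` and `probe_htc_vive` functions are hypothetical stand-ins for calls into the vendors' Platform SDKs, not real APIs:

```python
def probe_oculus():
    """Stand-in for an Oculus Platform SDK probe: number of headsets found."""
    return 1  # assumed for illustration

def probe_htc_vive():
    """Stand-in for an HTC Vive (OpenVR) probe: number of headsets found."""
    return 0  # assumed for illustration

def detect_vr_devices():
    """Load each detection plugin, then aggregate device type and count."""
    plugins = {"Oculus": probe_oculus, "HTC Vive": probe_htc_vive}
    found = {name: probe() for name, probe in plugins.items()}
    # Keep only the device types that were actually detected.
    return {name: n for name, n in found.items() if n > 0}
```

With the assumed probes above, `detect_vr_devices()` reports one Oculus device, mirroring how the detection plugin system returns the type (and count) of the first virtual reality device.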
The first virtual reality device is configured to display the target video to be recorded. The target video is played on the first virtual reality device and is the video the user wants to record while watching; it may be an entire video played on the first virtual reality device, or a segment of a video played on the first virtual reality device.
Step S404: detect an application of the first virtual reality device according to the type of the first virtual reality device.
When detecting the application of the first virtual reality device, detection logic may be executed according to the detected type of the first virtual reality device; different device types correspond to different detection logic. A specific implementation of detecting the application according to the type may include: when the type of the first virtual reality device is a first type, starting a preset process to detect the process of the application and the window information of the application of the first virtual reality device; or, when the type of the first virtual reality device is a second type, calling the software development kit (SDK) of the first virtual reality device to obtain a process ID (identity), and obtaining the process of the application and the window information of the application of the first virtual reality device according to the process ID; and saving the process of the application and the window information of the application, and loading the application according to the process and window information of the application.
The preset process is a pre-configured independent process for detection. It can be used to detect the process of the application and the window information of the application, so that the application can be loaded according to this information.
The first type and the second type of the first virtual reality device can be distinguished by whether the software development kit provides functions related to application detection. For example, when the type of the first virtual reality device is the Oculus type, the SDK of the Oculus device provides no functions related to application detection; that is, the type of the first virtual reality device is the first type, and in implementation an independent process needs to be started for detection, obtaining the process name of the application and the window information of the application and saving them. When the type of the first virtual reality device is the HTC Vive type, the SDK of the HTC Vive device does provide functions related to application detection; that is, the type of the first virtual reality device is the second type, and it is only necessary to call the SDK provided by the HTC Vive device to obtain the process ID, then obtain the process name and window information of the application according to the process ID, and finally save the process name and window information of the application.
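The type-dependent branching can be sketched as follows. `FakeSdk`, `lookup_process`, and `scan_with_helper_process` are illustrative stand-ins only; a real implementation would call the headset SDK and query the operating system's process table:

```python
class FakeSdk:
    """Stand-in for a headset SDK that can report the VR app's process ID."""
    def get_app_pid(self):
        return 4242  # assumed PID for illustration

# Toy process table; a real implementation would query the OS.
PROCESS_TABLE = {4242: ("vr_game.exe", "My VR Game")}

def lookup_process(pid):
    """Resolve a PID to (process_name, window_title)."""
    return PROCESS_TABLE[pid]

def scan_with_helper_process():
    """Stand-in for the independent scanner process used when the SDK
    offers no application-detection function (the Oculus case above)."""
    return ("vr_game.exe", "My VR Game")

def detect_vr_app(device_type, sdk=None):
    """Branch on device type: query the SDK for the PID when the SDK
    supports it, otherwise fall back to the helper-process scan."""
    if device_type == "HTC Vive" and sdk is not None:
        return lookup_process(sdk.get_app_pid())
    return scan_with_helper_process()
```

Both branches end with the same saved information (process name and window title), which is what the recorder later uses to load and hook the application.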
After the application of the first virtual reality device is detected according to the type of the first virtual reality device, the application may further be loaded according to the process and window information of the application.
Step S406: acquire, in the application of the first virtual reality device, media data of the target video.
After the application of the first virtual reality device is detected according to its type, recording of the target video starts. In the application of the first virtual reality device, the recording process of the target video includes a capture thread, an audio encoding thread, a video encoding thread, and so on. To acquire the media data of the target video, a VR source capture module may be loaded into the recording process when recording starts; this module captures the video frames of the target video, obtains the image data of the target media file from the video frames, and then copies the image data from the target process to the recording main program. The image data of the target media file is one kind of media data of the target video. In addition, the audio data of the target video may also be acquired; the audio data of the target media file is likewise a kind of media data of the target video.
Step S408: encode the media data to obtain a recorded video file of the target video. The video content of the recorded video file is identical to that of the target video.
After the media data of the target video is acquired, the media data is encoded. The image data and the audio data may be encoded separately to obtain the recorded video file. In practice, the media data may be encoded by the encoding threads of the application's main program: the video encoding thread performs video encoding on the image data in the media data, and the audio encoding thread performs audio encoding on the audio data in the media data, yielding the recorded video file and thereby achieving the purpose of recording the target video in the first virtual reality device.
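A toy sketch of the separate audio and video encoding threads follows; the actual codecs are replaced by a tagging stand-in, so this shows only the threading structure, not real encoding:

```python
import threading

def encode_media(frames, audio_chunks):
    """Encode image data and audio data on separate worker threads, as
    the recording main program does. 'Encoding' here merely tags each
    item; a real implementation would hand the data to video/audio codecs."""
    out = {"video": [], "audio": []}

    def worker(items, kind):
        for item in items:
            out[kind].append((kind, item))  # stand-in for the real codec

    t_video = threading.Thread(target=worker, args=(frames, "video"))
    t_audio = threading.Thread(target=worker, args=(audio_chunks, "audio"))
    t_video.start(); t_audio.start()
    t_video.join(); t_audio.join()
    return out  # the two encoded streams would then be muxed into one file
```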
In a possible implementation of this embodiment of the present application, after the media data is encoded to obtain the recorded video file of the target video, the recorded video file may further be played. According to the type of the first virtual reality device, a video playback terminal for playing the recorded video file is automatically adapted. For example, the recorded video file may be played in a head-mounted display matching the type of the first virtual reality device, including devices on the PC platform and the mobile platform; such a head-mounted display may be used in VR or in augmented reality (AR) technology, where AR is a technology that computes the position and angle of a camera image in real time and adds the corresponding images, so that the virtual world is superimposed on the real world on the screen for interaction. As another example, the recorded video file may be played back into the first virtual reality device, restoring the target video and reproducing the first-person visual immersion the user had when watching it.
Through the above steps S402 to S408 — detecting the type of the first virtual reality device, which displays the target video to be recorded; detecting the application of the first virtual reality device according to its type; acquiring, in that application, the media data of the target video, the media data including at least image data of the target video; and encoding the media data to obtain a recorded video file whose video content is identical to that of the target video — the technical problem that the related art cannot record the video displayed in a virtual reality device is solved, and the technical effect of recording that video is achieved.
作为一种可选的实施例,步骤S406,获取目标视频的媒体数据包括:捕捉目标视频的左眼视频画面和目标视频的右眼视频画面,根据左眼视频画面获取目标视频的左眼图像数据,根据右眼视频画面获取目标视频的右眼图像数据;对左眼图像数据和右眼图像数据进行拼接处理,得到目标视频的图像数据。
图5是根据本申请实施例的一种对媒体数据进行编码的方法的流程图。如图5所示,该对媒体数据进行编码的方法包括以下步骤:
步骤S501,分别获取目标视频的左眼图像数据和右眼图像数据。
在本申请实施例上述步骤S501提供的技术方案中,可以捕捉目标视频的左眼视频画面和目标视频的右眼视频画面,根据左眼视频画面获取目标视频的左眼图像数据,根据右眼视频画面获取目标视频的右眼图像数据。
目标视频包括通过左眼观看的左眼视频画面和通过右眼观看的右眼视频画面。左眼图像数据可以用于显示左眼视频画面,右眼图像数据可以用于显示右眼视频画面,根据左眼视频画面和右眼视频画面可以分别获取左眼图像数据和右眼图像数据。
步骤S502,对左眼图像数据和右眼图像数据进行拼接处理,得到目标视频的图像数据。
对媒体数据进行编码,得到目标视频的录制视频文件包括:对图像数据进行编码,得到一个完整的录制视频文件,并将录制视频文件进行保存,
在本申请实施例中,在获取左眼图像数据之后,可以根据左眼图像数据生成录制视频文件的左眼视频文件,保存左眼视频文件,从而实现了对左眼图像数据的编码,用户通过左眼可以观看到左眼视频文件对应的目标视频的左眼画面。在获取右眼图像数据之后,根据右眼图像数据生成录制视频文件的右眼视频文件,保存右眼视频文件,从而实现了对左眼图像数据的编码,用户通过右眼可以观看到右眼视频文件对应的目标视频的右眼画面。
该实施例分别获取目标视频的左眼图像数据和右眼图像数据,对左眼图像数据和右眼图像数据进行拼接处理,得到目标视频的图像数据,从而实现了对目标视频的媒体数据进行获取的目的。
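The left-eye/right-eye stitching step described above can be sketched as a row-wise side-by-side concatenation; frames are represented here as nested lists of pixel values for illustration:

```python
def stitch_side_by_side(left, right):
    """Concatenate left-eye and right-eye frames row by row into one
    side-by-side frame, a common layout for stereoscopic recording."""
    assert len(left) == len(right), "both eye frames must share a height"
    return [l_row + r_row for l_row, r_row in zip(left, right)]
```

For a 2x2 left frame and a 2x2 right frame, the result is a 2x4 stitched frame that the encoder can treat as a single image stream.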
作为一种可选的实施例,在对图像数据进行编码,得到录制视频文件之后,还可以接收用于指示播放左眼视频画面的第一播放指令,根据第一播放指令播放左眼视频画面;和/或接收用于指示播放右眼视频画面的第二播放指令,根据第二播放指令播放右眼视频画面。
在对媒体数据进行编码,得到目标视频的录制视频文件之后,对录制视频文件进行播放,也即,重现目标视频,可以选择播放左眼视频画面和右眼视频画面,也可单独选择只播放左眼视频画面、或者只播放右眼视频画面。从而提高了播放录制视频文件的灵活性。
另外,在保存左眼视频文件和右眼视频文件之后,可以同时播放左眼视频文件以显示左眼视频画面和播放右眼视频文件以显示右眼视频画面,接收第一播放指令,根据第一播放指令播放左眼视频文件,接收第二播放指令,根据第二播放指令播放右眼视频文件;也可以只接收第一播放指令,播放左眼视频文件以显示左眼视频画面,或者只接收第二播放指令,播放右眼视频文件以显示右眼视频画面,从而提高了播放录制视频文件的灵活性。
作为一种可选的实施例,在对媒体数据进行编码,得到目标视频的录制视频文件之后,可以根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的视频播放终端,其中,视频播放终端用于与第一虚拟现实设备相连接,获取并播放录制视频文件。
对录制视频文件进行播放的视频播放终端与第一虚拟现实设备相适配,不同类型的第一虚拟现实设备可以适配不同的视频播放终端。在对媒体数据进行编码,得到目标视频的录制视频文件之后,自动适配与第一虚拟现实设备相适配的视频播放终端,视频播放终端与第一虚拟现实设备连接,获取录制视频文件,进而对录制视频文件进行播放,也即,重现目标视频。
在本申请实施例一种可能的实现方式中,对录制视频文件进行播放时,需要对第一虚 拟现实设备进行检测,可以将录制视频文件在普通平面播放,也可以将录制视频文件在VR头戴式显示设备进行播放,例如通过Oculus头显播放插件进行播放、通过HTC头显播放插件进行播放,平面播放插件进行播放等,在对录制视频文件进行播放时还需要视频解码模块对录制视频文件进行解码。
作为一种可选的实施例,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的视频播放终端包括:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的平面视频播放终端,其中,平面视频播放终端用于以二维形式对录制视频文件进行播放,视频播放终端包括平面视频播放终端;或者根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备,其中,第二虚拟现实设备用于以三维形式对录制视频文件进行播放,视频播放终端包括第二虚拟现实设备。
视频播放终端用于对录制视频文件进行播放,其类型与第一虚拟现实设备相适配,包括平面播放终端和第二虚拟现实设备。平面视频播放终端,也即平面二维(2D)视频播放器,用于以二维形式对录制视频文件进行播放,可以通过平面播放插件进行播放。
对录制视频文件进行播放的视频播放终端可以为第二虚拟现实设备,比如,头戴式显示设备,可以为移动头戴式显示设备,通过Oculus头显播放插件或者HTC头显播放插件以三维形式对录制视频文件进行播放。其中,头戴式显示设备可以通过渲染流程Direct3D11渲染,通过Oculus渲染,和通过HTC渲染,Oculus渲染包括通过Oculus DK2 Piugin插件和通过Oculus CV1 Plugin插件进行渲染。
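The automatic adaptation of a playback terminal to the detected headset type amounts to a plugin lookup with a flat-2D fallback; the plugin names below are illustrative, not real module names:

```python
def pick_playback_plugin(device_type):
    """Map the detected headset type to a playback plugin name; unknown
    or absent headsets fall back to flat two-dimensional playback."""
    hmd_plugins = {
        "Oculus": "oculus_hmd_plugin",   # illustrative name
        "HTC Vive": "htc_hmd_plugin",    # illustrative name
    }
    return hmd_plugins.get(device_type, "flat_2d_plugin")
```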
需要说明的是,上述对录制视频文件的播放方式仅为本申请实施例的优选实施例,并不限于本申请实施例对录制视频文件的播放方式仅限于上述播放方式,任何可以实现对录制视频文件播放的方式都在本申请实施例的保护范围之内,此处不再一一列举。
作为一种可选的实施例,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备的实现可以包括:在录制视频文件保存在终端的情况下,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的固定虚拟现实设备,第二虚拟现实设备包括固定虚拟现实设备。
在录制视频文件保存在终端(例如PC端)的情况下,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的固定虚拟现实设备,比如,第一虚拟现实设备为PC端VR头显,在用户通过PC端VR头显在观看VR,或者全景视频,或者体验VR应用、VR游戏的过程中,对目标视频进行录制,并将录制视频文件保存在PC端。第二虚拟现实设备为固定虚拟现实设备,可以同样为PC端VR头显,通过PC端VR头显观看录制视频文件对应的目标视频,从而实现用户在PC端VR观看具有沉浸感的录制视频,实现了实现根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备的目的。
作为一种可选的实施例,还可以在对媒体数据进行编码,得到目标视频的录制视频文件之后,对录制视频文件进行处理,得到处理数据;发送处理数据至预设网站;根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备的实现可以包括:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的移动虚拟现实设备,移动虚拟现实设备用于通过预设网站根据处理数据对录制视频文件进行播放。
图6是根据本申请实施例的另一种视频文件的处理方法的流程图。如图6所示,该视频文件的处理方法包括以下步骤:
步骤S601,对录制视频文件进行处理,得到处理数据。
在对媒体数据进行编码,得到目标视频的录制视频文件之后,可以对录制视频文件进行处理,得到处理数据。
步骤S602,发送处理数据至预设网站。
在对录制视频文件进行处理,得到处理数据之后,将处理数据发送到第三方在线视频网站,其中,预设网站包括第三方在线视频网站,从而实现了对录制视频文件的分享的目的。
步骤S603,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的移动虚拟现实设备。
移动虚拟现实设备用于通过预设网站根据处理数据对录制视频文件进行播放,则第二虚拟现实设备包括移动虚拟现实设备。
在发送处理数据至预设网站之后,与第一虚拟现实设备的类型相适配的移动虚拟现实设备可以再次从第三方在线视频网站上获取该处理数据,该移动虚拟现实设备可以为移动端VR眼镜,通过该处理数据对录制视频文件进行播放,实现了对录制视频文件进行播放的目的。
该实施例通过在对媒体数据进行编码,得到目标视频的录制视频文件之后,发送对录制视频文件进行处理得到的处理数据至预设网站,通过与第一虚拟现实设备的类型相适配的移动虚拟现实设备通过预设网站根据处理数据对录制视频文件进行播放,达到了对录制视频文件进行分享和播放的目的。
作为一种可选的实施例,第二虚拟现实设备可以包括:头戴式显示设备,其中,头戴式显示设备通过插件以三维形式对录制视频文件进行播放。
第二虚拟现实设备包括头戴式显示设备,比如,Oculus头戴式显示设备,HTC头戴式显示设备,其中,Oculus头戴式显示设备通过Oculus Dk2 Plugin插件或者Oculus CV1 Plugin插件以三维形式对录制视频文件进行播放。
图7是根据本申请实施例的一种根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用的方法的流程图。如图7所示,该根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用的方法包括以下步骤:
步骤S701,启动预设进程检测应用的进程和应用的窗口信息。
在本申请实施例上述步骤S701提供的技术方案中,在与第二虚拟现实设备相适配的第一虚拟现实设备的类型为第一类型的情况下,启动预设进程检测应用的进程和应用的窗口信息。
第二虚拟现实设备的类型为Oculus类型,在与第二虚拟现实设备相适配的第一虚拟现实设备的类型为Oculus的情况下,由于Oculus类型的第二虚拟现实设备的SDK没有提供检测应用的相关函数,启动预设进程,该预设进程为独立的进程用于检测第一虚拟现实设备的应用的进程。
步骤S702,调用第一虚拟现实设备的软件开发工具包以获取进程ID,根据进程ID获取应用的进程和应用的窗口信息。
在本申请实施例上述步骤S702提供的技术方案中,在与第二虚拟现实设备相适配的第一虚拟现实设备的类型为第二类型的情况下,调用第一虚拟现实设备的SDK以获取进程ID,根据进程ID获取应用的进程和应用的窗口信息。
第二虚拟现实设备的类型为HTC Vive类型,在与第二虚拟现实设备相适配的第一虚拟现实设备的类型为HTC Vive的情况下,只需要调用第一虚拟现实设备的SDK获取到进程ID,根据进程ID获取应用的进程以及窗口信息。
步骤S703,保存应用的进程和应用的窗口信息,并根据应用的进程和窗口信息加载应用。
在启动预设进程检测应用的进程和应用的窗口信息之后,或者在调用第一虚拟现实设备的软件开发工具包以获取进程ID,根据进程ID获取应用的进程和应用的窗口信息之后,保存应用的进程和应用的窗口信息,可以通过保存应用的进程名来保存应用的进程,应用的窗口信息包括应用的窗口标题等信息。根据应用的进程和窗口信息加载应用,从而实现了对第一虚拟现实设备的应用进行检测的目的。
该实施例通过在与第二虚拟现实设备相适配的第一虚拟现实设备的类型为第一类型的情况下,启动预设进程检测应用的进程和应用的窗口信息;或者,在与第二虚拟现实设备相适配的第一虚拟现实设备的类型为第二类型的情况下,调用第一虚拟现实设备的软件开发工具包以获取进程ID,根据进程ID获取应用的进程和应用的窗口信息;保存应用的进程和应用的窗口信息,并根据应用的进程和窗口信息加载应用,实现了根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用的目的。
作为一种可选的实施例,步骤S402,检测第一虚拟现实设备的类型可以包括:显示第一预设界面;判断是否通过第一预设界面接收到开始录制指令,开始录制指令用于指示开始对目标视频进行录制;如果判断出通过第一预设界面接收到开始录制指令,响应于开始录制指令检测第一虚拟现实设备的类型。
图8是根据本申请实施例的一种检测第一虚拟现实设备的类型的方法的流程图。如图8所示,该检测第一虚拟现实设备的类型的方法包括以下步骤:
步骤S801,显示第一预设界面。
第一预设界面为用于接收开始对目标视频进行录制的开始录制指令的界面,可以包括界面命令按钮。
步骤S802,判断是否通过第一预设界面接收到开始录制指令。
在显示第一预设界面之后,用户可以对第一预设界面进行触控以在第一预设界面上产生开始录制指令。
步骤S803,响应于开始录制指令检测第一虚拟现实设备的类型。
在判断是否通过第一预设界面接收到开始录制指令之后,即如果用户对第一预设界面进行触控,产生了开始录制指令,则响应于开始录制指令对第一虚拟现实设备进行检测,检测第一虚拟现实设备的类型。
该实施例通过显示第一预设界面;判断是否通过第一预设界面接收到开始录制指令;在判断出通过第一预设界面接收到开始录制指令,响应于开始录制指令检测第一虚拟现实设备的类型,实现了检测第一虚拟现实设备的类型的目的。
作为一种可选的实施例,步骤S802,判断是否通过第一预设界面接收到开始录制指令包括以下之一:判断是否接收到由第一预设界面的预设按钮被触控所产生的开始录制指令;判断是否接收到由第一预设界面的键盘快捷键被触控所产生的开始录制指令;判断是否通过第一预设界面接收到与开始录制指令对应的语音命令。
第一预设界面比较简单,可以包括多种用于操作开始录制目标视频的方式。第一预设界面可以包括预设按钮,该预设按钮为界面命令按钮,通过该预设按钮被触控产生开始录制指令,则判断是否接收到由第一预设界面的预设按钮被触控所产生的开始录制指令。第一预设界面可以对应有键盘快捷键,通过键盘快捷键被触控产生开始录制指令,则判断是否接收到由第一预设界面的键盘快捷键被触控所产生的开始录制指令。第一预设界面还可以通过语音命令输入识别开始录制指令,则判断是否通过第一预设界面接收到与开始录制指令对应的语音命令。
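Treating a button touch, a keyboard shortcut, and a voice command as equivalent sources of the start-recording instruction can be sketched as a small dispatcher; the specific shortcut and voice phrase are assumptions for illustration:

```python
def received_start_command(event):
    """Return True if the event counts as a 'start recording' instruction,
    whether it came from a preset button, a keyboard shortcut, or a
    recognized voice command (all values here are illustrative)."""
    kind, payload = event
    if kind == "button":
        return payload == "start_record"
    if kind == "hotkey":
        return payload == "ctrl+shift+r"  # assumed shortcut
    if kind == "voice":
        return payload.strip().lower() == "start recording"
    return False
```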
需要说明的是,上述对目标视频进行开始录制的方式仅为本申请实施例的优选实施例,并不限于本申请实施例对目标视频开始录制的方式仅限于上述播放方式,任何可以实现对目标视频进行开始录制的方式都在本申请实施例的保护范围之内,此处不再一一列举。
作为一种可选的实施例,在判断是否通过第一预设界面接收到用于指示开始对目标视频进行录制的开始录制指令之后,如果判断出通过第一预设界面接收到开始录制指令,显示用于表示对目标视频进行录制的录制标志,其中,录制标志包括以动画形式显示的录制标志,和/或以时间形式显示的录制标志。
用于表示对目标视频进行录制的录制标志为具有使用户具有沉浸感的录制标识,可以在头戴式显示器中进行显示。在判断是否通过第一预设界面接收到用于指示开始对目标视频进行录制的开始录制指令之后,在判断出通过第一预设界面接收到开始录制指令的情况下,可以以动画形式显示录制标志,比如,以转动的图形表示对目标视频开始录制,和/或以时间形式显示的录制标志,比如,开始计时的时间表示形式。
作为一种可选的实施例,还可以在对媒体数据进行编码,得到目标视频的录制视频文件之后,在通过第二预设界面接收到结束录制指令的情况下,结束对目标视频进行录制;在判断出通过第三预设界面接收到保存指令的情况下,响应于保存指令保存录制视频文件。
图9是根据本申请实施例的另一种视频文件的处理方法的流程图。如图9所示,该视频文件的处理方法还包括以下步骤:
步骤S901,显示第二预设界面。
第二预设界面为用于接收结束对目标视频进行录制的结束录制指令的界面,可以包括界面命令按钮。
步骤S902,判断是否通过第二预设界面接收到结束录制指令,结束录制指令用于指示结束对目标视频进行录制。
在显示第二预设界面之后,用户可以对第二预设界面进行触控以在第二预设界面上产 生结束录制指令。
步骤S903,响应于结束录制指令结束对目标视频进行录制。
在判断是否通过第二预设界面接收到结束录制指令之后,在判断出通过第二预设界面接收到结束录制指令的情况下,如果用户对第二预设界面进行触控,产生了结束录制指令,则响应于结束录制指令结束对目标视频进行录制。
步骤S904,显示第三预设界面。
第三预设界面为用于接收对目标视频进行保存的保存指令的界面,可以包括界面命令按钮。
步骤S905,判断是否通过第三预设界面接收到保存指令,保存指令用于指示对录制视频文件进行保存。
在显示第三预设界面之后,用户可以对第三预设界面进行触控以在第一预设界面上产生保存指令。
步骤S906,响应于保存指令保存录制视频文件。
在判断是否通过第三预设界面接收到用于指示对目标视频进行保存的保存指令之后,如果用户对第三预设界面进行触控,产生了保存指令,则响应于保存指令对录制视频文件进行保存。
该实施例通过在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第二预设界面;判断是否通过第二预设界面接收到结束录制指令;如果判断出通过第二预设界面接收到结束录制指令,响应于结束录制指令结束对目标视频进行录制;显示第三预设界面;判断是否通过第三预设界面接收到保存指令;如果判断出通过第三预设界面接收到保存指令,响应于保存指令保存录制视频文件,简化了对第一虚拟现实设备显示的视频进行录制的操作。
作为一种可选的实施例,步骤S902通过第二预设界面接收结束录制指令包括以下之一:判断是否接收到由第二预设界面的预设按钮被触控所产生的结束录制指令;判断是否接收到由第二预设界面的键盘快捷键被触控所产生的结束录制指令;判断是否通过第二预设界面接收到与结束录制指令对应的语音命令。
第二预设界面比较简单,可以包括多种用于操作结束录制目标视频的方式。第二预设界面可以包括预设按钮,该预设按钮为界面命令按钮,通过该预设按钮被触控产生结束录制指令,则判断是否接收到由第二预设界面的预设按钮被触控所产生的结束录制指令。第二预设界面可以对应有键盘快捷键,通过键盘快捷键被触控产生结束录制指令,则判断是否接收到由第二预设界面的键盘快捷键被触控所产生的结束录制指令。第二预设界面还可以通过语音命令输入识别结束录制指令,则判断是否通过第二预设界面接收到与结束录制指令对应的语音命令。
需要说明的是,上述对录制视频文件结束录制的方式仅为本申请实施例的优选实施例,并不限于本申请实施例对录制视频文件结束录制的方式仅限于上述方式,任何可以实现对录制视频文件结束录制的方式都在本申请实施例的保护范围之内,此处不再一一列举。
作为一种可选的实施例,步骤S905,判断是否通过第三预设界面接收到保存指令包括 以下之一:判断是否接收到由第三预设界面的预设按钮被触控所产生的保存指令;判断是否接收到由第三预设界面的键盘快捷键被触控所产生的保存指令;判断是否通过第三预设界面接收到与保存指令对应的语音命令。
第三预设界面比较简单,可以包括多种用于操作保存录制目标视频的方式。第三预设界面可以包括预设按钮,该预设按钮为界面命令按钮,通过该预设按钮被触控产生保存指令,则判断是否接收到由第三预设界面的预设按钮被触控所产生的保存指令。第三预设界面可以对应有键盘快捷键,通过键盘快捷键被触控产生保存指令,则判断是否接收到由第三预设界面的键盘快捷键被触控所产生的保存指令。第三预设界面还可以通过语音命令输入识别开始录制指令,则判断是否通过第三预设界面接收到与保存指令对应的语音命令。
需要说明的是,上述对录制视频文件进行保存的方式仅为本申请实施例的优选实施例,并不限于本申请实施例对录制视频文件进行保存的方式仅限于上述方式,任何可以实现对录制视频文件进行保存的方式都在本申请实施例的保护范围之内,此处不再一一列举。
作为一种可选的实施例,步骤S406,获取目标视频的媒体数据包括:捕捉目标视频的视频画面,根据视频画面获取目标视频的图像数据;获取目标视频的音频数据。
图10是根据本申请实施例的一种获取目标视频的媒体数据的方法的流程图。如图10所示,该获取目标视频的媒体数据的方法包括以下步骤:
步骤S1001,捕捉目标视频的视频画面,根据视频画面获取目标视频的图像数据。
在检测第一虚拟现实设备的应用之后,在第一虚拟现实设备的应用中,捕捉目标视频的视频画面,根据目标视频的视频画面获取目标视频的图像数据。
步骤S1002,获取目标视频的音频数据。
目标视频具有播放声音的音频数据。在检测第一虚拟现实设备的应用之后,获取目标视频的音频数据。
需要说明的是,步骤S1001和步骤S1002的执行不包括先后顺序,可以同时执行,也可以先执行步骤S1001,或者先执行步骤S1002。
在获取图像数据和音频数据之后,分别对图像数据和音频数据进行编码,得到录制视频文件,从而实现了对媒体数据进行编码,得到录制视频文件的目的。
作为一种可选的实施例,在对媒体数据进行编码,得到目标视频的录制视频文件之后,还可以保存录制视频文件至第一虚拟现实设备的软件开发包中;或者,保存录制视频文件至游戏客户端的软件开发包中;或者保存录制视频文件至游戏引擎的软件开发包中。
在本实施例中,对于游戏开发商、游戏引擎开发商或者VR硬件开发商,可以在引擎的SDK中,游戏中以及硬件显示SDK中内置保存成第一人称VR视频的功能。
本申请实施例实现了录制左右眼VR画面并且能够完美还原第一人称沉浸感的VR视频方案,满足VR玩家录制体验VR应用和游戏过程的需求,本申请实施例也可以作为内容产生平台,以用户产生内容(User Generated Content,简称为UGC)为主,一定程度上增加了现阶段的VR内容,达到了对虚拟现实设备中显示的视频进行录制的技术效果。
下面结合实际应用场景对本申请实施例的技术方案进行说明。
本申请实施例的技术框架分为两部分,第一部分是录制过程,第二部分是播放过程。
图11是根据本申请实施例的一种视频录制的结构示意图。如图11所示,对虚拟现实设备上的目标视频进行录制,包括VR设备检测、VR应用检测、图像帧捕捉、录制源插件系统、视频编码模块、音频编码模块和畸变模块。其中,VR设备检测用于对VR设备的类型进行检测,VR应用检测用于根据VR设备的类型检测VR设备的应用,图像帧捕捉用于捕捉图像,录制源插件系统用于实现录制源插件系统可以录制Window图像、VR源捕捉、Direct3D捕捉、OpenGL捕捉,VR源捕捉用于实现Oculus提交框构(Oculus Submit Frame hook),HTC Vive提交框构(HTC Vive Submit Frame hook),头显录制动画渲染,视频编码模块用于对目标视频的图像数据进行编码,音频编码模块用于对目标视频的音频数据进行编码,畸变模块用于对录制视频文件进行处理,得到处理数据,该畸变后的处理数据可以分享到第三方在线视频网站,供移动端VR眼镜观看。
图12是根据本申请实施例的一种对VR设备进行检测的方法的流程图。如图12所示,该对VR设备进行检测的方法包括以下步骤:
步骤S1201,加载检测插件系统。
在对目标视频进行录制时,首先是VR设备的检测。VR设备检测主要原理是根据不同的VR硬件设备,分别依赖于其硬件设备提供的Platform SDK,从而实现检测功能。加载检测插件系统,插件系统包括Oculus插件和HTC插件。
步骤S1202,加载Oculus插件。
如果VR设备为Oculus设备,加载Oculus插件,通过Oculus插件进行检测。
步骤S1203,加载HTC Vive插件。
如果VR设备为HTC Vive设备,加载HTC Vive插件,通过HTC Vive插件进行检测。
步骤S1204,汇总设备类型、个数。
在加载Oculus插件,通过Oculus插件进行检测之后,或者加载HTC Vive插件,通过HTC Vive插件进行检测之后,汇总设备类型、个数。
在汇总设备类型、个数之后,返回VR设备的类型。
图13是根据本申请实施例的一种VR应用的检测方法的流程图。如图13所示,该VR应用的检测方法包括以下步骤:
步骤S1301,判断VR应用的设备类型。
检测VR应用时,根据检测到的不同的VR设备,分别执行不同的检测逻辑。
步骤S1302,当VR设备为HTC Vive类型时,调用SDK获取进程ID。
对于检测运行在HTC Vive设备上的应用,只需要调用其提供的SDK获取到进程ID,然后根据进程ID获取进程以及窗口信息。
步骤S1303,当VR设备为Oculus类型时,启动独立进程检测。
对于检测运行在Oculus设备上的应用,由于其SDK没有提供相关函数,所以在实现上需要启动一个独立进程用于检测。
步骤S1304,保存进程名、窗口标题等信息。
图14根据本申请实施例的另一种视频文件的处理方法的流程图。如图14所示,该视频文件的处理方法包括以下步骤:
步骤S1401,在录制主程序中将用于录制视频的数据注入目标进程。
步骤S1402,在目标进程中检测VR模块。
步骤S1403,在目标进程中通过VR捕捉管理模块捕捉数据。
步骤S1404,在目标进程中通过Oculus钩子处理。
通过Oculus钩子对捕捉到的数据进行处理。
步骤S1405,在目标进程中通过HTC钩子处理。
通过HTC钩子对捕捉到的数据进行处理。
步骤S1406,在目标进程中捕捉视频画面。
通过处理后的数据捕捉视频画面,得到图像数据。
步骤S1407,在目标进程中通过图形处理器拷贝。
通过图像处理器将图像数据拷贝至录制主程序。
步骤S1408,在录制主程序中获取捕捉画面。
在录制主程序中获取图像数据,从而获取捕捉画面。
步骤S1409,在录制主程序中进行音视频编码。
对捕捉到的画面对应的视频文件进行音视频编码。
步骤S1410,在录制主程序中生成录制视频文件。
对视频进行录制的流程较为复杂,主要分为捕捉线程、音频编码线程、视频编码线程。在录制开始时会注入相应的VR源捕捉模块到被录制进程中,负责捕捉VR左右眼画面,然后通过GPU拷备纹理的方法将图像数据从目标进程复制到录制主程序,由主程序的编码线程进行音视频编码,生成视频文件,实现了对虚拟现实设备中显示的视频进行录制的技术效果。
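The capture/copy/encode flow described above can be outlined as a toy pipeline in which each stage merely tags the data; the hook injection, GPU texture copy, and codecs of the real system are all replaced by stand-ins:

```python
def record_pipeline(eye_frames):
    """Toy end-to-end pass over the recording pipeline: 'capture' the
    left/right eye frames in the target process, 'copy' them out to the
    recording main program, and 'encode' them into a file-like list."""
    # Stage 1: the injected VR source capture module grabs both eyes.
    captured = [("L", f) for f in eye_frames] + [("R", f) for f in eye_frames]
    # Stage 2: stands in for the GPU texture copy to the main program.
    copied = list(captured)
    # Stage 3: stands in for the audio/video encoding threads.
    return [f"{eye}:{frame}" for eye, frame in copied]
```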
在生成录制视频文件之后,根据不同的VR硬件,在播放的时候会自动适配,播放到相应的头显中。
图15是根据本申请实施例的一种视频播放器的框架的示意图。如图15所示,该视频播放器可以实现:VR设备检测、普通平面播放、VR HMD设备播放、视频解码模块,其中,VR HMD设备播放为头显播放,主要分为三个插件模块,平面播放插件模块、Oculus头显播放插件模块以及HTC头显播放插件模块,这里不局限于上述三种头显插件,后续的其它硬件头显播放也可采用这样的框架,VR HMD可以用于实现Direct 3D渲染、Oculus渲染、HTC渲染,其中,Oculus渲染包括Oculus DK2Plugin插件和Oculus CV1Plugin插件。
该实施例实现了录制左右眼VR画面并且能够完美还原第一人称沉浸感的VR视频方案,满足VR玩家录制体验VR应用和游戏过程的需求,本申请实施例也可以作为内容产生平台,以UGC为主,一定程度上增加了现阶段的VR内容。
在该实施例中,游戏开发商、游戏引擎开发商或者VR硬件开发商可能会在引擎的SDK中、游戏中以及硬件显示SDK中内置保存成第一人称VR视频的功能。
本申请实施例的应用环境可以但不限于参照上述实施例中的应用环境,本实施例中对此不再赘述。本申请实施例提供了用于实施上述视频文件的处理方法的一种可选的具体应 用。
本申请实施例适用于多种VR体验场景,本申请实施例包括但不限于以下几点:
第一,具有PC VR头显的用户,在观看VR或全景视频、体验VR应用、VR游戏过程中,都可以使用本申请实施例的方法录制整个过程。
第二,具有移动端Mobile VR头显的用户,可以使用发明实施例的方法观看具有沉浸感的,其他用户录制好的体验视频。
第三,对于没有VR头显的用户,可以使用发明实施例的方法直接播放录制好的视频,可以选择同时观看左右眼视频画面,也可以只观看左眼或右眼视频画面。
本申请实施例可以录制PC VR头显里的画面,保存成左右眼的视频格式,还可以将录制好的视频重新播放到头显中,还原第一人称视角沉浸感。
图16是根据本申请实施例的一种视频录制界面的示意图。如图16所示,产品的录制和播放操作简单,可以通过多种操作开始和结束录制,包括但不限于界面命令按钮、键盘快捷键、语音命令输入识别等。其中,界面命令按钮如图16右上角的开始录制按钮,当用户触控时,便开始对目标视频进行录制。同时,在对目标视频进行录制的时候,还会在头显内部显示出具有沉浸感的录制标志,包括但不限于录制动画、时间显示等。视频录制界面显示有录制好的视频文件,比如,第一视频文件、第二视频文件和第三视频文件,在录制界面的右侧包括检测面板,设备、VR应用、录制PC音频、录制麦克风以及录制时间等,从而实现了对虚拟现实设备中显示的视频进行录制的技术效果。
图17是根据本申请实施例的一种视频播放界面的示意图。如图17所示,视频播放器可以选择播放为平面2D视频,也可以选择播放到头显的VR视频。没有VR头显的用户,可以使用发明实施例的方法直接播放录制好的视频,可以选择同时观看左右眼视频画面,也可以只观看左眼或右眼视频画面,视频播放界面下方显示有视频播放的进度和播放时间,视频播放界面下方四个界面按钮分别可以分别对应视频在播放时的退出操作、后退操作、播放/暂停操作、前进操作,从而实现了对录制视频的播放。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请实施例并不受所描述的动作顺序的限制,因为依据本申请实施例,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请实施例所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例的方法。
根据本申请实施例,还提供了一种用于实施上述视频文件的处理方法的视频文件的处理装置。图18是根据本申请实施例的一种视频文件的处理装置的示意图。如图18所示, 该视频文件的处理装置可以包括:第一检测单元10、第二检测单元20、获取单元30和编码单元40。
第一检测单元10,用于检测第一虚拟现实设备的类型,其中,第一虚拟现实设备用于显示待录制的目标视频。
第二检测单元20,用于根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用。
获取单元30,用于在第一虚拟现实设备的应用中,获取目标视频的媒体数据。
编码单元40,用于对媒体数据进行编码,得到目标视频的录制视频文件,其中,录制视频文件的视频内容与目标视频的视频内容相同。
在本申请实施例一种可能的实现方式中,上述获取单元30可以包括:捕捉模块和第一获取模块。捕捉模块,用于捕捉目标视频的视频画面,根据视频画面获取目标视频的图像数据;第一获取模块,用于获取目标视频的音频数据。
编码单元具体用于:分别对图像数据和音频数据进行编码,得到录制视频文件。
在本申请实施例一种可能的实现方式中,上述获取单元30可以包括:第二获取模块和拼接模块。第二获取模块,用于捕捉目标视频的左眼视频画面和目标视频的右眼视频画面,根据左眼视频画面获取目标视频的左眼图像数据,根据右眼视频画面获取目标视频的右眼图像数据;拼接模块,用于对左眼图像数据和右眼图像数据进行拼接处理,得到目标视频的图像数据;
编码单元具体用于:对图像数据进行编码,得到录制视频文件。
在本申请实施例一种可能的实现方式中,该视频文件的处理装置还可以包括:第一播放单元和/或第二播放单元。第一播放单元,用于接收用于指示播放左眼视频画面的第一播放指令;根据第一播放指令播放左眼视频画面;第二播放单元,用于接收用于指示播放右眼视频画面的第二播放指令,根据第二播放指令播放右眼视频画面。
在本申请实施例一种可能的实现方式中,该视频文件的处理装置还可以包括:确定单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的视频播放终端,其中,视频播放终端用于与第一虚拟现实设备相连接,获取并播放录制视频文件。
在本申请实施例一种可能的实现方式中,上述确定单元具体用于根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的平面视频播放终端,其中,平面视频播放终端用于以二维形式对录制视频文件进行播放,视频播放终端包括平面视频播放终端。
在本申请实施例一种可能的实现方式中,上述确定单元具体用于根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备,其中,第二虚拟现实设备用于以三维形式对录制视频文件进行播放,视频播放终端包括第二虚拟现实设备。
在本申请实施例一种可能的实现方式中,上述确定单元具体用于在录制视频文件保存在终端的情况下,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的固定虚拟现实设备,第二虚拟现实设备包括固定虚拟现实设备。
在本申请实施例一种可能的实现方式中,该视频文件的处理装置还可以包括:处理单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,对录制视频文件进 行处理,得到处理数据;发送处理数据至预设网站;上述确定单元具体用于:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的移动虚拟现实设备,其中,移动虚拟现实设备用于通过预设网站根据处理数据对录制视频文件进行播放,第二虚拟现实设备包括移动虚拟现实设备。
在本申请实施例一种可能的实现方式中,第二虚拟现实设备包括头戴式显示设备,其中,头戴式显示设备用于通过插件以三维形式对录制视频文件进行播放。
在本申请实施例一种可能的实现方式中,上述第二检测单元20可以包括:检测模块或者调用模块,以及第一保存模块。其中,检测模块,用于在第一虚拟现实设备的类型为第一类型的情况下,启动预设进程检测第一虚拟现实设备的应用的进程和应用的窗口信息;或者,调用模块,用于在第一虚拟现实设备的类型为第二类型的情况下,调用第一虚拟现实设备的软件开发工具包以获取进程ID,根据进程ID获取第一虚拟现实设备的应用的进程和应用的窗口信息;保存模块,用于保存应用的进程和应用的窗口信息,并根据应用的进程和窗口信息加载应用。
在本申请实施例一种可能的实现方式中,上述第一检测单元10可以包括:第一显示模块、判断模块和响应模块。其中,第一显示模块,用于显示第一预设界面;判断模块,用于判断是否通过第一预设界面接收到开始录制指令,开始录制指令用于指示开始对目标视频进行录制;响应模块,用于如果判断出通过第一预设界面接收到开始录制指令,响应于开始录制指令检测第一虚拟现实设备的类型。
在本申请实施例一种可能的实现方式中,上述判断模块具体用于判断是否接收到由第一预设界面的预设按钮被触控所产生的开始录制指令;或者,判断是否接收到由第一预设界面的键盘快捷键被触控所产生的开始录制指令;或者,判断是否通过第一预设界面接收到与开始录制指令对应的语音命令。
在本申请实施例一种可能的实现方式中,上述第一检测单元10还可以包括:第二显示模块,用于在判断是否通过第一预设界面接收到用于指示开始对目标视频进行录制的开始录制指令之后,如果判断出通过第一预设界面接收到开始录制指令,显示用于表示对目标视频进行录制的录制标志,其中,录制标志包括以动画形式显示的录制标志,和/或以时间形式显示的录制标志。
在本申请实施例一种可能的实现方式中,该视频文件的处理装置还包括:第一显示单元、第一判断单元、第一响应单元。其中,第一显示单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第二预设界面;第一判断单元,用于判断是否通过第二预设界面接收到结束录制指令,结束录制指令用于指示结束对目标视频进行录制;第一响应单元,用于如果判断出通过第二预设界面接收到结束录制指令,响应于结束录制指令结束对目标视频进行录制。
在本申请实施例一种可能的实现方式中,上述第一判断单元具体用于:判断是否接收到由第二预设界面的预设按钮被触控所产生的结束录制指令;或者,判断是否接收到由第二预设界面的键盘快捷键被触控所产生的结束录制指令;或者,判断是否通过第二预设界面接收到与结束录制指令对应的语音命令。
在本申请实施例一种可能的实现方式中,该视频文件的处理装置还包括:第二显示单元、第二判断单元、第二响应单元。其中,第二显示单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第三预设界面;第二判断单元,用于判断是否通过第三预设界面接收到保存指令,保存指令用于指示对录制视频文件进行保存;第二响应单元,用于如果判断出通过第三预设界面接收到保存指令,响应于保存指令保存录制视频文件。
在本申请实施例一种可能的实现方式中,上述第二判断单元具体用于:判断是否接收到由第三预设界面的预设按钮被触控所产生的保存指令;或者,判断是否接收到由第三预设界面的键盘快捷键被触控所产生的保存指令;或者,判断是否通过第三预设界面接收到与保存指令对应的语音命令。
在本申请实施例一种可能的实现方式中,该视频文件的处理装置还可以包括:第一保存单元,或者第二保存单元,或者第三保存单元。其中,第一保存单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,保存录制视频文件至第一虚拟现实设备的软件开发包中;第二保存单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,保存录制视频文件至游戏客户端的软件开发包中;第三保存单元,用于在对媒体数据进行编码,得到目标视频的录制视频文件之后,保存录制视频文件至游戏引擎的软件开发包中。
需要说明的是,该实施例中的第一检测单元10可以用于执行本申请实施例中的步骤S402,该实施例中的第二检测单元20可以用于执行本申请实施例中的步骤S404,该实施例中的获取单元30可以用于执行本申请实施例中的步骤S406,该实施例中的编码单元40可以用于执行本申请实施例中的步骤S408。
此处需要说明的是,上述单元和模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图3所示的硬件环境中的终端中,可以通过软件实现,也可以通过硬件实现。
该实施例通过第一检测单元10检测第一虚拟现实设备的类型,其中,第一虚拟现实设备用于显示待录制的目标视频,通过第二检测单元20根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用,通过获取单元30在第一虚拟现实设备的应用中,获取目标视频的媒体数据,通过编码单元40对媒体数据进行编码,得到目标视频的录制视频文件,其中,录制视频文件的视频内容与目标视频的视频内容相同,可以解决了相关技术无法对虚拟现实设备中显示的视频进行录制的技术问题,进而达到对虚拟现实设备中显示的视频进行录制的技术效果。
根据本申请实施例,还提供了一种用于实施上述视频文件的处理方法的服务器或终端。
图19是根据本申请实施例的一种终端的结构框图。如图20所示,该终端可以包括:一个或多个(图中仅示出一个)处理器201、存储器203、以及传输装置205,如图19所示,该终端还可以包括输入输出设备207。
其中,存储器203可用于存储软件程序以及模块,如本申请实施例中的视频文件的处理方法和装置对应的程序指令/模块,处理器201通过运行存储在存储器203内的软件程序 以及模块,从而执行各种功能应用以及数据处理,即实现上述的视频文件的处理方法。存储器203可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器203可进一步包括相对于处理器201远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
上述的传输装置205用于经由一个网络接收或者发送数据,还可以用于处理器与存储器之间的数据传输。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置205包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置205为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,具体地,存储器203用于存储应用程序。
处理器201可以通过传输装置205调用存储器203存储的应用程序,以执行下述步骤:
检测第一虚拟现实设备的类型,其中,第一虚拟现实设备用于显示待录制的目标视频;
根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用;
在第一虚拟现实设备的应用中,获取目标视频的媒体数据;
对媒体数据进行编码,得到目标视频的录制视频文件,其中,录制视频文件的视频内容与目标视频的视频内容相同。
处理器201还可以用于执行下述步骤:捕捉目标视频的视频画面,根据视频画面获取目标视频的图像数据;获取目标视频的音频数据;分别对图像数据和音频数据进行编码,得到录制视频文件。
处理器201还可以用于执行下述步骤:捕捉目标视频的左眼视频画面和目标视频的右眼视频画面,根据左眼视频画面获取目标视频的左眼图像数据,根据右眼视频画面获取目标视频的右眼图像数据;对左眼图像数据和右眼图像数据进行拼接处理,得到目标视频的图像数据;对图像数据进行编码,得到录制视频文件。
处理器201还可以用于执行下述步骤:在对图像数据进行编码,得到录制视频文件之后,接收用于指示播放左眼视频画面的第一播放指令,根据第一播放指令播放左眼视频画面;和/或接收用于指示播放右眼视频画面的第二播放指令,根据第二播放指令播放右眼视频画面。
处理器201还可以用于执行下述步骤:在对媒体数据进行编码,得到目标视频的录制视频文件之后,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的视频播放终端,其中,视频播放终端用于与第一虚拟现实设备相连接,获取并播放录制视频文件。
处理器201还可以用于执行下述步骤:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的平面视频播放终端,其中,平面视频播放终端用于以二维形式对录制视频文件进行播放,视频播放终端包括平面视频播放终端。
处理器201还可以用于执行下述步骤:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备,其中,第二虚拟现实设备用于以三维形式对录制视频文件进行播放,视频播放终端包括第二虚拟现实设备。
处理器201还可以用于执行下述步骤:在录制视频文件保存在终端的情况下,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的固定虚拟现实设备,第二虚拟现实设备包括固定虚拟现实设备。
处理器201还可以用于执行下述步骤:在对媒体数据进行编码,得到目标视频的录制视频文件之后,对录制视频文件进行处理,得到处理数据;发送处理数据至预设网站;根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的移动虚拟现实设备,其中,移动虚拟现实设备用于通过预设网站根据处理数据对录制视频文件进行播放,第二虚拟现实设备包括移动虚拟现实设备。
处理器201还可以用于执行下述步骤:在第一虚拟现实设备的类型为第一类型的情况下,启动预设进程检测第一虚拟现实设备的应用的进程和应用的窗口信息;或者,在第一虚拟现实设备的类型为第二类型的情况下,调用第一虚拟现实设备的软件开发工具包以获取进程ID,根据进程ID获取第一虚拟现实设备的应用的进程和应用的窗口信息;保存应用的进程和应用的窗口信息,并根据应用的进程和窗口信息加载应用。
处理器201还可以用于执行下述步骤:显示第一预设界面;判断是否通过第一预设界面接收到开始录制指令,开始录制指令用于指示开始对目标视频进行录制;如果判断出通过第一预设界面接收到开始录制指令,响应于开始录制指令检测第一虚拟现实设备的类型。
处理器201还可以用于执行下述步骤:判断是否接收到由第一预设界面的预设按钮被触控所产生的开始录制指令;或者,判断是否接收到由第一预设界面的键盘快捷键被触控所产生的开始录制指令;或者,判断是否通过第一预设界面接收到与开始录制指令对应的语音命令。
处理器201还可以用于执行下述步骤:在判断是否通过第一预设界面接收到用于指示开始对目标视频进行录制的开始录制指令之后,如果判断出通过第一预设界面接收到开始录制指令,显示用于表示对目标视频进行录制的录制标志,其中,录制标志包括以动画形式显示的录制标志,和/或以时间形式显示的录制标志。
处理器201还可以用于执行下述步骤:在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第二预设界面;判断是否通过第二预设界面接收到结束录制指令,结束录制指令用于指示结束对目标视频进行录制;如果判断出通过第二预设界面接收到结束录制指令,响应于结束录制指令结束对目标视频进行录制。
处理器201还可以用于执行下述步骤:判断是否接收到由第二预设界面的预设按钮被触控所产生的结束录制指令;或者,判断是否接收到由第二预设界面的键盘快捷键被触控所产生的结束录制指令;或者,判断是否通过第二预设界面接收到与结束录制指令对应的语音命令。
处理器201还可以用于执行下述步骤:在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第三预设界面;判断是否通过第三预设界面接收到保存指令,保存指令用于指示对录制视频文件进行保存;如果判断出通过第三预设界面接收到保存指令,响应于保存指令保存录制视频文件。
处理器201还可以用于执行下述步骤:判断是否接收到由第三预设界面的预设按钮被 触控所产生的保存指令;或者,判断是否接收到由第三预设界面的键盘快捷键被触控所产生的保存指令;或者,判断是否通过第三预设界面接收到与保存指令对应的语音命令。
处理器201还可以用于执行下述步骤:在对媒体数据进行编码,得到目标视频的录制视频文件之后,保存录制视频文件至第一虚拟现实设备的软件开发包中;或者,保存录制视频文件至游戏客户端的软件开发包中;或者,保存录制视频文件至游戏引擎的软件开发包中。
本申请实施例提供了一种视频文件的处理方法的方案。通过检测第一虚拟现实设备的类型,其中,第一虚拟现实设备用于显示待录制的目标视频;根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用;在第一虚拟现实设备的应用中,获取目标视频的媒体数据;对媒体数据进行编码,得到目标视频的录制视频文件,其中,录制视频文件的视频内容与目标视频的视频内容相同,从而实现了对虚拟现实设备中显示的视频进行录制的技术效果,进而解决了相关技术无法对虚拟现实设备中显示的视频进行录制的技术问题。
在本申请实施例一种可能的实现方式中,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
本领域普通技术人员可以理解,图19所示的结构仅为示意,终端可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图19其并不对上述电子装置的结构造成限定。例如,终端还可包括比图19中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图19所示不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
本申请的实施例还提供了一种存储介质。在本申请实施例一种可能的实现方式中,在本实施例中,上述存储介质可以用于执行视频文件的处理方法的程序代码。
在本实施例中,上述存储介质可以位于上述实施例所示的网络中的多个网络设备中的至少一个网络设备上。
在本实施例中,存储介质可以被设置为存储用于执行以下步骤的程序代码:
检测第一虚拟现实设备的类型,其中,第一虚拟现实设备用于显示待录制的目标视频;
根据第一虚拟现实设备的类型检测第一虚拟现实设备的应用;
在第一虚拟现实设备的应用中,获取目标视频的媒体数据;
对媒体数据进行编码,得到目标视频的录制视频文件,其中,录制视频文件的视频内容与目标视频的视频内容相同。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:
在本申请实施例一种可能的实现方式中,存储介质还被设置为存储用于执行以下步骤的程序代码:捕捉目标视频的视频画面,根据视频画面获取目标视频的图像数据;获取目 标视频的音频数据;分别对图像数据和音频数据进行编码,得到录制视频文件。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:捕捉目标视频的左眼视频画面和目标视频的右眼视频画面,根据左眼视频画面获取目标视频的左眼图像数据,根据右眼视频画面获取目标视频的右眼图像数据;对左眼图像数据和右眼图像数据进行拼接处理,得到目标视频的图像数据;对图像数据进行编码,得到录制视频文件。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在对图像数据进行编码,得到录制视频文件之后,接收用于指示播放左眼视频画面的第一播放指令,根据第一播放指令播放左眼视频画面;和/或接收用于指示播放右眼视频画面的第二播放指令,根据第二播放指令播放右眼视频画面。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在对媒体数据进行编码,得到目标视频的录制视频文件之后,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的视频播放终端,其中,视频播放终端用于与第一虚拟现实设备相连接,获取并播放录制视频文件。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的平面视频播放终端,其中,平面视频播放终端用于以二维形式对录制视频文件进行播放,视频播放终端包括平面视频播放终端。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的第二虚拟现实设备,其中,第二虚拟现实设备用于以三维形式对录制视频文件进行播放,视频播放终端包括第二虚拟现实设备。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在录制视频文件保存在终端的情况下,根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的固定虚拟现实设备,第二虚拟现实设备包括固定虚拟现实设备。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在对媒体数据进行编码,得到目标视频的录制视频文件之后,对录制视频文件进行处理,得到处理数据;发送处理数据至预设网站;根据第一虚拟现实设备的类型确定与第一虚拟现实设备相适配的移动虚拟现实设备,其中,移动虚拟现实设备用于通过预设网站根据处理数据对录制视频文件进行播放,第二虚拟现实设备包括移动虚拟现实设备。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在第一虚拟现实设备的类型为第一类型的情况下,启动预设进程检测第一虚拟现实设备的应用的进程和应用的窗口信息;或者,在第一虚拟现实设备的类型为第二类型的情况下,调用第一虚拟现实设备的软件开发工具包以获取进程ID,根据进程ID获取第一虚拟现实设备的应用的进程和应用的窗口信息;保存应用的进程和应用的窗口信 息,并根据应用的进程和窗口信息加载应用。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:显示第一预设界面;判断是否通过第一预设界面接收到开始录制指令,开始录制指令用于指示开始对目标视频进行录制;如果判断出通过第一预设界面接收到开始录制指令,响应于开始录制指令检测第一虚拟现实设备的类型。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:判断是否接收到由第一预设界面的预设按钮被触控所产生的开始录制指令;或者,判断是否接收到由第一预设界面的键盘快捷键被触控所产生的开始录制指令;或者,判断是否通过第一预设界面接收到与开始录制指令对应的语音命令。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在判断是否通过第一预设界面接收到用于指示开始对目标视频进行录制的开始录制指令之后,如果判断出通过第一预设界面接收到开始录制指令,显示用于表示对目标视频进行录制的录制标志,其中,录制标志包括以动画形式显示的录制标志,和/或以时间形式显示的录制标志。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第二预设界面;判断是否通过第二预设界面接收到结束录制指令,结束录制指令用于指示结束对目标视频进行录制;如果判断出通过第二预设界面接收到结束录制指令,响应于结束录制指令结束对目标视频进行录制。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:判断是否接收到由第二预设界面的预设按钮被触控所产生的结束录制指令;或者,判断是否接收到由第二预设界面的键盘快捷键被触控所产生的结束录制指令;或者,判断是否通过第二预设界面接收到与结束录制指令对应的语音命令。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在对媒体数据进行编码,得到目标视频的录制视频文件之后,显示第三预设界面;判断是否通过第三预设界面接收到保存指令,保存指令用于指示对录制视频文件进行保存;如果判断出通过第三预设界面接收到保存指令,响应于保存指令保存录制视频文件。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:判断是否接收到由第三预设界面的预设按钮被触控所产生的保存指令;或者,判断是否接收到由第三预设界面的键盘快捷键被触控所产生的保存指令;或者,判断是否通过第三预设界面接收到与保存指令对应的语音命令。
在本申请实施例一种可能的实现方式中,存储介质还可以被设置为存储用于执行以下步骤的程序代码:在对媒体数据进行编码,得到目标视频的录制视频文件之后,保存录制视频文件至第一虚拟现实设备的软件开发包中;或者,保存录制视频文件至游戏客户端的软件开发包中;或者,保存录制视频文件至游戏引擎的软件开发包中。
For specific examples in this embodiment, refer to the examples described in the foregoing embodiments; details are not repeated here.
In this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
In addition, an embodiment of this application further provides a computer program product including instructions which, when run on a terminal, cause the terminal to perform the video file processing method provided in the foregoing embodiments.
The sequence numbers of the foregoing embodiments of this application are merely for description and do not indicate the relative merits of the embodiments.
If the integrated units in the foregoing embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the foregoing computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application.
In the foregoing embodiments of this application, the description of each embodiment has its own focus; for a part not described in detail in one embodiment, refer to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division of logical functions; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The foregoing descriptions are merely preferred implementations of the embodiments of this application. It should be noted that a person of ordinary skill in the art may further make several improvements and refinements without departing from the principles of the embodiments of this application, and such improvements and refinements shall also fall within the protection scope of the embodiments of this application.

Claims (45)

  1. A video file processing method, comprising:
    detecting a type of a first virtual reality device, wherein the first virtual reality device is configured to display a target video to be recorded;
    detecting an application of the first virtual reality device according to the type of the first virtual reality device;
    obtaining, in the application of the first virtual reality device, media data of the target video; and
    encoding the media data to obtain a recorded video file of the target video, wherein video content of the recorded video file is the same as video content of the target video.
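The four-step flow recited in claim 1 (detect device type, detect the device's application, capture media data, encode it into a recorded file) can be sketched end to end as below. Every name here (record_target_video, detect_device_type, the device-type table, and the stubbed capture and encode steps) is an illustrative assumption, not part of the claimed method; real capture and encoding would use platform and codec APIs.

```python
# Hedged sketch of the claimed recording pipeline, with each claimed step
# as one stubbed function so the data flow between steps is visible.

def detect_device_type(device_name: str) -> str:
    """Map a device name to a coarse type; the mapping is assumed."""
    known = {"HTC Vive": "pc_vr", "Gear VR": "mobile_vr"}
    return known.get(device_name, "unknown")


def detect_application(device_type: str) -> dict:
    """Return the running VR app's process and window info (stubbed)."""
    return {"process": f"{device_type}_app", "window": "main"}


def capture_media(app: dict, frames: int) -> list:
    """Capture raw frames from the app's window (stubbed as frame labels)."""
    return [f"frame_{i}" for i in range(frames)]


def encode(media: list) -> bytes:
    """Encode captured media into a recorded file (stubbed as joined bytes)."""
    return "|".join(media).encode()


def record_target_video(device_name: str, frames: int = 3) -> bytes:
    device_type = detect_device_type(device_name)
    app = detect_application(device_type)
    media = capture_media(app, frames)
    return encode(media)
```

The point of the sketch is the ordering: the application is located only after the device type is known, and capture happens inside that application's context before encoding.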
  2. The method according to claim 1, wherein the obtaining media data of the target video comprises:
    capturing a video picture of the target video, and obtaining image data of the target video according to the video picture; and obtaining audio data of the target video; and
    the encoding the media data to obtain the recorded video file of the target video comprises:
    encoding the image data and the audio data respectively to obtain the recorded video file.
  3. The method according to claim 1, wherein the obtaining media data of the target video comprises:
    capturing a left-eye video picture of the target video and a right-eye video picture of the target video, obtaining left-eye image data of the target video according to the left-eye video picture, and obtaining right-eye image data of the target video according to the right-eye video picture; and stitching the left-eye image data and the right-eye image data to obtain image data of the target video; and
    the encoding the media data to obtain the recorded video file of the target video comprises:
    encoding the image data to obtain the recorded video file.
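The stitching step in claim 3 joins the two per-eye pictures into a single frame before encoding. A minimal sketch, assuming images modeled as lists of pixel rows and a simple side-by-side layout (a real implementation would operate on decoded frame buffers, e.g. with numpy's hstack):

```python
# Side-by-side stitch of left-eye and right-eye image data, as one way to
# realize claim 3's stitching step. The row-list image model is an assumption.

def stitch_side_by_side(left, right):
    """Concatenate each row of the left image with the matching right row."""
    if len(left) != len(right):
        raise ValueError("left/right eye images must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]


left_eye = [[1, 2], [3, 4]]
right_eye = [[5, 6], [7, 8]]
stitched = stitch_side_by_side(left_eye, right_eye)
# stitched is [[1, 2, 5, 6], [3, 4, 7, 8]]
```

Top-and-bottom stacking would be an equally valid layout; the claim only requires that the two eye images be combined into one image before the single encode pass.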
  4. The method according to claim 3, wherein after the encoding the image data to obtain the recorded video file, the method further comprises:
    receiving a first playback instruction instructing to play the left-eye video picture, and playing the left-eye video picture according to the first playback instruction; and/or receiving a second playback instruction instructing to play the right-eye video picture, and playing the right-eye video picture according to the second playback instruction.
  5. The method according to claim 1, wherein after the encoding the media data to obtain the recorded video file of the target video, the method further comprises:
    determining, according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device, wherein the video playback terminal is configured to connect to the first virtual reality device, and to obtain and play the recorded video file.
  6. The method according to claim 5, wherein the determining, according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device comprises:
    determining, according to the type of the first virtual reality device, a flat video playback terminal adapted to the first virtual reality device, wherein the flat video playback terminal is configured to play the recorded video file in two-dimensional form, and the video playback terminal comprises the flat video playback terminal.
  7. The method according to claim 5, wherein the determining, according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device comprises:
    determining, according to the type of the first virtual reality device, a second virtual reality device adapted to the first virtual reality device, wherein the second virtual reality device is configured to play the recorded video file in three-dimensional form, and the video playback terminal comprises the second virtual reality device.
  8. The method according to claim 7, wherein the determining, according to the type of the first virtual reality device, a second virtual reality device adapted to the first virtual reality device comprises: when the recorded video file is stored on a terminal, determining, according to the type of the first virtual reality device, a fixed virtual reality device adapted to the first virtual reality device, the second virtual reality device comprising the fixed virtual reality device.
  9. The method according to claim 7, wherein after the encoding the media data to obtain the recorded video file of the target video, the method further comprises:
    processing the recorded video file to obtain processed data; and sending the processed data to a preset website; and
    the determining, according to the type of the first virtual reality device, a second virtual reality device adapted to the first virtual reality device comprises:
    determining, according to the type of the first virtual reality device, a mobile virtual reality device adapted to the first virtual reality device, wherein the mobile virtual reality device is configured to play the recorded video file through the preset website according to the processed data, and the second virtual reality device comprises the mobile virtual reality device.
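Claims 8 and 9 distinguish two adaptation outcomes: a fixed (tethered) VR device when the recording stays on the terminal, and a mobile VR device that plays it via the preset website after upload. A hedged sketch of that selection logic follows; the type-to-device tables and device names are assumptions for illustration only.

```python
# Illustrative selection of the adapted playback device per claims 8-9.
# Where the file lives (terminal vs. preset website) decides the branch;
# the device type decides the concrete adapted device within the branch.
FIXED_DEVICES = {"pc_vr": "tethered-headset"}    # assumed mapping
MOBILE_DEVICES = {"mobile_vr": "phone-headset"}  # assumed mapping


def select_playback_device(device_type: str, saved_on_terminal: bool) -> str:
    if saved_on_terminal:
        # Claim 8: local file, adapt a fixed virtual reality device.
        return FIXED_DEVICES.get(device_type, "generic-fixed")
    # Claim 9: file processed and uploaded, adapt a mobile VR device
    # that plays it from the preset website.
    return MOBILE_DEVICES.get(device_type, "generic-mobile")
```

The fallback values stand in for whatever default adaptation a real system would apply to an unrecognized device type.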
  10. The method according to claim 7, wherein the second virtual reality device comprises a head-mounted display device, and the head-mounted display device is configured to play the recorded video file in the three-dimensional form through a plug-in.
  11. The method according to claim 1, wherein the detecting an application of the first virtual reality device according to the type of the first virtual reality device comprises:
    when the type of the first virtual reality device is a first type, starting a preset process to detect a process of the application of the first virtual reality device and window information of the application; or, when the type of the first virtual reality device is a second type, invoking a software development kit of the first virtual reality device to obtain a process ID, and obtaining, according to the process ID, the process of the application of the first virtual reality device and the window information of the application; and
    saving the process of the application and the window information of the application, and loading the application according to the process of the application and the window information.
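Claim 11's two detection paths can be sketched side by side: for a first-type device a preset process scans the running-process table for the VR app, while for a second-type device the vendor SDK is asked for the process ID, which is then resolved to process and window information. The process table, the FakeSdk class, and the hard-coded app name are all stand-ins; real code would use OS process enumeration and the actual vendor SDK.

```python
# Stand-in process table: pid -> (process name, window title).
RUNNING = {101: ("vr_app.exe", "Main Window")}


class FakeSdk:
    """Stand-in for a vendor SDK that can report the VR app's pid."""
    def get_process_id(self):
        return 101


def detect_app(device_type: str, sdk=None):
    """Return the VR app's process and window info via the type-appropriate path."""
    if device_type == "first":
        # First type: a preset process scans for the known app name.
        for pid, (name, window) in RUNNING.items():
            if name == "vr_app.exe":
                return {"pid": pid, "window": window}
        return None
    # Second type: ask the SDK for the pid, then look up its window info.
    pid = sdk.get_process_id()
    _name, window = RUNNING[pid]
    return {"pid": pid, "window": window}
```

Either path ends with the same saved pair (process plus window information), which is what the claim then uses to load the application.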
  12. The method according to any one of claims 1 to 11, wherein the detecting a type of a first virtual reality device comprises: displaying a first preset interface;
    determining whether a start-recording instruction is received through the first preset interface, wherein the start-recording instruction instructs to start recording the target video; and
    if it is determined that the start-recording instruction is received through the first preset interface, detecting the type of the first virtual reality device in response to the start-recording instruction.
  13. The method according to claim 12, wherein the determining whether a start-recording instruction is received through the first preset interface comprises:
    determining whether a start-recording instruction generated by touching a preset button of the first preset interface is received; or determining whether a start-recording instruction generated by pressing a keyboard shortcut of the first preset interface is received; or determining whether a voice command corresponding to the start-recording instruction is received through the first preset interface.
  14. The method according to claim 12, wherein after the determining whether a start-recording instruction instructing to start recording the target video is received through the first preset interface, the method further comprises:
    if it is determined that the start-recording instruction is received through the first preset interface, displaying a recording indicator indicating that the target video is being recorded, wherein the recording indicator comprises an indicator displayed in animated form and/or an indicator displayed in time form.
  15. The method according to any one of claims 1 to 11, wherein after the encoding the media data to obtain the recorded video file of the target video, the method further comprises: displaying a second preset interface;
    determining whether an end-recording instruction is received through the second preset interface, wherein the end-recording instruction instructs to end recording the target video; and
    if it is determined that the end-recording instruction is received through the second preset interface, ending recording the target video in response to the end-recording instruction.
  16. The method according to claim 15, wherein the determining whether an end-recording instruction is received through the second preset interface comprises:
    determining whether an end-recording instruction generated by touching a preset button of the second preset interface is received; or determining whether an end-recording instruction generated by pressing a keyboard shortcut of the second preset interface is received; or determining whether a voice command corresponding to the end-recording instruction is received through the second preset interface.
  17. The method according to any one of claims 1 to 11, wherein after the encoding the media data to obtain the recorded video file of the target video, the method further comprises: displaying a third preset interface;
    determining whether a save instruction is received through the third preset interface, wherein the save instruction instructs to save the recorded video file; and
    if it is determined that the save instruction is received through the third preset interface, saving the recorded video file in response to the save instruction.
  18. The method according to claim 17, wherein the determining whether a save instruction is received through the third preset interface comprises:
    determining whether a save instruction generated by touching a preset button of the third preset interface is received;
    or determining whether a save instruction generated by pressing a keyboard shortcut of the third preset interface is received;
    or determining whether a voice command corresponding to the save instruction is received through the third preset interface.
  19. The method according to any one of claims 1 to 11, wherein after the encoding the media data to obtain the recorded video file of the target video, the method further comprises:
    saving the recorded video file to a software development kit of the first virtual reality device;
    or saving the recorded video file to a software development kit of a game client;
    or saving the recorded video file to a software development kit of a game engine.
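Claim 19 names three alternative save destinations for the recorded file: the SDK directory of the VR device, of the game client, or of the game engine. A trivial sketch of resolving the destination to a path follows; the directory layout is invented purely for illustration.

```python
# Illustrative destination table for claim 19's three save locations.
from pathlib import PurePosixPath

SDK_DIRS = {
    "vr_device": PurePosixPath("/sdk/vr_device/recordings"),
    "game_client": PurePosixPath("/sdk/game_client/recordings"),
    "game_engine": PurePosixPath("/sdk/game_engine/recordings"),
}


def save_path(destination: str, filename: str) -> str:
    """Compute where the recorded video file would be stored."""
    return str(SDK_DIRS[destination] / filename)
```

A real implementation would query each SDK for its actual storage directory rather than hard-coding paths.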
  20. A video file processing method, comprising:
    detecting, by a terminal, a type of a first virtual reality device, wherein the first virtual reality device is configured to display a target video to be recorded;
    detecting, by the terminal, an application of the first virtual reality device according to the type of the first virtual reality device;
    obtaining, by the terminal in the application of the first virtual reality device, media data of the target video; and
    encoding, by the terminal, the media data to obtain a recorded video file of the target video, wherein video content of the recorded video file is the same as video content of the target video.
  21. The method according to claim 20, wherein the obtaining, by the terminal, media data of the target video comprises:
    capturing, by the terminal, a video picture of the target video, and obtaining image data of the target video according to the video picture; and obtaining audio data of the target video; and
    the encoding, by the terminal, the media data to obtain the recorded video file of the target video comprises:
    encoding, by the terminal, the image data and the audio data respectively to obtain the recorded video file.
  22. The method according to claim 20, wherein the obtaining, by the terminal, media data of the target video comprises:
    capturing, by the terminal, a left-eye video picture of the target video and a right-eye video picture of the target video, obtaining left-eye image data of the target video according to the left-eye video picture, and obtaining right-eye image data of the target video according to the right-eye video picture; and
    stitching, by the terminal, the left-eye image data and the right-eye image data to obtain image data of the target video; and
    the encoding, by the terminal, the media data to obtain the recorded video file of the target video comprises:
    encoding, by the terminal, the image data to obtain the recorded video file.
  23. The method according to claim 22, wherein after the encoding, by the terminal, the image data to obtain the recorded video file, the method further comprises:
    receiving, by the terminal, a first playback instruction instructing to play the left-eye video picture, and playing the left-eye video picture according to the first playback instruction; and/or
    receiving, by the terminal, a second playback instruction instructing to play the right-eye video picture, and playing the right-eye video picture according to the second playback instruction.
  24. The method according to claim 20, wherein after the encoding, by the terminal, the media data to obtain the recorded video file of the target video, the method further comprises:
    determining, by the terminal according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device, wherein the video playback terminal is configured to connect to the first virtual reality device, and to obtain and play the recorded video file.
  25. The method according to claim 24, wherein the determining, by the terminal according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device comprises:
    determining, by the terminal according to the type of the first virtual reality device, a flat video playback terminal adapted to the first virtual reality device, wherein the flat video playback terminal is configured to play the recorded video file in two-dimensional form, and the video playback terminal comprises the flat video playback terminal.
  26. The method according to claim 24, wherein the determining, by the terminal according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device comprises:
    determining, by the terminal according to the type of the first virtual reality device, a second virtual reality device adapted to the first virtual reality device, wherein the second virtual reality device is configured to play the recorded video file in three-dimensional form, and the video playback terminal comprises the second virtual reality device.
  27. The method according to claim 26, wherein the determining, by the terminal according to the type of the first virtual reality device, a second virtual reality device adapted to the first virtual reality device comprises:
    when the recorded video file is stored on the terminal, determining, by the terminal according to the type of the first virtual reality device, a fixed virtual reality device adapted to the first virtual reality device, the second virtual reality device comprising the fixed virtual reality device.
  28. The method according to claim 26, wherein after the encoding, by the terminal, the media data to obtain the recorded video file of the target video, the method further comprises:
    processing, by the terminal, the recorded video file to obtain processed data; and sending, by the terminal, the processed data to a preset website; and
    the determining, by the terminal according to the type of the first virtual reality device, a second virtual reality device adapted to the first virtual reality device comprises:
    determining, by the terminal according to the type of the first virtual reality device, a mobile virtual reality device adapted to the first virtual reality device, wherein the mobile virtual reality device is configured to play the recorded video file through the preset website according to the processed data, and the second virtual reality device comprises the mobile virtual reality device.
  29. The method according to claim 26, wherein the second virtual reality device comprises a head-mounted display device, and the head-mounted display device is configured to play the recorded video file in the three-dimensional form through a plug-in.
  30. The method according to claim 20, wherein the detecting, by the terminal, an application of the first virtual reality device according to the type of the first virtual reality device comprises:
    when the type of the first virtual reality device is a first type, starting, by the terminal, a preset process to detect a process of the application of the first virtual reality device and window information of the application; or, when the type of the first virtual reality device is a second type, invoking, by the terminal, a software development kit of the first virtual reality device to obtain a process ID, and obtaining, according to the process ID, the process of the application of the first virtual reality device and the window information of the application; and
    saving, by the terminal, the process of the application and the window information of the application, and loading the application according to the process of the application and the window information.
  31. The method according to any one of claims 20 to 30, wherein the detecting, by the terminal, a type of a first virtual reality device comprises:
    displaying, by the terminal, a first preset interface;
    determining, by the terminal, whether a start-recording instruction is received through the first preset interface, wherein the start-recording instruction instructs to start recording the target video; and
    if the terminal determines that the start-recording instruction is received through the first preset interface, detecting the type of the first virtual reality device in response to the start-recording instruction.
  32. The method according to claim 31, wherein the determining, by the terminal, whether a start-recording instruction is received through the first preset interface comprises:
    determining, by the terminal, whether a start-recording instruction generated by touching a preset button of the first preset interface is received;
    or determining, by the terminal, whether a start-recording instruction generated by pressing a keyboard shortcut of the first preset interface is received;
    or determining, by the terminal, whether a voice command corresponding to the start-recording instruction is received through the first preset interface.
  33. The method according to claim 31, wherein after the terminal determines whether a start-recording instruction instructing to start recording the target video is received through the first preset interface, the method further comprises:
    if the terminal determines that the start-recording instruction is received through the first preset interface, displaying a recording indicator indicating that the target video is being recorded, wherein the recording indicator comprises an indicator displayed in animated form and/or an indicator displayed in time form.
  34. The method according to any one of claims 20 to 30, wherein after the encoding, by the terminal, the media data to obtain the recorded video file of the target video, the method further comprises:
    displaying, by the terminal, a second preset interface;
    determining, by the terminal, whether an end-recording instruction is received through the second preset interface, wherein the end-recording instruction instructs to end recording the target video; and
    if the terminal determines that the end-recording instruction is received through the second preset interface, ending recording the target video in response to the end-recording instruction.
  35. The method according to claim 34, wherein the determining, by the terminal, whether an end-recording instruction is received through the second preset interface comprises:
    determining, by the terminal, whether an end-recording instruction generated by touching a preset button of the second preset interface is received;
    or determining, by the terminal, whether an end-recording instruction generated by pressing a keyboard shortcut of the second preset interface is received;
    or determining, by the terminal, whether a voice command corresponding to the end-recording instruction is received through the second preset interface.
  36. The method according to any one of claims 20 to 30, wherein after the encoding, by the terminal, the media data to obtain the recorded video file of the target video, the method further comprises:
    displaying, by the terminal, a third preset interface;
    determining, by the terminal, whether a save instruction is received through the third preset interface, wherein the save instruction instructs to save the recorded video file; and
    if the terminal determines that the save instruction is received through the third preset interface, saving the recorded video file in response to the save instruction.
  37. The method according to claim 36, wherein the determining, by the terminal, whether a save instruction is received through the third preset interface comprises: determining, by the terminal, whether a save instruction generated by touching a preset button of the third preset interface is received; or determining, by the terminal, whether a save instruction generated by pressing a keyboard shortcut of the third preset interface is received; or determining, by the terminal, whether a voice command corresponding to the save instruction is received through the third preset interface.
  38. The method according to any one of claims 20 to 30, wherein after the encoding, by the terminal, the media data to obtain the recorded video file of the target video, the method further comprises:
    saving, by the terminal, the recorded video file to a software development kit of the first virtual reality device;
    or saving, by the terminal, the recorded video file to a software development kit of a game client;
    or saving, by the terminal, the recorded video file to a software development kit of a game engine.
  39. A video file processing apparatus, comprising:
    a first detection unit, configured to detect a type of a first virtual reality device, wherein the first virtual reality device is configured to display a target video to be recorded;
    a second detection unit, configured to detect an application of the first virtual reality device according to the type of the first virtual reality device;
    an obtaining unit, configured to obtain, in the application of the first virtual reality device, media data of the target video, wherein the media data comprises at least image data of the target video; and
    an encoding unit, configured to encode the media data to obtain a recorded video file of the target video, wherein video content of the recorded video file is the same as video content of the target video.
  40. The apparatus according to claim 39, wherein the obtaining unit comprises:
    a second obtaining module, configured to capture a left-eye video picture of the target video and a right-eye video picture of the target video, obtain left-eye image data of the target video according to the left-eye video picture, and obtain right-eye image data of the target video according to the right-eye video picture; and
    a stitching module, configured to stitch the left-eye image data and the right-eye image data to obtain image data of the target video; and
    the encoding unit is specifically configured to encode the image data to obtain the recorded video file.
  41. The apparatus according to claim 39, further comprising: a determining unit, configured to: after the media data is encoded to obtain the recorded video file of the target video, determine, according to the type of the first virtual reality device, a video playback terminal adapted to the first virtual reality device, wherein the video playback terminal is configured to connect to the first virtual reality device, and to obtain and play the recorded video file.
  42. The apparatus according to claim 39, wherein the second detection unit comprises:
    a detection module, configured to: when the type of the first virtual reality device is a first type, start a preset process to detect a process of the application of the first virtual reality device and window information of the application; or an invoking module, configured to: when the type of the first virtual reality device is a second type, invoke a software development kit of the first virtual reality device to obtain a process ID, and obtain, according to the process ID, the process of the application of the first virtual reality device and the window information of the application; and
    a saving module, configured to save the process of the application and the window information of the application, and load the application according to the process of the application and the window information.
  43. A terminal, comprising a processor and a memory, wherein the memory is configured to store program code and transmit the program code to the processor, and the processor is configured to invoke instructions in the memory to perform the video file processing method according to any one of claims 1 to 19.
  44. A storage medium, configured to store program code, the program code being used to perform the video file processing method according to any one of claims 1 to 19.
  45. A computer program product comprising instructions which, when run on a terminal, cause the terminal to perform the video file processing method according to any one of claims 1 to 19.
PCT/CN2017/100970 2016-10-26 2017-09-08 Video file processing method and apparatus WO2018076939A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/293,391 US10798363B2 (en) 2016-10-26 2019-03-05 Video file processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610950464.8A 2016-10-26 2016-10-26 Video file processing method and apparatus
CN201610950464.8 2016-10-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/293,391 Continuation US10798363B2 (en) 2016-10-26 2019-03-05 Video file processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2018076939A1 (zh)

Family

ID=62024315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/100970 WO2018076939A1 (zh) 2016-10-26 2017-09-08 视频文件的处理方法和装置

Country Status (3)

Country Link
US (1) US10798363B2 (zh)
CN (1) CN107995482B (zh)
WO (1) WO2018076939A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11030796B2 (en) 2018-10-17 2021-06-08 Adobe Inc. Interfaces and techniques to retarget 2D screencast videos into 3D tutorials in virtual reality
JP7164465B2 (ja) * 2019-02-21 2022-11-01 i-PRO株式会社 ウェアラブルカメラ
CN111629253A (zh) * 2020-06-11 2020-09-04 网易(杭州)网络有限公司 视频处理方法及装置、计算机可读存储介质、电子设备
CN112312127B (zh) * 2020-10-30 2023-07-21 中移(杭州)信息技术有限公司 成像检测方法、装置、电子设备、系统及存储介质
CN112969038A (zh) * 2021-01-29 2021-06-15 北京字节跳动网络技术有限公司 数据传输方法、装置、电子设备及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580986A (zh) * 2015-02-15 2015-04-29 王生安 结合虚拟现实眼镜的视频通信系统
CN105117021A (zh) * 2015-09-24 2015-12-02 深圳东方酷音信息技术有限公司 一种虚拟现实内容的生成方法和播放装置
US20160086386A1 (en) * 2014-09-19 2016-03-24 Samsung Electronics Co., Ltd. Method and apparatus for screen capture
CN105915818A (zh) * 2016-05-10 2016-08-31 网易(杭州)网络有限公司 一种视频处理方法和装置
CN105939481A (zh) * 2016-05-12 2016-09-14 深圳市望尘科技有限公司 一种交互式三维虚拟现实视频节目录播和直播方法
CN105979250A (zh) * 2016-06-26 2016-09-28 深圳市华宇优诚科技有限公司 一种vr视频数据处理系统

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5116498B2 (ja) * 2008-01-31 2013-01-09 キヤノン株式会社 映像処理装置及びその制御方法
US20100073471A1 (en) * 2008-09-24 2010-03-25 Kabushiki Kaisha Toshiba Video Display Device and Video Display Method
JP5336578B2 (ja) * 2009-03-26 2013-11-06 パナソニック株式会社 映像加工装置、映像加工方法、映像加工集積回路、映像再生装置
US20170337711A1 (en) * 2011-03-29 2017-11-23 Lyrical Labs Video Compression Technology, LLC Video processing and encoding
JPWO2012132424A1 (ja) * 2011-03-31 2014-07-24 パナソニック株式会社 立体視映像の奥行きを変更することができる映像処理装置、システム、映像処理方法、映像処理プログラム
US9288505B2 (en) * 2011-08-11 2016-03-15 Qualcomm Incorporated Three-dimensional video with asymmetric spatial resolution
US9661047B2 (en) * 2012-04-30 2017-05-23 Mobilatv Ltd. Method and system for central utilization of remotely generated large media data streams despite network bandwidth limitations
CN102724466A (zh) * 2012-05-25 2012-10-10 深圳市万兴软件有限公司 一种视频录制方法及装置
CN103414818A (zh) * 2013-04-25 2013-11-27 福建伊时代信息科技股份有限公司 移动终端的运行程序监控方法与系统、移动终端与服务器
CN104602100A (zh) * 2014-11-18 2015-05-06 腾讯科技(成都)有限公司 实现应用内视频、音频录制的方法及装置
US20160195923A1 (en) * 2014-12-26 2016-07-07 Krush Technologies, Llc Gyroscopic chair for virtual reality simulation
CN105847672A (zh) * 2016-03-07 2016-08-10 乐视致新电子科技(天津)有限公司 虚拟现实头盔抓拍方法及系统
CN105892679A (zh) * 2016-04-28 2016-08-24 乐视控股(北京)有限公司 虚拟现实图像的播放方法及装置
CN105933624A (zh) * 2016-06-17 2016-09-07 武汉斗鱼网络科技有限公司 一种用于网站视频录制方法及装置


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333552A (zh) * 2020-07-31 2021-02-05 深圳Tcl新技术有限公司 视频播放的智能控制方法、终端设备及可读存储介质
CN112291577A (zh) * 2020-10-16 2021-01-29 北京金山云网络技术有限公司 直播视频的发送方法和装置、存储介质、电子装置
CN112291577B (zh) * 2020-10-16 2023-05-05 北京金山云网络技术有限公司 直播视频的发送方法和装置、存储介质、电子装置

Also Published As

Publication number Publication date
US20190199998A1 (en) 2019-06-27
CN107995482A (zh) 2018-05-04
CN107995482B (zh) 2021-05-14
US10798363B2 (en) 2020-10-06

Similar Documents

Publication Publication Date Title
WO2018076939A1 (zh) 视频文件的处理方法和装置
WO2020083021A1 (zh) 视频录制方法、视频播放方法、装置、设备及存储介质
US9855504B2 (en) Sharing three-dimensional gameplay
US10609332B1 (en) Video conferencing supporting a composite video stream
WO2017140229A1 (zh) 移动终端的视频录制方法和装置
WO2015196937A1 (zh) 一种录制视频的方法和装置
US10965783B2 (en) Multimedia information sharing method, related apparatus, and system
US10271105B2 (en) Method for playing video, client, and computer storage medium
WO2018000619A1 (zh) 一种数据展示方法、装置、电子设备与虚拟现实设备
WO2022257699A1 (zh) 图像画面显示方法、装置、设备、存储介质及程序产品
WO2015072968A1 (en) Adapting content to augmented reality virtual objects
JP6379107B2 (ja) 情報処理装置並びにその制御方法、及びプログラム
WO2017185761A1 (zh) 2d视频播放方法及装置
CN112261481B (zh) 互动视频的创建方法、装置、设备及可读存储介质
JP2012244622A (ja) コンテンツ変換装置、コンテンツ変換方法及びその貯蔵媒体
CN112261433A (zh) 虚拟礼物的发送方法、显示方法、装置、终端及存储介质
KR102099135B1 (ko) 가상현실 컨텐츠 제작 시스템 및 제작 방법
CN114040230A (zh) 视频码率确定方法、装置、电子设备及其存储介质
CN113411537B (zh) 视频通话方法、装置、终端及存储介质
US20180160133A1 (en) Realtime recording of gestures and/or voice to modify animations
CN104888454B (zh) 一种数据处理方法及相应电子设备
KR20210056414A (ko) 혼합 현실 환경들에서 오디오-가능 접속된 디바이스들을 제어하기 위한 시스템
WO2018000610A1 (zh) 一种基于图像类型判断的自动播放方法和电子设备
US11962743B2 (en) 3D display system and 3D display method
US20240012558A1 (en) User interface providing reply state transition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864615

Country of ref document: EP

Kind code of ref document: A1